And, I'm looking for an EFFECT. I want to see a statistical difference between the responses of those who heard the music ads and those who heard the non-music ads. If I don't find that effect, I'm disappointed--and I'm primarily upset because I know that without such an effect, the manuscript reporting the study is highly unlikely to get published. That's because scientists (and journal editors, and journal reviewers) like to see that experimental manipulations have effects.
Of course, what I forget--as do other scientists, journal editors, and reviewers--is that a LACK OF EFFECT is still knowledge we didn't have before. If theory gives me every reason to expect that music will lead to better liking of the ad...and music in fact doesn't...then that should lead me to wonder why. Is there something about the particular music I chose? Or, maybe even more importantly, is there something wrong or imprecise about the theory I've used? This lack of effect is something other scientists should know about so they can begin wondering about the possible reasons for it. But without the ability to get the LACK OF EFFECT paper published...no one but my grad students will ever know about the result. [And, as an aside, they will indirectly learn that it isn't as important as the EFFECT paper.]
Now...this bias toward wanting to show that something caused an effect is, arguably, of no harm at all in the study of mass communication. But consider the details of a report I heard on the radio the other day. Apparently, a group of medical doctors compared the reports submitted to the FDA with the published manuscripts associated with 74 studies on the efficacy of 12 antidepressant drugs.
Figure this: Of the 74 studies, the FDA found that 38--just about half--were positive...showing those EFFECTS that scientists love so much. Of those 38, all but one were published.
Now, consider the 36 other studies...the ones that didn't show effects. Only three of those were published as "negative." Eleven others, according to the authors of the report, were published as finding "positive" effects...exactly the opposite of the conclusions arrived at by the FDA.
Now, if you figure that doctors read these journals...and, perhaps, drug companies refer to the conclusions presented in these journals...then this is a big concern. The article concludes that this selective publication inflated the drugs' apparent effectiveness by 11-69%!!
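To make the distortion concrete, here's a quick back-of-the-envelope tally in Python using the counts above. (Treating the eleven "spun" studies as apparent positives in the journals is my own reading of the report's numbers, so take this as a sketch.)

```python
# Back-of-the-envelope check of how selective publication skews the picture,
# using the counts reported above from the FDA review of antidepressant trials.

total_trials = 74
positive_total = 38              # trials the FDA judged positive
positive_published = 37          # "all but one were published"

negative_total = total_trials - positive_total   # 36 trials without the hoped-for effect
published_as_negative = 3        # honestly reported as negative
spun_as_positive = 11            # published as positive, contradicting the FDA's read

published = positive_published + published_as_negative + spun_as_positive
apparent_positive = positive_published + spun_as_positive

print(f"Positive rate per the FDA:     {positive_total / total_trials:.0%}")  # ~51%
print(f"Positive rate in the journals: {apparent_positive / published:.0%}")  # ~94%
```

In other words, a doctor reading only the journals would see positive results about 94% of the time, when the underlying trials were positive only about half the time.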
Read the whole article here.
2 comments:
It's like the mass media, who look for something unusual to attract eyeballs. We sometimes misunderstand the outside world through the media.