I am frequently disappointed by the way I see science being taught in the US- no idea if it’s the same or different in other countries. What disappoints me is a tendency to focus almost exclusively on “how to design an experiment” with little to no education on how to understand and interpret scientific results.
The only time I actually really got any focus on this at all was in college, and even then it was sort of a haphazard focus. Students are somehow expected to intuit a lot of this knowledge on their own.
I’m not really making this post to specifically remedy the issue, but I do hope that some of these suggestions will help you think more critically about how and why research is used.
Anecdotes are Useless
Humans really like anecdotes. Tumblr in particular is very anecdote-friendly, but it’s not the only place: the internet is brimming with testimonials, online reviews, and personal stories.
Assuming that most of these individuals are telling the truth, what’s the problem? We’ve all relied on a friend’s description of a product or program every once in a while before buying it (or into it) ourselves; it seems, in theory, a very good way to test what we’re getting.
I’m not going to tell you to stop asking your friend’s opinion on things or to stop reading online reviews (hell, I still do it, regardless of what I know) but these things have a much better chance of influencing our perception of the item in question than telling us the truth about the item itself.
Here is an example: a brand new study comparing clinical trials for diet and fertility programs to Amazon reviews of the same programs. I’ll sum up the results.
- In three separate clinical trials of a certain diet, only about 27% of participants reported significant weight loss, while on Amazon, 94% of reviewers reported significant weight loss under the same diet.
- The same went for the fertility treatment the authors looked at: people with very positive experiences were more likely to leave reviews than people with mediocre or negative experiences.
- Unsurprisingly, more people wanted to try the diet/treatment when they were shown the Amazon reviews rather than the actual clinical results.
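The skew in those bullet points is a textbook selection effect, and it can be sketched with a toy simulation (all rates here are invented for illustration, not taken from the study): suppose the diet genuinely works for about 27% of users, but satisfied users are far more likely to bother writing a review.

```python
import random

random.seed(42)

TRUE_SUCCESS_RATE = 0.27    # success rate seen in the clinical trials
P_REVIEW_IF_HAPPY = 0.30    # assumed: satisfied users often leave reviews
P_REVIEW_IF_UNHAPPY = 0.01  # assumed: dissatisfied users rarely bother

reviews = []  # True = a review reporting success
for _ in range(100_000):
    succeeded = random.random() < TRUE_SUCCESS_RATE
    p_review = P_REVIEW_IF_HAPPY if succeeded else P_REVIEW_IF_UNHAPPY
    if random.random() < p_review:
        reviews.append(succeeded)

# The reviews paint a far rosier picture than the trials did.
print(f"success rate in trials:  {TRUE_SUCCESS_RATE:.0%}")
print(f"success rate in reviews: {sum(reviews) / len(reviews):.0%}")
```

Nobody in this toy world is lying; the filter on who bothers to speak up does all the distorting by itself.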
And it’s not just the decision-making that’s affected. Reading about a lot of positive experiences can certainly influence our own experience via the placebo effect: if we are expecting something to turn out positive, we usually perceive it to be more positive than we would have otherwise.
This isn’t necessarily a negative thing, as the above study shows: it can make things like therapeutic treatments a hell of a lot more effective than they would have been otherwise. But it can be harmful when it causes people to stop seeking science-based treatment in favor of “alternative” treatment. A placebo is not going to be able to cure something like, say, a tumor.
Another major issue with relying on anecdotes is that they rarely look at long-term effects, which leaves them wide open to misattribution- often thanks to regression to the mean, the tendency of fluctuating symptoms to be at their worst right when we finally try a remedy, and then to drift back toward normal on their own. If, say, I am a werewolf, and I take Dr. Quack’s Miracle Anti-Lycanthrope Drug the night after the full moon, I will notice an abrupt cessation of werewolf symptoms (becoming very hairy and eating small children). Immediately I will post a positive Amazon review of the drug: It worked! It worked!
Then the next month on the full moon I go back to my follicular, child-eating ways, because of course as everybody knows the moon is the scientifically-validated cause of lycanthropy, and no drug is going to change that. However, I don’t go to Amazon and take back my review of the drug, because I am busy eating children. And anyway, even if I did, the damage is done.
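Regression to the mean in the story above can be demonstrated with a tiny, entirely made-up simulation: symptom severity just fluctuates randomly around a baseline, we “take the drug” only on the worst days, and the next day reliably looks like a miracle cure even though the drug does nothing at all.

```python
import random

random.seed(1)

# Symptom severity wanders randomly around a baseline of 5.
# Crucially, no drug has any effect on these numbers.
days = [5 + random.gauss(0, 2) for _ in range(10_000)]

# We take Dr. Quack's remedy whenever symptoms flare past 8,
# then check how we feel the next day.
flare_days = [i for i in range(len(days) - 1) if days[i] > 8]
avg_flare = sum(days[i] for i in flare_days) / len(flare_days)
avg_next_day = sum(days[i + 1] for i in flare_days) / len(flare_days)

print(f"severity when we took the remedy: {avg_flare:.1f}")
print(f"severity the day after:           {avg_next_day:.1f}")
# The day after always looks much better -- "It worked! It worked!"
```

Because we only reach for remedies at the peaks, the natural drift back to baseline gets credited to whatever we happened to take.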
No Source, No Go
So now we should understand that reviews and testimonials are not really meaningful. So what is meaningful? Research, supposedly.
So let’s say I’ve heard about a new miracle diet for cats: the butter-and-mayonnaise diet! So healthy, so pure! My friend has her cat on this diet and gushes about how beautiful it makes her cat’s eyes and smile look.
Knowing what I do about the reliability of anecdotes, I turn to the internet for a bit of research. I type into google “cat butter-and-mayonnaise diet,” thinking that this will give me a good overview of the pros and cons of such a diet.
The first four hits are butter-and-mayonnaise diet sites, all loaded with information and Science Talk about why the butter-and-mayonnaise diet mimics specific nutrients that cats have lost for thousands of years during their domestication, and how the ancient Egyptians used to feed their cats butter-and-mayonnaise as a means of worshipping them like gods, and don’t you want to worship your cat like a god?
We’d all like to think that we wouldn’t fall for something as stupid and as manipulative as that, but it is surprisingly easy. Especially if all the information we’re finding seems to suggest that butter and mayonnaise really are the only things we should be feeding our cats.
However, if you look closely at all of these supposedly reliable websites, you’ll notice one thing: for all of their self-assured talk of Science! and Facts!, they are not actually referring to any real research. They will tell you plenty of things, like how cats without butter in their diet are perpetually wasting away from Salt-free syndrome, which sooooo many vets misdiagnose as cancer, of all things, but as soon as you put butter in your cat’s diet you will notice a specific LACK of cancer…
It goes on and on. But for every statement the website asserts as a fact, there is zero actual, scientific research to back it up. Or even if there is, it is entirely peripheral. Here is data on the actual fat and salt concentrations within butter and mayonnaise! Your cat needs these! Why? It just does!
And then, of course, comes the inevitable flood of testimonials and anecdotes assuring you that the butter-and-mayonnaise diet is the best thing that has ever happened to their cat.
I think that most people do know, on some level, that these sites are obviously biased, even if they are actually well-meaning. But the problem is that legitimate research is a little harder than just typing a few key words into google, and most people feel more comfortable believing the first thing they read than trying to dig a little deeper.
(And once they believe it- because these sites are engineered in a way to encourage the least possible critical thinking in their “just-so” construction- it is damned hard to convince people otherwise. If someone is absolutely assured of the validity of something, with no room for error- that is when your bullshit meter ought to start buzzing.)
In the end, the best way to actually research something is to go through and read the primary sources for yourself- that is, the body of scientific research that exists on the matter. Unfortunately, as I mentioned above, the way to actually do this isn’t really taught in schools, and it is intimidating for someone who isn’t in a scientific field to try and do this.
So here are a couple of tips for finding good sources, and weeding out informative sites from the “just-so” sites:
- Credentials of the author. This is a bit shaky, because even ‘highly educated’ folks are susceptible to all kinds of cognitive biases, and that ever-alluring scent of money. I think even the fellow who argued that humans evolved from chimp-pig hybrids actually has a PhD. (I’m not going to link to it, but if you want to read a lot of terrible science, look it up.)
- Actually linking to sources. As opposed to just telling you things as though they are established facts, good websites will link you to published research that supports their claims, usually with a “references” section. Reference sections that only contain links to other websites, pro-whatever books, et cetera don’t count.
- Sources actually have to do with what they’re talking about. Anyone can link to a scientific paper. At least read the abstracts.
- Lit reviews. Literature reviews are published scientific papers that collect and analyze a lot of published research to come to a conclusion. These are great tools for those who might not feel confident searching through primary sources. They’re not infallible, of course, but they’ll leave you much better informed than most sources you find through a basic google search would. Good online articles will more or less be more colloquial forms of lit reviews.
It irritates me to no end how much easier it is to find resources on writing literature reviews than it is to find resources on just finding literature reviews. Look, a lot more people are going to need research than are actually going to be doing research, so why is our educational system so focused on teaching how to write scientific papers rather than teaching how to read them? Grump grump grump.
Anyway, the Annual Reviews website has a searchable database of a great number of lit reviews, so it is a good place to start. Many research databases have a “document type” search refinement option, and you can usually check a box saying lit review, bibliography, meta-analysis, etc., etc.
If you want to search for literature reviews in google scholar, a good way to do this is to add the phrase “intitle:(literature review OR survey of literature OR bibliography)” after your keywords. Just remove the quotes.
For example, I decided to search for lit reviews of research on video games and got these results. You’ll probably have to tweak your keywords quite a bit, but it can be a start.
Again, lit reviews aren’t infallible, and it’s good to read a lot of them, and to read a lot of the literature they are citing to boot. But it is at least better than relying on an unsourced online article. You probably realize this, but anyone can write anything on the internet.
So Published Research is Flawless, right?
It hurts my scientific little heart to say this, but there is such a thing as Bad Science. In fact, there is such a thing as Catastrophically Bad Science.
This doesn’t mean that science as an institution is a complete failure; it just means that, again, we have to work a little bit harder during our research. I’m sorry, them’s the breaks.
Differentiating between Bad Science and Good Science is an art unto itself and a whole other article which I just don’t want to write at the moment. At least this is taught to some extent, in college, if you managed to take a research methods class. (“Small sample sizes!” still rings around wildly in my head.)
But aside from bad science itself- which does happen, but does not invalidate the good science that exists- I think that there is a problem with public misinterpretation of science.
If you are involved in science at all, you are rolling your eyes, because yes, obviously, there is a BIG problem there. The biggest problem is in how the media itself portrays scientific results, because most news media now is focused on getting as many views or clicks as it can. So it is going to try to make that research seem sensational.
In this study, a group of online reviewers and physicians surveyed articles on medical science that appeared in popular news media for seven years straight. They found, unsurprisingly, that in the vast majority of cases, the scientific research was covered poorly. (I love this paper because it has “The Tyranny of the Anecdote” as the title of a subheading.)
For 7 years, our media watchdog project has established that health care news stories often emphasize or exaggerate potential benefits, minimize or ignore potential harms, and ignore cost issues. (from the linked paper- and, as we like to say in the science community, NO SHIT)
News stories tend to downplay the costs or limitations of new medical findings, and they especially tend to forget to compare them to older findings and medical methods (“New test detects breast cancer 65% of the time! Old test detected it 63% of the time!”).
The dreaded “correlation is not causation” mantra also applies to these stories. People who eat raw kale are more likely to get hemorrhoids with x, y, and z contributing factors? The news will report this as “EATING KALE CAUSES ASS BOILS.”
Even if the media does a good job of accurately reporting scientific advances (I’m told it can happen), there are still problems, because people still don’t know how science is actually applied. I’ve faced quite a few of these issues myself, from people commenting on my writing, usually along the lines of “You said that x animal does y because of z research, but I’ve observed q happening! The research is wrong!”
See above for the issues with those pesky anecdotes- but this isn’t quite the same as an anecdote, because this could be your personal experience, something that you’ve observed or even experienced and that you know is true. How can you believe in research that says it isn’t?
You Are Nothing But a Number
I once knew a fellow who was very fond of reading science news headlines (and little else). One morning he ran up to me and stated, “Koryos! You’re left-handed, aren’t you? I just read about this paper that said left-handed people are more likely to be serial killers!”
“Ok,” I said, “but I am not a serial killer, so that study actually has no application to me.”
He was very confused, and I don’t blame him. Statistics are funny things.
Let’s imagine that a researcher wanted to study whether or not it was common for humans to like eating chocolate. She gets together a pretty big sample size across the US for her survey- 1,000,000 people! Great job!
The researcher tallies up her results, and finds that when asked the question, “Do you like to eat chocolate,” 99% of respondents said yes. Those are pretty compelling results. The media naturally latches on to her study and reports that “NORMAL HUMANS LIKE CHOCOLATE.”
Now, if you are like me, you are in the 1% that doesn’t particularly enjoy eating chocolate. Does this make you, and I, abnormal humans?
While we may be statistically more uncommon than chocolate-lovers (in the US only, remember) that doesn’t automatically imply anything is wrong with us OR with the research. In fact, if only 1% of a sample of a million humans responded that they didn’t like chocolate, that actually adds up to 10,000 chocolate haters.
If you were at a chocolate-hating rally and you saw all 10,000 of these haters moshing wildly together as the band blasted the timeless classic “Down With Count Chocula,” you would probably feel like that research was completely wrong. Look at this massive crowd of chocolate-haters! How could they be statistically uncommon?
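The head-count trap is just arithmetic: a tiny percentage of a big population is still an enormous number of actual people. (The US population figure below is a rough approximation.)

```python
sample_size = 1_000_000
dislike_rate = 0.01  # the 1% who said they don't like chocolate

# "Statistically uncommon" in the sample...
print(int(sample_size * dislike_rate))    # prints 10000

# ...still scales up to millions of real people nationwide.
us_population = 330_000_000  # rough approximation
print(int(us_population * dislike_rate))  # prints 3300000
```

Plenty of people to fill a rally, and yet still a small fraction of the population the research describes.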
The problem with the way we interpret research is that we immediately want to see how it applies to us. If it doesn’t apply to us, or appears contradictory to what we know about ourselves, we often want to dismiss it as wrong. But the fact of the matter is that research is not looking at individuals like you or me. Research is looking at populations. You and I, our experiences are just a blip on the scientific radar.
Why Most Research Doesn’t Apply to Your Daily Life
The eventual point of research is usually to improve our lives (and the lives of others, animals included). And it does get there. But in order to get there, research has to go through several degrees of abstraction.
I think one of the biggest issues in science education now is not teaching students to differentiate between theoretical and applied research. Both are important, and neither can exist without the other, but they can’t be used in the same way.
Theoretical research is where everything starts. You’ve observed that you feel happy after eating tomatoes? Design a preliminary study. After feeding tomatoes to a group of humans, they self-report that yes, all but Grumpy Jimmy over there felt happier afterwards.
This is not enough evidence to say that eating tomatoes will make you a happier person (though the news media will inevitably report it that way). Certainly we can’t draw a solid conclusion from just one limited study. Maybe several more will get published, with varying but mostly positive results (let’s pretend for a minute that publication bias doesn’t exist). A couple mouse studies get approved, and it is shown that mice with tomato added to their diet produce more dopamine, a happy chemical, in their brains. The evidence seems to be stacking up!
You might feel ready to grab a tomato and chow down, but this is actually not yet applicable research. Sure, mice make dopamine and humans self-report feeling happier, but this is in controlled lab conditions with nothing else going on. As you no doubt realize, there are a hell of a lot of other factors going on in your daily life.
This is when theoretical research passes the torch to applied research. Without that theoretical basis, the applied research couldn’t actually claim that there was a link between tomatoes and happiness, but now that there’s good evidence to show that there is, it’s the applied research’s job to… well, apply it to us!
This point in medical and drug science is called a “clinical trial”: the treatment, having gone through rigorous lab testing, now gets put back out into the public environment and tested on the people who actually need it. Then, finally, the research might show whether or not a treatment might help YOU, dear reader.
Why not skip the theoretical bit entirely? Well, aside from differentiating what’s a placebo effect and what isn’t, theoretical research can be good for showing, you know, whether a thing is safe. I’d rather get sick in a lab surrounded by scientists and doctors than on my way to work, right?
Theoretical research also has the benefit of being able to broaden its view: like, with enough experimentation, researchers find the exact chemical in tomatoes that makes people feel happy, and this chemical exists in carrots too! Cool! They can send that off to the applied people now to work on making it actually mean something to us.
Conclusions, More or Less
Guys, research is hard, and everybody gets tempted to take the easy way out. And gosh, are there a lot of news media and independent sites trying to convince us that the easy way is Right and True.
Unfortunately, things tend to be complicated, and there are exceptions, and there are conflicting results, and there are really long, boring papers with lots of numbers in them.
Add this to the fact that you have to be critical of the papers themselves, and it can all get really frustrating.
I think things could get a great deal less frustrating if we were taught as much about critically interpreting science as we were taught about methods and controls and proper citation formats, but there you go. It’s up to us to teach ourselves all this on our own.
All I can say is that once you do, things don’t get less frustrating- but they can get, strangely enough, much richer and more rewarding.
To view a list of all my articles, head to the Nonfiction section.