The most common problem with understanding science is cherry picking: selecting a handful of studies that agree with us instead of weighing the (often larger) group of studies that don't.
This problem is common because you have to be aware of ALL of the studies on a topic in order to know whether you're cherry picking in the first place - and most people don't read all the studies on a topic.
Let’s say I want to know if butter is healthy for me. So I do a quick search on PubMed (the US government's database of biomedical studies) for butter. I find a study that tells me butter is unhealthy. Problem solved! But not so fast - it doesn’t work like that.
Cherry picking is a problem because this single study may not be representative of the entire body of data. Let’s say that there are 10 studies out there that all study butter in a similar way. Maybe 8 of those say that butter has no impact on health, 1 of them says that butter is healthy, and 1 of them says that butter is unhealthy. (This isn’t the actual case, but I’m making it up as an example.)
When we look at all the data, what matters most is the agreement of the studies.
If one study says one thing, but there are five times as many studies saying the opposite, it’s probably the case that those five are correct and that the one that said otherwise is an outlier - a piece of data that isn't in agreement with the rest. Outliers aren't necessarily errors, but they can confuse and trick us if we treat them as representative.
In any study, data won’t be uniform. Human beings have quite a bit of genetic and environmental variety, so it’s possible that some people’s bodies react differently to butter than others.
What may be good for some, isn’t so good for others. Some people may react more strongly (positively or negatively) to consumption of butter than others. Maybe, for example, some people have allergies or food sensitivities that make them react very poorly to butter consumption, but this can’t be generalized to other people (who can eat butter without any intestinal issues).
The idea of the study isn't to find the outliers for whom butter is particularly good or bad - it's to find what generally applies to the majority of people.
Now if I were to run just one study on butter instead of ten, with just a few participants, and all of them happened to have allergies or food sensitivities that made butter harmful to their stomachs, my data would say that butter is universally bad for all people - a clear mistake. But if I were to run that same study again with different people, I’d likely get more average results.
The more data there is on a subject, the better. If I run a study where I give ten people a lot of butter for two months, this isn’t going to be as powerful as a study that gives a hundred people a lot of butter for two months, or a thousand people.
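To see why sample size matters, here's a toy simulation (made-up numbers, not real butter data): we pretend butter has zero true effect on health, then run many simulated "studies" of different sizes and watch how much the results scatter.

```python
import random
import statistics

random.seed(42)

def run_study(n, true_effect=0.0, noise=1.0):
    """Simulate one study measuring a 'health effect' in n people.

    Each person's response is the true effect plus individual
    variation (genetics, environment, etc.). Returns the study's
    average observed effect.
    """
    responses = [random.gauss(true_effect, noise) for _ in range(n)]
    return statistics.mean(responses)

# Run 100 small studies (10 people each) and 100 large studies
# (1000 people each) against the same true effect of zero
# ("butter does nothing").
small = [run_study(10) for _ in range(100)]
large = [run_study(1000) for _ in range(100)]

# Small studies scatter far more widely around the truth, so any
# single small study is much more likely to falsely "find" an effect.
print("spread of small-study results:", round(statistics.stdev(small), 3))
print("spread of large-study results:", round(statistics.stdev(large), 3))
```

Even though every simulated study measures the exact same (zero) effect, the ten-person studies swing wildly in both directions - which is exactly how a lone small study can end up claiming butter is "bad" or "good" purely by chance.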
The problem is, there’s limited funding out there for scientific research, and the more people involved, the more it costs. So, for the sake of cost, scientists are often forced to run studies with fewer people than they’d like. This doesn't mean that these studies are "bad" - just that they need to be repeated before the results average out.
This can be a real problem. If there’s only one study on butter in the world, and it’s only conducted on ten people, this isn’t really a big enough data set to get an idea of what the average should be. Now, say that you picked up that study, it says that butter is bad for you, and you start telling everyone that butter is bad for them. The reality isn’t that butter is bad for you, it’s just that butter requires more research.
So sometimes, there’s simply not enough research out there to really give you an idea of what the average impact of what you’re studying is. Sometimes, there is plenty of research out there, but you only know of a small fraction of the research, so you make generalizations based on only a part of the data.
Most of us don’t go around reading every single study on a topic. I certainly haven’t bothered to read all the data on butter’s impact on health, for example.
When you don’t know the whole picture, it’s easy to “guess” about what the rest of the picture looks like based on incomplete knowledge, but it’s also very easy to get it wrong. So for the sake of saving ourselves time, most of us dig only very briefly into the research, because we have busy lives and other shit to do.
This is combined with another problem: confirmation bias. Confirmation bias is a kind of cognitive bias, meaning that it’s an inherent problem built into the way that our brains work.
In general, we tend to surround ourselves with people who think the way we do, with people who believe similar things to what we believe, and with information that confirms the things that we already believe.
This is a kind of stress-saving mental shortcut - after all, spending time with someone we get along with is infinitely preferable to spending time with someone we don’t.
If we were constantly bombarded with information that conflicted with our current views (correct or otherwise), we’d get mad and frustrated and likely look to find ways to avoid it. So while some of us are better at seeking out “true” information, even when it’s sometimes uncomfortable or unpleasant, most of us just tend to confirm the things we already believe as a way of minimizing stress and maximizing happiness.
Likewise, the internet is a place where information is more available than ever. There's good and bad quality information out there - but we never experience all of it. We tend to select the information that agrees with us, and don't bother clicking on links that look uninteresting. This is good at helping us cut down on information overload, but not necessarily best at giving us a complete picture of all the data.
Confirmation bias isn’t necessarily a bad thing, but it is something that we have to be aware of when appraising the evidence for or against something that we don’t know much about. Everyone suffers from confirmation bias, including the smartest people on earth. If you think that you don't suffer from confirmation bias - well, that's confirmation bias at work.
For most of us, we hear something about the way that the world is when we’re younger. Maybe when I was five, my mother told me not to eat so much butter because I liked the taste of it and ate a lot. “It’s unhealthy!” she might have said.
Now, this isn’t very good “evidence” in a strict sense. It’s just that someone told me something which may or may not be true. But I was five! I couldn’t have been expected to understand the way scientific evidence works, or that my mother’s advice may have been partially inaccurate or only situationally accurate. So I believed my mother because she was an authority figure (more on this later) and I internalized that piece of advice.
Now, twenty years later, say I’m trying to learn a lot more about nutrition. I’ve always believed butter is bad for me, even though I never had any strong basis for it. Now that I’m looking for evidence about the healthiness of butter, I already believe that butter is bad for me, so I tend to believe only the evidence that tells me it’s unhealthy.
Maybe I do a quick google search, sift through a mess of poor-quality evidence and conflicting opinions in major publications, and pick out only the stuff that confirms my biases. Now I’m only further convinced that butter is unhealthy, even though I’ve never touched on a complete understanding of all the scientific data! This is how "scientific" knowledge gets formed for most of us.
In short, the biggest problem most of us face is that there’s a lot of data out there, and so we tend to focus only on the part of it that confirms our biases - this is the problem of cherry picking. In some cases, data that exists is incomplete or of poor quality, making it hard to really understand what’s going on with any degree of certainty.
This is why, for example, you might see a headline that says that some kind of food is “good” for you, and then a year later a new headline that says that actually, it’s “bad” for you - because reporters will simply cherry pick new and exciting sounding studies, report them with exaggerated confidence as to the value of their findings, and then change their tune the instant a new study comes out!
This also leads the common person to think that science doesn’t actually “mean” anything, or doesn’t agree on anything, because popular reporting on the subject completely misses the point of individual studies and the overall “body of evidence”. A single study may not mean anything, but a lot of studies taken together mean a lot more.
Luckily, there are a few ways that you can get around this problem.
The first is meta-analyses. Meta-analyses are essentially studies of studies - instead of taking the data from a single study, they pool the data from many similar studies and compare it all together. This gives a much better picture of the overall body of evidence, and helps eliminate the problem of cherry picking. A meta-analysis is like a regular study, but beefed up and superpowered.
Meta-analyses are not perfect - sometimes, the scientists assembling them may use flawed or misleading methods to draw a conclusion that the data doesn’t really support.
Another problem is that not all topics have enough existing data to even make a meta-analysis - if there’s some “hot new supplement” on the market with just a single study on it, then there clearly aren’t enough studies to pool.
It's still possible for the makers of a meta-analysis to cherry pick studies that agree with them and ignore those that don't. If someone with an agenda to prove or disprove a certain belief makes a meta-analysis, they may or may not play fair.
But when they exist, meta-analyses generally do a solid job of summarizing the body of research and allowing you to draw solid conclusions.
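The core idea of pooling can be sketched in a few lines. This is a simplified fixed-effect meta-analysis with entirely made-up study numbers: each hypothetical study reports an estimated effect and a variance (its uncertainty), and precise studies get more weight.

```python
import math

# Hypothetical results from five small butter studies: each reports an
# estimated effect on some health marker and the variance (uncertainty)
# of that estimate. All numbers are invented for illustration.
studies = [
    {"effect": -0.10, "variance": 0.20},
    {"effect":  0.05, "variance": 0.15},
    {"effect":  0.30, "variance": 0.50},  # a small, noisy outlier
    {"effect":  0.00, "variance": 0.10},
    {"effect": -0.05, "variance": 0.12},
]

# Fixed-effect pooling: weight each study by the inverse of its
# variance, so precise (usually larger) studies count for more.
weights = [1.0 / s["variance"] for s in studies]
pooled_effect = sum(w * s["effect"] for w, s in zip(weights, studies)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

print(f"pooled effect: {pooled_effect:.3f} +/- {pooled_se:.3f}")
```

With these numbers the pooled estimate lands near zero: the lone noisy study claiming a big positive effect gets downweighted, and the combined estimate is more precise than any single study - which is exactly why a meta-analysis beats picking one study out of the pile.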
Another way to get a better overview of the data is to subscribe to a research review. Typically, a research review is a publication in which a scientifically literate person (or a group of them working together) analyzes current research and provides feedback.
They might say why certain studies are good or bad, or what they contribute to the current body of knowledge. These research reviews make it easy to get a quick overview of the research without doing all the time intensive and costly work of digging through boring scientific journals yourself.
When it comes to fitness, there are a handful of options.
MASS and the Strength and Conditioning Research Review are both publications which do overviews of current training literature. Alan Aragon’s Research Review does a good job of overviewing current nutrition literature.
At a low monthly rate, these publications greatly reduce the work it takes to stay caught up on existing research, and they cost far less than subscribing to a number of scholarly scientific journals. In short, it’s a win-win.
There’s a lot of data out there, and most of us don’t bother digging through all of it for the sake of convenience - after all, we aren’t scientists who need to know this for a living, and this is a time intensive and costly process.
When you don’t have an understanding of the entire body of research, it’s easy to make mistaken conclusions based on limited data.
Sometimes there’s not enough existing data to draw strong conclusions. If there’s a limited number of studies on a subject, they may “suggest” or “point” towards a certain outcome, but more study is required to confirm that this outcome is likely to be correct.
Popular reporting on science is highly flawed and tends to report individual studies as if they were a scientific consensus. It also thrives on attention getting headlines, so results are often exaggerated and overblown.
Instead of digging through all the data yourself, meta-analyses can give you a quick overview of it. Even these aren’t perfect, and in many cases they may not exist due to a lack of sufficient data.
Research reviews can give us a quick overview of current research from actual scientists with a better overall understanding of the data. These publications are useful in keeping up with current data without needing to actually go through all of it.
Are you interested in perfecting your deadlift and building legendary strength and muscle? Check out my free ebook, Deadlift Every Day.
Interested in coaching? Inquire here. If you don’t have the money or interest in purchasing long term coaching, consider donating a small monthly amount to my Patreon, which also nets you a copy of my book, the UpLift Method. You can also subscribe to my mailing list, which gets you the free GAINS exercise program for maximizing strength, size, and endurance.