
A 6-Step Infographic For Ending Pseudoscience

Don’t believe every science study you read; sometimes not even its own authors believe it. Here are the issues corrupting good, honest science – and how to fix them.
Pseudoscience caught in the act! This article by TIME stretched the truth of a study that showed flavanols in cocoa are linked to the slowing or reversing of age-related cognitive decline, by reporting that eating chocolate can win you a Nobel Prize.


It’s a dirty little secret in the science community that many published scientific studies aren’t as solid as they appear. As Nobel Prize-winning biologist Thomas Sudhof told PLOS, there are a host of problems with science journals. He boils them down to five:

1. Hidden conflicts of interest between the journal and its reviewers

2. Trivial accountability measures for journals and reviewers

3. Expensive publishing costs and limited journals for authors to publish in

4. A murky, hodge-podge peer-review process

5. Experiments with unreproducible results

Once these studies are published, they land in the media’s unreliable little hands. Some outlets are genuinely confused by the science; others deliberately sensationalize it for publicity gains. Depending on the day and the news outlet, coffee will either kill you or be the secret to eternal life (depending, of course, on which orifice you administer it through). Owning a certain pet can make you infertile. Smelling farts can prevent cancer. Eating chocolate can turn you into a Nobel Prize winner. Watching pornography could make men better weightlifters. The list could, and unfortunately does, go on.

It’s perhaps best said by John Oliver in his excellent report on sham science studies: “In science, you don’t just get to cherry-pick the parts that justify what you were going to do anyway. That’s religion. You’re thinking of religion.”

Much of the information gets dumbed down or selectively sensationalized as it passes from news source to news source, and some of it was dodgy from the start thanks to publicity-hungry scientists. You can kind of understand that (but not entirely forgive it), since their continued funding depends on finding results that are spectacular, even if a little fictional. And yet grant money is apparently pissing down over Aston University in England, where a study concluded that toast falling off a table tends to land butter-side down. This important information was published in the European Journal of Physics.

The five problems Sudhof describes are big, and all of them need to be fixed. If they were, papers published in scientific journals wouldn’t just be more honest; they’d be more varied. More kinds of research would be published – smaller experiments, overlooked topics, even experiments with unfavorable or negative results. All of that would make scientific papers more approachable to the general public, and it would cut down on the pseudoscience that tries to explain the actual science and ends up confusing everyone.

So is there a way to fix those five problems? You bet! At least from the scientific end (the media is another kettle of fish). Sudhof offers six easy tips scientists can use to fix their publication problems and get the public interested in their work:

Credit: Laurie Vazquez/Big Think

1. Post research to preprint servers before publication, giving researchers time to improve their work

When a scientist runs an experiment and has a significant result to report, their first step is to write it all up. Their second step is to find a journal to publish in. That’s an enormous pain for many reasons, but one of the biggest is that every journal uses a different submission format and its own publishing workflow. Posting the manuscript to a preprint server first sidesteps much of that: researchers can share their work, gather feedback, and keep honing their results instead of futzing with formatting. Cold Spring Harbor Laboratory’s bioRxiv already does this for the life sciences. Hopefully more platforms follow.

2. Clarify review forms to give workable feedback to authors

Because each journal has its own submission format, it also has its own publishing process. That means journals use different methods to review papers, and those methods often rely on forms that are “cumbersome or insufficient to provide thoughtful and constructive feedback to authors,” Sudhof explains. Streamlining those forms would cut down on the back-and-forth between researcher and journal, again letting authors focus more on clarifying their work than on formatting it.

3. Train reviewers and editors so that new and established reviewers are on a level playing field

Journals have a variety of people reviewing submissions. Some were trained decades ago; some are brand-new to reviewing. None of them have a standardized review process that tells them what to look for. Investing in training would help them assess papers fairly and give constructive feedback to researchers.

4. Reduce the complexity of experiments to make the results easier to reproduce

“Many experiments are by design impossible to repeat,” Sudhof writes. “Many current experiments are so complex that differences in outcome can always be attributed to differences in experimental conditions (as is the case for many recent neuroscience studies because of the complexity of the nervous system). If an experiment depends on multiple variables that cannot be reliably held constant, the scientific community should not accept the conclusions from such an experiment as true or false.”

5. Validate the methods of the experiment

Sudhof again: “Too often, papers in premier journals are published without sufficient experimental controls—they take up too much space in precious journal real estate!—or with reagents that have not been vetted after they were acquired.”

6. Publish ALL results, not just ones that support the conclusion you want to make

Journals are a business, and as such they tend to publish results that will encourage people to buy them. In practice, that means focusing on experiments with positive results. Sudhof takes particular issue with this, citing the “near impossibility of actually publishing negative results, owing to the reluctance of journals—largely motivated by economic pressures—to devote precious space to such papers, and to the reluctance of authors to acknowledge mistakes.” However, not all journals are like that. PLOS ONE lets scientists publish “negative, null and inconclusive” results, not just ones that support the hypothesis. That allows for a more comprehensive understanding of an experiment, and negative results can sometimes provide more helpful data than positive ones. Hopefully more journals follow suit.

By taking these six steps, scientists would make their results clearer to the public. That would make discoveries easier to understand, help increase scientific curiosity, and cut down on misinformation. It would also force scientists to communicate in plain English, which would make a serious dent in the amount of pseudoscience we hear on a daily basis. Physicist and renowned skeptic Richard Feynman explained it this way: “Without using the new word which you have just learned, try to rephrase what you have just learned in your own language.” Pseudoscience explanations are larded with jargon and often can’t be restated in plain English; take away the jargon and the explanation falls apart at the seams. Actual science can – and should – do better.

Plus, the sooner pseudoscience goes away, the happier – and smarter – we’ll all be. The ball’s in your court, scientists. Run with it.

