There are many excellent scientific studies based on well-designed experiments that make reasoned claims from the assembled data. While the majority of papers published each year report valid findings and contribute to the body of knowledge, there are, unfortunately, many cases of “bad science” out there.
There are multiple reasons for bad science: poor research, poorly designed experiments, misconduct by researchers, and accidental or deliberate misinterpretation of data. Sometimes this is not the fault of the scientists: the media can take a small observation, or a carefully hedged statement of possibility, and turn it into established fact. A “cure for all types of cancer” is one of the most common (and misleading) headlines.
In another example, a few years ago Mother Jones featured a science article built around the idea that “Gasoline lead is responsible for a good share of the rise and fall of violent crime over the past half century.” Lead is certainly a destructive neurotoxin, but, as Julia Belluz asked in her critique, were the scientists behind the research really claiming that economic factors, like unemployment; social factors, like the rise of gangs; and psychiatric factors were lesser reasons for violent crime? Unlikely.
However, scientists themselves can be guilty of making bold and unsubstantiated claims based on flimsy research. One problem is extrapolation, where an outcome is extended beyond what is reasonably credible. This can sometimes occur with experiments carried out on cells (in the metaphorical Petri dish), where the results of often small-scale studies are transmogrified into claims about the health and well-being of a human being.
Simply put, cell studies are not always good predictors of what will happen in a person. For example, if a study finds that an antioxidant extracted from blueberries lowers the risk of mouse cells in a laboratory turning cancerous, this does not mean that a typical person who sips blueberry juice each morning will not develop cancer. There are many complexities of human physiology that a laboratory study cannot replicate.
In relation to this, some animal studies can act as good predictors of what might happen when a drug in development is administered to a human being. Matters of safety are an example of where this is important (although safety checks went tragically wrong in a 2016 clinical trial in France, where one participant was left brain dead and later died).
At other times, animal studies are not a good predictor of what will happen in people. A satisfactory test of a new drug in mice, for example, does not necessarily mean the same physiological or biochemical reaction will occur in a person. A review by Bracken of scores of animal experiments noted that “many animal experiments are poorly designed, conducted and analyzed.”
Then there are genetic, environmental and lifestyle factors. Considering again “miracle cures,” taking a probiotic in a yoghurt might have a measurable benefit for one person but not for another. This is because the bacteria in the guts of the two people (their ‘microbiomes’) might differ significantly, allowing the probiotic organisms to work in the intestines of one person but not the other. With lifestyle, someone could take as many vitamins as they like (‘proven to work in a clinical trial’), but if that person is a heavy tobacco smoker, the risks from smoking will outweigh any benefit from pill popping.
Good science needs to be repeatable, yet claims made in journals sometimes cannot be replicated. One of the reasons for publishing scientific papers is so that other qualified scientists can repeat the research, and the experimental claims do not always stack up. One group from Stanford University recently attempted to reproduce the findings of 100 psychology papers and managed to achieve similar results for only 39 of the studies, meaning the findings of around 60 percent of them could not be replicated.
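One reason replication rates can be this low, even without any misconduct, is that many original studies are small and statistically underpowered. The following Python sketch illustrates that point only; the effect size, sample size and significance threshold are assumptions, and this is not the methodology of the reproducibility project itself.

```python
# Illustrative sketch only: with a modest true effect and small samples,
# even genuine findings clear the p < 0.05 bar a minority of the time,
# so honest attempts to repeat them frequently "fail".
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
true_effect = 0.3      # assumed standardized effect size
n_per_group = 25       # assumed subjects per group
n_studies = 10_000

def run_study():
    control = rng.normal(0.0, 1.0, n_per_group)
    treated = rng.normal(true_effect, 1.0, n_per_group)
    return ttest_ind(treated, control).pvalue < 0.05

originals = np.array([run_study() for _ in range(n_studies)])
# Suppose only the "significant" originals get published, then each is rerun once.
replications = np.array([run_study() for _ in range(int(originals.sum()))])

print(f"Original studies reaching p < 0.05: {originals.mean():.0%}")
print(f"Replication attempts reaching p < 0.05: {replications.mean():.0%}")
```

In this toy model both rates equal the statistical power of the design, which is low, so most honest attempts to repeat a genuine but small effect will appear to fail.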
There are also risks of scientific data being biased. Questions of bias often arise when laboratory research is sponsored by a company with a vested interest in the outcome (such as the manufacturer of a medicine). In such circumstances, care needs to be taken when reviewing sponsored findings, even when interests are declared upfront (and if the declaration of interest has been obfuscated, a strong dose of skepticism is in order).
A darker side of “sponsorship” emerged in 2015, when several academics received an unsolicited marketing email from a research company called Cyagen (which produces transgenic mice and stem cells). According to medical doctor Ben Goldacre, the email was headed “Rewards for your publications”. In the message, Cyagen stated: “We are giving away $100 or more in rewards for citing us in your publication.” It would seem that 164 scientists took up the offer. There is no indication that this led to any bias, or that the added references invalidated the research, but at the very least it leaves an unsavory taste about the objectivity of the referencing.
Another reason for “bad science” is “errors,” both unintended and intended. Back in 2012, the journal Proceedings of the National Academy of Sciences tallied up 2,047 recent retractions of journal papers. “Retraction” means a published paper has been withdrawn by the journal because of a serious flaw. The review found that about 20 percent of the retractions were due to unintended errors (such as a badly performed calculation or anomalous figures of the sort that can be flagged with statistical checks like Benford’s law); more seriously, about 67 percent were retracted because of misconduct, with fraud or suspected fraud accounting for around 43 percent of all retractions.
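For readers curious about what a Benford’s-law screen involves, here is a minimal sketch in Python. The list of reported figures is invented for illustration, and a check like this is only a rough screening heuristic: it can flag suspicious numbers, but it cannot prove error or fraud on its own.

```python
# Minimal Benford's-law screen (illustrative data): compare the leading digits
# of reported figures with the Benford distribution P(d) = log10(1 + 1/d).
import numpy as np
from scipy.stats import chisquare

def leading_digit(x):
    return int(f"{abs(x):e}"[0])   # first digit via scientific notation, e.g. '3.02e+02' -> 3

reported_values = [23.1, 114.7, 9.6, 187.2, 45.3, 131.8, 1.9,
                   302.4, 76.5, 12.8, 228.0, 54.1, 19.7, 160.2]  # invented figures

digits = [leading_digit(v) for v in reported_values]
observed = np.array([digits.count(d) for d in range(1, 10)], dtype=float)
expected = np.log10(1 + 1 / np.arange(1, 10)) * len(digits)

# A very small p-value flags a digit distribution that departs from Benford's law;
# with only a handful of values the test has little power, so treat it as a screen.
stat, p = chisquare(observed, f_exp=expected)
print(f"chi-square = {stat:.2f}, p = {p:.3f}")
```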
A survey discussed in The Atlantic found that 2 percent of scientists openly admitted to having falsified data, and around one third confessed to “a variety of other questionable research practices including ‘dropping data points based on a gut feeling,’ or ‘changing the design, methodology or results of a study in response to pressures from a funding source.’”
Why do this? The most common reason given was peer pressure or the tough competition for scientific jobs. In the science world, the merits of most scientists are judged on finding something remarkable and getting it published in a journal with a high impact factor (which basically means that people actually read it and take notice).
One of the most serious cases of falsified data came to light in 2014 (with a conviction in 2015). Researcher Dr. Dong Pyou Han was imprisoned for 57 months for making misleading claims about HIV research, in which he made out that he had created a vaccine. His research was funded by the U.S. National Institutes of Health to the tune of over $7 million. For several years Dr. Han succeeded in convincing top-notch scientists and grant awarders that his experiments were legitimate; in reality he was passing off adulterated samples of rabbit blood.
Sometimes a single journal or publisher faces the ignominy of retracting a batch of papers. In 2015, Digital Journal reported that the major British science publisher BioMed Central was forced to retract 43 papers due to evidence of faked peer review. Each of the papers was written by scientists and medics based in China.
There are many other examples of ‘bad science.’ Digital Journal regularly runs science articles, and while many of the findings reported are legitimate, it is worth maintaining an air of skepticism. Here is Digital Journal’s checklist for spotting ‘bad science’:
1. Misinterpreted findings: many research briefs and press releases misinterpret findings, and journalists can do so as well. At Digital Journal we strive to provide a link to the original research paper so readers can check the facts for themselves.
2. Correlation and causation: this is a big problem of misinterpretation. Just because A correlates with B in an experiment does not mean A causes B to happen. There are often other factors at play (an illustrative sketch follows this list).
3. Replication: can the results of the study be replicated? Has a different research group attempted this, and, if so, what did they find? If a different conclusion was reached, this undermines the original claims.
4. Unrepresentative subjects: it is important to note how many test subjects were used and whether they were representative of the general population. For instance, a study carried out in Poland on women aged 40-50 years may not be applicable to teenagers living in Utah. Sample size also matters: a study involving 1,000 test subjects will be more robust than one involving just 25 people or animals (a second sketch after this list illustrates why).
5. Failure to use a control group: studies where experimental drug A has not been compared with a placebo are often unreliable. This is because it is unknown whether the results obtained are due to the drug itself or simply to chance.
6. Selective data: it is important to note whether all of the experimental data have been used in the analysis. Sometimes scientists exclude the odd data point for good reason (such as a contaminated test tube), but have they declared what was excluded and why? Related to this, literature-based papers can sometimes ‘cherry-pick’ evidence that supports a certain perspective while ignoring the rest.
7. Blind testing: linked to controls, in an ideal experiment subjects should not know whether they have been given the experimental drug or a placebo.
8. Sensationalized headlines: like the “cure for all cancers.” A good headline should entice the reader in, but it shouldn’t mislead or over-simplify the findings.
9. Conclusions: linked to the headline, does the conclusion actually reflect the data? Sometimes the conclusions drawn in papers are not supported by the data actually gathered.
10. Conflicts of interest: has any company sponsored the study and why? And most importantly, has this been declared?
11. Peer review: has the study been published in a reputable peer-reviewed journal? Peer review doesn’t guarantee a solid and reliable piece of science, but in many cases it helps. Certainly, non-peer-reviewed studies should be regarded with caution: if the scientists behind a study are confident of its reliability, why not send it for peer review?
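As a small illustration of point 2 (correlation versus causation), here is a minimal Python sketch with entirely invented numbers: a hidden third factor drives two quantities, A and B, so they end up strongly correlated even though neither causes the other.

```python
# Minimal sketch for point 2 (all values invented): a hidden confounder C
# drives both A and B, so A and B correlate strongly with no causal link.
import numpy as np

rng = np.random.default_rng(1)
confounder = rng.normal(size=5000)   # e.g. an unmeasured lifestyle factor
a = 0.8 * confounder + rng.normal(scale=0.5, size=5000)
b = 0.8 * confounder + rng.normal(scale=0.5, size=5000)

print(f"Correlation between A and B: {np.corrcoef(a, b)[0, 1]:.2f}")  # roughly 0.7
```

And as a rough illustration of the sample-size point in item 4 (the spread of the measurements here is an assumption), the uncertainty of an estimated average shrinks with the square root of the number of subjects, so 1,000 subjects pin down an effect far more tightly than 25 do.

```python
# Minimal sketch for item 4: approximate 95% margin of error for a sample mean.
import math

def margin_of_error(std_dev, n, z=1.96):
    """Half-width of an approximate 95% confidence interval for a mean."""
    return z * std_dev / math.sqrt(n)

assumed_std_dev = 10.0  # assumed spread of the measured outcome
for n in (25, 1000):
    print(f"n = {n:>4}: margin of error is about ±{margin_of_error(assumed_std_dev, n):.2f}")
```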
Hopefully this list, and the earlier commentary, will help Digital Journal readers examine science findings with a critical eye. Science can get better, and it is; in a follow-up article we’ll look at how this is being achieved.
