Attrition: The big problem in experiments using mice and rats

Gizmodo reports that a team of researchers examined a number of scientific papers and noticed a startlingly common problem: the number of rodents identified at the beginning of a paper often did not match the number given at the end.

The paper, “Reproducible Research Practices and Transparency across the Biomedical Literature,” was published Monday in the journal PLOS Biology. The group was led by Ulrich Dirnagl of the Charité Universitätsmedizin in Berlin, Germany.

The problem of fluctuating numbers is due to attrition. In a scientific context, this is when rodents die or get sick for reasons unrelated to the study, or when they behave so erratically that they become impossible to study. Attrition becomes a major problem when there is too much of it, when it goes unreported, or when animals are excluded selectively, otherwise referred to as biased attrition. The latter is not necessarily deliberate deception; it often happens when researchers expect to see one kind of result and write off an inconvenient animal as a random death or something else.

To see what effect the different types of attrition had on experimental results, the group simulated studies using eight treated animals and eight untreated animals as a control. They found that too much attrition can lead to more false negative results, because such studies start with a small number of animals; when too many of those animals die, the statistics quickly become skewed.

Biased attrition, meanwhile, can produce far more false positive results. When “outliers” are selectively ignored, the positive detection rate can jump from 37 percent to a staggering 80 percent.
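
To make the two failure modes concrete, here is a minimal Monte Carlo sketch in Python. The effect size, noise level, and drop rules are illustrative assumptions, not the parameters of the actual simulations in the paper: random attrition erodes statistical power and so breeds false negatives, while selectively discarding “outliers” manufactures false positives out of pure noise.

```python
# Illustrative sketch only: effect size, noise, and drop rules are
# made-up assumptions, not the PLOS Biology paper's actual parameters.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
N, TRIALS, ALPHA = 8, 10_000, 0.05  # 8 animals per group, as in the study

def significant(a, b):
    # Two-sample t-test; True when p < ALPHA.
    return ttest_ind(a, b).pvalue < ALPHA

# Scenario 1: a real treatment effect exists (assumed here to be one
# standard deviation), but animals are lost at random. Power drops,
# so false negatives rise.
full, random_loss = 0, 0
for _ in range(TRIALS):
    treated = rng.normal(1.0, 1.0, N)
    control = rng.normal(0.0, 1.0, N)
    full += significant(treated, control)
    # Lose 3 of 8 animals per group, purely at random.
    random_loss += significant(rng.choice(treated, N - 3, replace=False),
                               rng.choice(control, N - 3, replace=False))
print(f"detection rate, no attrition:     {full / TRIALS:.0%}")
print(f"detection rate, random attrition: {random_loss / TRIALS:.0%}")

# Scenario 2: no real effect at all, but the two lowest treated values
# and the two highest control values are written off as "outliers".
# False positives rise sharply.
biased = 0
for _ in range(TRIALS):
    treated = rng.normal(0.0, 1.0, N)
    control = rng.normal(0.0, 1.0, N)
    biased += significant(np.sort(treated)[2:], np.sort(control)[:-2])
print(f"false positive rate, biased attrition: {biased / TRIALS:.0%}")
```

With these made-up numbers the sketch should show the same qualitative pattern the study describes: the detection rate under random attrition falls well below the full-sample rate, while the false positive rate under biased attrition climbs far above the nominal 5 percent.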

The group then looked at randomly selected studies on cancer and strokes and sorted them into three categories: unclear, where it wasn’t easy to tell whether the numbers given in the “methods” and “results” sections matched; matched, where the methods and results numbers were the same; and attrition, where the two sections clearly gave different numbers of test animals. The last category was further divided into “explained” and “unexplained,” because some papers described why animals were eliminated and others did not.

More than half of the papers were “unclear,” followed by “matched” and then “attrition.”

Finally, the group looked at whether the treated or untreated animals fared better in the three categories described above; the treated group fared better in all three. The “matched” papers fared the best for both cancer and stroke, “attrition” came in second for cancer and third for stroke, and “unclear” came in second for stroke and third for cancer. The researchers can’t fully explain why the “matched” category did so well, because they couldn’t rule out that attrition had simply been weeded out before those papers were written.

Gizmodo says the biggest problem is with the “unclear” papers; in this case, the researchers — and therefore anyone else who reads the papers — won’t know for sure if the experiments were flawed.

As The Scientist explained, another troubling finding of the paper was a lack of transparency in scientific data: none of the 441 randomly selected papers fully provided the authors’ data. That makes the results of many papers difficult to reproduce.
