Creating better coronavirus science through AI tools

By Tim Sandle     Oct 12, 2020 in Science
Most COVID-19 related research is reliable and proven to be accurate, or is at least part of an iterative process whereby new research builds on current findings. There are, however, examples of bad research. Could AI have filtered out these examples?
The desire to have as much information about the novel coronavirus published as quickly as possible, given the level of scientific and public interest, has led in part to some research being published that is not robust. This includes research that has not been subject to the usual rigors of peer review. Sometimes findings are released as 'preprints', meaning research pending review (as opposed to bypassing review altogether). While preprints can be interesting, they are often subject to changes and, in some cases, may never be published at all.
Perhaps the most serious case involved French researchers who reported that a combination of hydroxychloroquine and azithromycin could provide a new, and effective, treatment option for those infected with the SARS-CoV-2 virus. The original paper was submitted to Cell Research on 25 January 2020 and accepted on 28 January, after only three days in peer review.
Based on the French findings, the U.S. Food and Drug Administration (FDA) granted an emergency use authorization that led to the drug combination being rolled out as a treatment for patients. Later, more detailed research raised serious concerns about the efficacy of the hydroxychloroquine and azithromycin combination therapy, leading the FDA to reverse its decision.
On this subject, Dr Angela Rasmussen, a virologist and associate research scientist at Columbia University's Centre for Infection and Immunity, tells the magazine The Biologist: "The pace at which data is coming out is fantastic but there have also been some studies that have either been just not great, or misinterpreted. And I see a lot of problems with press reports and press releases being treated as if they are data."
How can more reliable research be identified? According to Professor Tudor Oprea (University of New Mexico), in an interview with Biotechniques, artificial intelligence (specifically machine learning) could provide new tools to assist with the evaluation of new peer-reviewed papers.
The researcher explains that text mining (the rapid examination of millions of pages so that specific patterns can be identified) is the solution to filtering out ill-thought-out claims or poorly executed research.
What is proposed is a digital, machine learning-based method that can assess heterogeneous data in a short period of time, providing interpretation at a scale and speed well beyond what is possible through human analysis.
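As a rough illustration of what such text mining involves (the patterns, names, and scoring below are assumptions made for this sketch, not Professor Oprea's actual model), a screening tool could scan paper abstracts for methodological signals and red flags, then score them at machine speed:

```python
import re

# Illustrative "methodological signal" patterns. These heuristics are
# assumptions for this sketch, not the published model.
RIGOR_PATTERNS = {
    "sample_size": r"\bn\s*=\s*\d+",
    "confidence_interval": r"\b95%\s*(CI|confidence interval)\b",
    "randomized": r"\brandomi[sz]ed\b",
}
RED_FLAG_PATTERNS = {
    "preprint": r"\bpreprint\b",
    "tiny_sample": r"\bn\s*=\s*[1-9]\b",  # single-digit sample size
}

def screen_abstract(text: str) -> dict:
    """Scan one abstract for rigor signals and red flags, returning a crude score."""
    hits = {name: bool(re.search(pat, text, re.IGNORECASE))
            for name, pat in RIGOR_PATTERNS.items()}
    flags = {name: bool(re.search(pat, text, re.IGNORECASE))
             for name, pat in RED_FLAG_PATTERNS.items()}
    # Crude score: each rigor signal adds a point, each red flag removes one.
    return {"signals": hits, "flags": flags,
            "score": sum(hits.values()) - sum(flags.values())}
```

A production system would learn such signals from large labelled corpora rather than rely on hand-written rules, but the principle is the same: matching text against patterns across millions of pages far faster than human readers could.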
Professor Oprea has outlined his model in the journal Nature Biotechnology, in a paper titled “Artificial intelligence, drug repurposing and peer review.”