AI and Systematic Reviews: What's Real and What's Not

Picture it: you’re embarking on a new systematic review. The research question is timely and important, and you’re excited to see what the evidence reveals.

However, your heart sinks at the thought of the thousands of papers that you and your team are likely to sift through over the weeks and months ahead as you screen the abstracts, extract data from relevant studies, and produce your reports and narrative.

But wait, you think. Didn’t I hear something about technology that can do some of that on its own, without human involvement?


Too Good to be True?

It’s no secret that there is increasing pressure to complete systematic reviews faster and more economically, while maintaining scientific rigor. Accurate and timely healthcare decision-making depends heavily on the efficient identification and synthesis of current evidence.

Artificial intelligence (AI) is being explored and tested for everything from learning to play chess to diagnosing clinical disease. It’s no surprise, then, that the research community has a keen interest in using AI in the systematic review process to help tackle the ever-growing demand for evidence.

As noted by the International Collaboration for the Automation of Systematic Reviews (ICASR), AI capabilities have reached a level at which systems can accurately reproduce some of the human effort involved in conducting a literature review.

Examples of the types of tasks that are currently amenable to automation include:

  • Literature search construction and study identification
  • Citation screening
  • Study mapping
  • Simple data extraction (data collection)
  • Evaluating risk of bias of relevant studies.

The use of literature review software alone (without AI) can improve efficiency by 40–60% and dramatically reduce manual errors. Imagine, then, what bringing AI into the picture could do.

One large organization exploring the possibilities of AI in this context is the European Food Safety Authority (EFSA).

A few years ago, EFSA undertook an initiative to assess the feasibility of implementing machine learning techniques (MLTs; a subset of AI) for the automation of screening of abstracts, data extraction, and critical appraisal for their systematic review processes. In the detailed report of their findings, the authors state: “it can be concluded the [sic] there is definitely an opportunity to use the introduced MLTs for automating the screening of abstracts and full texts steps”.
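To make the abstract-screening idea concrete, here is a minimal, purely illustrative sketch of how a machine learning screener might classify abstracts: a tiny Naive Bayes text classifier trained on past reviewer decisions. The abstracts, labels, and class names below are invented, and production tools (including those EFSA evaluated) use far more sophisticated models and features.

```python
# Minimal sketch of ML-assisted citation screening: a tiny Naive Bayes
# text classifier trained on labeled abstracts. All training data and
# labels below are invented for illustration.
import math
from collections import Counter

def tokenize(text):
    return [w.strip(".,()").lower() for w in text.split()]

class NaiveBayesScreener:
    def __init__(self):
        self.counts = {"include": Counter(), "exclude": Counter()}
        self.docs = {"include": 0, "exclude": 0}

    def train(self, abstracts):
        # abstracts: iterable of (text, reviewer_decision) pairs
        for text, label in abstracts:
            self.docs[label] += 1
            self.counts[label].update(tokenize(text))

    def score(self, text, label):
        # Log-probability of the label given the text, with add-one smoothing
        log_prob = math.log(self.docs[label] / sum(self.docs.values()))
        vocab = set(self.counts["include"]) | set(self.counts["exclude"])
        denom = sum(self.counts[label].values()) + len(vocab)
        for word in tokenize(text):
            log_prob += math.log((self.counts[label][word] + 1) / denom)
        return log_prob

    def predict(self, text):
        return max(("include", "exclude"), key=lambda lbl: self.score(text, lbl))

# Hypothetical training data: (abstract snippet, reviewer decision)
training = [
    ("randomized controlled trial of statin therapy outcomes", "include"),
    ("double blind placebo trial measuring blood pressure", "include"),
    ("editorial opinion on hospital management policy", "exclude"),
    ("news commentary about healthcare funding debates", "exclude"),
]
screener = NaiveBayesScreener()
screener.train(training)
print(screener.predict("placebo controlled trial of statin therapy"))  # "include"
```

In practice, screening tools like this are typically used with active learning: the model ranks unscreened citations, reviewers label the most informative ones, and the model is retrained as decisions accumulate.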

Identifying the Gaps

Although its conclusions about the applications of AI in systematic reviews were promising, the report from EFSA also highlighted the current limitations of the algorithms developed and deployed in the test case studies. For example, there was a risk that some relevant articles would be mistakenly screened out, and some irrelevant ones retained. The report also discussed tasks that are not yet amenable to automation, such as universal data extraction and critical appraisal of selected studies.
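One common mitigation for the risk of mistakenly excluding relevant articles is to calibrate the screening tool for high recall, so that an article is only auto-excluded when the model is very confident it is irrelevant. A hedged sketch of that idea follows; the `threshold_for_recall` helper, the scores, and the labels are all hypothetical, not part of any particular tool.

```python
# Illustrative sketch: choose an auto-exclusion cutoff that preserves a
# target recall of relevant articles on a labeled validation set.
# All scores and labels are invented for this example.

def threshold_for_recall(scored, target_recall=0.95):
    """Return the highest score cutoff that keeps recall of relevant
    articles at or above target_recall.

    scored: list of (model_relevance_score, truly_relevant) pairs."""
    relevant_scores = sorted(s for s, rel in scored if rel)
    # Number of relevant articles we can tolerate falling below the cutoff.
    allowed_misses = int(len(relevant_scores) * (1 - target_recall))
    return relevant_scores[allowed_misses]

# Hypothetical validation data: (classifier score, human label)
validation = [
    (0.95, True), (0.80, True), (0.40, True),
    (0.90, False), (0.30, False), (0.10, False),
]
cutoff = threshold_for_recall(validation, target_recall=1.0)
# Articles scoring below `cutoff` would be auto-excluded; at this
# setting, no truly relevant article in the validation set is lost.
```

The trade-off is explicit: a stricter recall target means a lower cutoff and therefore more abstracts left for humans to screen, which is exactly the pragmatic balance the EFSA report points toward.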

These limitations are among the reasons why adoption of AI in this field has been slow, despite all the buzz and activity. Skepticism about the validity of the tools in development, and about whether they adhere to guidelines such as the PRISMA Statement and the Cochrane MECIR Standards for the conduct and reporting of reviews, is another barrier to adoption.

Acknowledging these limitations, though, can guide the improvement of machine learning tools and algorithms to ensure that AI eventually fulfills its potential in the area of evidence generation.

Future Developments

Looking ahead, AI technology in literature review software could be applied to:

  • Process PDFs into structured text
  • Accommodate the larger variation of linguistic data in a full article
  • Enable facilitated updating of SRs
  • Produce living systematic reviews
  • Allow data/classification reuse
  • Enable auto-alerts in literature surveillance.

In their latest meeting report, the ICASR also noted the need to develop feasible workflows for combining the tools into an information pipeline. This is particularly important for groups seeking to assimilate data from different types of study designs, for example, from preclinical experimental studies in animals to diagnostic test evaluations.

Adopting a Pragmatic Approach

Failing to adapt or innovate in these rapidly changing times creates the risk of becoming obsolete. AI is rapidly becoming a game-changer, whether we like it or not.

For evidence-based researchers, AI has enormous potential as a way to keep pace with the increasing demand for timely evidence. Literature reviews are no longer just for academic research. Just look at the medical device industry, whose manufacturers must now conduct rigorous and thorough literature reviews to comply with new EU regulations. The increased workload for these companies can be staggering, and AI is often hailed as a possible solution.

There are many initiatives underway to develop AI to the point where it can completely automate the systematic review process. Most of these require further development and testing in real-world applications. AI today is most effective when used pragmatically to augment the researcher’s toolbox, not replace it altogether.

Making AI a part of the process, rather than trying to get it to replace the process, will be key to successful adoption going forward. Issues such as transparency and validation still require resolution before AI can be trusted in critical decision-making.

While many AI efforts are going for the moon shot and missing, the key to AI success is incremental: get into orbit first and then build out from there. More specifically, AI can be applied in areas where it adds value without adding undue risk or obfuscation. Many such areas exist today. Let's pursue that low-hanging fruit.

AI can still help you with that systematic review project today; it just isn't ready to tackle it alone (yet).

Related Reading:

Best practices for systematic literature review

Author

Peter O'Blenis

Peter O’Blenis is a co-founder of Evidence Partners and has assembled a collection of best practices and methodologies for using web-based software to streamline clinical research. He believes that well written web-enabled software can solve real-world problems and has presented globally on the topic.