Systematic reviews are time-consuming and resource-intensive. Even a “small” review commonly takes several months to complete. Meanwhile, the needs of the systematic review community are shifting: more evidence is being published than ever before, over 3 million articles annually in English alone, and that volume is expected to grow. When research teams cannot add resources (more time, funding, or staff), they must find efficiencies in their processes. This is where artificial intelligence fits in.
Ian Stefanison, Evidence Partners’ Chief Technology Officer, has been working with research teams for over 20 years. He is passionate about helping researchers become more efficient and accurate. His participation in groups, such as the International Collaboration for Automation in Systematic Reviews (ICASR), has enabled him to keep his finger on the pulse of promising new technology to help researchers better manage their workloads, keep stakeholders informed, and disseminate their evidence so it is impactful, relevant, and timely.
One of DistillerSR’s most powerful time-saving tools, continuous AI reprioritization, was recently the subject of a study published in BMC Medical Research Methodology. The study found that continuous AI reprioritization can reduce the title and abstract screening burden for research teams by as much as 80%.
We spoke with Ian about continuous AI reprioritization and its impact on the systematic review process.
Can you quickly explain exactly what continuous AI reprioritization does?
IS: Continuous AI reprioritization is a huge breakthrough. When it is enabled, the system frequently trains a model on your reviewed references and uses it to score all unreviewed references. Your reviewers are always presented with the highest-scoring references first. What this means, in practice, is that in a 10,000-reference screening set, you should find 95% of your relevant references after reviewing about 2,000-3,000 references. The system retrains the model every 20 to 200 references, depending on project size, so it learns quickly and continuously as new information comes in.
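The loop Ian describes can be sketched in a few lines of Python. This is an illustrative toy, not DistillerSR's implementation: a simple word-overlap scorer stands in for the real classifier, and all function names here are invented for the example.

```python
def train_toy_model(reviewed):
    """Build the toy 'model': the vocabulary of references labeled as
    includes so far. (A hypothetical stand-in for a real text classifier.)"""
    vocab = set()
    for text, included in reviewed:
        if included:
            vocab.update(text.lower().split())
    return vocab

def score(model, text):
    """Score a reference by word overlap with the include vocabulary."""
    return len(model & set(text.lower().split()))

def screen(references, labels, retrain_every=20):
    """Simulate continuous reprioritization: retrain periodically and always
    review the highest-scoring unreviewed reference next."""
    reviewed, unreviewed, order = [], list(references), []
    model = set()
    while unreviewed:
        # Retrain on the reviewed set at a fixed cadence (20-200 in DistillerSR,
        # depending on project size; fixed here for simplicity).
        if len(reviewed) % retrain_every == 0:
            model = train_toy_model(reviewed)
        # Present the highest-scoring unreviewed reference first.
        unreviewed.sort(key=lambda t: score(model, t), reverse=True)
        current = unreviewed.pop(0)
        reviewed.append((current, labels[current]))
        order.append(current)
    return order
```

With any reasonable scorer, relevant references cluster at the front of the resulting order, which is why most includes are found after screening only a fraction of the set.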
What is the benefit of using continuous AI reprioritization in a systematic review?
IS: The main benefit is that it can make the screening process much faster, especially on larger projects. Even on its own, continuous AI reprioritization is powerful: references move to subsequent parts of the review sooner, so full-text procurement and data extraction can begin earlier when different people are working on them. We have also created a predictive report that uses the reprioritization data to estimate how many relevant references likely remain, so that reviewers can know, with a high degree of confidence, when they have found 95% of their relevant references. A third benefit is that it enables groups to work through much larger sets of references and execute very broad search strategies, ensuring that no relevant information is missed.
How can we trust it?
IS: For both the prioritization algorithm and the prediction algorithm, we spent quite a bit of time doing internal validation against 143 completed, real-world systematic reviews, where we knew which articles were excluded at the abstract screening level and which made it to full-text screening or beyond. We then simulated the screening process by shuffling the reference list, taking 2% (minimum 25, maximum 200) off the top of the list to train a model, and using that model to re-order the remaining references. We repeated that process until we predicted that 95% of the included articles had been found, then tallied the count and percentage of excluded articles among the references remaining. We used these simulations to optimize the ranking algorithm and to ensure that the prediction algorithm was conservative enough that at least 95% of includes had actually been found whenever we predicted 95%. You do not have to stop when we predict 95%, of course; you can keep screening as long as you like.
The diagonal line represents traditional screening methods while the green line represents the time saved with continuous AI reprioritization.
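Two concrete rules from that validation procedure, the per-round training batch (2% of the list, clamped between 25 and 200) and the 95% stopping check, can be sketched directly. These helper names are invented for illustration; the real prediction algorithm is more involved than a simple ratio test.

```python
def training_batch_size(n_references):
    """Per-round training batch in the validation simulation: 2% of the
    reference list, clamped to the range [25, 200], as described above."""
    return max(25, min(200, round(0.02 * n_references)))

def reached_target(found_includes, total_includes, target=0.95):
    """Stop condition used in the simulation: have we found at least
    95% of the known included articles?"""
    return found_includes >= target * total_includes
```

For example, a 10,000-reference project would train on batches of 200, while a 500-reference project would use the 25-reference floor.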
Do researchers need to change their workflow to incorporate it into their processes?
IS: The short answer is no. There is also little risk in taking the reprioritization feature for a test drive, because the worst case is that your reviewers still review everything, just in a different order than they would have otherwise. That said, there are things DistillerSR users can do to optimize their screening protocols to leverage AI. Without prescribing specifics, the best workflow is whichever one teaches the AI accurate labels the fastest.
In practice, this may mean doing some conflict resolution at the title/abstract level, or taking a little more time to read the initial titles and abstracts. The question to ask is, “Am I 100% sure there is nothing in the title/abstract I can use to exclude this reference?” rather than, “Could this maybe end up being included?” The better the training set, the faster you’ll find all the relevant references.
We have also added AI error detection, where reviewers can easily see references that a human excluded but that the AI would have scored highly as includes. This helps catch human errors and misclicks, and it also keeps the AI learning as efficiently as possible by not training on a true include labeled as an exclude. We recommend running this report regularly to correct human errors that may be confusing the model.
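Conceptually, such a report is a filter over the screening decisions and model scores. The sketch below is an assumption-laden illustration, not DistillerSR's actual API: the function name, data shapes, and the 0.8 threshold are all hypothetical.

```python
def error_detection_report(decisions, scores, threshold=0.8):
    """List references a human excluded but the model scored as likely
    includes, highest score first; these are candidates for double-checking.

    decisions: dict mapping reference id -> True if included, False if excluded
    scores:    dict mapping reference id -> model include score in [0, 1]
    """
    flagged = [ref for ref, included in decisions.items()
               if not included and scores.get(ref, 0.0) >= threshold]
    # Surface the most suspicious exclusions (highest model score) first.
    return sorted(flagged, key=lambda ref: scores[ref], reverse=True)
```

Reviewing the flagged references corrects misclicks and, as Ian notes, prevents the model from being trained on a true include mislabeled as an exclude.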
DistillerSR’s continuous AI reprioritization is just one of the many ways Evidence Partners is helping researchers save time and money on their systematic reviews. It’s a low-risk, powerful system that runs in the background. Finding relevant references 50% faster, on average, lets teams move to subsequent review tasks sooner, saving time and maximizing resources in the process.