Artificial intelligence (AI) is a hot topic in the literature review world these days. For researchers working on systematic reviews, there are many tasks that are currently amenable to automation and AI. But it’s important to know, from a researcher perspective, that not all AI is created equal. Before committing your organization to a solution that claims it can do it all with the click of a button, you might want to ask a few critical questions to get a better understanding of the way the AI works and how it can be integrated into your existing processes.
Here are four questions to help get the real story on AI:
1. Is the solution seamless?
For many researchers, the main benefit of using AI in their systematic review process is the time it saves. To deliver that benefit, however, AI should integrate as seamlessly as possible into your workflow from end to end. If you are using separate tools to manage references, screen them with AI, extract data, and so on, you could end up manually moving references from one tool to the next as you complete the different steps of your review. That takes time and introduces opportunities for error. The full benefits of AI come from having it embedded in a single, integrated platform where it can be used throughout the entire process.
2. How do I train an AI to answer specific questions?
If you want an AI to answer specific questions (e.g., filtering your references for those relevant to a specific review), you will probably need to train it. Great training data is an important part of what makes an AI genuinely intelligent. One of the biggest concerns many research teams have when they first introduce AI into their process is that training it will be difficult and time-consuming. This is where the integration of your AI into your overall platform becomes critical.
The DistillerSR AI System (DAISY) can be taught new things simply by being shown review work that has already been done by humans. It can then test itself and tell you how good you can expect it to be at that task. For example, suppose you had done a review that asked, “Do the subjects in this study include children?” DAISY can review the references that have been assessed, along with the answers to that question provided by human reviewers. In doing so, it learns to recognize whether a reference involves children. It can then test itself against a subset of the human answers and give you a good idea of how competent it is at that task.
The upshot of this is that you can train AI classifiers using work you have already done and then save these classifiers to provide AI assistance in new reviews.
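DAISY’s internals are not public, but the general pattern described here, training a text classifier from decisions human reviewers have already made and then testing it on held-out answers, can be sketched in a few lines. Everything below (the tokenizer, the naive Bayes model, the example references) is purely illustrative, not DistillerSR’s implementation:

```python
from collections import Counter
import math

def tokenize(text):
    return text.lower().split()

def train(labeled):
    """labeled: list of (text, label) pairs from a completed human review."""
    counts = {"include": Counter(), "exclude": Counter()}
    totals = Counter()
    for text, label in labeled:
        totals[label] += 1
        counts[label].update(tokenize(text))
    return counts, totals

def predict(model, text):
    """Naive Bayes with add-one smoothing over the two labels."""
    counts, totals = model
    vocab = set(counts["include"]) | set(counts["exclude"])
    best_label, best_score = None, float("-inf")
    for label in counts:
        score = math.log(totals[label] / sum(totals.values()))  # log prior
        denom = sum(counts[label].values()) + len(vocab)
        for tok in tokenize(text):
            score += math.log((counts[label][tok] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# "Training" is just showing the model work humans have already done:
past_review = [
    ("study of children aged 5 to 12", "include"),
    ("pediatric children cohort outcomes", "include"),
    ("adult patients over 65", "exclude"),
    ("elderly adult population trial", "exclude"),
]
model = train(past_review)
```

With a real review’s worth of labeled references, the same train-then-self-test loop applies: hold back a subset of the human answers, score the model against them, and you have an estimate of how competent the classifier is before letting it assist on new work.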
3. Does it learn on its own?
One of the benchmarks of a true AI solution is its ability to learn. It’s called “machine learning” for a reason. In simple terms, machine learning works by observing human feedback and making predictions based on the information found in the data. Consider the autocorrect feature on your smartphone: if you spell a certain word a certain way enough times, the phone’s AI system will pick up on it, learn your behavior and start to automatically spell it that way for you. That’s machine learning!
Learning behavior is seen in tools and features such as DistillerSR’s Continuous AI Reprioritization. This feature runs in the background while you screen references and learns the characteristics of included and excluded references by watching the choices you make. It can then apply this knowledge to continuously move the references that are most likely to be included to the top of your screening list. It trains itself quickly and can be used in any type of project with minimal risk. By moving the most relevant references to the top of the screening list, researchers will save time by finding their included studies faster.
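The algorithm behind Continuous AI Reprioritization is not published, but the behavior described above, watching include/exclude decisions and continuously bubbling likely includes to the top of the queue, can be illustrated with a deliberately crude scoring function. Token overlap here is a stand-in for whatever model DistillerSR actually uses:

```python
def relevance_score(text, included, excluded):
    # Crude stand-in for a learned model: token overlap with references
    # the reviewer included, minus overlap with references they excluded.
    toks = set(text.lower().split())
    inc = sum(len(toks & set(t.lower().split())) for t in included)
    exc = sum(len(toks & set(t.lower().split())) for t in excluded)
    return inc - exc

def reprioritize(queue, included, excluded):
    # Re-sort the remaining references so likely includes surface first.
    return sorted(queue,
                  key=lambda t: relevance_score(t, included, excluded),
                  reverse=True)

# As the reviewer screens, every decision refines the ordering:
included = ["children asthma study"]
excluded = ["adult diabetes trial"]
queue = ["adult hypertension trial", "children vaccine study"]
queue = reprioritize(queue, included, excluded)
```

Rerunning `reprioritize` after each decision is what makes the prioritization “continuous”: the ordering keeps adapting as the reviewer’s include/exclude signal accumulates.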
4. Does it produce consistent results?
Reproducibility is a hallmark of transparent, evidence-based research. When utilizing an AI solution to help with systematic review tasks, it’s important that the AI produces results that can be replicated consistently and validated completely. Otherwise, how can you trust the results it produces?
An effective AI solution enables users to test its precision and accuracy, so researchers can use it with confidence and, just as importantly, get exactly the same result every time it is run. Integrated solutions also make it possible to run an AI in parallel with human reviewers and compare the decisions made by each. Doing so lets researchers validate the AI's accuracy and assess how far they can trust it to make decisions autonomously.
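The parallel human-versus-AI comparison boils down to simple agreement statistics. The helpers below are a generic sketch, not DistillerSR's API; note that recall on the "include" label usually matters most in screening, because a missed includable study is the costly error:

```python
def agreement(human, ai):
    """Fraction of references where the AI matched the human decision."""
    return sum(h == a for h, a in zip(human, ai)) / len(human)

def include_recall(human, ai):
    """Of the references humans included, how many did the AI also include?"""
    pairs = [(h, a) for h, a in zip(human, ai) if h == "include"]
    if not pairs:
        return 1.0
    return sum(a == "include" for _, a in pairs) / len(pairs)

# Parallel decision lists from a validation run (hypothetical data):
human = ["include", "exclude", "include", "exclude"]
ai    = ["include", "exclude", "exclude", "exclude"]
```

In this toy run the AI agrees with humans on 3 of 4 references but catches only half of the human includes, exactly the kind of gap a parallel validation pass is meant to expose before the AI is trusted to decide autonomously.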
How DistillerSR does AI better
Applying AI to a systematic review is not about pressing a button and magically having the review done for you (sadly). It’s about using AI in ways that enable researchers to focus on science rather than administrative tasks. DistillerSR takes a highly pragmatic approach to using AI that produces tangible results with minimum preparation or configuration.
By automating certain tasks throughout the entire systematic review process, DistillerSR helps researchers save time and reduce the risk of errors in many different stages of the review. Rather than applying multiple standalone AI tools, which would be cumbersome and error prone, DistillerSR provides a complete end-to-end literature review platform that is customizable, flexible, and easy to use.
Want to learn more about the AI in DistillerSR? Request a free live demo today!