Systematic review development is known to be a labor-intensive endeavor that requires a dedicated team of researchers. Developing a living systematic review (LSR), which is continually updated as new relevant evidence becomes available, presents additional challenges. However, as Thomas and colleagues write in the second installment of the 2017 series on LSRs in the Journal of Clinical Epidemiology, we can make the process quicker, easier, and more efficient by harnessing the power of machine learning and “microtasks.”
Suggestions for improving efficiency fall into two categories: automation (incorporating machine learning to replace human effort) and crowdsourcing (distributing human effort across a broader base of individuals).
From soup to nuts, opportunities to incorporate automation and crowdsourcing into the LSR development process include:
- Continuous, automatic searches that “push” new potentially relevant studies out to human reviewers
- Automatic text classification that excludes ineligible citations with over 99% sensitivity, reducing the number of items that require human screening (an illustrative classifier sketch follows this list)
- Crowdsourcing of study identification and “microtask” screening efforts such as Cochrane Crowd, which, at the time of this blog’s writing, had produced over 4 million screening decisions from more than 17,000 contributors
- Automated retrieval of full text versions of included documents
- Machine-based extraction of relevant data, graphs and tables from included documents
- Machine-assisted risk of bias assessment
- Template-based reporting of important items
- Statistical thresholds that flag when a change in conclusions may be warranted (a minimal sketch of such a trigger appears below)
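To make the screening idea concrete, here is a minimal Python sketch of how high-sensitivity automatic exclusion might work. It is an illustration under assumed conditions, not the authors’ actual pipeline: the tiny corpus, the labels, and the cutoff rule are all hypothetical. The key move is choosing a probability cutoff so that essentially no truly eligible citation is auto-excluded:

```python
"""Minimal sketch (not the authors' actual system) of high-sensitivity
citation screening: score citations with a text classifier, then choose a
probability cutoff on labelled data so that nearly all truly eligible
citations score above it; only those items still need human screening."""
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy titles/abstracts with human include/exclude labels (purely illustrative).
train_texts = [
    "randomized controlled trial of drug X in adults",
    "cohort study of exposure Y and cardiovascular outcomes",
    "case report of a rare adverse event",
    "editorial commentary on screening guidelines",
    "double blind placebo controlled trial of therapy Z",
    "narrative review of treatment options",
]
train_labels = np.array([1, 1, 0, 0, 1, 0])  # 1 = eligible for the review

vec = TfidfVectorizer()
X = vec.fit_transform(train_texts)
clf = LogisticRegression().fit(X, train_labels)

# Set the cutoff at the lowest score among known-eligible citations (in
# practice this would be tuned on held-out data to hit >= 99% sensitivity;
# here we reuse the training set for brevity).
probs = clf.predict_proba(X)[:, 1]
cutoff = probs[train_labels == 1].min()

new_citations = ["pragmatic randomized trial of drug X dosing",
                 "letter to the editor"]
new_probs = clf.predict_proba(vec.transform(new_citations))[:, 1]
for text, p in zip(new_citations, new_probs):
    verdict = "needs human screening" if p >= cutoff else "auto-excluded"
    print(f"{p:.2f}  {verdict}: {text}")
```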
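Similarly, a statistical trigger for updating conclusions could be as simple as re-pooling the evidence whenever a newly identified study arrives and flagging the review when the pooled confidence interval changes its position relative to the null effect. The sketch below uses fixed-effect inverse-variance pooling with made-up effect sizes; the paper does not prescribe this particular rule:

```python
"""Illustrative (assumed) trigger for an LSR update: re-run a fixed-effect
inverse-variance meta-analysis when a new study arrives, and flag the
review if the pooled 95% CI's relation to the null effect changes."""
import numpy as np

def pooled_estimate(effects, ses):
    """Fixed-effect inverse-variance pooling; returns (estimate, 95% CI)."""
    w = 1.0 / np.asarray(ses) ** 2
    est = np.sum(w * np.asarray(effects)) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return est, (est - 1.96 * se, est + 1.96 * se)

def crosses_null(ci, null=0.0):
    return ci[0] <= null <= ci[1]

# Hypothetical log-odds-ratio effects and standard errors from included studies.
effects, ses = [0.30, 0.10, 0.25], [0.15, 0.20, 0.18]
_, old_ci = pooled_estimate(effects, ses)

# A new study surfaced by the continuous search is added to the pool.
effects.append(-0.40)
ses.append(0.12)
est, new_ci = pooled_estimate(effects, ses)

if crosses_null(old_ci) != crosses_null(new_ci):
    print(f"Flag: pooled estimate {est:.2f} (95% CI {new_ci[0]:.2f} to "
          f"{new_ci[1]:.2f}); conclusions may need updating.")
else:
    print("No change in the pooled CI's relation to the null.")
```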
As technology in this field progresses, the traditionally duplicated stages of screening and data extraction may even be taken on by a computer-human pair, combining the ease and efficiency of automation with the “human touch” and high-level discernment that algorithms still lack.
The full manuscript is available from the publisher’s website.