Thursday, December 3, 2020

Assessing the Reliability of Recently Developed Risk of Bias Tools for Non-Randomized Studies

Risk of bias is one of the five domains considered when assessing the certainty of evidence across a body of studies, and is the only domain that must first be assessed at the individual study level. While several risk of bias assessment tools exist for non-randomized studies (NRS; i.e., observational studies), two of the most recently introduced are the Risk of Bias in Non-Randomized Studies of Interventions (ROBINS-I, developed in 2016) and the Risk of Bias instrument for NRS of Exposures (ROB-NRSE, developed in 2018). Assessment of the risk of bias in a systematic review on which a guideline is based should ideally be conducted independently by at least two reviewers. Given this scenario, how likely is it that the two reviewers' assessments will agree sufficiently with one another?

In a recently published paper by Jeyaraman and colleagues, a multi-center group of collaborators assessed both the inter-rater reliability (IRR) and inter-consensus reliability (ICR) of these tools based on a previously published cross-sectional study protocol. The seven reviewers had a median of 5 years of experience assessing risk of bias, and two pairs of reviewers assessed risk of bias using each tool. IRR measured agreement within each pair, while ICR measured agreement between the pairs' consensus assessments. Evaluator burden was also assessed by recording the time required to assess each included study and to reach consensus. For the overall assessment of bias, IRR was rated as "poor" (Gwet's agreement coefficient of 0%) for the ROBINS-I tool and "slight" (11%) for the ROB-NRSE tool, whereas ICR was rated as "poor" for both ROBINS-I (7%) and ROB-NRSE (0%). The average evaluator time burden was over 48 minutes for ROBINS-I and almost 37 minutes for ROB-NRSE.
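For readers unfamiliar with the statistic behind these percentages, the following is a minimal sketch of Gwet's first-order agreement coefficient (AC1) for two raters assigning categorical ratings (e.g., risk of bias judgments) to the same set of studies. This is an illustrative implementation of the general AC1 formula, not code from the paper; the function name and example ratings are hypothetical.

```python
from collections import Counter

def gwet_ac1(ratings_a, ratings_b):
    """Gwet's AC1 for two raters assigning categorical ratings to the
    same items: (observed agreement - chance agreement) / (1 - chance
    agreement), with chance agreement based on mean marginal
    probabilities across the two raters."""
    assert len(ratings_a) == len(ratings_b) and ratings_a
    n = len(ratings_a)
    categories = sorted(set(ratings_a) | set(ratings_b))
    q = len(categories)

    # Observed agreement: proportion of items rated identically.
    pa = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n

    # Chance agreement: for each category, use the mean of the two
    # raters' marginal proportions (pi), summing pi*(1-pi)/(q-1).
    count_a, count_b = Counter(ratings_a), Counter(ratings_b)
    pe = 0.0
    for c in categories:
        pi = (count_a[c] + count_b[c]) / (2 * n)
        pe += pi * (1 - pi) / (q - 1)

    return (pa - pe) / (1 - pe)

# Hypothetical example: two reviewers rate four studies.
ac1 = gwet_ac1(["low", "low", "high", "high"],
               ["low", "high", "high", "high"])
```

Unlike Cohen's kappa, AC1 is less prone to paradoxically low values when ratings are concentrated in one category, which is one reason it is often preferred for risk of bias agreement studies.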


The authors note that, overall, ROBINS-I tended to have better IRR and ICR than ROB-NRSE, which may be due in part to poorer reporting quality in studies of exposures. In addition, simplifying the related guidance documents and providing more training for reviewers who plan to apply ROBINS-I or ROB-NRSE to non-randomized studies may improve agreement considerably while reducing the time required to apply each tool correctly to each individual study.

Jeyaraman MM, Rabbani R, Copstein L, Robson RC, Al-Yousif N, Pollock M, ... & Abou-Setta AM. (2020). Methodologically rigorous risk of bias tools for nonrandomized studies had low reliability and high evaluator burden. J Clin Epidemiol. 128:140-147.

Manuscript available from the publisher's web site here.