Thursday, February 25, 2021

The Use of GRADE in Systematic Reviews of Nutrition Interventions is Still Rare, but Growing

While the GRADE framework is used by over 100 health organizations to assess the certainty of evidence and guide the formulation of clinical recommendations, its use for these purposes in the field of nutrition remains sparse. A recent examination of all systematic reviews using GRADE published in the ten highest-impact nutrition journals over the past five years provides insight into current practice and suggestions for moving the field forward in applying GRADE to evidence assessment in systematic reviews of nutrition interventions.

Werner and colleagues identified 800 eligible systematic reviews, 55 (6.9%) of which used GRADE, and 47 (5.9%) of which rated the certainty of evidence specific to different outcomes. The number of these reviews using GRADE increased year-to-year, from two in 2015 to 23 in 2019. Reviews claiming to use a modification of GRADE were excluded from analysis.

The authors identified 811 cases of downgrading the certainty of evidence and 31 cases of upgrading. Reviews of randomized controlled trials downgraded a mean of 1.6 domains per outcome, while reviews of non-randomized studies downgraded a mean of 2.1. About 6.5% of upgrading cases were carried out for unclear reasons not in line with GRADE guidance, such as upgrading for low risk of bias, narrow confidence intervals, or very low p-values. Compared with reviews of randomized studies, reviews of non-randomized studies were more likely to downgrade outcomes for imprecision and inconsistency, and less likely to downgrade for publication bias.

The authors conclude that while the use of GRADE in systematic reviews of nutritional interventions has grown over recent years based on this sample, continued education and training of nutrition researchers and experts can help improve the spread and quality of the application of GRADE to assess the certainty of evidence in this discipline.

Werner SS, Binder N, Toews I, et al. (2021). The use of GRADE in evidence syntheses published in high-impact-factor nutrition journals: A methodological survey. J Clin Epidemiol, in press.

Manuscript available here. 


Friday, February 19, 2021

Registration of Trials Included in Systematic Reviews Has Improved Over Time, but Remains Under 50% for Most Years

The prospective registration of a randomized controlled trial (RCT) can reduce bias by clearly laying out the methods to be used before the research is conducted and the data analyzed. Registration can also help limit unintentional duplication of effort, which carries an ethical imperative: duplicated research may expose participants to unnecessary risks and reflect a failure to properly disseminate the findings of earlier work. In 2004, the International Committee of Medical Journal Editors (ICMJE) recommended that journals only consider publishing the results of trials that were prospectively registered.

A new study by Lindsley and colleagues set out to examine just how many RCTs included in a sample of systematic reviews were properly registered, and if so, whether the registration entry was updated with results of the trial. From a group of 618 systematic reviews published within the Cochrane Musculoskeletal, Oral, Skin and Sensory (MOSS) network between 2014 and 2019, a total of 100 eligible reviews were randomly selected across the network's eight groups (30 from the Eyes and Vision group, which provided the pilot data, and ten from each of the remaining seven).

Among the 1,432 included trials published since 2000, when the trial registry ClinicalTrials.gov became available, only 379 (26%) had been registered. Among the 1,177 trials published since 2005, when the ICMJE recommendation first went into effect, the proportion of registered trials rose to 31%, and to 38% for those published since 2010. Registered trials had a median enrollment twice that of non-registered trials (120 vs. 60 patients). About one-third (31%) of trials published since 2005 included at least one major outcome within the registry record. While trial registration did increase over time, only in two years - 2015 and 2018 - did the proportion of registered trials exceed 50%.

Overall, the authors found that although trial registration has become more common since 2005, registered trials still make up a minority of those included in systematic reviews in this area. In addition, only about 10% of the trials examined had updated their registration records with results related to safety or efficacy. Much room for improvement remains in increasing and incentivizing the prospective registration of trials and the updating of records with publicly available results.

Lindsley K, Fusco N, Teeuw H, et al. (2021). Poor compliance of clinical trial registration among trials included in systematic reviews: A cohort study. J Clin Epidemiol 132:79-87. 

Manuscript available here. 


Friday, February 12, 2021

Common Challenges Faced by Scoping Reviewers and Ways to Solve Them

Scoping reviews provide an avenue for the exploration, description, and dissemination of a body of evidence before a more systematic review is undertaken. As such, they can help clarify how research on a certain topic has been defined and conducted, in addition to identifying common issues and knowledge gaps - all of which can inform a more effective approach to systematically reviewing the literature.

The Joanna Briggs Institute (JBI) has provided guidance on the conduct of scoping reviews since 2013. While developing the latest version published in 2020, the group identified the most common challenges and posed some solutions for those looking to develop a scoping review.

Key challenges included:

  • a lack of people trained in methodology unique to scoping reviews (helpful resources can be found on the JBI Global page and elsewhere)
  • deciding when a scoping review is appropriate (hint: they should never be done in lieu of a systematic review if the intention is to provide recommendations)
  • deciding which type of review is most appropriate (this online tool can help)
  • knowing how much and what type of data to extract - for instance, choosing between "mapping" concepts around particular areas, populations, or methodologies and conducting a qualitative thematic analysis
  • reporting results effectively, such as with an evidence gap map
  • resisting the urge to overstate conclusions and provide recommendations for practice
  • a lack of editors and peer reviewers adequately trained to critically appraise scoping reviews (the PRISMA extension for scoping reviews - PRISMA-ScR - provides a checklist for proper conduct and reporting)

Khalil H., Peters M.D.J., Tricco A.C., et al. (2021). Conducting a high quality scoping review: Challenges and solutions. J Clin Epidemiol 130:156-160.

Manuscript available from publisher's website here.


Monday, February 1, 2021

RIGHT-PVG: A New Checklist for the Reporting of Patient Versions of Guidelines

Patient versions of guidelines (PVGs) can provide crucial information about diagnoses and management options to patients in clear, plain language and can help guide shared decision-making between patients and their providers to improve the quality of care. However, the construction and reporting of PVGs is variable in terms of quality and content. Now, a new extension of the Reporting Items for Practice Guidelines in Healthcare (RIGHT) statement - the RIGHT-PVG - aims to standardize the reporting of such documents.

Development of the RIGHT-PVG involved 17 experts from around the world with experience in guideline development, patient communication, epidemiology, and clinical practice. First, an initial list of items was generated from common themes in a sample of 30 PVGs. Then, four organizational guidance documents for the development of PVGs were identified and used to refine the initial criteria. Two rounds of a modified Delphi consultation were used to further pare and refine the checklist from an original list of 45 items, with all panelist feedback anonymized.

Final items included within the RIGHT-PVG fell under four main categories:

  • Basic information: items 1-3 include the reporting of title and copyright, contact information, and a general summary of the PVG's key points.
  • Background: items 4-6 include a general introduction to the topic at hand, information about the scope and target audience of the document, and a link to the original guideline on which the PVG is based.
  • Recommendations: items 7 and 8 form the core of the PVG: what does the guideline recommend, for whom, and what are the potential desirable and undesirable effects of the intervention?
    • Recommendations should be easily identifiable via boxing, shading/coloring, or bold type.
    • The strength of each recommendation should be included along with a transparent reporting of the certainty of the evidence behind it.
    • Easy-to-understand symbols can be used to denote the differences between strong and more conditional recommendations.
  • Other information: items 9-12 recommend the inclusion of suggested questions for the reader to ask their provider; a glossary of terms and abbreviations; information about how the guideline was funded; and disclosure of any relevant conflicts of interest.

Wang X., Chen Y., Akl E.A., ... and the RIGHT working group. (2021). The reporting checklist for public versions of guidelines: RIGHT-PVG. Implement Sci 16:10.

Manuscript available from the publisher's site here.