Tuesday, May 31, 2022

What is a Scoping Review, Exactly? JBI Provides a Formal Definition in New Publication

A scoping review, by any other name, would be as broad...

Multiple definitions of "scoping review" have been used in the literature and, according to a new paper by Munn and colleagues, the use of these reviews is increasingly common. Therefore, the Joanna Briggs Institute recently released a definition of the term as well as guidance on its proper application in evidence synthesis.

The paper, published in April, formally defines a scoping review as "a type of evidence synthesis that aims to systematically identify and map the breadth of evidence available on a particular topic, field, concept, or issue, often irrespective of source (ie, primary research, reviews, non-empirical evidence) within or across particular contexts." Further, the paper details, scoping reviews "can clarify key concepts/definitions in the literature and identify key characteristics or factors related to a concept, including those related to methodological research."


Scoping reviews are similar to other types of articles within the broader family of evidence syntheses, and should ideally include important characteristics such as the use of pre-specified protocols, question(s), and inclusion/exclusion criteria; a comprehensive search; more than one author; and adherence to guidelines such as the PRISMA statement.

Because the main purpose of a scoping review is to explore and describe the breadth of evidence on a topic, pre-specified questions are usually broader in scope, and the evidence base often includes multiple types of evidence based on what is available. Beyond simply mapping the existing literature, however, scoping reviews may also be used to identify or clarify key concepts used in the field or to examine how research is typically conducted in the area.

Munn, Z., Pollock, D., Khalil, H., et al. (2022). What are scoping reviews? Providing a formal definition of scoping reviews as a type of evidence synthesis. JBI Evid Synth 20(4): 950-952. Manuscript available at publisher's website here.

Wednesday, May 18, 2022

New Study of Randomized Trial Protocols Highlights Prevalence of Early Discontinuation, Importance of Reporting Adherence

Registration of clinical trials was first introduced as a way to combat publication bias and avoid the duplication of efforts in medical research. Since then, the registration of clinical trials has been deemed a requirement of publication by the International Committee of Medical Journal Editors (ICMJE) and various federal laws. However, the mere registration of a clinical trial does not guarantee its ultimate completion, nor publication.

In a new article published last month in PLoS Medicine, Speich and colleagues set out to better understand the prevalence and impact of non-registration, early trial discontinuation, and non-publication within the current landscape of medical research.

An update of previous findings published in 2014, the study examined 360 protocols for randomized controlled trials (RCTs) approved by Research Ethics Committees (RECs) based in Switzerland, the United Kingdom, Germany, and Canada. Of these, 326 were eligible for further analysis. The team collected data on whether the trials had been registered, whether sample sizes were planned and achieved, and whether study results were available or published. The team also assessed each of the RCTs for protocol reporting quality based on the Standard Protocol Items: Recommendations for Intervention Trials (SPIRIT) guidelines. 

Overall, the included RCTs had a median planned sample size of 250 participants and met a median of 69% of SPIRIT guidelines. Just over half (55%) were sponsored by industry. The large majority (94%) were registered, though 10% were registered retrospectively. About half (53%) reported results in a registry, whereas most (79%) had published results in a peer-reviewed journal. However, adherence to reporting guidelines did not appear to be associated with the rate of trial discontinuation.

About one-third (30%) of RCTs had been prematurely discontinued, indicating no change in this regard since the previous investigation of RCT protocols approved between 2000-2003 (28%). Most commonly, trials in the current investigation were discontinued due to preventable reasons such as poor recruitment (37%), organizational/strategic issues (6%), and limited resources (1%). A smaller proportion were discontinued due to non-preventable reasons such as futility (16%), harm (6%), benefit (3%), or external evidence (3%).

For every 10% increase in adherence to SPIRIT protocol reporting guidelines, RCTs were 29% less likely to go unpublished (OR: 0.71; 95% confidence interval: from 0.55 to 0.92), and only about 1 in every 5 unpublished trials had been registered.
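To see what an odds ratio of 0.71 per 10% increase in adherence implies in practice, note that the effect compounds multiplicatively across larger increases. The sketch below illustrates this arithmetic only; the baseline odds and the helper function are hypothetical, chosen for illustration, and are not taken from the study.

```python
# Illustrative arithmetic for a per-step odds ratio (OR).
# The OR of 0.71 per 10%-point adherence increase is from the paper;
# the baseline odds below are a hypothetical example.

def scaled_odds(baseline_odds: float, or_per_step: float, steps: float) -> float:
    """Apply a per-step odds ratio `steps` times to baseline odds."""
    return baseline_odds * or_per_step ** steps

baseline = 0.25  # hypothetical baseline odds of non-publication (1:4)

print(scaled_odds(baseline, 0.71, 1))  # odds after a +10% adherence increase
print(scaled_odds(baseline, 0.71, 2))  # odds after a +20% adherence increase
```

Under these assumptions, a 10-point adherence increase lowers the odds from 0.25 to about 0.178 (a 29% reduction), and a 20-point increase lowers them to about 0.126, since the ratio applies once per 10-point step.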

The authors suggest that these findings underscore the need for investigators to report results of trials in registries as well as in peer-reviewed publications. Furthermore, future research may assess the utility of feasibility or pilot studies in reducing the rate of trial discontinuation due to recruitment issues. Journals can also make trial registration a condition of publication.

Speich, B., Gryaznov, D., Busse, J.W., Lohner, S., Klatte, K., Heravi, A.T., ... & Briel, M. (2022). Nonregistration, discontinuation, and nonpublication of randomized trials: A repeated metaresearch analysis. PLoS Medicine 19(4), e1003980. https://doi.org/10.1371/journal.pmed.1003980. Manuscript available at the publisher's website here.



Thursday, May 12, 2022

Evidence Foundation Scholar Update: Reena Ragala's "Guideline Development Bootcamp"

As one of the Evidence Foundation's fall 2021 scholars, Reena Ragala attended the most recent GRADE guideline development workshop, held virtually. As part of her application, Reena discussed her current project to develop a clinical guideline development "bootcamp" within her new position at Medical University of South Carolina (MUSC) Health, and presented this project to her fellow participants during the workshop.

Below, Reena provides an update on her exciting work.


"I work in MUSC Health’s Value Institute as an evidence-based practice analyst. 

"Our GRADE bootcamp training is 'in progress.' The presentation content and audio are being finalized for the target audience (MUSC rural health network care team members). We have also decided to expand the target audience to include any subject matter expert that serves on our clinical practice guideline (CPG) workgroups, allowing all clinicians (MD, RN, therapist, SW, etc.) the opportunity to become more confident about what GRADE is, why it is used in decision-making at MUSC Health, and how evidence-based decisions are made using the GRADE methodology. Once the training module is recorded, we will begin sending it out as 'homework' for all subject matter experts in advance of each new CPG kickoff meeting. The training module will also be uploaded into our new education platform which goes live in November 2022.

 

"The dissemination of this training program has been delayed to November 2022 due to unexpected systemwide changes to our education platform. In addition, nursing shortages and COVID-related high census have limited our ability to get the necessary approvals for training that is not directly related to patient care or patient safety.   

 

"Since attending the GRADE workshop, I have also worked with colleagues to update the formatting of our evidence brief template. We adopted the 'Summary of Findings Table' and expanded the details of evidence appraisal based on the GRADE criteria, allowing the end users to more appropriately interpret the recommendations for clinical practice."

Friday, May 6, 2022

Summarizing Patient Values and Preferences Data to Better Inform Recommendations

The consideration of patients' values and preferences during the formulation of clinical recommendations requires that guideline developers have an understanding of how patients and other stakeholders weigh the potential desirable and undesirable effects of any given intervention against one another. This consideration of the Relative Importance of Outcomes (RIO) is crucial for developing clinical recommendations that are most relevant to the providers and patients who will be using them. But how can we ensure that guideline developers have a thorough understanding of these considerations when going from evidence to decisions?

In a new paper to be published in the July issue of Journal of Clinical Epidemiology, Zhang and colleagues developed and tested a standardized summary of findings table that presents the RIO evidence on a given clinical decision in order to better inform the development of recommendations while keeping the data on patients' values and preferences top-of-mind.

Figure 1 of the paper provides a route map of the table's user-testing process.

The methods included four rounds of user testing comprising semi-structured interviews with clinical researchers and guideline developers. Guided by Morville's Honeycomb Model, the authors aimed to assess the usability, credibility, usefulness, desirability, findability, and value of the table while addressing identified issues. Overall, 20 individuals participated, 19 of whom had experience in guideline development and all of whom had experience with Summary of Findings tables.

In terms of the table's usability, problems interpreting and understanding the health utility were present; the introduction of a visual analogue scale (VAS) improved this. The combination of quantitative and qualitative evidence when considering RIOs, in addition to the presentation of the variability surrounding given estimates, were other sources of confusion. However, the participants generally found the table useful, valuable, and easy to navigate.


Zhang, Y., Li, S.-A., Yepes-Nuñez, J.J., Morgan, R.L., Pardo-Hernandez, H., Alonso Coello, P., Ren, M., ... & Schünemann, H.J. (2022). GRADE summary of findings tables enhanced understanding of values and preferences evidence. J Clin Epidemiol 147: 60-68. Manuscript available at the publisher's website here.