Wednesday, August 3, 2022

Spring 2022 Scholars Discuss Developments in Diagnostic and Environmental Health Evidence

The USGN's 16th GRADE Guideline Development Workshop, held in Chicago, was the first in-person meeting since March of 2020. In classic USGN style, participants enjoyed vibrant conversation, hours of learning, and delicious yogurt parfaits and strong coffee during morning breaks.

Two participants joined the fun and learning as part of the Evidence Foundation scholarship program, presenting to fellow attendees about their current projects related to evidence synthesis and guideline development. 

Spring 2022 Evidence Foundation scholars Kapeena Sivakumaran and Ibrahim El Mikati, center, pose for a photo between sessions in Chicago with the U.S. GRADE Network faculty (from left to right: Reem Mustafa, Philipp Dahm, Shahnaz Sultan, Yngve Falck-Ytter, Rebecca Morgan, and Hassan Murad).

Ibrahim El Mikati, a post-doctoral research fellow in the Outcomes and Implementation Research Unit at the University of Kansas Medical Center, discussed his project helping to develop guidance for judging imprecision in diagnostic evidence. This approach will utilize thresholds for confidence intervals and will also introduce the concept of optimal information sizes for assessing imprecision in the context of diagnostic guidelines.

"One thing that the GRADE workshop has helped me appreciate is transparency," said Ibrahim. "Having a transparent explanation of judgments provides users with trustworthy guidelines."

Kapeena Sivakumaran is currently leading two systematic reviews for Health Canada related to the impact of noise exposure and sleep disturbance on health outcomes. Challenges of these projects include a focus on short-term outcomes in the relevant literature as well as the need to incorporate multiple evidence streams, such as mechanistic data that can be interpreted in conjunction with observational evidence. 

“The workshop provided me with valuable insight into guideline development and using the GRADE approach to assess the evidence," said Kapeena. "One new thing I learned from the workshop was how automation and [artificial intelligence] can be integrated into the process of living systematic reviews to support guideline development.”

Note: applications for scholarships to attend our upcoming systematic review and guideline development workshops, held virtually, close August 12th and September 30th, 2022, respectively. See application details here.










Tuesday, May 31, 2022

What is a Scoping Review, Exactly? JBI Provides a Formal Definition in New Publication

A scoping review, by any other name, would be as broad...

Multiple definitions of "scoping review" have been used in the literature and, according to a new paper by Munn and colleagues, the use of these reviews is increasingly common. Therefore, the Joanna Briggs Institute recently released a definition of the term as well as guidance on its proper application in evidence synthesis.

The paper, published in April, formally defines a scoping review as "a type of evidence synthesis that aims to systematically identify and map the breadth of evidence available on a particular topic, field, concept, or issue, often irrespective of source (ie, primary research, reviews, non-empirical evidence) within or across particular contexts." Further, the paper details, scoping reviews "can clarify key concepts/definitions in the literature and identify key characteristics or factors related to a concept, including those related to methodological research."


Scoping reviews are similar to other types of articles within the broader family of evidence syntheses, and should ideally include important characteristics such as the use of pre-specified protocols, question(s), and inclusion/exclusion criteria; a comprehensive search; more than one author; and adherence to guidelines such as the PRISMA statement.

Because the main purpose of a scoping review is to explore and describe the breadth of evidence on a topic, pre-specified questions are usually broader in scope, and the evidence base often includes multiple types of evidence based on what is available. Beyond simply mapping the existing literature, however, scoping reviews may also be used to identify or clarify key concepts used in the field or to examine how research is typically conducted in the area.

Munn, Z., Pollock, D., Khalil, H., et al. (2022). What are scoping reviews? Providing a formal definition of scoping reviews as a type of evidence synthesis. JBI Evid Synth 20(4): 950-952. Manuscript available at the publisher's website here.








Wednesday, May 18, 2022

New Study of Randomized Trial Protocols Highlights Prevalence of Early Discontinuation, Importance of Reporting Adherence

Registration of clinical trials was first introduced as a way to combat publication bias and avoid the duplication of efforts in medical research. Since then, the registration of clinical trials has been deemed a requirement of publication by the International Committee of Medical Journal Editors (ICMJE) and various federal laws. However, the mere registration of a clinical trial does not guarantee its ultimate completion or publication.

In a new article published last month in PLoS Medicine, Speich and colleagues set out to better understand the prevalence and impact of non-registration, early trial discontinuation, and non-publication within the current landscape of medical research.

An update of previous findings published in 2014, the study examined 360 protocols for randomized controlled trials (RCTs) approved by Research Ethics Committees (RECs) based in Switzerland, the United Kingdom, Germany, and Canada. Of these, 326 were eligible for further analysis. The team collected data on whether the trials had been registered, whether sample sizes were planned and achieved, and whether study results were available or published. The team also assessed each of the RCTs for protocol reporting quality based on the Standard Protocol Items: Recommendations for Intervention Trials (SPIRIT) guidelines. 

Overall, the included RCTs had a median planned sample size of 250 participants and met a median of 69% of SPIRIT guidelines. Just over half (55%) were sponsored by industry. The large majority (94%) were registered, though 10% were registered retrospectively. About half (53%) reported results in a registry, whereas most (79%) had published results in a peer-reviewed journal. However, adherence to reporting guidelines did not appear to be associated with the rate of trial discontinuation.

About one-third (30%) of RCTs had been prematurely discontinued, which indicated no change in this regard since the previous investigation of RCT protocols approved between 2000-2003 (28%). Most commonly, trials in the current investigation were discontinued due to preventable reasons such as poor recruitment (37%), organizational/strategic issues (6%), and limited resources (1%). A smaller proportion were discontinued due to non-preventable reasons such as futility (16%), harm (6%), benefit (3%), or external evidence (3%).

For every 10% increase in adherence to SPIRIT protocol reporting guidelines, RCTs were 29% less likely to go unpublished (OR: 0.71; 95% confidence interval: from 0.55 to 0.92), and only about 1 in every 5 unpublished trials had been registered.
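As a quick illustration of where the "29% less likely" figure comes from, the sketch below converts the reported odds ratio and its confidence bounds into percent reductions in odds (note this is strictly a reduction in the *odds* of non-publication, not the probability). The numbers are the study's reported values; the helper function is ours.

```python
# Converting an odds ratio (OR) into an approximate percent reduction in odds.
# Values are the study's reported result: OR 0.71, 95% CI 0.55 to 0.92.
or_point, ci_lower, ci_upper = 0.71, 0.55, 0.92

def percent_reduction(odds_ratio):
    """Percent reduction in odds relative to the reference group."""
    return round((1 - odds_ratio) * 100)

print(percent_reduction(or_point))   # point estimate: 29% lower odds
print(percent_reduction(ci_upper))   # conservative bound: 8% lower odds
print(percent_reduction(ci_lower))   # optimistic bound: 45% lower odds
```

In other words, the confidence interval is compatible with anything from an 8% to a 45% reduction in the odds of non-publication per 10% gain in SPIRIT adherence.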

The authors suggest that these findings highlight the need for investigators to report trial results in registries as well as in peer-reviewed publications. Furthermore, future research may assess the utility of feasibility or pilot studies in reducing the rate of trial discontinuation due to recruitment issues. Journals can also make trial registration a condition of publication.

Speich, B., Gryaznov, D., Busse, J.W., Lohner, S., Klatte, K., Heravi, A.T., ... & Briel, M. (2022). Nonregistration, discontinuation, and nonpublication of randomized trials: A repeated metaresearch analysis. PLoS Medicine 19(4), e1003980. https://doi.org/10.1371/journal.pmed.1003980. Manuscript available at the publisher's website here.



Thursday, May 12, 2022

Evidence Foundation Scholar Update: Reena Ragala's "Guideline Development Bootcamp"

As one of the Evidence Foundation's fall 2021 scholars, Reena Ragala attended the most recent GRADE guideline development workshop, held virtually. As part of her application, Reena discussed her current project to develop a clinical guideline development "bootcamp" within her new position at Medical University of South Carolina (MUSC) Health, and presented this project to her fellow participants during the workshop.

Below, Reena provides an update on her exciting work.


"I work in MUSC Health’s Value Institute as an evidence-based practice analyst. 

"Our GRADE bootcamp training is 'in progress.' The presentation content and audio are being finalized for the target audience (MUSC rural health network care team members). We have also decided to expand the target audience to include any subject matter expert who serves on our clinical practice guideline (CPG) workgroups, allowing all clinicians (MD, RN, therapist, SW, etc.) the opportunity to become more confident about what GRADE is, why it is used in decision-making at MUSC Health, and how evidence-based decisions are made using the GRADE methodology. Once the training module is recorded, we will begin sending it out as 'homework' for all subject matter experts in advance of each new CPG kickoff meeting. The training module will also be uploaded into our new education platform, which goes live in November 2022.

 

"The dissemination of this training program has been delayed to November 2022 due to unexpected systemwide changes to our education platform. In addition, nursing shortages and COVID-related high census have limited our ability to get the necessary approvals for training that is not directly related to patient care or patient safety.   

 

"Since attending the GRADE workshop, I have also worked with colleagues to update the formatting of our evidence brief template. We adopted the 'Summary of Findings Table' and expanded the details of evidence appraisal based on the GRADE criteria, allowing the end users to more appropriately interpret the recommendations for clinical practice."

Friday, May 6, 2022

Summarizing Patient Values and Preferences Data to Better Inform Recommendations

The consideration of patients' values and preferences during the formulation of clinical recommendations requires that guideline developers have an understanding of how patients and other stakeholders weigh the potential desirable and undesirable effects of any given intervention against one another. This consideration of the Relative Importance of Outcomes (RIO) is crucial for developing clinical recommendations that are most relevant to the providers and patients who will be using them. But how can we ensure that guideline developers have a thorough understanding of these considerations when going from evidence to decisions?

In a new paper to be published in the July issue of Journal of Clinical Epidemiology, Zhang and colleagues developed and tested a standardized summary of findings table that presents the RIO evidence on a given clinical decision in order to better inform the development of recommendations while keeping the data on patients' values and preferences top-of-mind.

Figure 1 of the paper provides a route map of the table's user-testing process.

The methods included four rounds of user testing comprising semi-structured interviews with clinical researchers and guideline developers. Guided by Morville's Honeycomb Model, the authors aimed to assess the usability, credibility, usefulness, desirability, findability, and value of the table while addressing identified issues. Overall, 20 individuals participated, 19 of whom had experience in guideline development and all of whom had experience with Summary of Findings tables.

In terms of the table's usability, participants had problems interpreting and understanding the health utility values; the introduction of a visual analogue scale (VAS) improved this. The combination of quantitative and qualitative evidence when considering RIOs, along with the presentation of the variability surrounding given estimates, was another source of confusion. However, participants generally found the table useful, valuable, and easy to navigate.


Zhang, Y., Li, S.-A., Yepes-Nuñez, J.J., Morgan, R.L., Pardo-Hernandez, H., Alonso-Coello, P., Ren, M., ... & Schünemann, H.J. (2022). GRADE summary of findings tables enhanced understanding of values and preferences evidence. J Clin Epidemiol 147: 60-68. Manuscript available at the publisher's website here.











Monday, April 11, 2022

It's Alive! Pt. IV: Results from a Trainee Living Systematic Review Experience

Living systematic reviews (LSRs) continue to be a topic of interest among systematic review and guideline developers, as evidenced by our previous posts on the topic here, here, and here. While automation and machine learning have begun to facilitate the generally time- and resource-intensive process of keeping evidence syntheses perpetually up-to-date, some aspects of LSR development still require the human touch. Now, a recently published mixed-methods study discusses the successes and challenges of using a crowdsourcing approach to keep the LSR wheels turning.

The article describes the process of involving trainees in the development of a living systematic review and network meta-analysis (NMA) on drug therapy for rheumatoid arthritis. In their report, the authors posit that evidence-based medicine is a key pillar of learning for trainees, but that they may learn better through an experiential rather than a purely didactic approach; providing the opportunity to participate in a real-life systematic review may provide this experiential learning. 

In short, the team first applied machine learning to sort through an initial database and identify randomized controlled trials, which were then further screened through a crowdsourcing platform, Cochrane Crowd. Next, trainees, ranging from undergraduate students to practicing rheumatologists and researchers, recruited through Canadian and Australian rheumatology mailing lists, assessed articles for eligibility and extracted data from included articles.

Training included a mix of online webinars, one-on-one sessions, and a provided handbook. Conflicting results were adjudicated by an expert member of the team. The authors then elicited both quantitative and qualitative feedback about the trainees' experiences of taking part in the project through a combination of an electronic survey and one-on-one interviews.

Overall, the 21 trainees surveyed rated their training as adequate and their experience as generally positive. Respondents specifically listed better understanding of PICO criteria, familiarity with outcome measures used in rheumatology, and the assessment of studies' risk of bias as the greatest learning benefits obtained.

Of the 16 who participated in follow-up interviews, the majority (94%) described a practical and enjoyable experience. Particularly well regarded was the use of task segmentation throughout the project, during which specific tasks (i.e., eligibility assessment versus data extraction) could be "batch-processed," allowing trainees to match the specific time and focus demands to the selected task at hand. Trainees also communicated an appreciation for the international collaboration involved in the review as well as the feeling of meaningfully contributing to the project.

Notable challenges included issues related to the clarity of communication regarding deadlines and expectations, as well as technical glitches experienced through the platforms used for screening and extraction. Though task segmentation was seen as a benefit, it also included drawbacks: namely, the risk of more repetitive tasks such as eligibility assessment becoming tedious while others that require more focus (i.e., data extraction) may be difficult to integrate into an already-busy daily schedule. To address these issues, the authors suggest improving communications to include regular, frequent updates and deadline reminders, working through technological glitches, and carefully matching tasks to the specific skillsets and availabilities of each trainee.

Lee, C., Thomas, M., Ejaredar, M., Kassam, A., Whittle, S.L., Buchbinder, R., ... & Hazlewood, G.S. (2022). Crowdsourcing trainees in a living systematic review provided valuable experiential learning opportunities: A mixed-methods study. J Clin Epidemiol (in-press). Manuscript available at the publisher's website here.











Wednesday, March 30, 2022

A New Template for Standardized Wording when Reporting Evidence-to-Decision Considerations in Guidelines

One of the major tenets of GRADE is that certainty of the evidence is just one component of decision-making. Ultimately, decision-makers also need to take into account important factors such as values and preferences, feasibility, and considerations of the impact of a decision on health equity and resource utilization. These factors and others are part of the Evidence-to-Decision (EtD) framework that guides the process from the assessment of certainty of evidence to the final formulation of recommendations in a structured, transparent manner.

Often, multiple teams and individuals involved in the development of a guideline will need to work together to complete the EtD process, which can be a source of confusion. Additionally, until now, no official guidance existed for the use of standardized wording when considering and reporting each EtD framework component. Earlier this year, Piggott and colleagues aimed to address this issue with an article published in the Journal of Clinical Epidemiology.



The project, comprising ten guideline development groups and over 250 recommendations, set out to develop a standardized framework for clear, transparent, and efficient wording when reporting Evidence-to-Decision components within a guideline. This template was then used in two guidelines in development: the European Commission Initiative on Breast Cancer (ECIBC) and the Endocrine Society guidelines on hyperglycemia, hypoglycemia, and hypercalcemia. During this process, the authors were able to pilot the wording, receive feedback, and refine the template. The real-life guidelines were also used to provide examples of recommendation wording.

The article includes suggested wording structure and examples for reporting the magnitude and certainty of effect estimates, for conclusions of each portion of the EtD framework, and for justification of recommendations as well as notes on implementation considerations, monitoring and evaluation, and research priorities. 

The authors note that these suggestions are preliminary and may require further refinement. Additionally, current examples of consistent and clear wording of EtDs continue to be lacking, though the dissemination of this guidance may improve future publications. While the suggestions within the article are focused on clinical decisions related to management of conditions, future efforts may expand this to guidelines for diagnostic testing, coverage, and other important areas.

Piggott, T., Baldeh, T., Dietl, B., Wiercioch, W., Nieuwlaat, R., Santesso, N., ... & Schünemann, H. (2022). Standardized wording to improve efficiency and clarity of GRADE EtD frameworks in health guidelines. J Clin Epidemiol (online ahead of print). Manuscript available at the publisher's website here.