Wednesday, May 18, 2022

New Study of Randomized Trial Protocols Highlights Prevalence of Early Discontinuation, Importance of Reporting Adherence

Registration of clinical trials was first introduced as a way to combat publication bias and avoid the duplication of effort in medical research. Since then, registration has been made a condition of publication by the International Committee of Medical Journal Editors (ICMJE) and required under various federal laws. However, the mere registration of a clinical trial guarantees neither its ultimate completion nor its publication.

In a new article published last month in PLoS Medicine, Speich and colleagues set out to better understand the prevalence and impact of non-registration, early trial discontinuation, and non-publication within the current landscape of medical research.

An update of previous findings published in 2014, the study examined 360 protocols for randomized controlled trials (RCTs) approved by Research Ethics Committees (RECs) based in Switzerland, the United Kingdom, Germany, and Canada. Of these, 326 were eligible for further analysis. The team collected data on whether the trials had been registered, whether planned sample sizes were achieved, and whether study results were available or published. The team also assessed each RCT's protocol reporting quality against the Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) guidelines.

Overall, the included RCTs had a median planned sample size of 250 participants and met a median of 69% of SPIRIT guideline items. Just over half (55%) were sponsored by industry. The large majority (94%) were registered, though 10% were registered retrospectively. About half (53%) reported results in a registry, and most (79%) had published results in a peer-reviewed journal. However, adherence to reporting guidelines did not appear to be associated with the rate of trial discontinuation.

About one-third (30%) of RCTs had been prematurely discontinued, indicating no change in this regard since the previous investigation of RCT protocols approved between 2000 and 2003 (28%). Most commonly, trials in the current investigation were discontinued for preventable reasons such as poor recruitment (37%), organizational/strategic issues (6%), and limited resources (1%). A smaller proportion were discontinued for unpreventable reasons such as futility (16%), harm (6%), benefit (3%), or external evidence (3%).

For every 10% increase in adherence to SPIRIT protocol reporting guidelines, RCTs were 29% less likely to go unpublished (OR 0.71; 95% CI 0.55 to 0.92), and only about 1 in every 5 unpublished trials had been registered.
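To see where the "29% less likely" figure comes from, note that an odds ratio of 0.71 per 10% adherence step implies a 29% reduction in the odds of non-publication per step, and that the effect compounds multiplicatively across larger increases. The sketch below is back-of-the-envelope arithmetic only, not analysis code from the study; the function name is illustrative.

```python
# Illustrative arithmetic only -- not code from Speich et al.
# An odds ratio (OR) of 0.71 per 10% increase in SPIRIT adherence
# means the odds of non-publication shrink by 29% per 10% step,
# and steps compound multiplicatively (OR ** n_steps).

def odds_reduction(or_per_step: float, n_steps: int = 1) -> float:
    """Percent reduction in odds after n_steps adherence increments."""
    return (1 - or_per_step ** n_steps) * 100

print(round(odds_reduction(0.71), 1))     # 10% more adherence -> 29.0
print(round(odds_reduction(0.71, 2), 1))  # 20% more adherence -> 49.6
```

Note that a reduction in odds is not identical to a reduction in probability; the two are close only when the outcome is relatively rare.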

The authors suggest that these findings underscore the need for investigators to report trial results in registries as well as in peer-reviewed publications. Furthermore, future research may assess the utility of feasibility or pilot studies in reducing the rate of trial discontinuation due to recruitment problems. Journals, for their part, can make trial registration a condition of publication.

Speich, B., Gryaznov, D., Busse, J.W., Lohner, S., Klatte, K., Heravi, A.T., ... & Briel, M. (2022). Nonregistration, discontinuation, and nonpublication of randomized trials: A repeated metaresearch analysis. PLoS Medicine, 19(4), e1003980. https://doi.org/10.1371/journal.pmed.1003980. Manuscript available at the publisher's website here.



Thursday, May 12, 2022

Evidence Foundation Scholar Update: Reena Ragala's "Guideline Development Bootcamp"

As one of the Evidence Foundation's fall 2021 scholars, Reena Ragala attended the most recent GRADE guideline development workshop, held virtually. As part of her application, Reena discussed her current project to develop a clinical guideline development "bootcamp" within her new position at Medical University of South Carolina (MUSC) Health, and presented this project to her fellow participants during the workshop.

Below, Reena provides an update on her exciting work.


"I work in MUSC Health’s Value Institute as an evidence-based practice analyst. 

"Our GRADE bootcamp training is 'in progress.' The presentation content and audio are being finalized for the target audience (MUSC rural health network care team members). We have also decided to expand the target audience to include any subject matter expert that serves on our clinical practice guideline (CPG) workgroups, allowing all clinicians (MD, RN, therapist, SW, etc.) the opportunity to become more confident about what GRADE is, why it is used in decision-making at MUSC Health, and how evidence-based decisions are made using the GRADE methodology. Once the training module is recorded, we will begin sending it out as 'homework' for all subject matter experts in advance of each new CPG kickoff meeting. The training module will also be uploaded into our new education platform which goes live in November 2022.

 

"The dissemination of this training program has been delayed to November 2022 due to unexpected systemwide changes to our education platform. In addition, nursing shortages and COVID-related high census have limited our ability to get the necessary approvals for training that is not directly related to patient care or patient safety.   

 

"Since attending the GRADE workshop, I have also worked with colleagues to update the formatting of our evidence brief template. We adopted the 'Summary of Findings Table' and expanded the details of evidence appraisal based on the GRADE criteria, allowing the end users to more appropriately interpret the recommendations for clinical practice."

Friday, May 6, 2022

Summarizing Patient Values and Preferences Data to Better Inform Recommendations

The consideration of patients' values and preferences during the formulation of clinical recommendations requires that guideline developers have an understanding of how patients and other stakeholders weigh the potential desirable and undesirable effects of any given intervention against one another. This consideration of the Relative Importance of Outcomes (RIO) is crucial for developing clinical recommendations that are most relevant to the providers and patients who will be using them. But how can we ensure that guideline developers have a thorough understanding of these considerations when going from evidence to decisions?

In a new paper to be published in the July issue of Journal of Clinical Epidemiology, Zhang and colleagues developed and tested a standardized summary of findings table that presents the RIO evidence on a given clinical decision in order to better inform the development of recommendations while keeping the data on patients' values and preferences top-of-mind.

Figure 1 of the paper provides a route map of the table's user-testing process.

The methods included four rounds of user testing comprising semi-structured interviews with clinical researchers and guideline developers. Guided by Morville's Honeycomb Model, the authors aimed to assess the usability, credibility, usefulness, desirability, findability, and value of the table while addressing identified issues. Overall, 20 individuals participated, 19 of whom had experience in guideline development and all of whom had experience with Summary of Findings tables.

In terms of the table's usability, participants had trouble interpreting and understanding the health utility values; the introduction of a visual analogue scale (VAS) improved this. The combination of quantitative and qualitative evidence when considering RIOs, along with the presentation of the variability around given estimates, was another source of confusion. Nevertheless, the participants generally found the table useful, valuable, and easy to navigate.


Zhang, Y., Li, S.-A., Yepes-Nuñez, J.J., Morgan, R.L., Pardo-Hernandez, H., Alonso-Coello, P., Ren, M., ... & Schünemann, H.J. (2022). GRADE summary of findings tables enhanced understanding of values and preferences evidence. J Clin Epidemiol, 147, 60-68. Manuscript available at the publisher's website here.











Monday, April 11, 2022

It's Alive! Pt. IV: Results from a Trainee Living Systematic Review Experience

Living systematic reviews (LSRs) continue to be a topic of interest among systematic review and guideline developers, as evidenced by our previous posts on the topic here, here, and here. While automation and machine learning have begun to help with the generally time- and resource-intensive work of keeping evidence syntheses perpetually up to date, some aspects of LSR development still require the human touch. Now, a recently published mixed-methods study discusses the successes and challenges of using a crowdsourcing approach to keep the LSR wheels turning.

The article describes the process of involving trainees in the development of a living systematic review and network meta-analysis (NMA) on drug therapy for rheumatoid arthritis. In their report, the authors posit that evidence-based medicine is a key pillar of learning for trainees, but that they may learn better through an experiential rather than a purely didactic approach; providing the opportunity to participate in a real-life systematic review may provide this experiential learning. 

In short, the team first applied machine learning to screen an initial database and identify randomized controlled trials; these records were then further assessed via a crowdsourcing platform, Cochrane Crowd. Next, trainees ranging from undergraduate students to practicing rheumatologists and researchers, recruited through Canadian and Australian rheumatology mailing lists, assessed articles for eligibility and extracted data from included articles.

Training included a mix of online webinars, one-on-one sessions, and a provided handbook. Conflicting assessments were adjudicated by an expert member of the team. The authors then elicited both quantitative and qualitative feedback about the trainees' experience of taking part in the project through a combination of an electronic survey and one-on-one interviews.

Overall, the 21 trainees surveyed rated their training as adequate and their experience as generally positive. Respondents specifically listed a better understanding of PICO criteria, familiarity with outcome measures used in rheumatology, and the assessment of studies' risk of bias as the greatest learning benefits obtained.

Of the 16 who participated in follow-up interviews, the majority (94%) described a practical and enjoyable experience. Particularly well received was the use of task segmentation throughout the project, whereby specific tasks (e.g., eligibility assessment versus data extraction) could be "batch-processed," allowing trainees to match their available time and focus to the task at hand. Trainees also communicated an appreciation for the international collaboration involved in the review as well as the feeling of meaningfully contributing to the project.

Notable challenges included issues with the clarity of communication around deadlines and expectations, as well as technical glitches in the platforms used for screening and extraction. Though task segmentation was seen as a benefit, it also had drawbacks: more repetitive tasks such as eligibility assessment risked becoming tedious, while tasks requiring more focus (e.g., data extraction) could be difficult to integrate into an already-busy daily schedule. To address these issues, the authors suggest improving communications to include regular, frequent updates and deadline reminders, resolving technological glitches, and carefully matching tasks to the specific skillsets and availability of each trainee.

Lee, C., Thomas, M., Ejaredar, M., Kassam, A., Whittle, S.L., Buchbinder, R., ... & Hazlewood, G.S. (2022). Crowdsourcing trainees in a living systematic review provided valuable experiential learning opportunities: A mixed-methods study. J Clin Epidemiol (in press). Manuscript available at the publisher's website here.











Wednesday, March 30, 2022

A New Template for Standardized Wording when Reporting Evidence-to-Decision Considerations in Guidelines

One of the major tenets of GRADE is that certainty of the evidence is just one component of decision-making. Ultimately, decision-makers also need to take into account important factors such as values and preferences, feasibility, and considerations of the impact of a decision on health equity and resource utilization. These factors and others are part of the Evidence-to-Decision (EtD) framework that guides the process from the assessment of certainty of evidence to the final formulation of recommendations in a structured, transparent manner.

Often, multiple teams and individuals involved in the development of a guideline will need to work together to complete the EtD process, which can be a source of confusion. Additionally, until now, no official guidance existed on the use of standardized wording when considering and reporting each EtD framework component. Earlier this year, Piggott and colleagues aimed to address this gap with an article published in the Journal of Clinical Epidemiology.



The project, comprising ten guideline development groups and over 250 recommendations, set out to develop a standardized framework for clear, transparent, and efficient wording when reporting Evidence-to-Decision components within a guideline. This template was then used in two guidelines in development: the European Commission Initiative on Breast Cancer (ECIBC) and the Endocrine Society guidelines on hyperglycemia, hypoglycemia, and hypercalcemia. During this process, the authors were able to pilot the wording, receive feedback, and refine the template. These real-life guidelines also supplied examples of recommendation wording.

The article includes suggested wording structure and examples for reporting the magnitude and certainty of effect estimates, for conclusions of each portion of the EtD framework, and for justification of recommendations as well as notes on implementation considerations, monitoring and evaluation, and research priorities. 

The authors note that these suggestions are preliminary and may require further refinement. Additionally, examples of consistent and clear EtD wording continue to be lacking in current publications, though the dissemination of this guidance may improve future ones. While the suggestions within the article focus on clinical decisions related to the management of conditions, future efforts may expand them to guidelines for diagnostic testing, coverage, and other important areas.

Piggott, T., Baldeh, T., Dietl, B., Wiercioch, W., Nieuwlaat, R., Santesso, N., ... & Schünemann, H. (2022). Standardized wording to improve efficiency and clarity of GRADE EtD frameworks in health guidelines. J Clin Epidemiol (online ahead of print). Manuscript available at the publisher's website here.


















   

Saturday, March 5, 2022

U.S. GRADE Network Holds Its First Two-Day Systematic Review Workshop

In response to popular demand, the U.S. GRADE Network recently expanded its half-day course on systematic reviews into a two-day virtual workshop. Taking place over February 1-2, 2022, the sessions comprised six total hours of instructional time and question-and-answer sessions with Network faculty in addition to three hours of hands-on activities in a small-group format. Large-group lectures ranged from developing a focused clinical question to conducting meta-analysis and evaluating the quality of systematic reviews. In hands-on sessions of ten participants each, attendees were introduced to a free online screening platform (Rayyan) and tried their hands at assessing risk of bias and conducting meta-analysis in RevMan, a free systematic review and meta-analysis software from Cochrane.

As part of the program, the Evidence Foundation welcomed three recipients of scholarships to attend the workshop free of charge. As part of their applications, the scholars described a current or proposed project for a systematic review. The three recipients, along with their projects, included:
• Bryden Giving, MAOT, OTR/L (Boston University): A traffic light of evidence for occupational therapy interventions supporting autistic children and youth
• Nirjhar Ruth Ghosh, MS (Texas A&M University): Evidence-based practice in the field of nutrition: A systematic review of knowledge, skills, attitudes, behaviors and teaching strategies
• Milton A. Romero-Robles (Universidad Nacional del Santa): Participation, involvement and main barriers in the inclusion of patients with non-communicable diseases in the development of clinical practice guidelines: A systematic review protocol


Be the first to hear about these and other trainings by following @USGRADEnet on Twitter or visiting www.systematicreview.org.

Note: applications for scholarships to attend the upcoming GRADE Guideline Development Workshop, held in July 2022 in Chicago, Illinois, close March 31. See application details here.








Friday, January 7, 2022

Guideline Development Resource Alert: the G-I-N Public and Patient Toolkit

One of the most common challenges in developing rigorous, high-quality guidelines is incorporating the patient and public perspective into the formulation of recommendations. In fact, in a recently published needs assessment of guideline developers worldwide, 81.5% of respondents rated the incorporation of the patient voice as a relevant need for their organization (a 5 or greater on a 7-point Likert scale).

Now, the Guidelines International Network (G-I-N) has launched a large-scale toolkit aimed at addressing commonly experienced issues related to patient and public involvement. The result of a combination of international experiences and best-practice examples, the toolkit is a one-stop shop spanning the systematic review and guideline development process, from conducting targeted consultation with the public, to recruiting and supporting patient panel members, to communicating recommendations to the public at large. As a "living resource," the toolkit will continue to expand and evolve as further information and experience are cultivated.


You can find the freely available toolkit here.