Friday, November 4, 2022

COVID-END Working Groups Call for Living Systematic Reviews and Considerations of Health Equity in Evidence Synthesis and Guideline Efforts

The COVID-19 pandemic was associated not only with the rapid worldwide spread of a virus, but also with the spread of large amounts of information across the globe - not all of it trustworthy or credible. Experts call this an "infodemic." To improve the synthesis and dissemination of trustworthy information in a manner that could keep up with the fast pace and ever-changing landscape of knowledge on COVID-19, the COVID-19 Evidence Network to support Decision-making (COVID-END) was established.

In a paper published in this month's issue of the Journal of Clinical Epidemiology, McCaul and colleagues describe how the COVID-19 pandemic ushered in an urgent need to rapidly understand the etiology and management of the disease, and to disseminate this information far and wide. However, a lack of collaboration, resulting in duplicated work across institutions and countries, hampered these efforts. COVID-END, comprising two working groups dedicated to overseeing the coordination and dissemination of trustworthy evidence syntheses and guidelines, was a result of these unprecedented needs. The effort also included an Equity Task Group that evaluated the impact of evidence syntheses and recommendations on the health and socioeconomic disparities arising from or exacerbated by the pandemic.


Figure from McCaul et al. describing the efforts of COVID-END

The goal of the project, in the authors' words, was to support the "evidence supply side" by promoting already available resources and work led by institutions across the globe, both for those involved in evidence synthesis and for those formulating recommendations based on that evidence. To avoid duplication of effort, for instance, guideline developers are urged to first search for existing high-quality, up-to-date guidelines before beginning work on new recommendations. The development and use of living systematic reviews, which are continually updated as new evidence becomes available, is also highlighted as a way to improve the timeliness of evidence syntheses while reducing the effort put into new projects.

McCaul, M., Tovey, D., Young, T., et al. (2022). Resources supporting trustworthy, rapid and equitable evidence synthesis and guideline development: Results from the COVID-19 Evidence Network to support Decision-making (COVID-END). J Clin Epidemiol 151: 88-95. Manuscript available at publisher's website here. 


Monday, October 24, 2022

8 Steps Toward Incorporating Equity in Rapid Reviews and Guidelines

The mass mobilization of evidence synthesis ushered in by the COVID-19 pandemic brought with it the need to synthesize evidence and disseminate results as rapidly as possible. When formulating guidelines based upon rapid reviews, the impact of decisions and policies on equity should be considered. In a newly published paper, Dewidar and colleagues provide specific steps for incorporating stakeholders and improving the consideration of equity in the context of rapid guidelines.

The project was part of work conducted by the Equity Task Force of the global COVID-19 Evidence Network to support Decision-making (COVID-END). The team was diverse in terms of gender (70% women), region (17% from low- and middle-income countries), and career stage (40% early career). The resulting guidance was created in line with the steps outlined in the Cochrane Handbook's chapter on equity and followed the PRISMA-Equity (PRISMA-E) extension for reporting. The team then identified published systematic reviews related to COVID-19 that focused on populations experiencing inequities as categorized by the PROGRESS-Plus framework - for instance, by Place of Residence (health systems in rural areas and their preparedness for outbreaks), Education (the impact of educational attainment on adherence to COVID-19 health guidelines), and Disability (the impact of COVID-19 on those with disabilities) - to provide examples of how review authors can incorporate equity into their own reviews.

The authors conclude that greater involvement of diverse stakeholders can encourage the consideration of more diverse social factors in the development and interpretation of systematic reviews and resulting guidelines and policies. Rapid reviews also benefit from having a translation plan that includes methods for disseminating findings in a way that is consistent with the goal of reducing inequities. 

Dewidar, O., Kawala, B.A., Antequera, A., et al. (2022). Methodological guidance for incorporating equity when informing rapid-policy and guideline development. J Clin Epidemiol 150: 142-153. Manuscript available at the publisher's website here. 


Tuesday, October 18, 2022

U.S. GRADE Network Describes Experience Moving to All-Virtual Workshop Format

On March 4-6, 2020, the U.S. GRADE Network held an in-person workshop in Phoenix, Arizona, much like the 11 workshops that had come before it. Participants and facilitators enjoyed a taco buffet bar together at the first night's reception, sat together in small and large rooms to learn and collaborate, and mingled over coffee and pastries during breaks.

One week later, the World Health Organization announced that COVID-19 had reached pandemic proportions. 

Over that summer, the USGN took its workshops online, hosting three consecutive fully virtual workshops in October 2020, May 2021, and October 2021. While some changes were made (the addition of multiple 45-60 minute breaks, for instance, to accommodate mealtimes across time zones), much of what lies at the heart of a GRADE workshop remained: a three-day format including plenary lectures covering everything from PICOs to recommendations, presentations by Evidence Foundation scholars, and small-group, hands-on experiential learning opportunities.

The USGN's shift to an all-virtual setting, with its challenges as well as its opportunities for growth, is presented in a new paper by Siedler and colleagues published online in BMJ Evidence-Based Medicine. Using routine feedback survey data collected at both in-person and virtual workshops, the authors (all GRADE workshop facilitators) found that...

  • Perceived understanding of GRADE improved to the same extent in virtual and in-person formats,
  • At least half of attendees (54-62%) indicated that the virtual format was important for their ability to attend, and
  • Participants in both formats indicated a high degree of workshop satisfaction and perceived educational value, with similar ratings for the knowledgeability of speakers, the value of plenary sessions, and the helpfulness of small-group sessions.

The major takeaway from the USGN's experience in an all-virtual format is that, based upon positive feedback and the ability to reach a global audience of learners, the Network will continue to offer learning opportunities in a virtual setting this year and beyond.

In fact, the next all-virtual workshop will take place November 30-December 2, 2022, and registration is now open at www.gradeconf.org.

Siedler MR, Murad MH, Morgan RL, et al. (2022). Proof of concept: All-virtual guideline development workshops using GRADE during the COVID-19 pandemic. BMJ Evidence-Based Medicine (online before print). Manuscript available from publisher's website here.


Monday, October 17, 2022

Use of an Evidence-to-Decision Framework is Associated with Better Reporting, More Thorough Consideration of Recommendations

In the guideline development process, a panel should use a defined framework to consider multiple aspects of a clinical decision, including but not limited to the certainty of the underlying evidence, the potential impact on resource use, and variability in the values and preferences of patients and other stakeholders. Such frameworks include the GRADE Evidence-to-Decision (EtD) format as well as others such as the "decision-making triangle" and the Guidance for Priority-Setting in Health care (GPS Health).

To better understand the prevalence and use of these various frameworks within guidelines, Meneses-Echavez and colleagues systematically searched for guidelines and related guideline production manuals published between 2003 and May 2020. Items were screened and extracted by two independent authors, and a total of 68 full-text documents were included and analyzed.

Of these documents, most (93%) reported using a structured framework to assess the certainty of evidence, with about half using GRADE (53%) or systems adapted from GRADE (10%). Similarly, 88% of documents reported using a framework to rate the strength of recommendations, with about half (51%) using the GRADE approach. However, only about two-thirds (66%) of the included documents explicitly stated the process for formulating the resulting recommendations.

Finally, the GRADE framework was the most commonly used for going from evidence to decisions, being cited in 42% of the included articles; other reported frameworks included NICE (8%), SIGN (8%), and USPSTF (4%). Articles using the GRADE EtD framework reported considering more criteria than those using alternative approaches. The most commonly used criteria across documents were desirable effects (72%), undesirable effects (73%), and the certainty of evidence of effects (73%); the least commonly applied were acceptability (28%), certainty of the evidence of required resources (25%), and equity (16%).



The use of any EtD framework was associated with a greater likelihood of incorporating perspectives (odds ratio: 2.8; CI: 0.6 to 13.8) and subgroup considerations (odds ratio: 7.2; CI: 0.9 to 57.9) - albeit with wide confidence intervals crossing the null - as was the use of GRADE compared to other EtD frameworks (odds ratios: 1.4 and 8.4). These differences also extended to whether justifications were reported for each judgment and whether notes were included on implementing, monitoring, and evaluating recommendations.

The authors conclude that guidance documents stand to benefit from the more explicit reporting of how recommendations are formulated, from the initial grading of the certainty of underlying evidence to the consideration of how recommendations will affect various criteria such as resource use and equity. These changes, in the words of the authors, may "enhance transparency and credibility, enabling end users to determine how much confidence they can have in the recommendations; facilitate later adaptation to contexts other than the ones in which they were originally developed; and improve usability and communicability of the EtD frameworks." 

Meneses-Echavez JF, Bidonde J, Yepes-Nuñez JJ, et al. (2022). Evidence to decision frameworks enabled structured and explicit development of healthcare recommendations. J Clin Epidemiol 150: 51-62. Manuscript available at publisher's website here.


Tuesday, September 27, 2022

Only One-Third of a Sample of RCTs Had Made Protocols Publicly Available, New Report Finds

Earlier this year, a study in PLoS Medicine found that nearly one-third (30%) of a sample of randomized controlled trials (RCTs) had been discontinued prematurely, a number that had not improved over the previous decade. Furthermore, for every 10% increase in adherence to SPIRIT protocol reporting guidelines, RCTs were 29% less likely to go unpublished (OR: 0.71; 95% confidence interval: 0.55 to 0.92), and only about 1 in every 5 unpublished trials had made results available in a registry.

Now, in this month's issue of Journal of Clinical Epidemiology, Schönenberger and colleagues have released a study of the availability of RCT protocols from a sample of published works.

Public availability of study protocols, the authors argue, improves research quality by promoting thoughtfulness in methodological design, reducing selective outcomes reporting or "cherry-picking," and reducing the misreporting of results while promoting ethical compliance. This is especially the case when trial protocols are made available before the publication of study results. 

From a random sample of RCTs approved by ethics committees in Switzerland, Germany, Canada, and the United Kingdom in 2012, the authors examined the proportion of studies that had publicly available protocols and the nature of how the protocols were cited and disseminated. Of the resulting 326 RCTs, 118 (36.2%) had publicly available protocols. Of the protocols, nearly half (47.5%) were available as standalone peer-reviewed publications while 40.7% were available as supplementary material with the published results. A smaller proportion (10.2%) of protocols were available on a trial registry. 

Studies with a sample size of >500 or that were investigator-sponsored (i.e., non-industry) were more likely to have publicly available protocols. The nature of the intervention (drug versus non-drug) did not appear to affect protocol availability, nor did whether the trial was conducted in a multicenter or single-center setting. The majority (91.8%) of protocols were made available after the enrollment of the first patient, and just 2.7% were made available after publication of the trial results. Protocols were commonly published shortly before the trial results, at a median of 90% of the time elapsed between the start of the trial and its publication.

As this sample comprised only RCTs approved in 2012 and in relatively high-income countries, it is unclear whether public protocol availability has improved over time or differs in other global regions. However, the authors argue, these numbers lend credence to the need for efforts to improve the public availability of RCT protocols, such as through trial registries or requirements by publishing or funding bodies.

Schönenberger, C.M., Griessbach, A., Heravi, A.T., et al. (2022). A meta-research study of randomized controlled trials found infrequent and delayed availability of protocols. J Clin Epidemiol 149: 45-52. Manuscript available at publisher's website here.


Wednesday, September 21, 2022

Second USGN Systematic Review Workshop Features a Record Nine Scholars From Around the Globe

Earlier this month, the U.S. GRADE Network held its second two-day, comprehensive, all-virtual systematic review workshop. The workshop allowed participants to learn about each step of the systematic review process, from designing a search strategy to meta-analysis to preparing a manuscript for publication. True to USGN style, it consisted of a mixture of large-group lectures and smaller-group experiential learning components in which participants received a tutorial on Rayyan (a free online screening tool), assessed studies for risk of bias, and created risk-of-bias figures and forest plots in Cochrane's Review Manager software.

Uniquely, this workshop featured nine Evidence Foundation scholars who attended the workshop free of charge. As part of their applications for this scholarship, these participants described a proposed or current systematic review project related to health care, with a preference given to projects aimed at addressing inequities or with a focus on underserved populations. The nine accepted applicants provided a diverse array of exciting projects - from HIV pre-exposure prophylaxis to interventions for rabies control to weight-bearing exercise in pregnant patients - and hailed from across the globe, from Canada and Benin to Turkey, Syria, Dubai, and Kazakhstan.


Be the first to hear about these and other trainings @USGRADEnet on twitter or at www.systematicreview.org.

Note: applications for scholarships to attend the upcoming GRADE Guideline Development Workshop, held virtually November 30 - December 2, 2022, close September 30. See application details here.


Standardized Mean Difference Estimates Can Vary Widely Depending on the Methods Used, New Review Finds

In meta-analyses of outcomes measured on multiple different scales, a standardized mean difference (SMD) may be used. Randomized controlled trials may also use SMDs to help readers interpret the size of an effect. Most commonly, the SMD reports the effect size as Cohen's d, a metric of how many standard deviations are contained in the mean difference within or between groups (e.g., an intervention caused the outcome to increase or decrease by x standard deviations, or the two groups differed by x standard deviations with regard to the outcome). This is typically calculated by dividing the difference between groups, or from pretest to posttest in a single group, by some form of standard deviation (e.g., the pooled standard deviation at baseline or posttest, or the standard deviation of change scores). Cohen's d is often utilized because a general rule of interpretation has been suggested: 0.2 is a small effect, 0.5 is a medium-sized effect, and 0.8 is large.
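
To make this concrete, here is a minimal sketch in Python (our illustration with invented numbers, not data from any particular trial) showing how the same between-group difference yields different SMDs depending on which standard deviation serves as the denominator:

```python
import math

def pooled_sd(sd1, n1, sd2, n2):
    """Pooled standard deviation of two independent groups."""
    return math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))

# Hypothetical trial: a 5-point mean difference between two groups of 50
mean_diff = 5.0
sd_baseline = pooled_sd(12.0, 50, 11.0, 50)  # pooled SD of baseline scores
sd_posttest = pooled_sd(9.0, 50, 10.0, 50)   # pooled SD of posttest scores
sd_change = 6.0                              # SD of change scores

for label, sd in [("baseline SD", sd_baseline),
                  ("posttest SD", sd_posttest),
                  ("change-score SD", sd_change)]:
    print(f"SMD using {label}: {mean_diff / sd:.2f}")
# -> 0.43, 0.53, and 0.83: the same trial reads as "small" or "large"
#    under Cohen's rule of thumb depending on the denominator chosen.
```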

However, there are multiple ways to calculate an SMD, and these may result in varying interpretations of the size of an effect. To investigate this, Luo and colleagues recently published a review of 161 randomized controlled trials published since 2000 that reported outcomes with some form of SMD. Using the reported data, the authors recalculated potential between-group SMDs for each trial with up to seven different methodological approaches.

Some studies reported more than one type of SMD, meaning that 171 SMDs were reported across the 161 studies. Of these, 34 (19.9%) did not describe the chosen method at all, 84 (49.1%) reported it in insufficient detail, and 53 (31%) reported the approach in sufficient detail. A confidence interval was reported for only 52 (30.4%) of the SMDs. Of the 161 individual articles, the rule used for interpretation was clearly stated in only 28 (17.4%).

The most common method of calculating the SMD was using the standard deviation of baseline scores, seen in 70 (40.9%) of the reported SMDs, while 30 (17.5%) used posttest standard deviations and 43 (25.1%) used the standard deviation of change scores.

Figure displaying the variability of SMD estimates across 161 included studies.

Across all of the potential ways to calculate the SMD, the estimate for the median article varied by 0.3 - potentially the difference between a "small" and a "medium" effect, or between a "medium" and a "large" one, using Cohen's suggested rule of thumb. The studies with the largest variation tended to have smaller sample sizes and larger reported effect sizes.

This work raises an important point: while no one method for calculating SMDs is considered superior to the others, if the approach is not prespecified, researchers could try different methods until the most impressive effect size is reached. To help prevent this, the authors suggest prespecifying the analytical approach and reporting SMDs together with raw mean differences and standard deviations to aid interpretation and provide context.

Luo, Y., Funada, S., Yoshida, K., et al. (2022). Large variation existed in standardized mean difference estimates using different calculation methods in clinical trials. J Clin Epidemiol 149: 89-97. Manuscript available at the publisher's website here.


Thursday, September 8, 2022

New Study Sheds Light on the Perks of Copublication of Cochrane Systematic Reviews

Since 1994, Cochrane has allowed the publication of certain systematic reviews to extend beyond the eponymous Database of Systematic Reviews and into the pages of a medical specialty journal with the hopes of increasing dissemination. Often, these copublished reviews will include an abridged version of the Cochrane review as well as commentary or other additional features explaining the review and its findings.



A new retrospective cohort study published in this month's issue of the Journal of Clinical Epidemiology highlights the benefits of such an approach. In brief, Zhu and colleagues investigated the citation rates of Cochrane systematic reviews that had been copublished in a second journal versus those published only in the Database. Using a 2:1 ratio, the authors matched each review copublished in an indexed journal holding a copublication agreement with Cochrane to two randomly selected noncopublished reviews.

Out of the resulting sample of 101 copublished and 202 noncopublished reviews, the median number of citations over the first five years since publication was higher for reviews that had been copublished (71 versus 32.5, or approximately 118% higher in the copublished reviews). The median was also higher for copublished reviews in the first, second, third, and fifth years specifically. In 19% of journals, the copublication of a review was followed by an increase in impact factor over the following year; this was true for 27.3% of journals during the second year. There was no clear trend over time in the rate of copublication across journals, though the total number of Cochrane reviews published during the same period generally increased.

Zhu, L., Zhang, Y., Yang, R., et al. (2022). Copublication improved the dissemination of Cochrane reviews and benefited copublishing journals: A retrospective cohort study. J Clin Epidemiol 149:110-117. Manuscript available at publisher's website here. 


Thursday, September 1, 2022

When is Imprecision "Very Serious" for a Systematic Review? New GRADE Guidance for Rating Down Two Levels Using a Minimally Contextualized Approach

Within the GRADE framework, the assessment of imprecision of an effect estimate relies on two major aspects: Optimal Information Size (OIS), which considers whether the sample size of the pooled estimate has adequate power to detect a minimally important difference, and the confidence interval (CI) approach, which requires inspecting the width of the 95% confidence interval and determining whether it crosses a meaningful threshold of effect (for instance, ranges from a null or trivial effect on one end to a potentially meaningful effect on the other).
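
For a concrete sense of the OIS, a rough sketch of the conventional power calculation for a continuous outcome follows (our illustration with assumed inputs; the guidance itself does not prescribe code):

```python
import math
from statistics import NormalDist

def ois_continuous(mid, sd, alpha=0.05, power=0.80):
    """Total sample size (two equal arms) that a single adequately
    powered trial would need to detect a minimally important
    difference `mid`, given an outcome standard deviation `sd`."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # ~1.96 for a two-sided 5% test
    z_beta = z.inv_cdf(power)           # ~0.84 for 80% power
    n_per_arm = 2 * ((z_alpha + z_beta) * sd / mid) ** 2
    return 2 * math.ceil(n_per_arm)

# e.g., a minimally important difference of 5 points with an SD of 12
print(ois_continuous(mid=5, sd=12))  # -> 182 participants in total
```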

GRADE users applying the framework to assess the certainty of evidence in the context of a systematic review, rather than a practice guideline, have in previous guidance been instructed to prioritize the OIS over the CI approach, which typically requires some judgment about a meaningful effect threshold, or "contextualization." However, because systematic review authors and readers can also benefit from the application of effect thresholds when assessing imprecision, new guidance has been published to help systematic reviewers apply a "minimally contextualized approach" and consider both criteria when deciding whether to rate down by one, two, or three levels.




In addition to providing multiple examples in which rating down by two levels based on the span of the CI may be warranted, the paper also suggests circumstances in which systematic reviewers may rate down by two levels in the absence of concerns about the CI but in the presence of a particularly small information size. Specifically, when the CI does not overlap with threshold(s) of interest and the effect is sufficiently large, authors may still consider rating down for imprecision if the OIS (the sample size needed to detect an effect within one adequately powered trial as determined by a conventional power analysis) is not met. 

For dichotomous outcomes, in this case, one may consider rating down by two levels immediately if the ratio of the upper to the lower boundary of the CI is higher than 2.5 for odds ratios or 3.0 for relative risks. If these criteria are not fulfilled, authors should still compare the pooled sample size to the determined OIS. For continuous outcomes, a total sample size that does not exceed 30-50% of the OIS is considered reason to rate down by two levels to "very serious" imprecision.

Finally, if a CI is so wide that it causes authors to be very uncertain about any estimate of effect, it may be acceptable to rate down by three levels for imprecision.
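
Pulling the dichotomous-outcome thresholds above into one place, a minimal sketch of the decision logic might look like the following (our paraphrase of the guidance, not an official GRADE tool):

```python
def levels_to_rate_down(ci_lower, ci_upper, effect="OR",
                        pooled_n=None, ois=None):
    """Suggested levels to rate down for imprecision (0, 1, or 2) for a
    dichotomous outcome whose CI does not cross the threshold(s) of
    interest; assumes 0 < ci_lower <= ci_upper."""
    ratio_limit = 2.5 if effect == "OR" else 3.0  # 3.0 for relative risks
    if ci_upper / ci_lower > ratio_limit:
        return 2  # very serious imprecision
    if pooled_n is not None and ois is not None and pooled_n < ois:
        return 1  # CI ratio acceptable, but OIS still not met
    return 0

# OR 2.1 (95% CI 1.2 to 3.6): ratio 3.0 > 2.5 -> rate down two levels
print(levels_to_rate_down(1.2, 3.6, effect="OR"))  # 2
```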

Zeng, L., Brignardello-Petersen, R., Hultcrantz, M., et al. (2022). GRADE Guidance 34: update on rating imprecision using a minimally contextualized approach. J Clin Epidemiol, online ahead of print. Manuscript available at the publisher's website here.


Wednesday, August 3, 2022

Spring 2022 Scholars Discuss Developments in Diagnostic and Environmental Health Evidence

The USGN's 16th GRADE Guideline Development Workshop, held in Chicago, was the first to be held in-person since March of 2020. In classic USGN style, participants enjoyed vibrant conversation, hours of learning, and delicious yogurt parfaits and strong coffee during morning breaks.

Two participants joined the fun and learning as part of the Evidence Foundation scholarship program, presenting to fellow attendees about their current projects related to evidence synthesis and guideline development. 

Spring 2022 Evidence Foundation scholars Kapeena Sivakumaran and Ibrahim El Mikati, center, pose for a photo between sessions in Chicago with the U.S. GRADE Network faculty (from left to right: Reem Mustafa, Philipp Dahm, Shahnaz Sultan, Yngve Falck-Ytter, Rebecca Morgan, and Hassan Murad).

Ibrahim El Mikati, a post-doctoral research fellow in the Outcomes and Implementation Research Unit at the University of Kansas Medical Center, discussed his project helping to develop guidance for judging imprecision in diagnostic evidence. This approach will utilize thresholds for confidence intervals and will also introduce the concept of optimal information sizes for assessing imprecision in the context of diagnostic guidelines.

"One thing that the GRADE workshop has helped me appreciate is transparency," said Ibrahim. "Having a transparent explanation of judgments provides users with trustworthy guidelines."

Kapeena Sivakumaran is currently leading two systematic reviews for Health Canada related to the impact of noise exposure and sleep disturbance on health outcomes. Challenges of these projects include a focus on short-term outcomes in the relevant literature as well as the need to incorporate multiple evidence streams, such as mechanistic data that can be interpreted in conjunction with observational evidence. 

"The workshop provided me with valuable insight into guideline development and using the GRADE approach to assess the evidence," said Kapeena. "One new thing I learned from the workshop was how automation and [artificial intelligence] can be integrated into the process of living systematic reviews to support guideline development."

Note: applications for scholarships to attend our upcoming systematic review and guideline development workshops, held virtually, close August 12th and September 30th, 2022, respectively. See application details here.


Tuesday, May 31, 2022

What is a Scoping Review, Exactly? JBI Provides a Formal Definition in New Publication

A scoping review, by any other name, would be as broad...

Multiple definitions of "scoping review" have been used in the literature and, according to a new paper by Munn and colleagues, the use of these reviews is increasingly common. Therefore, the Joanna Briggs Institute recently released a definition of the term as well as guidance on its proper application in evidence synthesis.

The paper, published in April, formally defines a scoping review as "a type of evidence synthesis that aims to systematically identify and map the breadth of evidence available on a particular topic, field, concept, or issue, often irrespective of source (ie, primary research, reviews, non-empirical evidence) within or across particular contexts." Further, the paper details, scoping reviews "can clarify key concepts/definitions in the literature and identify key characteristics or factors related to a concept, including those related to methodological research."


Scoping reviews are similar to other types of articles within the broader family of evidence syntheses, and should ideally include important characteristics such as the use of pre-specified protocols, question(s), and inclusion/exclusion criteria; a comprehensive search; more than one author; and adherence to guidelines such as the PRISMA statement.

Because the main purpose of a scoping review is to explore and describe the breadth of evidence on a topic, pre-specified questions are usually broader in scope, and the evidence base often includes multiple types of evidence based on what is available. Beyond simply mapping the existing literature, however, scoping reviews may also be used to identify or clarify key concepts used in a field or to examine how research is typically conducted in an area.

Munn, Z., Pollock, D., Khalil, H., et al. (2022). What are scoping reviews? Providing a formal definition of scoping reviews as a type of evidence synthesis. JBI Evid Synth 20(4): 950-952. Manuscript available at publisher's website here.


Wednesday, May 18, 2022

New Study of Randomized Trial Protocols Highlights Prevalence of Early Discontinuation, Importance of Reporting Adherence

Registration of clinical trials was first introduced as a way to combat publication bias and avoid the duplication of efforts in medical research. Since then, registration has been deemed a requirement for publication by the International Committee of Medical Journal Editors (ICMJE) and under various federal laws. However, the mere registration of a clinical trial guarantees neither its ultimate completion nor its publication.

In a new article published last month in PLoS Medicine, Speich and colleagues set out to better understand the prevalence and impact of non-registration, early trial discontinuation, and non-publication within the current landscape of medical research.

An update of previous findings published in 2014, the study examined 360 protocols for randomized controlled trials (RCTs) approved by Research Ethics Committees (RECs) based in Switzerland, the United Kingdom, Germany, and Canada. Of these, 326 were eligible for further analysis. The team collected data on whether the trials had been registered, whether sample sizes were planned and achieved, and whether study results were available or published. The team also assessed each of the RCTs for protocol reporting quality based on the Standard Protocol Items: Recommendations for Intervention Trials (SPIRIT) guidelines. 

Overall, the included RCTs had a median planned sample size of 250 participants and met a median of 69% of SPIRIT guideline items. Just over half (55%) were sponsored by industry. The large majority (94%) were registered, though 10% were registered retrospectively. About half (53%) reported results in a registry, whereas most (79%) had published results in a peer-reviewed journal. However, adherence to reporting guidelines did not appear to affect the rate of trial discontinuation.

About one-third (30%) of RCTs had been prematurely discontinued, indicating no change since the previous investigation of RCT protocols approved between 2000 and 2003 (28%). Most commonly, trials in the current investigation were discontinued due to preventable reasons such as poor recruitment (37%), organizational/strategic issues (6%), and limited resources (1%). A smaller proportion were discontinued due to less preventable reasons such as futility (16%), harm (6%), benefit (3%), or external evidence (3%).

For every 10% increase in adherence to SPIRIT protocol reporting guidelines, RCTs were 29% less likely to go unpublished (OR: 0.71; 95% confidence interval: 0.55 to 0.92), and only about 1 in every 5 unpublished trials had made results available in a registry.
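
A quick note on interpreting that odds ratio: because it is expressed per 10-percentage-point increase in adherence, larger increases compound multiplicatively, assuming (as the underlying regression does) a linear relationship on the log-odds scale. For example:

```python
# The 0.71 estimate is from the paper; the extrapolation is ours.
or_per_10_points = 0.71
for points in (10, 20, 30):
    odds_ratio = or_per_10_points ** (points / 10)
    print(f"{points}-point rise in SPIRIT adherence -> "
          f"OR {odds_ratio:.2f} for remaining unpublished")
# 10 -> 0.71, 20 -> 0.50, 30 -> 0.36
```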

The authors suggest that these findings underscore the need for investigators to report the results of trials in registries as well as in peer-reviewed publications. Furthermore, future research may assess the utility of feasibility or pilot studies in reducing the rate of trial discontinuation due to recruitment issues. Journals can also make trial registration a condition of publication.

Speich, B., Gryaznov, D., Busse, J.W., Lohner, S., Klatte, K., Heravi, A.T., ... & Briel, M. (2022). Nonregistration, discontinuation, and nonpublication of randomized trials: A repeated metaresearch analysis. PLoS Medicine 19(4): e1003980. https://doi.org/10.1371/journal.pmed.1003980. Manuscript available at the publisher's website here.



Thursday, May 12, 2022

Evidence Foundation Scholar Update: Reena Ragala's "Guideline Development Bootcamp"

As one of the Evidence Foundation's fall 2021 scholars, Reena Ragala attended the most recent GRADE guideline development workshop, held virtually. As part of her application, Reena discussed her current project to develop a clinical guideline development "bootcamp" within her new position at Medical University of South Carolina (MUSC) Health, and presented this project to her fellow participants during the workshop.

Below, Reena provides an update on her exciting work.


"I work in MUSC Health’s Value Institute as an evidence-based practice analyst. 

"Our GRADE bootcamp training is 'in progress.' The presentation content and audio are being finalized for the target audience (MUSC rural health network care team members). We have also decided to expand the target audience to include any subject matter expert that serves on our clinical practice guideline (CPG) workgroups, allowing all clinicians (MD, RN, therapist, SW, etc.) the opportunity to become more confident about what GRADE is, why it is used in decision-making at MUSC Health, and how evidence-based decisions are made using the GRADE methodology. Once the training module is recorded, we will begin sending it out as 'homework' for all subject matter experts in advance of each new CPG kickoff meeting. The training module will also be uploaded into our new education platform which goes live in November 2022.


"The dissemination of this training program has been delayed to November 2022 due to unexpected systemwide changes to our education platform. In addition, nursing shortages and COVID-related high census have limited our ability to get the necessary approvals for training that is not directly related to patient care or patient safety.   


"Since attending the GRADE workshop, I have also worked with colleagues to update the formatting of our evidence brief template. We adopted the 'Summary of Findings Table' and expanded the details of evidence appraisal based on the GRADE criteria, allowing the end users to more appropriately interpret the recommendations for clinical practice."

Friday, May 6, 2022

Summarizing Patient Values and Preferences Data to Better Inform Recommendations

The consideration of patients' values and preferences during the formulation of clinical recommendations requires that guideline developers have an understanding of how patients and other stakeholders weigh the potential desirable and undesirable effects of any given intervention against one another. This consideration of the Relative Importance of Outcomes (RIO) is crucial for developing clinical recommendations that are most relevant to the providers and patients who will be using them. But how can we ensure that guideline developers have a thorough understanding of these considerations when going from evidence to decisions?

In a new paper to be published in the July issue of Journal of Clinical Epidemiology, Zhang and colleagues developed and tested a standardized summary of findings table that presents the RIO evidence on a given clinical decision in order to better inform the development of recommendations while keeping the data on patients' values and preferences top-of-mind.

Figure 1 of the paper provides the route map of the table's user testing process

The methods included four rounds of user testing comprising semi-structured interviews with clinical researchers and guideline developers. Guided by Morville's Honeycomb Model, the authors aimed to assess the usability, credibility, usefulness, desirability, findability, and value of the table while addressing identified issues. Overall, 20 individuals participated, 19 of whom had experience in guideline development and all of whom had experience with Summary of Findings tables.

In terms of the table's usability, participants had problems interpreting and understanding health utility values; the introduction of a visual analogue scale (VAS) improved this. The combination of quantitative and qualitative evidence when considering RIOs, along with the presentation of the variability surrounding given estimates, was another source of confusion. Overall, however, participants found the table useful, valuable, and easy to navigate.


Zhang, Y, Li, S-A, Yepes-Nuñez, J.J., Morgan, R.L., Pardo-Hernandez, H., Alonso Coello, P., Ren, M., ... & Schünemann, H.J. (2022). GRADE summary of findings tables enhanced understanding of values and preferences evidence. J Clin Epidemiol 147: 60-68. Manuscript available at the publisher's website here.


Monday, April 11, 2022

It's Alive! Pt. IV: Results from a Trainee Living Systematic Review Experience

Living systematic reviews (LSRs) continue to be a topic of interest among systematic review and guideline developers, as evidenced by our previous posts on the topic here, here, and here. While automation and machine learning have begun to facilitate the generally time- and resource-intensive process of keeping evidence syntheses perpetually up-to-date, some aspects of LSR development still require the human touch. Now, a recently published mixed-methods study discusses the successes and challenges of utilizing a crowdsourcing approach to keep the LSR wheels turning.

The article describes the process of involving trainees in the development of a living systematic review and network meta-analysis (NMA) on drug therapy for rheumatoid arthritis. In their report, the authors posit that evidence-based medicine is a key pillar of learning for trainees, but that they may learn better through an experiential rather than a purely didactic approach; providing the opportunity to participate in a real-life systematic review may provide this experiential learning. 

In short, the team first applied machine learning to an initial database of records to filter for randomized controlled trials; these records were then further assessed through Cochrane Crowd, a crowdsourcing platform. Next, trainees ranging from undergraduate students to practicing rheumatologists and researchers, recruited through Canadian and Australian rheumatology mailing lists, assessed articles for eligibility and extracted data from included articles.

Training included a mix of online webinars, one-on-one sessions, and a provided handbook. Conflicting judgments were further assessed by an expert member of the team. The authors then elicited both quantitative and qualitative feedback about the trainees' experiences of taking part in the project through a combination of an electronic survey and one-on-one interviews.

Overall, the 21 trainees surveyed rated their training as adequate and their experience as generally positive. Respondents specifically listed a better understanding of PICO criteria, familiarity with outcome measures used in rheumatology, and the assessment of studies' risk of bias as the greatest learning benefits obtained.

Of the 16 who participated in follow-up interviews, the majority (94%) described a practical and enjoyable experience. Of particular positive regard was the use of task segmentation throughout the project, during which specific tasks (i.e., eligibility assessment versus data extraction) could be "batch-processed," allowing trainees to match the specific time and focus demands to the selected task at hand. Trainees also communicated an appreciation for the international collaboration involved in the review as well as the feeling of meaningfully contributing to the project. 

Notable challenges included issues related to the clarity of communication regarding deadlines and expectations, as well as technical glitches experienced through the platforms used for screening and extraction. Though task segmentation was seen as a benefit, it also included drawbacks: namely, the risk of more repetitive tasks such as eligibility assessment becoming tedious while others that require more focus (i.e., data extraction) may be difficult to integrate into an already-busy daily schedule. To address these issues, the authors suggest improving communications to include regular, frequent updates and deadline reminders, working through technological glitches, and carefully matching tasks to the specific skillsets and availabilities of each trainee.

Lee, C., Thomas, M., Ejaredar, M., Kassam, A., Whittle, S.L., Buchbinder, R., ... & Hazlewood, G.S. (2022). Crowdsourcing trainees in a living systematic review provided valuable experiential learning opportunities: A mixed-methods study. J Clin Epidemiol (in-press). Manuscript available at the publisher's website here.


Wednesday, March 30, 2022

A New Template for Standardized Wording when Reporting Evidence-to-Decision Considerations in Guidelines

One of the major tenets of GRADE is that certainty of the evidence is just one component of decision-making. Ultimately, decision-makers also need to take into account important factors such as values and preferences, feasibility, and considerations of the impact of a decision on health equity and resource utilization. These factors and others are part of the Evidence-to-Decision (EtD) framework that guides the process from the assessment of certainty of evidence to the final formulation of recommendations in a structured, transparent manner.

Often, multiple teams and individuals involved in the development of a guideline must work together to complete the EtD process, which can be a source of confusion. Additionally, until now, no official guidance existed on the use of standardized wording when considering and reporting each EtD framework component. Earlier this year, Piggott and colleagues aimed to address this issue with an article published in the Journal of Clinical Epidemiology.



The project, comprising ten guideline development groups and over 250 recommendations, set out to develop a standardized framework for clear, transparent, and efficient wording when reporting Evidence-to-Decision components within a guideline. This template was then used in two guidelines in development - the European Commission Initiative on Breast Cancer (ECIBC) and the Endocrine Society guidelines on hyperglycemia, hypoglycemia and hypercalcemia. During this process, the authors were able to pilot the wording, receive feedback, and refine the template. The real-life guidelines were also used to provide examples of wording recommendations.

The article includes suggested wording structure and examples for reporting the magnitude and certainty of effect estimates, for conclusions of each portion of the EtD framework, and for justification of recommendations as well as notes on implementation considerations, monitoring and evaluation, and research priorities. 

The authors note that these suggestions are preliminary and may require further refinement. Additionally, published examples of consistent and clear EtD wording continue to be lacking, though the dissemination of this guidance may improve future publications. While the suggestions within the article focus on clinical decisions related to the management of conditions, future efforts may expand them to guidelines for diagnostic testing, coverage, and other important areas.

Piggott, T., Baldeh, T., Dietl, B., Wiercioch, W., Nieuwlaat, R., Santesso, N., ... & Schünemann, H. (2022). Standardized wording to improve efficiency and clarity of GRADE EtD frameworks in health guidelines. J Clin Epidemiol (online ahead of print). Manuscript available at the publisher's website here.


Saturday, March 5, 2022

U.S. GRADE Network Holds Its First Two-Day Systematic Review Workshop

In response to popular demand, the U.S. GRADE Network recently expanded its half-day course on systematic reviews into a two-day virtual workshop. Taking place February 1-2, 2022, the sessions comprised six hours of instructional time and question-and-answer sessions with Network faculty in addition to three hours of hands-on activities in a small-group format. Large-group lectures ranged from developing a focused clinical question to conducting meta-analysis and evaluating the quality of systematic reviews. In hands-on sessions of ten participants each, attendees were introduced to a free online screening platform (Rayyan) and tried their hands at assessing risk of bias and conducting meta-analysis in RevMan, Cochrane's free systematic review and meta-analysis software.

As part of the program, the Evidence Foundation welcomed three recipients of scholarships to attend the workshop free of charge. As part of their applications, the scholars described a current or proposed project for a systematic review. The three recipients, along with their projects, included:
• Bryden Giving, MAOT, OTR/L (Boston University): A traffic light of evidence for occupational therapy interventions supporting autistic children and youth
• Nirjhar Ruth Ghosh, MS (Texas A&M University): Evidence-based practice in the field of nutrition: A systematic review of knowledge, skills, attitudes, behaviors and teaching strategies
• Milton A. Romero-Robles (Universidad Nacional del Santa): Participation, involvement and main barriers in the inclusion of patients with non-communicable diseases in the development of clinical practice guidelines: A systematic review protocol


Be the first to hear about these and other trainings @USGRADEnet on twitter or at www.systematicreview.org.

Note: applications for scholarships to attend the upcoming GRADE Guideline Development Workshop, beginning July 29, 2022, in Chicago, Illinois, close March 31. See application details here.


Friday, January 7, 2022

Guideline Development Resource Alert: the G-I-N Public and Patient Toolkit

One of the most common challenges in developing rigorous, high-quality guidelines is the inclusion of the patient and public perspective in the formulation of recommendations. In fact, in a recently published needs assessment of guideline developers worldwide, 81.5% answered with a 5 or greater on a 7-point Likert scale that incorporation of the patient voice was a relevant need for their organization.

Now, the Guidelines International Network (G-I-N) has launched a large-scale toolkit aimed at addressing commonly experienced issues related to patient and public involvement. The result of a combination of international experiences and best-practice examples, the toolkit is a one-stop shop spanning the systematic review and guideline development process, from conducting targeted consultation with the public to recruiting and supporting patient panel members to communicating recommendations to the public at large. As a "living resource," the toolkit will continue to expand and evolve as further information and experience are cultivated.


You can find the freely available toolkit here.