
Tuesday, October 6, 2020

New Study Examines the Impact of Abbreviated vs. Comprehensive Search Strategies on Resulting Effect Estimates

It's common practice - indeed, it's widely recommended - for systematic reviewers to search multiple databases in addition to alternative sources such as the grey literature to ensure that no relevant studies are left out of the analysis. However, meta-research on whether this theory holds up in practice is mainly limited to examinations of recall - in other words, reporting how many potentially relevant studies are picked up by an abbreviated search method as opposed to a more extensive one. What's missing from this body of research, write Ewald and colleagues in a newly published study, is that recall studies compare items retrieved in absolute terms without considering the final weight or importance of each individual study - variables that ultimately affect the direction, magnitude, and precision of the resulting effect estimate. Since larger studies with more cachet are likely to have the greatest impact on the final estimate and certainty of evidence - and these studies are more likely to be picked up even by an abbreviated search - the added value of more extensive search strategies for a meta-analysis remains unclear.

To examine the impact of the extensiveness of a search strategy on resulting findings and certainty of evidence, the authors randomly selected 60 Cochrane reviews from a range of disciplines for which certainty of evidence assessments and summaries of findings were available. Thirteen reviews did not report at least one binary outcome, leaving a total of 47 for analysis. The authors then replicated each review's original search strategy and conducted 14 abbreviated searches per review, such as limiting to a single database (e.g., MEDLINE only) or to a combination of two or three (e.g., MEDLINE and Embase). Finally, meta-analyses were replicated for each of these scenarios, leaving out studies that would not have been picked up by the various abbreviated search strategies.

Searching only one database led to a loss of at least one trial in half of the reviews, and a loss of two trials in one-quarter of them. As may be expected, the use of additional databases reduced the loss of information. Overall, however, the direction and significance of the resulting effect estimates remained unchanged in a majority of the cases, as shown in Figure 1 from the paper, below.


The use of abbreviated searches did, however, introduce some imprecision, typically inflating the standard error of the pooled estimate by a factor of roughly 1.02 to 1.06. Searching multiple databases rather than a single one did not clearly improve precision relative to the comprehensive search.
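The arithmetic behind this kind of comparison can be sketched with a toy fixed-effect (inverse-variance) meta-analysis. The trial effects and standard errors below are entirely hypothetical, chosen only to illustrate how dropping a small trial that an abbreviated search might miss shifts the pooled estimate slightly and inflates its standard error; this is not the authors' actual analysis.

```python
import math

def pool(effects, ses):
    """Inverse-variance fixed-effect pooled estimate and its standard error."""
    weights = [1 / se**2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled_se

# Hypothetical log odds ratios and standard errors for four trials;
# the last (smallest) trial is the one an abbreviated search would miss.
effects = [-0.40, -0.35, -0.50, -0.10]
ses     = [0.10, 0.15, 0.20, 0.40]

full_est, full_se = pool(effects, ses)          # comprehensive search
abbr_est, abbr_se = pool(effects[:3], ses[:3])  # abbreviated search

print(f"comprehensive: {full_est:.3f} (SE {full_se:.3f})")
print(f"abbreviated:   {abbr_est:.3f} (SE {abbr_se:.3f})")
print(f"SE inflation:  {abbr_se / full_se:.3f}-fold")
```

Because the missed trial carries little inverse-variance weight, the direction of the pooled estimate is unchanged and the standard error grows only slightly, mirroring the modest 1.02- to 1.06-fold inflation reported in the study.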

The authors note that these findings are particularly applicable to authors of rapid reviews and guidelines, for whom the trade-off between speed and thoroughness is of great importance. Rapid reviewers should be aware that limiting the search strategy may change the direction of an effect estimate or render it incalculable in up to one in seven instances, but this risk should be weighed against the benefit of faster dissemination of findings, especially during emergent health crises where time is of the essence.

Ewald, H., Klerings, I., Wagner, G., Heise, T.L., Dobrescu, A.I., Armijo-Olivo, S., ... & Hemkens, L.G. (2020). Abbreviated and comprehensive literature searches led to identical or very similar effect estimates: A meta-epidemiological study. J Clin Epidemiol 128:1-12.

Manuscript available from publisher's website here.  

Wednesday, August 26, 2020

Rapid, Up-to-Date Evidence Synthesis in the Time of COVID

In emergent situations with sparse and rapidly evolving bodies of research, evidence synthesis programs must be able to adapt to a shortened timeline to provide clinicians with the best available evidence for decision-making. (See our previous posts on rapid systematic review and guideline development, here, here, here, and here). But perhaps no health crisis in the modern era has made this more clear than the coronavirus disease 2019 (COVID-19) pandemic.

Recently, Murad and colleagues published a framework detailing a four-pillar program through which they have been able to synthesize evidence related to the COVID-19 pandemic. The system has been tried and tested at the Mayo Clinic, a multi-state academic medical center serving more than 1.2 million patients per year.

 

Launched within two weeks of the World Health Organization’s declaration of COVID-19 as a pandemic, Mayo Clinic’s evidence synthesis program consisted of four major components:

  • What is New?: an automatically generated list of COVID-19-related studies published within the last three days and categorized into topic areas such as diagnosis or prevention
  • Repository of Studies: a running list of previously published studies since the first case report of COVID-19, including those that move from the “What is New?” list after three days’ time
  • Rapid Reviews: reviews produced within three to four days in response to pressing clinical questions from frontline clinicians, drawing on the study repository. To speed synthesis, studies are often screened and selected by a single reviewer, and evidence is rarely meta-analyzed.
  • Repository of Reviews: a collection of reviews including those developed at Mayo and elsewhere, identified in twice-weekly searches and through a list of predetermined websites. To supplement knowledge, some reviews included indirect evidence borrowed from studies of other coronaviruses or respiratory infections, when appropriate.
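The first two pillars amount to a simple date-based triage: studies published within the last three days surface on the "What is New?" list, then roll into the running repository. A minimal sketch of that logic, using entirely hypothetical study records and a `triage` function invented here for illustration (the actual Mayo system is not described at this level of detail):

```python
from datetime import date, timedelta

# Hypothetical records; the real system categorizes studies automatically.
studies = [
    {"title": "RCT of drug X",  "published": date(2020, 4, 14), "topic": "treatment"},
    {"title": "PCR accuracy",   "published": date(2020, 4, 10), "topic": "diagnosis"},
    {"title": "Mask adherence", "published": date(2020, 4, 1),  "topic": "prevention"},
]

def triage(studies, today, window_days=3):
    """Split studies into the 'What is New?' list (published within the
    window) and the running repository (everything older)."""
    cutoff = today - timedelta(days=window_days)
    whats_new  = [s for s in studies if s["published"] >= cutoff]
    repository = [s for s in studies if s["published"] < cutoff]
    return whats_new, repository

new, repo = triage(studies, today=date(2020, 4, 15))
```

Re-running the triage each day naturally migrates items off the "What is New?" list after three days, as the framework describes.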

Within one month of the framework’s establishment, the team had conducted seven in-house rapid reviews and had indexed more than 100 newly published reviews into a database housing over 2,000 total.

The authors conclude that while an intensive system such as this may not be feasible in smaller health systems, cross-collaboration and sharing of knowledge can allow for informed and up-to-date clinical care that adapts in the face of a rapidly changing landscape of evidence.


Murad, M.H., Nayfeh, T., Suarez, M.U., Seisa, M.O., Abd-Rabu, R., Farah, M.H.E., ... & Saadi, S.M. (2020). A framework for evidence synthesis programs to respond to a pandemic. Mayo Clin Proc 95(7):1426-1429.


Manuscript available at the publisher's website here.

Wednesday, April 15, 2020

Rapid Guidelines in GRADE Pt. III: A checklist for rigorously rapid recommendations

In recent posts, we have introduced the concept of rapid recommendations as well as how developers of these recommendations at the World Health Organization (WHO) perceive facilitators and barriers to this process. This information was gathered as part of a published series on rapid guidance in 2018.

In the final part of the series, Morgan and colleagues propose an extension of the G-I-N/McMaster Checklist for Guideline Development aimed at those producing rapid guidelines. Comprising 21 discrete principles that align with the original Guideline Development Checklist, the extension is a tool for developers to take stock of the resources available for rapid guideline development and to identify areas in need of improvement. Important considerations include:
  • Make use of virtual meetings (Principle 7) and pre-meeting voting (Principle 15) to expedite the drafting of recommendations.
  • If possible, limit guideline panel composition to those without financial COIs; where that would exclude necessary topic expertise, transparently declare any modifications of existing COI policies for the topic at hand (Principle 9).
  • Limit the number of PICO questions (Principle 10) and assess only those outcomes deemed critical (Principle 11).
  • Consider ways to expedite the systematic review stage, such as updating existing reviews, developing rapid reviews, or tailoring search criteria to a smaller scope (Principle 13).
  • Arrange for external reviewers early in the process so that they can be deployed quickly once a draft is available for review (Principle 18).

The full checklist extension for rapid guidelines can be viewed here.

Morgan, R.L., Florez, I., Falavigna, M. et al. Development of rapid guidelines: 3. GIN-McMaster Guideline Development Checklist extension for rapid recommendations. Health Res Policy Sys 16, 63 (2018). https://doi.org/10.1186/s12961-018-0330-0

Manuscript available at the publisher's website here.

Friday, April 10, 2020

Rapid Guidelines in GRADE Pt. II: Rapid Recs in the Real World

In Part I of our series on rapid guidelines, we discussed the utility and terminology of rapid recommendations: those made in response to an urgent public health issue, with timeframes ranging from a few hours up to three months.

Who develops rapid guidelines?

In the first of a three-part series on rapid guidance published in 2018, Kowalski and colleagues conducted a systematic survey of the methodologies and processes of rapid guideline-producing organizations. Nomenclature used to identify these documents varied by organization, from “rapid advice guideline” to “interim guidance” to “short clinical guideline.” Quality as assessed with the AGREE II tool also varied: documents from the World Health Organization (WHO) and the National Institute for Health and Care Excellence (NICE) scored higher than those from the Centers for Disease Control and Prevention (CDC) or other smaller organizations. NICE guidelines, while of higher quality across the AGREE II domains, took substantially more time to develop than those from WHO.

It's important to note that while terminology differs between organizations, the word "interim" has been used to connote a response to an emergent public health issue with shorter time frames than a rapid guideline - typically on the order of 1-3 weeks. 

Nomenclature, AGREE II domain score ranges (lowest - highest), and development timelines (from each organization's manual), by organization:

  • WHO: rapid advice guideline; AGREE II domain scores 54 - 92; timeline 1-3 months
  • NICE: short clinical guideline; AGREE II domain scores 81 - 94; timeline 11-13 months
  • CDC: interim guidance; AGREE II domain scores 10 - 82; timeline not reported
  • Other: interim guidelines, interim position statement, clinical guidelines; AGREE II domain scores 21 - 67; timeline not reported

Common Challenges and Facilitators to Rapid Guideline Development

While both WHO and NICE reported using a systematic review of the evidence to inform recommendations, common gaps among developers included a lack of reporting on the management of conflicts of interest, on external review, and on the process for drafting recommendations.

In follow-up qualitative interviews with guideline-developing staff from WHO, participants cited a lack of adequate staffing, monetary resources, and evidence as key obstacles to the development of rapid guidelines. While the development of a systematic review is likely one of the more time-consuming elements of a rapid guideline process, most participants agreed that it is a fundamental part of developing trustworthy guidance that should not be skipped if possible. 

Participants also indicated that the external/peer review process can add unwanted time to the development of rapid guidelines. To this end, developers can consider limiting peer review to the final draft and constraining the extent to which reviewers can drastically change recommendations in ways that would require reconvening the guideline panel. Virtual conferencing technology was named as a facilitator of faster development by reducing the need for face-to-face meetings.

For a checklist to guide the development of rapid recommendations, see the G-I-N/McMaster checklist extension for rapid guidelines.


Kowalski, S.C., Morgan, R.L., Falavigna, M. et al. Development of rapid guidelines: 1. Systematic survey of current practices and methods. Health Res Policy Sys 16, 61 (2018).

Manuscript available at the publisher's website here.  

Florez, I.D., Morgan, R.L., Falavigna, M. et al. Development of rapid guidelines: 2. A qualitative study with WHO guideline developers. Health Res Policy Sys 16, 62 (2018).

Manuscript available at the publisher's website here.  


Monday, April 6, 2020

Rapid Guidelines in GRADE Pt. I: Needed Advice when Time is of the Essence

While most clinical practice guidelines take 2-3 years to develop and publish, the emergence of a public health crisis or urgent humanitarian need requires the dissemination of evidence-based guidance in a more rapid manner. To this end, several national and international guideline-producing organizations, such as the Centers for Disease Control and Prevention (CDC) and the World Health Organization (WHO), have developed processes for producing evidence-based guidance in these more urgent situations.

WHO’s 2006 recommendations for the pharmacological management of avian influenza in humans is one example of a rapidly developed guideline. Of current relevance, WHO has recently published interim guidance on the management of severe acute respiratory infection when novel coronavirus is suspected, and the UK’s National Institute for Health and Care Excellence (NICE) has also developed interim guidelines for the treatment of COVID-19 in patients receiving critical care, kidney dialysis, and systemic anticancer therapy. Because the situation is rapidly evolving and advice is needed immediately, the protocols used by NICE, WHO, and other organizations differ from those used for less urgent topics.

Can rapid guidelines use GRADE?

In short, yes. Recommendations can be made based on the transparent grading and reporting of the certainty of evidence that lie at the heart of GRADE, whether over a timeframe of hours, days, weeks, or months. The key word here is transparent: no matter the speed of development, recommendations should always be framed in terms of the certainty of evidence behind them, and judgments of the evidence should be clearly presented. In a 2016 paper on the use of GRADE to respond to health questions with different levels of urgency, Thayer and Schünemann provide terms for the various speeds of response, and considerations for the recommendations therein:
  • Ultra-short emergency response: 1 or more hours
  • Urgent response: 1-3 weeks
  • Rapid response: 1-3 months
  • Routine response: More than three months

Recommendations can still be formed based on the certainty of the evidence that's available, whatever that evidence may be. While systematic reviews of all available evidence are a foundational aspect of non-urgent guidelines, evidence in the form of narrative syntheses, modeling, or late-breaking data from the field can be used when time is short and systematically compiled data are sparse. Regardless of the source, the domains of GRADE still allow for evidence to be appraised and to guide the resulting direction and strength of recommendations.

Stay tuned for Pt. II coming soon, where we'll take a closer look at organizations that have developed rapid recommendations in response to time-sensitive public health issues.

For a checklist to guide the development of rapid recommendations, see the G-I-N/McMaster checklist.

For more information about appraising the certainty of evidence in the lack of meta-analyzed data, see this paper.

Thayer KA & Schünemann H. Using GRADE to Respond to Health Questions With Different Levels of Urgency. Environment International. 2016 July-August: 585-589.

Manuscript available at the publisher's website here.