Wednesday, April 21, 2021

In Studies of Patients at High Risk of Death, More Explicit Reporting of Functional Outcomes is Needed

Randomized controlled trials examining the effects of an intervention in patients at high risk of death often also include functional outcomes, such as quality of life, cognition, or physical disability. However, the death of patients before these outcomes can be assessed (also known as "truncation due to death") can bias the results of a "survivors-only" analysis, especially if mortality rates differ between treatment arms.
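
The hazard is easy to demonstrate with a toy simulation (all parameters and the 0-100 function scale below are invented for illustration, not taken from the review): suppose a treatment halves mortality among frail patients but has no direct effect on anyone's function. A survivors-only comparison then makes the treatment arm look worse on function, simply because more frail patients survive to be measured.

```python
import random

random.seed(0)

def simulate_trial(n_per_arm=10_000):
    """Toy model of truncation due to death: frail patients score lower
    on function, and the (hypothetical) treatment only reduces frail
    patients' mortality. All parameters are invented for illustration."""
    results = {"control": [], "treatment": []}
    for arm in results:
        for _ in range(n_per_arm):
            frail = random.random() < 0.30              # 30% frail at baseline
            death_risk = 0.80 if arm == "control" else 0.40
            if frail and random.random() < death_risk:
                continue                                # died: dropped by survivors-only
            # Function score (0-100 scale); frail survivors score lower in both arms
            results[arm].append(random.gauss(50 if frail else 80, 5))
    return results

res = simulate_trial()
mean = lambda xs: sum(xs) / len(xs)
# The treatment has no direct effect on function, yet the survivors-only
# means differ because the two arms' survivor populations differ.
print(f"control survivors:   n={len(res['control'])}, mean={mean(res['control']):.1f}")
print(f"treatment survivors: n={len(res['treatment'])}, mean={mean(res['treatment']):.1f}")
```

Analyses that retain all randomized patients avoid this selection effect, which is why the review focuses on how often they are used.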

A new methodology review of studies published in five high-impact general medical journals from 2014 to 2019 provides insight into this phenomenon, along with suggestions for improving how functional outcomes are handled. To be eligible for the review, a study needed to be a randomized controlled trial (RCT) with a mortality rate of at least 10% in one arm and to report at least one functional outcome in addition to mortality. The authors recorded the outcomes analyzed, the type of statistical analyses used, and the sample population of each of the 434 included studies. In most of these (351, or 79%), function was a secondary outcome; in 91 (21%), it was a primary outcome.

Among studies in which function was a secondary outcome, only one-quarter (25%) of the functional outcome analyses included all randomized patients (intention-to-treat); among studies in which function was the primary outcome, this proportion was 60%.

The authors provide suggestions for best ways to handle and report data in these studies:
  • Explicitly state in the methods section, rather than only in tables or supplementary material, the sample population from which the functional outcomes were drawn, whether survivors-only or another analysis population.
  • If a survivors-only analysis is used, report the baseline characteristics of the groups as analyzed and transparently acknowledge this approach as a limitation in the discussion section.
  • If all randomized participants are analyzed regardless of mortality, report the assumptions on which these analyses are based; for instance, if death is ranked among other outcomes in a worst-rank analysis, justify the ranking of outcomes in the methods and discuss the implications of these decisions in the discussion section.
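
As an illustration of the worst-rank idea (a generic sketch, not the specific procedure of any trial in the review): all randomized patients stay in the analysis, survivors are ranked by functional score, and every death receives a rank worse than any survivor's.

```python
def worst_rank_scores(outcomes):
    """outcomes: list of (functional_score, died) tuples, one per randomized
    patient. Survivors are ranked by score (higher rank = better outcome);
    all deaths share the worst ranks, tied at the average of ranks 1..n_dead.
    Illustrative sketch of one common worst-rank scheme."""
    survivors = sorted(
        (i for i, (_, died) in enumerate(outcomes) if not died),
        key=lambda i: outcomes[i][0],
    )
    n_dead = sum(died for _, died in outcomes)
    ranks = [0.0] * len(outcomes)
    death_rank = (1 + n_dead) / 2            # average of the worst n_dead ranks
    for i, (_, died) in enumerate(outcomes):
        if died:
            ranks[i] = death_rank
    for offset, i in enumerate(survivors):
        ranks[i] = n_dead + offset + 1       # survivors occupy ranks above deaths
    return ranks

# Five randomized patients: two died, three survived with scores 40, 70, 55
patients = [(40, False), (None, True), (70, False), (None, True), (55, False)]
print(worst_rank_scores(patients))  # → [3, 1.5, 5, 1.5, 4]
```

The resulting ranks can then be compared between arms with a rank-based test; the key judgment, as the authors note, is the justification for placing death below every functional outcome.
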
Colantuoni E, Li X, Hashem MD et al. (2021). A structured methodology review showed analyses of functional outcomes are frequently limited to "survivors only" in trials enrolling patients at high risk of death. J Clin Epidemiol (e-pub ahead of print).

Manuscript available here.

Thursday, April 8, 2021

Digging Deeper: 5 Ways to Help Guide Decision-Making When Research Evidence is "Insufficient"

A key tenet of the GRADE framework is that the certainty of the available research evidence is a central consideration in clinical decision-making. But what if little to no published research exists on which to base a recommendation? At the end of the day, clinicians, patients, policymakers, and others will still need to make a decision, and will look to a guideline for direction. Thankfully, there are other options to pursue within the context of a systematic review or guideline that ensure as much of the available evidence is presented as possible, even if it comes from less traditional or less direct sources.

A new project conducted by the Evidence-based Practice Center (EPC) Program of the Agency for Healthcare Research and Quality (AHRQ) developed guidance for supplementing a review of evidence when the available research evidence is sparse or insufficient. This guidance was based on a three-pronged approach, including:

  • a literature review of articles that have defined and dealt with insufficient evidence, 
  • a convenience sample of recent systematic reviews conducted by EPCs that included at least one outcome for which the evidence was rated as insufficient, and
  • an audit of technical briefs from the EPCs, which tend to be developed when a given topic is expected to yield little to no published evidence and which often contain supplementary sources of information such as grey literature and expert interviews.

Through this approach, the workgroup identified five key strategies for dealing with the challenge of insufficient evidence:
  1. Reconsider eligible study designs: broaden your search to capture a wider variety of published evidence, such as cohort or case studies.
  2. Summarize evidence outside the prespecified review parameters: use indirect evidence that does not perfectly match the PICO of your topic in order to better contextualize the decision being presented.
  3. Summarize evidence on contextual factors (factors other than benefits/harms): these include key aspects of the GRADE Evidence-to-Decision framework, such as patient values and preferences and the acceptability, feasibility, and cost-effectiveness of a given intervention.
  4. Consider modeling if appropriate, and if expertise is available: if possible, certain types of modeling can help fill in the gaps and make useful predictions for outcomes in lieu of real-life research.
  5. Incorporate health system data: "real-world" evidence such as electronic health records and registries can supplement more mechanistic or explanatory RCTs.

Some of these challenges can be more efficiently addressed up-front, before the scoping of a new review even begins. For instance, identifying topic experts and stakeholders who are familiar with the quantity and quality of available evidence can help a group foresee potential gaps and plan for the need to broaden the scope. Care should be taken to identify the outcomes that are of critical importance to patients, and through this lens, develop strategies and criteria within the protocol that will best meet the needs of the review while tapping into as much evidence as possible. Finally, researchers should avoid using the term "insufficient" when describing the evidence, and instead explicitly state that no eligible studies or types of evidence were available.

Murad MH, Chang SM, Fiordalisi CV, et al. (2021). Improving the utility of evidence synthesis for decisionmakers in the face of insufficient evidence. J Clin Epidemiol, ahead-of-print. 

Manuscript available from publisher's website here.

Friday, April 2, 2021

New Review of Pragmatic Trials Reveals Insights, Identifies Gaps

As opposed to an "explanatory" or "mechanistic" randomized controlled trial (RCT), which seeks to examine the effect of an intervention under tightly controlled circumstances, "pragmatic" or "naturalistic" trials study interventions and their outcomes when used in more real-world, generalizable settings. One example of such a study might include the use of registry data to examine interventions and outcomes as they occur in the "real world" of patient care. However, there are currently few standards for identifying, reporting, and discussing the results of such "pragmatic RCTs." A new paper by Nicholls and colleagues aims to provide an overview of the current landscape of this methodological genre.

The authors identified and synthesized 4,337 trials using keywords such as "pragmatic," "real world," "registry based," and "comparative effectiveness" to better map how pragmatic trials are presented in the RCT literature. Overall, only about 22% (964) of these trials were identified as "pragmatic" RCTs in the title, abstract, or full text; about half of these (55%) used the term in the title or abstract, while the remaining 45% described the work as a pragmatic trial only in the full text.

Most trials (78.1%; 3,368) indicated that they were registered. However, only about 6% were indexed in PubMed as pragmatic trials, and only 0.5% were labeled with the MeSH topic of Pragmatic Clinical Trial. The median target enrollment was 440 participants (interquartile range [IQR]: 244 to 1,200), while the median achieved accrual was 414 (IQR: 216 to 1,147). The largest trial included 933,789 participants; the smallest enrolled 60.

Overall, pragmatic trials were more likely to be centered in North America and Europe and to be funded by non-industry sources. Behavioral, rather than drug- or device-based, interventions were most common in these trials. Not infrequently, the trials were mislabeled or contained erroneous data in their registration information. The fact that only about half of the trials identified as pragmatic were clearly labeled as such in the title or abstract means that these trials may go undetected by search strategies less sensitive than the one the authors used.

Authors of pragmatic trials can improve the quality of the field by clearly labeling their work as such, registering their trials, and ensuring that registered data are accurate and up to date. The authors also suggest that taking a broader view of what constitutes a "pragmatic RCT" raises questions about proper ethical standards when research is conducted on a large scale with multiple lines of responsibility. Finally, the mechanisms used to obtain consent in these trials should be further examined in light of the finding that many pragmatic trials fail to achieve their participant enrollment goals.

Manuscript available from publisher's web site here. 

Nicholls SG, Carroll K, Hey SP, et al. (2021). A review of pragmatic trials found a high degree of diversity in design and scope, deficiencies in reporting and trial registry data, and poor indexing. J Clin Epidemiol (ahead of print). 

Monday, March 15, 2021

A Blinding Success?: The Debate over Reporting the Success of Blinding

While blinding is a hallmark of placebo-controlled trials, whether the blinding was successful (i.e., whether participants were able to figure out the treatment condition to which they had been assigned) is not always tested, nor are the results of such tests always reported. Measuring the success of blinding is controversial and not uniformly practiced, and the item has been dropped from subsequent versions of the CONSORT reporting guideline for trials. According to a recent discussion of the pros and cons of measuring the success of blinding, only 2% to 24% of trials perform or report these types of tests.

As Webster and colleagues explain, the arguments for measuring the success of blinding include the following:

  • the failure of blinding in a placebo-controlled trial can introduce a source of bias that affects the results;
  • while the effect of blinding itself may be small, these small effects could still result in changes to policy or practice; and
  • there are documented instances in which a failure to properly blind (for instance, providing participants with a sour-tasting vitamin C condition versus a sweet lactose "placebo") led to an observed effect (for instance, on preventing or treating the common cold) that was absent in the subgroup of participants who were successfully blinded.

Reasons commonly given against testing the success of blinding include the following:
  • At times, a break in blinding can lead to conclusions in the opposite direction. For instance, physicians who are unblinded may assume that the patients with better outcomes received a drug widely supposed to be "superior," when in fact, the opposite occurred.
  • In some cases, a treatment with dramatically superior results can result in unblinding, even when the treatment conditions appeared identical - but that doesn't necessarily mean the blinding was a failure or could have been prevented, given the dramatic differences in outcomes.
  • If the measurement of blinding is performed at the wrong time - such as before the completion of the trial - participants may become suspicious and this in itself could potentially confound treatment effects.

Webster RK, Bishop F, Collins GS, et al. (2021). Measuring the success of blinding in placebo-controlled trials: Should we be so quick to dismiss it? J Clin Epidemiol, pre-print.

Manuscript available from publisher's website here.

Tuesday, March 9, 2021

Expert Evidence: A Framework for Using GRADE When "No" Evidence Exists

To guide the formulation of clinical recommendations, GRADE relies on the use of direct or, if necessary, indirect evidence from peer-reviewed publications as well as the gray literature. However, in some cases, no such evidence may be found even after an extensive search has been conducted. A new paper, part of the informal GRADE Notes series in the Journal of Clinical Epidemiology, relays the results of piloting an "expert evidence" approach and provides key suggestions for using it.

As opposed to simply asking a guideline panel to base its recommendations on informal opinion, the expert evidence approach systematizes the process by eliciting the extent of members' experience with specific clinical scenarios through quantitative survey methods. In this example, at least 50% of the panel members were free of conflicts of interest, and various countries and specialties were represented. While members were not required to base their answers on patient charts, the authors suggest that doing so could further increase the rigor of the survey.

As a result of the survey, the recommendations put forward reflected a cumulative 12,000 cases of experience. Because the members felt that at least some recommendation was necessary to help guide care (the alternative being no recommendation at all), the guideline helped to fill a gap while highlighting the current lack of high-quality published evidence for several clinical questions, which may in turn guide the production of higher-quality evidence and recommendations in the future. Importantly, the authors note that using a survey to facilitate the formulation of recommendations avoided a pitfall of "consensus-based" approaches to guideline development, which can end up simply reflecting the opinions of those with the loudest voices.

Mustafa RA, Cuello Garcia CA, Bhatt M, et al. (2021). How to use GRADE when there is "no" evidence? A case study of the expert evidence approach. J Clin Epidemiol, in-press.

Manuscript available from the publisher's website here.

Wednesday, March 3, 2021

Dealing with Zero-Events Studies in Meta-analysis: There's a Better Way than Throwing it Away!

When meta-analyzing data from studies examining the incidence of rare events, or from studies with a small sample size or short follow-up period, it is not uncommon to come across a study with zero events of the outcome of interest. In fact, approximately one-third of a random sample of 500 Cochrane reviews contained at least one zero-events study.

Zero-events studies are typically categorized as single-arm (0 events reported in just one group) or double-arm (0 events reported in both groups). While some software automatically discards double-arm zero-events studies from a meta-analysis, this is not ideal, because these studies still add useful information regarding the overall effect of an intervention. Ideally, a meta-analysis could include a pooled event count that may be zero in one arm, both arms, or neither, with various single-arm and double-arm zero-events studies contributing to the final effect estimate. Thus, in a recently published article, Xu and colleagues propose a more detailed framework for approaching zero-events studies in the context of a meta-analysis.
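
For background, one traditional device for retaining a zero-events arm in an effect estimate is a continuity correction, shown below for a single study's odds ratio. This is a common textbook fix, not the framework the authors propose, and the size and use of the correction are themselves debated:

```python
def odds_ratio_cc(events1, n1, events2, n2, cc=0.5):
    """Odds ratio for one 2x2 table, adding `cc` to every cell when any
    cell is zero (the classic 0.5 continuity correction). Without it,
    a zero-events arm would make the odds ratio zero or undefined."""
    a, b = events1, n1 - events1          # arm 1: events, non-events
    c, d = events2, n2 - events2          # arm 2: events, non-events
    if 0 in (a, b, c, d):
        a, b, c, d = a + cc, b + cc, c + cc, d + cc
    return (a * d) / (b * c)

# Single-arm-zero study: 0/50 events vs 4/50 events
print(round(odds_ratio_cc(0, 50, 4, 50), 3))  # → 0.102
```
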

The authors describe six classifications as follows, with the degree of difficulty when meta-analyzing generally increasing from 1 to 6:

1) MA-SZ: meta-analysis contains zero-events only occurring in single arms, no double-arm-zero-events studies are included, and the total events count in neither arm is zero;

2) MA-MZ: meta-analysis contains zero-events occurring in both single and double arms, and the total events count in neither arm is zero;

3) MA-DZ: meta-analysis contains zero-events only occurring in double arms, and the total events count in neither arm is zero;

4) MA-CSZ: meta-analysis contains zero-events occurring in single arms, and no double-arm-zero-events studies are included, while the total events count in one of the arms is zero;

5) MA-CMZ: meta-analysis contains zero-events occurring in both single arm and double arms, while the total events count in one of the arms is zero;

6) MA-CDZ: meta-analysis only includes double-arm-zero-events studies, and the total events count in both arms is zero.
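
The six definitions above amount to a small decision rule. A sketch (the function name and input format are illustrative, not from the paper):

```python
def classify_zero_events_ma(studies):
    """Classify a meta-analysis into the six zero-events categories.
    `studies` is a list of (events_arm1, events_arm2) tuples, one per
    included study; returns None if no study has a zero-events arm."""
    single = any((a == 0) != (b == 0) for a, b in studies)  # zero in exactly one arm
    double = any(a == 0 and b == 0 for a, b in studies)     # zero in both arms
    zero_totals = (sum(a for a, _ in studies) == 0) + (sum(b for _, b in studies) == 0)
    if not (single or double):
        return None
    if zero_totals == 0:                       # both arm totals are nonzero
        if single and double:
            return "MA-MZ"
        return "MA-SZ" if single else "MA-DZ"
    if zero_totals == 1:                       # one arm's total is zero
        return "MA-CMZ" if double else "MA-CSZ"
    return "MA-CDZ"                            # every study is double-arm zero

# One single-arm-zero study, nonzero totals in both arms:
print(classify_zero_events_ma([(0, 3), (2, 1), (5, 4)]))  # → MA-SZ
```
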

The authors examined data from the Cochrane Database of Systematic Reviews (CDSR), including any review published between January 2003 and May 2018 that meta-analyzed at least two studies. Of the 61,090 meta-analyses with binary outcomes identified, 21,288 (34.85%) contained at least one zero-events study. In the great majority of these (90.7%), the total event count was greater than zero in both arms and the meta-analysis included only single-arm rather than double-arm zero-events studies. The second most common category (6.21%) was MA-CSZ, in which the total event count is zero in one arm and the included zero-events studies are only single-arm. The four remaining categories each made up less than 1.5% of the whole.

The authors propose that researchers looking to meta-analyze studies that include zero events first identify their specific subtype, and then work through one of the methods suggested in the paper's accompanying figure. Finally, a sensitivity analysis using an alternative method should be performed to assess the robustness of the results.

Xu C, Furuya-Kanamori L, Zorzela L, Lin L, and Vohra S. (2021). A proposed framework to guide evidence synthesis practice for meta-analysis with zero-events studies. J Clin Epidemiol, in-press.

Manuscript available from the publisher's website here.

Thursday, February 25, 2021

The Use of GRADE in Systematic Reviews of Nutrition Interventions is Still Rare, but Growing

While the GRADE framework is used by over 100 health organizations to assess the certainty of evidence and guide the formulation of clinical recommendations, its use for these purposes in the field of nutrition is still sparse. A recent examination of all systematic reviews using GRADE published in the ten highest-impact nutrition journals over the past five years provides insight and suggestions for moving the field forward in its use of GRADE for evidence assessment.

Werner and colleagues identified 800 eligible systematic reviews, 55 (6.9%) of which used GRADE, and 47 (5.9%) of which rated the certainty of evidence specific to different outcomes. The number of these reviews using GRADE increased year-to-year, from two in 2015 to 23 in 2019. Reviews claiming to use a modification of GRADE were excluded from analysis.

The authors identified 811 cases of downgrading the certainty of evidence and 31 cases of upgrading. Reviews of randomized controlled trials had a mean of 1.6 domains downgraded per outcome, while reviews of non-randomized studies had a mean of 2.1. In about 6.5% of upgrading cases, the upgrade was made for unclear reasons not in line with GRADE guidance, such as upgrading for low risk of bias, narrow confidence intervals, or very low p-values. Compared with reviews of randomized studies, reviews of non-randomized studies were more likely to have outcomes downgraded for imprecision and inconsistency, and less likely to be downgraded for publication bias.

The authors conclude that while the use of GRADE in systematic reviews of nutritional interventions has grown over recent years based on this sample, continued education and training of nutrition researchers and experts can help improve the spread and quality of the application of GRADE to assess the certainty of evidence in this discipline.

Werner SS, Binder N, Toews I, et al. (2021). The use of GRADE in evidence syntheses published in high-impact-factor nutrition journals: A methodological survey. J Clin Epidemiol, in-press.

Manuscript available here.