Thursday, July 30, 2020

Research Revisited: "Quality of Evidence is a Key Determinant for Making a Strong GRADE Guidelines Recommendation" (2015)

This month in 2015, Djulbegovic and colleagues published a paper that examined the impact of quality of evidence, balance between benefits and harms, patient values and preferences, and resource use (the four GRADE factors) on the strength of resulting clinical recommendations.

The four major GRADE factors that drive clinical recommendations

The authors circulated a survey among 18 members of a guideline panel of the American Association of Blood Banks (AABB) who had recently convened to develop guidelines for the use of prophylactic versus therapeutic platelet transfusion in patients with thrombocytopenia. Using the panel members’ assessments of the four GRADE factors for the evidence that had been presented, along with the strength (strong or weak) of their resulting recommendations, the authors conducted a logistic regression to examine the relative impact of each factor.

The guideline panel had reviewed the evidence for ten key questions. Overall, the consistency of judgments across panel members was good (Cronbach’s alpha = 0.86). Questions with a high quality of evidence were 4.5 times more likely to result in a strong recommendation (p < 0.001), whereas none of the three remaining GRADE factors were significantly associated with the strength of the resulting recommendations. Moreover, the model suggested that when the quality of evidence was high, there was a 90% chance of the resulting recommendation being strong; when the quality of evidence was very low, this chance dropped to 10%.
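
To see how the reported odds ratio squares with those 90% and 10% probabilities, one can work the logistic model backwards. The sketch below is a back-of-envelope illustration, not the authors' published model: it assumes, hypothetically, that quality of evidence is coded on a 0–3 ordinal scale from very low to high, and fits a line on the logit scale through the two reported probabilities.

```python
import math

def logit(p):
    """Log-odds of a probability."""
    return math.log(p / (1 - p))

def inv_logit(x):
    """Probability from log-odds."""
    return 1 / (1 + math.exp(-x))

# Model-implied probabilities of a strong recommendation, from the paper:
p_high, p_very_low = 0.90, 0.10

# Hypothetical coding: quality of evidence 0 (very low) ... 3 (high).
# A line through the two reported probabilities on the logit scale:
slope = (logit(p_high) - logit(p_very_low)) / 3
intercept = logit(p_very_low)

# Per-level odds ratio implied by that line:
odds_ratio_per_level = math.exp(slope)   # ~4.3

# Implied probability of a strong recommendation at "moderate" quality (2):
p_moderate = inv_logit(intercept + 2 * slope)
```

Under these assumptions, the implied per-level odds ratio comes out around 4.3 – in the neighborhood of the 4.5 reported by the authors, suggesting the published probabilities and odds ratio are mutually consistent.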

The figure from Djulbegovic et al. shows the association between increasing quality of evidence and the strength of the resulting recommendations

The authors concluded that the quality of evidence was far and away the most important contributor to the strength of the resulting recommendations, at least within the studied guideline panel. Patient values and preferences, the balance between benefits and harms, and issues of resource use should all still inform the process of moving from evidence to decisions; however, the relative nebulousness of these considerations and the lack of established structures for eliciting the data they require (such as the use of a patient panel or survey) likely make them less impactful on the overall strength of recommendation.

The Evidence-to-Decision framework, which makes these additional considerations more explicit in the formulation of recommendations, was introduced just a year later. It’s worth wondering whether the relative impact of the remaining three GRADE factors has changed since the introduction and adoption of this framework – perhaps presenting an opportunity to revisit this research.

Djulbegovic, B., Kumar, A., Kaufman, R.M., Tobian, A., & Guyatt, G.H. Quality of evidence is a key determinant for making a strong GRADE guidelines recommendation. J Clin Epidemiol, 2015; 68(7): 727-732.

Manuscript is available at the publisher's website here. 

Thursday, July 23, 2020

Systematic Review Updates Often Improve Precision of Estimate, but not Overall Findings

When done well, systematic reviews with meta-analysis provide a comprehensive view of the current state of the evidence on a topic as well as the general strength and direction of findings. Sometimes, however, what they find is that little evidence exists so far, or that the findings are relatively heterogeneous. In these cases, the addition of more studies will likely improve their precision; thus, future updates are warranted to reanalyze the data with the infusion of newly published studies and grey literature. (Check out our series on living systematic reviews for one way these reviews could be kept continuously up-to-date – Part 1, Part 2, Part 3).

A recent analysis published in the September issue of the Journal of Clinical Epidemiology supports this concept. Analyzing the original and updated versions of 30 meta-analyses published between 1994 and 2018 across 19 countries, Gao and colleagues found the following:
  • The average time from original publication to update was 4 years, 8 months.
  • Most of the updates (80%) included more randomized controlled trials than the original. In the majority of cases (73%), an update also led to a higher total number of patients analyzed.
  • The proportion of reviews reporting the methodological quality of included studies was slightly higher among the updates (76.7%) than the original publications (70%).
  • The quality of most included trials was low, and in most cases, the authors did not report the results of the assessment of individual items within their methodological appraisal.
A figure from Gao et al. shows the frequencies of publication years of original systematic reviews analyzed (in blue) and their updates (in green). Click to enlarge.

Among the 30 pairs of included SRs, the authors identified 130 comparable outcomes in just over half (16) of the reviews; most of these (74.6%) were binary outcomes. Overall, just three (2.3%) of the outcomes changed to a statistically significant degree between versions. In 88.3% of the 94 comparable outcomes that incorporated new evidence, precision improved (i.e., the width of the confidence interval narrowed); precision was reduced in five outcomes (5.3%) and unchanged in the remaining six (6.4%).
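
The mechanics behind that precision gain are straightforward inverse-variance pooling: each added study contributes additional weight, so the pooled standard error – and with it the confidence interval – shrinks. Below is a minimal sketch using invented effect estimates for illustration (the numbers are not taken from Gao et al.):

```python
import math

def fixed_effect_ci(studies, z=1.96):
    """Inverse-variance fixed-effect pooled estimate and 95% CI.

    `studies` is a list of (estimate, standard_error) pairs, e.g.
    log odds ratios from individual trials."""
    weights = [1 / se ** 2 for _, se in studies]
    pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, (pooled - z * pooled_se, pooled + z * pooled_se)

# Hypothetical log-OR estimates (estimate, SE) in an original review:
original = [(-0.30, 0.25), (-0.10, 0.30), (-0.45, 0.40)]
# An update adds two new trials with broadly similar effects:
updated = original + [(-0.25, 0.20), (-0.35, 0.30)]

est1, (lo1, hi1) = fixed_effect_ci(original)
est2, (lo2, hi2) = fixed_effect_ci(updated)
# The update narrows the CI, while the direction of effect is unchanged.
```

This mirrors the paper's central finding: with broadly consistent new trials, the update tightens the interval around essentially the same estimate rather than moving it.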

The authors conclude that while updating the meta-analyses within systematic reviews often improves the precision of an estimate (as assessed by the width of its confidence interval), it rarely changes the estimate itself. In addition, systematic review authors should take care to more thoroughly assess and report the methodological appraisal of individual items within each included study. Such reporting may help determine whether a future update could dramatically impact current findings, and how frequently updates should be conducted.

Gao, Y., Yang, K., Cai, Y., Shi, S., Liu, M., Zhang, J., ... & Song, F. Updating systematic reviews can improve the precision of outcomes: A comparative study. J Clin Epidemiol, 2020; 125: 108-119.

Manuscript available here. 

Monday, July 20, 2020

New Review Provides Insight into Unique Challenges of Continuous and TTE Outcomes, Potential Solutions

Health Technology Assessments (HTAs) and guidelines often meta-analyze non-binary outcomes, such as continuous and time-to-event outcomes, in order to elucidate the observed effect of a health intervention. However, these types of outcomes may require more sophisticated analysis and modeling techniques, making them more difficult to synthesize for authors with limited statistical knowledge or resources.

A newly published review by Freeman and colleagues aimed to describe the use and presentation of these outcomes, and in doing so, identify potential challenges and facilitators to improving their application in future publications. The study analyzed 25 technology appraisals and 15 guidelines from the UK’s National Institute for Health and Care Excellence (NICE) and 7 HTA reports from the National Institute for Health Research (NIHR), for a total of 47 documents using meta-analyses (MA), network meta-analyses (NMA), or a combination of the two.

About half (51%) of the items reported at least one continuous outcome, while just over half (55%) reported at least one time-to-event outcome. Continuous outcomes were most commonly presented as a mean difference (MD). The most commonly used time-to-event outcomes were overall and progression-free survival, presented as hazard ratios. Notably, no articles reported the methods used to handle multiplicity of either continuous or time-to-event outcomes. Multiple time-points were largely handled by presenting separate meta-analyses for the relevant time-points.

Although most of the analyzed documents provided a decision model based on continuous or time-to-event outcomes, many of these models drew on the results of only a single trial, despite the fact that meta-analyses had been undertaken.

Reporting of Decision Models Across Publications Using Continuous Outcomes. Click to enlarge.
Reporting of Decision Models Across Publications Using Time-to-Event Outcomes. Click to enlarge.

The authors present a list of the key challenges faced by authors of meta-analyses using these outcomes, such as the use of continuous outcomes that are reported with different scales, the multiplicity of related outcomes from the same study or various time-points in time-to-event outcomes, and nonproportional hazards (hazards that change over the course of time) in time-to-event outcomes. They present the following suggestions for better managing these issues:
  • Increased availability of statistical expertise on MA and NMA teams
  • Development of user-friendly software that allows users to approach more complex statistical techniques – for instance, those that allow multiple outcomes from the same study to be analyzed simultaneously – with the same ease and accessibility as point-and-click software such as RevMan.
  • Increased reporting of outcomes within individual trials, as well as the reporting of individual patient data by trial authors.
Freeman, S.C., Sutton, A.J., & Cooper, N.J. Update of methodological advances for synthesis of continuous and time-to-event outcomes would maximize use of evidence base. J Clin Epidemiol, 2020; 124: 94-105.

Manuscript available from publisher's website here. 


Wednesday, July 15, 2020

The Median Cochrane Systematic Review Takes 2 Years to Publish, but This Window May be Lengthening

Recent posts (1, 2, 3, 4) have discussed the use of machine learning, automation, crowdsourcing, and other strategies that can substantially speed up the process of a systematic review. However, for most systematic review authors in the year 2020, a project with a timeline on the order of weeks or months – as opposed to years – remains a far-off futuristic ideal.

In fact, according to a recently published review analyzing over 6,700 systematic reviews published in the Cochrane Database since 1995, the median systematic review took 2 years from publication of the protocol to publication of the resulting review. Reviews published over the past decade, however, took a median of 2.4 months longer than this, suggesting that the window from protocol to publication is lengthening. Additionally, the majority (60%) of reviews published over the past five years took longer than two years to complete, compared with just over half (51%) of all reviews analyzed. This is an important finding: with a longer lag time comes a greater risk that the review becomes out-of-date before it even has the chance to be disseminated.

A figure from Andersen et al. compares the Kaplan-Meier curve of turnaround times for Cochrane reviews published within each 5-year interval, with marked slowdown for the most recent cohort (2015-2019). Click to enlarge.

There was also a high amount of variability in turnaround time between different Review Groups, with the fastest group publishing reviews 2.6 times faster than the slowest.

As Andersen and colleagues explain, it is worth noting that Cochrane reviews tend to take longer to publish than other systematic reviews, perhaps due to their higher standards of rigor. However, the median publication time appears to have increased over the latter half of the last quarter-century – perhaps due to the increasing complexity and rigors of reporting systematic reviews, in addition to the complexity of the studies they are charged with analyzing.

Thus, although technological advancements may someday shorten the timeline of the typical systematic review, the data suggest that the turnaround time has only increased in recent years. Qualitative research comparing the workflow and processes of the different Cochrane review groups may provide insight into best practices for improving the efficiency of a systematic review’s production, and with it, the currency of its findings upon publication.

Andersen, M.Z., Gülen, S., Fonnes, S., Andresen, K., & Rosenberg, J. Half of Cochrane reviews were published more than 2 years after the protocol. J Clin Epidemiol, 2020; 124: 85-93.

Manuscript available from the publisher's website here.

Friday, July 10, 2020

Room for Improvement: Use of Cochrane RoB tool in non-Cochrane Systematic Reviews is Largely Incomplete

The Cochrane Risk of Bias (RoB) tool for randomized controlled trials (RCTs) is commonly used in both Cochrane and non-Cochrane systematic reviews as a standardized way to assess and report the risk of bias within a study or a body of evidence. The tool comprises seven domains, each representing a potential source of bias within the design or execution of an RCT. For each domain (for instance, allocation concealment or selective outcome reporting), the study is judged as carrying a low, high, or unclear risk of bias from that source.

A new review of non-Cochrane systematic reviews (NCSRs) published in this month’s edition of the Journal of Clinical Epidemiology reports that the use of the Cochrane RoB tool in these reviews is incomplete or inadequate in most cases. Of 508 eligible systematic reviews published through 3 July 2018, the majority (85%) reported an analysis of risk of bias; about half (53%) used the original (2011) Cochrane RoB tool specifically, leaving a total of 269 reviews for further analysis.

A non-negligible minority of studies included in the review by Puljak et al. either did not include certain domains of the Cochrane RoB tool, or did not report which domains were used. Only 40% of the reviews analyzed RoB through all seven domains. Click to enlarge.

Less than two-thirds (60%) of the 269 included reviews used all seven domains of the Cochrane tool, report Puljak and colleagues, and only 16 of the included reviews (5.9%) reported both a judgment and a comment explaining each judgment, either within the manuscript or in a supplementary file. Within these 16 reviews, the proportion of inadequate judgments (those in which the comment was not in line with the judgment, or in which there was no supporting comment) ranged from 25% (Other Bias domain) to 65% (Selective Reporting Bias domain). The reviews “rarely” included full tables illustrating the RoB judgments for the different domains.

The authors’ findings highlight that both a judgment (low/high/unclear risk of bias) as well as a comment explaining the judgment within each domain should be included in systematic reviews that report use of the Cochrane RoB tool.

Puljak, L., Ramic, I., Naharro, C.A., Brezova, J., Lin, Y.C., Surdila, A.A., ... & Salvado, M.S. Cochrane risk of bias tool was used inadequately in the majority of non-Cochrane systematic reviews. J Clin Epidemiol, 2020; 123: 114-119.

Manuscript available from publisher's website here. 

Tuesday, July 7, 2020

Adventures in Protocol Publication Pt. II: Survey of PROSPERO Registrants Finds Peer-Reviewed Protocol Publication is a Mixed Bag

As we discussed in Part I of this series, the registration of a systematic review protocol – for instance, in an online registry such as PROSPERO – is a well-established practice that helps prevent results-driven biases from being introduced into a systematic review and reduces unintentional duplication of efforts. The additional publication of such a protocol in a peer-reviewed journal has its own benefits – such as additional citations, saving room in the methods section of the final manuscript, and facilitating publication of the systematic review itself. However, it also has its drawbacks, including the potential cost of publishing and the time required from submission to acceptance, which may delay the SR timeline.

In the latest issue of the Journal of Clinical Epidemiology, Rombey and colleagues report the findings of their survey examining the practices and attitudes related to peer-reviewed protocol publication among over 4,000 authors of non-Cochrane systematic review protocols published in PROSPERO in 2018.

In order to identify potential “inhibiting factors” of peer-reviewed protocol publication, respondents who reported publishing their protocol in a peer-reviewed journal in addition to PROSPERO were asked questions related to publication costs, while respondents who had not pursued this option were asked for the reasoning behind their decision.

Nearly half (44.7%) of the 4,054 respondents answered that they had published or planned to publish their protocol in a peer-reviewed journal, while the remaining 55.3% chose not to pursue this option. Among the latter group, the most common reasons given were that publishing the protocol in PROSPERO was deemed sufficient; that it was not an aim or priority of the authors; and that the time required to publish in a peer-reviewed journal would delay the review itself.

A figure from Rombey et al. shows level of agreement with statements related to peer-reviewed protocol publication among >4,000 respondents. Click to enlarge.

Of those who did report publishing their protocol, about two-thirds (67.9%) indicated that there were no costs associated with publication, while one-quarter (25.4%) indicated that there were costs, and the remaining 6.7% were not sure.

Respondents from Africa, Asia, and South America were twice as likely to report having published a protocol in a peer-reviewed journal as those from Europe, North America, or Oceania; however, likelihood of publication did not appear to vary by gender or experience level. Qualitative analysis of free-response text revealed that some respondents were not aware that publication of protocols in peer-reviewed journals was done at all.

Overall, while cost of publishing a protocol did not appear to be a major inhibiting factor for most respondents, issues related to time from submission to publication as well as opinions regarding the additional value of publishing beyond PROSPERO were the most commonly cited reasons for not pursuing peer-reviewed publication of a systematic review protocol.

Rombey, T., Puljak, L., Allers, K., Ruano, J., & Pieper, D. Inconsistent views among systematic review authors toward publishing protocols as peer-reviewed articles: An international survey. J Clin Epidemiol, 2020; 123:9-17.

Manuscript available from the publisher's website here.

Wednesday, July 1, 2020

A Not-So-Non-Event?: New Systematic Review Finds Exclusion of Studies with No Events from a Meta-Analysis Can Affect Direction and Statistical Significance of Findings

Studies with no events in either arm have been considered non-informative within a meta-analytical context, and thus have been left out of these analyses. A new systematic review of 442 such meta-analyses, however, reports that this practice may actually affect the resulting conclusions.

In the July 2020 issue of the Journal of Clinical Epidemiology, Xu and colleagues report their study of meta-analyses of binary outcomes in which at least one included study had no events in either arm. The authors reanalyzed the data from 442 included papers taken from the Cochrane Database of Systematic Reviews, using modeling to determine the effect of reincorporating the excluded studies.

The authors found that in 8 (1.8%) of the 442 meta-analyses, inclusion of the previously excluded studies changed the direction of the pooled odds ratio (“direction flipping”). In 12 (2.72%) of the meta-analyses, the pooled odds ratio (OR) changed by more than the predetermined threshold of 0.2. Additionally, in 41 (9.28%) of these studies, the statistical significance of the findings changed at the p < 0.05 threshold (“significance flipping”). In most of these 41 meta-analyses, the excluded (“non-event”) studies made up between 5% and 30% of the total sample size. About half of these alterations widened the confidence interval, while in the other half, the incorporation of non-event studies narrowed it.
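
Why can a study with no events in either arm move a pooled odds ratio at all? Once such a study is retained (for example, via a continuity correction), it contributes an OR of 1 with non-trivial weight, pulling the pooled estimate toward the null and altering the confidence interval. The sketch below illustrates this with invented trial data and a simple 0.5 continuity correction – an assumption chosen for illustration; Xu et al. used more sophisticated models to reincorporate these studies:

```python
import math

def study_log_or(a, b, c, d, cc=0.5):
    """Log odds ratio and its variance for one 2x2 table
    (a = events, b = non-events in treatment; c, d likewise in control).
    Applies a 0.5 continuity correction when any cell is zero."""
    if 0 in (a, b, c, d):
        a, b, c, d = (x + cc for x in (a, b, c, d))
    return math.log((a * d) / (b * c)), 1 / a + 1 / b + 1 / c + 1 / d

def pooled_or(tables, include_double_zero=False):
    """Inverse-variance fixed-effect pooled OR across 2x2 tables."""
    num = den = 0.0
    for a, b, c, d in tables:
        if a == 0 and c == 0 and not include_double_zero:
            continue  # conventional practice: drop studies with no events
        log_or, var = study_log_or(a, b, c, d)
        num += log_or / var
        den += 1 / var
    return math.exp(num / den)

# Invented trials: (events, non-events) in treatment vs. control arms;
# the last trial has no events in either arm.
tables = [(1, 49, 3, 47), (0, 30, 2, 28), (0, 100, 0, 100)]

or_excluded = pooled_or(tables)                            # double-zero study dropped
or_included = pooled_or(tables, include_double_zero=True)  # pulled toward OR = 1
```

In this toy example, reincorporating the double-zero trial moves the pooled OR closer to 1 – the same attenuation-toward-the-null mechanism that underlies the direction and significance flipping the authors observed.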

The figure above from Xu et al. shows the proportion of studies reporting no events within the meta-analyses that showed a substantial change in p value when these studies were included. The proportion of the total sample tended to cluster between 5 and 30%. Click to enlarge.

Post hoc simulation studies confirmed the robustness of these findings, and also found that exclusion of studies with no events preferentially affected the pooled ORs of meta-analyses that found no effect (OR ≈ 1), whereas a large magnitude of effect was protective against these changes. The opposite was true for the resulting p values: large magnitudes of effect were more likely to be affected, whereas conclusions of no effect were protected.

In sum, though a common practice in meta-analysis, the exclusion of studies with no events in either arm may affect the direction, magnitude, or statistical significance of the resulting conclusions in a small but non-negligible number of analyses.

Xu, C., Li, L., Lin, L., Chu, H., Thabane, L., Zou, K., & Sun, X. Exclusion of studies with no events in both arms in meta-analysis impacted the conclusions. J Clin Epidemiol, 2020; 123: 91-99.

Manuscript available from the publisher's website here.