Wednesday, August 26, 2020

Rapid, Up-to-Date Evidence Synthesis in the Time of COVID

In emergent situations with sparse and rapidly evolving bodies of research, evidence synthesis programs must be able to adapt to a shortened timeline to provide clinicians with the best available evidence for decision-making. (See our previous posts on rapid systematic review and guideline development, here, here, here, and here). But perhaps no health crisis in the modern era has made this clearer than the coronavirus disease 2019 (COVID-19) pandemic.

Recently, Murad and colleagues published a framework detailing a four-pillar program through which they have been able to synthesize evidence related to the COVID-19 pandemic. This system has been tried and tested within the Mayo Clinic, a multi-state academic center with more than 1.2 million patients per year.

 

Launched within two weeks of the World Health Organization’s declaration of COVID-19 as a pandemic, Mayo Clinic’s evidence synthesis program consisted of four major components:

  • What is New?: an automatically generated list of COVID-19-related studies published within the last three days and categorized into topic areas such as diagnosis or prevention
  • Repository of Studies: a running list of previously published studies since the first case report of COVID-19, including those that move from the “What is New?” list after three days’ time
  • Rapid Reviews: reviews published within three to four days in response to pressing clinical questions from those on the frontlines and utilizing the study repository. To facilitate evidence synthesis, studies are often screened and selected by a single reviewer and evidence is rarely meta-analyzed.
  • Repository of Reviews: a collection of reviews including those developed at Mayo and elsewhere, identified in twice-weekly searches and through a list of predetermined websites. To supplement knowledge, some reviews included indirect evidence borrowed from studies of other coronaviruses or respiratory infections, when appropriate.

Within one month of the framework’s establishment, the team had conducted seven in-house rapid reviews and had indexed more than 100 newly published reviews into a database housing over 2,000 total.
 

The authors conclude that while an intensive system such as this may not be feasible in smaller health systems, cross-collaboration and sharing of knowledge can allow for informed and up-to-date clinical care that adapts in the face of a rapidly changing landscape of evidence.


Murad, M.H., Nayfeh, T., Suarez, M.U., Seisa, M.O., Abd-Rabu, R., Farah, M.H.E..., & Saadi, S.M. 2020. A framework for evidence synthesis programs to respond to a pandemic. Mayo Clin Proc 95(7):1426-1429.


Manuscript available at the publisher's website here.

Friday, August 14, 2020

New Elaboration of CONSORT Items Aims to Improve the Reporting of Deprescribing Trials

Deprescribing is the act of withdrawing a treatment prescription from patients for whom a medication has become inappropriate or in whom the risks may now outweigh the benefits. However, trials examining the effects of deprescribing are often complex and multi-faceted, and reporting of these trials can miss important aspects such as patient selection and length of follow-up. 

A recently published paper by Blom et al. used a multistep process to develop a reporting guideline for deprescribing trials based on a systematic review of this body of research, paying close attention to those aspects that most commonly went unreported. The result was an elaboration of the Consolidated Standards of Reporting Trials (CONSORT) statement, with additional items reviewed by a panel of 14 experts in areas ranging from pharmacology and geriatric medicine to statistics and reporting guidelines. The process, which concluded with a one-day face-to-face meeting to approve the elaborated items, also took into account the Template for Intervention Description and Replication (TIDieR) checklist to ensure that a comprehensive list was created.




The panel determined that all items of the original CONSORT checklist are applicable to deprescribing trials, but that certain items required further detail. The CONSORT items requiring the most attention with regard to deprescribing studies included the following:

  • description of trial design
  • participant selection 
  • detailed information that would allow replication of the intervention studied
  • pre-specification of primary and secondary outcomes
  • discussion of adverse events and harms, including those related to drug withdrawal
  • defined periods of recruitment and follow-up

 

In addition to improving the quality of reporting in deprescribing trials, the authors also recommend increasing the amount of dedicated funds available for deprescribing studies, which are currently scarce and not incentivized by common streams of research funding.


Blom, J.W., Muth, C., Glasziou, P., McCormack, J.P., Perera, R., Poortvliet, R.K.E..., & Knottnerus, J.A. 2020. Describing deprescribing trials better: An elaboration of the CONSORT statement. J Clin Epidemiol 127: 87-95.


Manuscript available from the publisher's website here.

Thursday, August 6, 2020

New Systematic Review Suggests Nonconcordance of COI Disclosures with Reporting Databases is Widespread, but Methodological Quality of Studies is Variable

Disclosure of conflicts of interest (COI) is a major point of concern in the development of guidelines as well as original research papers. Over the years, multiple studies have aimed to elucidate just how closely the disclosures of individual authors track with their reported COI in open databases. A new systematic review of 27 such studies, recently published online in the Journal of Clinical Epidemiology, compiles the findings of these studies into some eyebrow-raising statistics while also taking a look at their methodological quality.

 

In their review, El-Rayess and colleagues found that although the methodological quality of studies assessing the concordance of authors’ in-paper COI disclosures with public databases varied widely, a median of 81.2% of authors across 20 studies had “nonconcordant” disclosures (ranging from 41.8% to 98.6% across studies), and that more than half of these (43.4% of all authors) were “completely nonconcordant” (ranging from 15% to 89.5% across studies). What’s more, among seven studies that analyzed company reporting at the individual level, between 23.1% and 85.4% of companies did not report their payments to authors.



For the five studies that analyzed disclosures at the study rather than the individual author level, all found at least some degree of discordance between in-study disclosures and database reports. The rate of nonconcordant disclosures among these studies ranged from 6% to 92.6%.

 

The authors note that ulterior motives of authors are just one potential explanation for the high observed rate of nonconcordant COI disclosure and reporting. Vague instructions and parameters set by journals during the article submission process may undermine efforts to transparently report any and all potential sources of conflict, be they financial, intellectual or otherwise. In addition, the authors found that studies of COI reporting that tended to have higher methodological quality also tended to report lower estimates of nonconcordance, meaning that the overall combined estimates may be artificially inflated – for instance, due to some studies not making a distinction about the relevancy of potential COI sources to the topic of the articles analyzed. The authors note potential sources of nondirectional error as well, such as how differences in COI categories between in-paper disclosures and reference databases were handled, which additionally lowers confidence in the current estimate.




In sum, the recent review by El-Rayess et al. shows that authors’ COI disclosures in their published works are often at odds with publicly available reports of these relationships; however, the overall degree of nonconcordance remains uncertain. Those looking to conduct future analyses of COI disclosure policies may want to use this paper as a roadmap to improving our certainty in the actual magnitude of the issue.


El-Rayess, H., Khamis, A.M., Haddad, S., Ghaddara, H.A., Hakoum, M., Ichkhanian, Y., Bejjani, M., & Akl, E.A. 2020. Assessing concordance of financial conflicts of interest disclosures with payments' databases: A systematic survey of the health literature. J Clin Epidemiol 127:19-28.


Manuscript available at the publisher's website here.

Thursday, July 30, 2020

Research Revisited: "Quality of Evidence is a Key Determinant for Making a Strong GRADE Guidelines Recommendation" (2015)

This month in 2015, Djulbegovic and colleagues published a paper that examined the impact of quality of evidence, balance between benefits and harms, patient values and preferences, and resource use (the four GRADE factors) on the strength of resulting clinical recommendations.

The four major GRADE factors that drive clinical recommendations 
The authors circulated a survey among 18 members of a guideline panel of the American Association of Blood Banking (AABB) who had recently convened to develop guidelines for the use of prophylactic versus therapeutic platelet transfusion in patients with thrombocytopenia. Using the panel members’ assessments of the four GRADE factors for the evidence presented, along with their resulting strong or weak recommendations, the authors conducted a logistic regression to examine the relative impact of each factor.

The guideline panel had reviewed the evidence for ten key questions. Overall, the consistency of judgments across panel members was good (Cronbach’s alpha = 0.86). Questions with a high quality of evidence were 4.5 times more likely to result in a strong recommendation (p < 0.001), whereas none of the three remaining GRADE factors were significantly associated with the strength of the resulting recommendations. Moreover, the model suggested that when the quality of evidence was high, there was a 90% chance of the resulting recommendation being strong; when the quality of evidence was very low, this chance dropped to 10%.
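The reported numbers let us roughly reconstruct what such a logistic model implies. The sketch below (in Python, with an illustrative 0-3 coding of evidence quality; this is not the authors’ actual model or data) anchors a logistic curve to the two reported predicted probabilities and recovers an odds ratio per quality level close to the reported 4.5:

```python
import math

def inv_logit(x):
    """Convert log-odds to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical coding of GRADE quality of evidence: 0 = very low ... 3 = high.
# Anchor the model to the two predicted probabilities reported in the paper:
# P(strong | very low) = 0.10 and P(strong | high) = 0.90.
p_very_low, p_high = 0.10, 0.90
b0 = math.log(p_very_low / (1 - p_very_low))     # intercept: log-odds at x = 0
b1 = (math.log(p_high / (1 - p_high)) - b0) / 3  # slope per quality level

odds_ratio_per_level = math.exp(b1)  # ~4.3, close to the reported 4.5

for level, label in enumerate(["very low", "low", "moderate", "high"]):
    p = inv_logit(b0 + b1 * level)
    print(f"{label:>8}: P(strong recommendation) = {p:.2f}")
```

Note how a modest-looking per-level odds ratio compounds across the quality scale into a large swing in the predicted probability of a strong recommendation.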

The figure from Djulbegovic et al. shows the associations between increasing quality of evidence and strength of resulting recommendations.

The authors concluded that the quality of evidence was far and away the most important contributor to the strength of the resulting recommendations, at least within the studied guideline panel. Patient values and preferences, the balance between benefits and harms, and issues of resource use should all still factor into the process of moving from evidence to decisions; however, the relative nebulousness of these considerations, and the lack of established structures for eliciting the data they require (such as the use of a patient panel or survey), likely make them less influential on the overall strength of a recommendation.

The Evidence-to-Decision framework, which makes these additional considerations more explicit in the formulation of recommendations, was introduced just a year later. It’s worth wondering whether the relative impact of the remaining three GRADE factors has changed since the introduction and adoption of this framework – perhaps presenting an opportunity to revisit this research.

Djulbegovic, B., Kumar, A., Kaufman, R.M., Tobian, A., & Guyatt, G.H. 2015. Quality of evidence is a key determinant for making a strong GRADE guidelines recommendation. J Clin Epidemiol 68(7): 727-732.

Manuscript is available at the publisher's website here. 

Thursday, July 23, 2020

Systematic Review Updates Often Improve Precision of Estimate, but not Overall Findings

When done well, systematic reviews with meta-analysis provide a comprehensive view of the current state of the evidence on a topic as well as the general strength and direction of findings. Sometimes, however, what they find is that little evidence exists so far, or that the findings are relatively heterogeneous. In these cases, the addition of more studies will likely improve their precision; thus, future updates are warranted to reanalyze the data with the infusion of newly published studies and grey literature. (Check out our series on living systematic reviews for one way these reviews could be kept continuously up-to-date – Part 1, Part 2, Part 3).

A recent analysis published in the September issue of the Journal of Clinical Epidemiology supports this concept. Analyzing the original and updated versions of 30 meta-analyses published between 1994 and 2018 across 19 countries, Gao and colleagues found the following:
  • The average time from original publication to update was 4 years, 8 months.
  • Most of the updates (80%) included more randomized controlled trials than the original. In the majority of cases (73%), an update also led to a higher total number of patients analyzed.
  • The proportion of reviews reporting the methodological quality of included studies was slightly higher among the updates (76.7%) than the original publications (70%).
  • The quality of most included trials was low, and in most cases, the authors did not report the results of the assessment of individual items within their methodological appraisal.
A figure from Gao et al. shows the frequencies of publication years of original systematic reviews analyzed (in blue) and their updates (in green).



Among the 30 pairs of included SRs, the authors identified 130 comparable outcomes in just over half (16) of the reviews that could be analyzed for changes over time; most of these (74.6%) were binary outcomes. Just three of the outcomes (2.3%) showed a statistically significant change in the estimate. In 88.3% of the 94 comparable outcomes that incorporated new evidence, precision improved (i.e., the width of the confidence interval narrowed). Precision was reduced in five outcomes (5.3%) and unchanged in the remaining six (6.4%).
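The mechanics behind this narrowing can be illustrated with a toy fixed-effect, inverse-variance meta-analysis. The effect sizes and standard errors below are invented for illustration (and real updates, like those Gao et al. analyzed, may instead use random-effects models), but they show why adding consistent trials shrinks the confidence interval without moving the pooled estimate much:

```python
import math

def pooled_ci(estimates, std_errors, z=1.96):
    """Fixed-effect inverse-variance pooling; returns (estimate, lower, upper)."""
    weights = [1.0 / se**2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled - z * pooled_se, pooled + z * pooled_se

# Original review: three trials with similar effects (e.g., mean differences).
est, se = [0.30, 0.25, 0.40], [0.15, 0.20, 0.18]
_, lo1, hi1 = pooled_ci(est, se)

# Updated review: the same trials plus two new ones pointing the same way.
_, lo2, hi2 = pooled_ci(est + [0.28, 0.35], se + [0.12, 0.16])

print(f"original CI width: {hi1 - lo1:.3f}")
print(f"updated  CI width: {hi2 - lo2:.3f}")  # narrower: more evidence, same direction
```

Because each trial contributes weight proportional to the inverse of its variance, every added study increases the total weight and therefore shrinks the pooled standard error.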

The authors conclude that while updating meta-analyses within systematic reviews often improves the precision of an estimate (as assessed by the width of the confidence interval around it), it rarely changes the estimate itself. In addition, systematic review authors should take care to more thoroughly assess and report the methodological appraisal of individual items within each included study. This may serve as a guide for determining whether a future update could dramatically impact current findings, and how frequently updates should be conducted.

Gao, Y., Yang, K., Cai, Y., Shi, S., Liu, M., Zhang, J... & Song, F. Updating systematic reviews can improve the precision of outcomes: A comparative study. J Clin Epidemiol, 2020; 125: 108-119.

Manuscript available here. 

Monday, July 20, 2020

New Review Provides Insight into Unique Challenges of Continuous and TTE Outcomes, Potential Solutions

Health Technology Assessments (HTAs) and guidelines often meta-analyze non-binary outcomes, such as continuous and time-to-event outcomes, in order to elucidate the observed effect of a health intervention. However, these types of outcomes may require more sophisticated analysis and modeling techniques, making them more difficult to synthesize for authors with limited statistical knowledge or resources.

A newly published review by Freeman and colleagues aimed to describe the use and presentation of these outcomes, and in doing so, identify potential challenges and facilitators to improving their application in future publications. The study analyzed 25 technology appraisals and 15 guidelines from the UK’s National Institute for Health and Care Excellence (NICE) and 7 HTA reports from the National Institute for Health Research (NIHR), for a total of 47 documents using meta-analyses (MA), network meta-analyses (NMA), or a combination of the two.

About half (51%) of the items reported at least one continuous outcome, while just over half (55%) reported at least one time-to-event outcome. Continuous outcomes were most commonly presented as a mean difference (MD). The most commonly used time-to-event outcomes were overall and progression-free survival, presented as hazard ratios. Notably, no articles reported the methods used to handle multiplicity of either continuous or time-to-event outcomes. The existence of multiple time-points was largely handled by presenting a separate meta-analysis for each relevant time-point.

Most of the analyzed documents provided a decision model based on continuous or time-to-event outcomes, but many of these models drew on the results of only a single trial, despite the fact that meta-analyses had been undertaken.

Reporting of Decision Models Across Publications Using Continuous Outcomes.
Reporting of Decision Models Across Publications Using Time-to-Event Outcomes.

The authors present a list of the key challenges faced by authors of meta-analyses using these outcomes, such as the use of continuous outcomes that are reported with different scales, the multiplicity of related outcomes from the same study or various time-points in time-to-event outcomes, and nonproportional hazards (hazards that change over the course of time) in time-to-event outcomes. They present the following suggestions for better managing these issues:
  • Increased availability of statistical expertise on MA and NMA teams
  • Development of user-friendly software that allows users to approach more complex statistical techniques – for instance, those that allow multiple outcomes from the same study to be analyzed simultaneously – with the same ease and accessibility as point-and-click software such as RevMan.
  • Increased reporting of outcomes within individual trials, as well as the reporting of individual patient data by trial authors.
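For the first of these challenges – trials reporting the same construct on different scales – one common remedy is to convert each trial’s result to a standardized mean difference before pooling. A minimal sketch in Python (the trial means, SDs, sample sizes, and scale descriptions are invented for illustration):

```python
import math

def hedges_g(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference (Hedges' g) with small-sample correction."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / pooled_sd          # Cohen's d
    correction = 1 - 3 / (4 * (n1 + n2) - 9)  # Hedges' small-sample correction
    return d * correction

# Two hypothetical trials measuring the same construct on different scales:
g_trial_a = hedges_g(24.0, 6.0, 50, 21.0, 6.5, 50)  # e.g., a 0-52 symptom scale
g_trial_b = hedges_g(11.5, 3.0, 40, 10.0, 3.2, 40)  # e.g., a 0-27 symptom scale

# On the common standardized scale the two effects are directly comparable.
print(f"trial A: g = {g_trial_a:.2f}, trial B: g = {g_trial_b:.2f}")
```

Once each trial is expressed in standard-deviation units, the effects can be pooled as usual, at the cost of a less intuitive unit for clinicians.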
Freeman, S.C., Sutton, A.J., & Cooper, N.J. Update of methodological advances for synthesis of continuous and time-to-event outcomes would maximize use of evidence base. J Clin Epidemiol, 2020; 124: 94-105.

Manuscript available from publisher's website here. 


Wednesday, July 15, 2020

The Median Cochrane Systematic Review Takes 2 Years to Publish, but This Window May be Lengthening

Recent posts (1, 2, 3, 4) have discussed the use of machine learning, automation, crowdsourcing, and other strategies that can substantially speed up the process of a systematic review. However, for most systematic review authors in the year 2020, a project with a timeline on the order of weeks or months – as opposed to years – remains a far-off futuristic ideal.

In fact, according to a recently published review analyzing over 6,700 systematic reviews published in the Cochrane Database since 1995, the median systematic review took 2 years from publication of the protocol to publication of the resulting review. Reviews published over the past decade alone, however, took a median of 2.4 months longer, suggesting that the window from protocol to publication is lengthening. Additionally, the majority (60%) of reviews published over the past five years took longer than two years to complete, compared with just over half (51%) of all reviews analyzed. This is an important finding, as a longer lag time brings a greater risk of the review becoming out-of-date before it even has the chance to be disseminated.

A figure from Andersen et al. compares the Kaplan-Meier curves of turnaround times for Cochrane reviews published within each 5-year interval, with marked slowdown for the most recent cohort (2015-2019).

There was also a high amount of variability in turnaround time between different Review Groups, with the fastest group publishing reviews 2.6 times faster than the slowest.

As Andersen and colleagues explain, it is worth noting that Cochrane reviews tend to take longer to publish than other systematic reviews, perhaps due to their higher standards of rigor. However, the median publication time appears to have increased over the latter half of the last quarter-century – perhaps due to the increasing complexity and rigor of reporting systematic reviews, in addition to the complexity of the studies they are charged with analyzing.

Thus, although technological advancements may someday shorten the timeline of the typical systematic review, the data suggest that the turnaround time has only increased in recent years. Qualitative research comparing the workflow and processes of the different Cochrane review groups may provide insight into best practices for improving the efficiency of a systematic review’s production, and with it, the currency of its findings upon publication.

Andersen, M.Z., Gülen, S., Fonnes, S., Andresen, K., & Rosenberg, J. Half of Cochrane reviews were published more than 2 years after the protocol. J Clin Epidemiol, 2020; 124: 85-93.

Manuscript available from the publisher's website here.