Thursday, September 24, 2020

Pre-Print of PRISMA 2020 Updated Reporting Guidelines Released

Since their publication in 2009, the PRISMA guidelines have been the standard for reporting in systematic reviews and meta-analyses. Now, 11 years later, the PRISMA checklist has received a facelift for 2020 that incorporates the methodological advances of the intervening years.

In a recently released pre-print, Page and colleagues describe their approach to designing the new and improved PRISMA. Sixty reporting documents were reviewed to identify any new items deserving of consideration and 110 systematic review methodologists and journal editors were surveyed for feedback. The new PRISMA 2020 draft was then developed based on discussion at an in-person meeting and iteratively revised based on co-author input and a sample of 15 experts.


The result is an expanded, 27-item checklist replete with elaboration of the purpose for each item, a sub-checklist specifically for reporting within the abstract, and revised flow diagram templates for both original and updated systematic reviews. Here are some of the major changes and additions to be aware of:

  • Recommendation to present search strategies for all databases instead of just one.
  • Recommendation that authors list "near-misses," or studies that met many but not all inclusion criteria, in the results section.
  • Recommendation to assess certainty of synthesized evidence.
  • New item for declaration of Conflicts of Interest.
  • New item to indicate whether data, analytic code, or other materials have been made publicly available.
Page, M., McKenzie, J., Bossuyt, P., Boutron, I., Hoffmann, T., Mulrow, C., ... & Moher, D. 2020. The PRISMA 2020 Statement: An updated guideline for reporting systematic reviews.

Pre-print available from MetaArXiv here. 

Friday, September 18, 2020

WHO Guidelines are Considering Health Equity More Frequently, but Reporting of Judgments is Often Incomplete

The GRADE evidence-to-decision (EtD) framework was developed to more explicitly and transparently inform consideration of the implications of clinical recommendations, such as their potential positive or negative impacts on health equity. A new analysis of World Health Organization (WHO) guidelines published between 2014 and 2019 - over half (54%) of which used the EtD framework - examines the consideration of health equity in the guidelines' resulting recommendations.

Dewidar and colleagues found that the guidelines utilizing the EtD framework were more likely to be addressing health issues in socially disadvantaged populations (42% of those developed with the EtD versus 24% of those without). What's more, the use of the EtD framework has risen over time, from 10% of guidelines published in 2016 (the year of the EtD's introduction) to 100% of those published within the first four months of 2019. Use of the term "health equity" increased to a similar degree over this period.

Just over one-third (38%) of recommendations were judged to increase or probably increase health equity, while 15% selected the judgment "Don't know/uncertain" and 8% provided no judgment. Just over one-quarter (28%) of the recommendations utilizing the EtD framework provided evidence for the judgment. When detailed judgments were provided, they were more likely to discuss the potential impacts of place of residence and socioeconomic status and less likely to explicitly consider gender, education, race, social capital, occupation, or religion.


The authors conclude that while consideration of the potential impacts of recommendations on health equity has increased considerably in recent years, reporting of these judgments is still often incomplete. Reporting which published research evidence or additional considerations were used to make a judgment, as well as considering the various PROGRESS factors (Place, Race, Occupation, Gender, Religion, Education, Socioeconomic status, and Social capital), will likely improve the transparency of recommendations in future guidelines where health equity impacts are of concern.

Dewidar, O., Tsang, P., León-Garcia, M., Mathew, C., Antequera, A., Baldeh, T., ... & Welch, V. 2020. Over half of WHO guidelines published from 2014 to 2019 explicitly considered health equity issues: A cross-sectional survey. J Clin Epidemiol 127:125-133.

Manuscript available from the publisher's website here.



Monday, September 14, 2020

Timing and Nature of Financial Conflicts of Interest Often Go Unreported, Systematic Survey Finds

The proper disclosure and management of financial Conflicts of Interest (FCOI) within the context of a published randomized controlled trial is vital to alerting the reader to the sources of funding for the research and other financial factors that may influence the design, conduct, or reporting of the trial.

A recently published cross-sectional survey by Hakoum and colleagues examined the nature of FCOI reporting in a sample of 108 published trials and found that 99% of these reported individual author disclosures, while only 6% reported potential sources of FCOI at the institutional level. Individual authors reported a median of 2 FCOIs. Among the 2,972 FCOIs reported by 806 individuals, the greatest proportions came from personal fees other than employment income (50%) and from grants (34%). Further, a large majority (85%) of the disclosed individual FCOIs involved private-for-profit entities. Notably, only one-third (33%) of these disclosures included the timing of the funding in relation to the trial, 17% reported the relationship between the funding source and the trial, and just 1% reported the monetary value.


 

Using a multivariate regression, the authors found that the reporting of FCOI by individual authors was positively associated with nine factors, most strongly with the authors being from an academic institution (OR: 2.981; 95% CI: 2.415 – 3.680), with the funding coming from an entity other than private-for-profit (OR: 2.809; 95% CI: 2.274 – 3.470), and with the first author being affiliated with an institution in a low- or middle-income country (OR: 2.215; 95% CI: 1.512 – 3.246).

 

More explicit and complete reporting of FCOIs, the authors conclude, may improve readers’ level of trust in the results of a published trial and in the authors presenting them. To improve the nature and transparency of FCOI reporting, researchers may consider disclosing details related to the funding’s source, including the timing of the funding in relation to the conduct and publication of the trial, the relationship between the funding source and the trial, and the monetary value of the support.

Hakoum, M.B., Noureldine, H., Habib, J.R., Abou-Jaoude, E.A., Raslan, R., Jouni, H., ... & Akl, E.A. (2020). Authors of clinical trials seldom reported details when declaring their individual and institutional financial conflicts of interest: A cross-sectional survey. J Clin Epidemiol 127:49-58.

Manuscript available from the publisher's website here.

Tuesday, September 8, 2020

Assessing Health-Related Quality of Life Improvement in the Modern Anticancer Therapy Era

Recent breakthroughs in anticancer therapies such as small-molecule drugs and immunotherapies have made improvements in Health-Related Quality of Life (HRQOL) possible among cancer patients over the course of treatment. In a recent paper published in the Journal of Clinical Epidemiology, Cottone and colleagues are the first to propose a framework for assessing this change in HRQOL over time: Time to HRQOL Improvement (TTI) and Time to Sustained HRQOL Improvement (TTSI).

In the proposed framework, TTI is based on the time to the “first clinically meaningful improvement occurring in a given scale or in at least one among different scales” – for instance, a minimal important difference (MID) of 5 points on the European Organization for Research and Treatment of Cancer (EORTC) Quality of Life Questionnaire – Core 30 (QLQ-C30). The authors suggest utilizing the first posttreatment score as the baseline measurement for monitoring improvements over time. “Sustained improvement” was defined as the first improvement that is not followed by a deterioration that meets or exceeds the MID.
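
To make these definitions concrete, here is a minimal sketch of how TTI and TTSI could be computed for a single patient's series of assessments, assuming a 5-point MID on a scale where higher scores indicate better functioning. The function names and data are illustrative, not taken from the paper.

```python
# Minimal sketch: time to (sustained) HRQOL improvement for one patient.
# Assumes higher scores = better functioning and a 5-point MID; names and
# data are illustrative, not from Cottone et al.

MID = 5  # minimal important difference, e.g., on a QLQ-C30 scale

def time_to_improvement(times, scores, baseline):
    """Return the first time with a clinically meaningful improvement."""
    for t, s in zip(times, scores):
        if s - baseline >= MID:
            return t
    return None  # censored: no meaningful improvement observed

def time_to_sustained_improvement(times, scores, baseline):
    """Return the first improvement not followed by an MID-sized deterioration."""
    for i, (t, s) in enumerate(zip(times, scores)):
        if s - baseline >= MID:
            # sustained if no later score drops >= MID below the improved score
            if all(s - later < MID for later in scores[i + 1:]):
                return t
    return None

# Example: assessments at months 3, 6, 9, 12 with a baseline score of 60
times, scores = [3, 6, 9, 12], [63, 66, 60, 67]
print(time_to_improvement(times, scores, baseline=60))            # 6
print(time_to_sustained_improvement(times, scores, baseline=60))  # 12
```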

 

The use of Kaplan-Meier curves and Cox proportional hazards models is inappropriate for these outcomes, the authors argue, as these methods do not account for possible competing events, such as disease progression, toxicity, or an earlier improvement in another scale when multiple scales are used. They propose the Fine-Gray model for the evaluation of TTI and TTSI and pilot it with a case study of 124 newly diagnosed chronic myeloid leukemia patients undergoing first-line treatment with nilotinib.
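
The Fine-Gray model itself is most commonly fit in R (e.g., the cmprsk package). As a rough Python illustration of the underlying competing-risks idea, a cumulative incidence function can be estimated with lifelines' Aalen-Johansen fitter, which, unlike 1 − Kaplan-Meier, does not treat competing events as censoring. The data below are invented.

```python
# Sketch: cumulative incidence of HRQOL improvement with a competing event
# (e.g., disease progression), using made-up data. This is not the paper's
# Fine-Gray regression, only the nonparametric competing-risks estimate.
from lifelines import AalenJohansenFitter

# event codes: 0 = censored, 1 = HRQOL improvement, 2 = competing event
durations = [3, 5, 6, 7, 8, 9, 11, 12, 15, 18]
events    = [1, 2, 1, 0, 1, 2, 1,  0,  2,  1]

ajf = AalenJohansenFitter()
ajf.fit(durations, events, event_of_interest=1)
print(ajf.cumulative_density_)  # P(improvement by time t), respecting competing risks
```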


Time To Improvement (TTI) and Time to Sustained Improvement (TTSI) can be used to elucidate differences in HRQOL responses to treatment based on baseline characteristics. Here, the figure shows TTSI in fatigue scores based on hemoglobin level at baseline.


Using this model, the authors found that improvements in fatigue scores appeared more quickly than those in physical functioning when measuring scores from baseline (pre-treatment), but upon using first post-treatment score as the baseline, the differences between improvement rates in fatigue and physical functioning diminished. Additionally, a lower baseline hemoglobin level was associated with earlier sustained improvements in fatigue.

 

While the proposed method of evaluating TTI and TTSI has some limitations, such as lower statistical power than other ways of tracking changes in HRQOL over time, it also has notable strengths. In particular, this method can be used to elucidate differences between treatment approaches that show similar survival outcomes so that the approach with shorter TTI and TTSI can be favored.


Cottone, F., Collins, G.S., Anota, A., Sommer, K., Giesinger, J.M., Kieffer, J.M., ... & Efficace, F. (2020). Time to health-related quality of life improvement analysis was developed to enhance evaluation of modern anticancer therapies. J Clin Epidemiol 127:9-18.


Manuscript available from publisher's website here. 

Wednesday, September 2, 2020

A New Tool for Assessing the Credibility of Effect Modification Cometh: Introducing the ICEMAN

Effect modification goes by many other names: “subgroup effect,” “statistical interaction,” and “moderation,” to name a few. Regardless of what it’s called, the existence of effect modification in the context of an individual study means that the effect of an intervention varies between individuals based on an attribute such as age, sex, or severity of underlying disease. Similarly, a systematic review may aim to identify effect modification between individual studies based on their setting, year of publication, or methodological differences (often called a “subgroup analysis”).
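
In a regression framework, effect modification corresponds to an interaction term. As an illustration (simulated data, not from the paper), one might test a treatment-by-sex subgroup effect like this:

```python
# Sketch: effect modification as a statistical interaction term, with
# simulated data. A significant treatment-by-sex term suggests the
# treatment effect differs between subgroups.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "treat": rng.integers(0, 2, n),
    "female": rng.integers(0, 2, n),
})
# Simulate a true interaction: treatment helps one subgroup more
logit = -0.5 + 0.2 * df.treat + 0.1 * df.female + 0.6 * df.treat * df.female
df["outcome"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = smf.logit("outcome ~ treat * female", data=df).fit(disp=0)
print(model.params["treat:female"])   # estimated interaction (log-odds scale)
print(model.pvalues["treat:female"])  # evidence for effect modification
```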

As many as one-quarter of randomized controlled trials (RCTs) and meta-analyses examine their findings for potential evidence of effect modification, according to a paper by Schandelmaier and colleagues published in the latest edition of CMAJ. However, it is not uncommon for claims of effect modification to be later proved spurious, which may negatively affect the quality of care in those subgroups of patients. Potential sources of these claims range from simple random chance to issues with selective reporting and misguided application of statistical analyses.




In “Development of the Instrument to assess the Credibility of Effect Modification Analyses (ICEMAN) in randomized controlled trials and meta-analyses,” the authors present a novel tool for evaluating the credibility of a potential effect modifier. While several sets of criteria have been developed in the past for this purpose, the ICEMAN is the first to be based on a rigorous development process and refined with formal user testing.

 

First, the authors conducted a systematic survey of the literature to ensure a comprehensive understanding of previously proposed criteria for evaluating effect modification. Thirty sets were identified, none of which adequately reflected the authors’ conceptual framework. Second, an expert panel of 15 members was randomly selected from a list of 40 experts identified through the systematic survey. This panel then pared down the initial list of 36 candidate criteria to 20 required and eight optional items. After developing a manual for its use, the authors tested the instrument in semi-structured interviews with a diverse group of 17 potential users, including authors of Cochrane reviews and RCTs as well as journal editors.


Schandelmaier, S., Briel, M., Varadhan, R., Schmid, C.H., Devasenapathy, N., Hayward, R.A., Gagnier, J., ... & Guyatt, G.H. 2020. Development of the Instrument to assess the Credibility of Effect Modification Analyses (ICEMAN) in randomized controlled trials and meta-analyses. CMAJ 192:E901-906.


Manuscript available at the publisher's website here.

Wednesday, August 26, 2020

Rapid, Up-to-Date Evidence Synthesis in the Time of COVID

In emergent situations with sparse and rapidly evolving bodies of research, evidence synthesis programs must be able to adapt to a shortened timeline to provide clinicians with the best available evidence for decision-making. (See our previous posts on rapid systematic review and guideline development, here, here, here, and here). But perhaps no health crisis in the modern era has made this more clear than the coronavirus disease 2019 (COVID-19) pandemic.

Recently, Murad and colleagues published a framework detailing a four-pillar program through which they have been able to synthesize evidence related to the COVID-19 pandemic. This system has been tried and tested within the Mayo Clinic, a multi-state academic center with more than 1.2 million patients per year.

 

Launched within two weeks of the World Health Organization’s declaration of COVID-19 as a pandemic, Mayo Clinic’s evidence synthesis program consisted of four major components:

  • What is New?: an automatically generated list of COVID-19-related studies published within the last three days and categorized into topic areas such as diagnosis or prevention (sketched in code after this list)
  • Repository of Studies: a running list of previously published studies since the first case report of COVID-19, including those that move from the “What is New?” list after three days’ time
  • Rapid Reviews: reviews published within three to four days in response to pressing clinical questions from those on the frontlines, drawing on the study repository. To expedite synthesis, studies are often screened and selected by a single reviewer, and evidence is rarely meta-analyzed.
  • Repository of Reviews: a collection of reviews including those developed at Mayo and elsewhere, identified in twice-weekly searches and through a list of predetermined websites. To supplement knowledge, some reviews included indirect evidence borrowed from studies of other coronaviruses or respiratory infections, when appropriate.
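
As a loose illustration of the “What is New?” rotation described above, the core logic is a date-windowed split of the study pool. The field names and the three-day window below mirror the description but are otherwise invented, not code from Murad et al.

```python
# Sketch: rotating studies between the "What is New?" list (last three days)
# and the running repository. Data and field names are illustrative.
from datetime import date, timedelta

studies = [
    {"title": "Remdesivir case series", "topic": "treatment", "published": date(2020, 4, 6)},
    {"title": "PCR accuracy study",     "topic": "diagnosis", "published": date(2020, 4, 1)},
]

def rotate(studies, today, window_days=3):
    cutoff = today - timedelta(days=window_days)
    whats_new  = [s for s in studies if s["published"] >= cutoff]
    repository = [s for s in studies if s["published"] < cutoff]
    return whats_new, repository

whats_new, repository = rotate(studies, today=date(2020, 4, 7))
print([s["title"] for s in whats_new])   # ['Remdesivir case series']
print([s["title"] for s in repository])  # ['PCR accuracy study']
```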

Within one month of the framework’s establishment, the team had conducted seven in-house rapid reviews and had indexed more than 100 newly published reviews into a database housing over 2,000 total.
 

The authors conclude that while an intensive system such as this may not be feasible in smaller health systems, cross-collaboration and sharing of knowledge can allow for informed and up-to-date clinical care that adapts in the face of a rapidly changing landscape of evidence.


Murad, M.H., Nayfeh, T., Suarez, M.U., Seisa, M.O., Abd-Rabu, R., Farah, M.H.E., ... & Saadi, S.M. 2020. A framework for evidence synthesis programs to respond to a pandemic. Mayo Clin Proc 95(7):1426-1429.


Manuscript available at the publisher's website here.

Friday, August 14, 2020

New Elaboration of CONSORT Items Aims to Improve the Reporting of Deprescribing Trials

Deprescribing is the act of withdrawing a treatment prescription from patients for whom a medication has become inappropriate or in whom the risks may now outweigh the benefits. However, trials examining the effects of deprescribing are often complex and multi-faceted, and reporting of these trials can miss important aspects such as patient selection and length of follow-up. 

A recently published paper by Blom et al. used a multistep process to develop a reporting guideline for deprescribing trials based on a systematic review of this body of research, paying close attention to those aspects that most commonly went unreported. The result was an elaboration of the Consolidated Standards of Reporting Trials (CONSORT) statement, with the added items reviewed by a panel of 14 experts in areas ranging from pharmacology and geriatric medicine to statistics and reporting guidelines. The process, which ended with a one-day face-to-face meeting to approve the elaborated items, also took into account the Template for Intervention Description and Replication (TIDieR) checklist to ensure that a comprehensive list was created.




The panel determined that all items of the original CONSORT checklist are applicable to deprescribing trials, but that certain items required further detail. The CONSORT items that required the most attention with regards to deprescribing studies included the following:

  • description of trial design
  • participant selection 
  • detailed information that would allow replication of the intervention studied
  • pre-specification of primary and secondary outcomes
  • discussion of adverse events and harms, including those related to drug withdrawal
  • defined periods of recruitment and follow-up

 

In addition to improving the quality of reporting in deprescribing trials, the authors also recommend increasing the amount of dedicated funds available for deprescribing studies, which are currently scarce and not incentivized by common streams of research funding.


Blom, J.W., Muth, C., Glasziou, P., McCormack, J.P., Perera, R., Poortvliet, R.K.E., ... & Knottnerus, J.A. 2020. Describing deprescribing trials better: An elaboration of the CONSORT statement. J Clin Epidemiol 127:87-95.


Manuscript available from the publisher's website here.

Thursday, August 6, 2020

New Systematic Review Suggests Nonconcordance of COI Disclosures with Reporting Databases is Widespread, but Methodological Quality of Studies is Variable

Disclosure of conflicts of interest (COI) is a major point of concern in the development of guidelines as well as original research papers. Over the years, multiple studies have aimed to elucidate just how closely the disclosures of individual authors track with their reported COI in open databases. A new systematic review of 27 such studies, recently published online in the Journal of Clinical Epidemiology, compiles their findings into some eyebrow-raising statistics while also appraising their methodological quality.

 

In their review, El-Rayess and colleagues found that although the methodological quality of studies assessing the concordance of authors’ COI disclosures within papers against public databases varied widely, a median of 81.2% of authors across 20 studies had “nonconcordant” disclosures (ranging from 41.8% to 98.6% across all studies), and more than half of these (43.4% of all authors) were “completely nonconcordant” (ranging from 15% to 89.5% across all studies). What’s more, among seven studies that analyzed company reporting on the individual level, between 23.1% and 85.4% of companies did not report their payments to authors.



For the five studies that analyzed disclosures at the study rather than the individual author level, all found at least some degree of nonconcordance between in-study disclosures and database reports. The rate of nonconcordant disclosures among these studies ranged from 6% to 92.6%.

 

The authors note that ulterior motives of authors are just one potential explanation for the high observed rate of nonconcordant COI disclosure and reporting. Vague instructions and parameters set by journals during the article submission process may undermine efforts to transparently report any and all potential sources of conflict, be they financial, intellectual or otherwise. In addition, the authors found that studies of COI reporting that tended to have higher methodological quality also tended to report lower estimates of nonconcordance, meaning that the overall combined estimates may be artificially inflated – for instance, due to some studies not making a distinction about the relevancy of potential COI sources to the topic of the articles analyzed. The authors note potential sources of nondirectional error as well, such as how differences in COI categories between in-paper disclosures and reference databases were handled, which additionally lowers confidence in the current estimate.




In sum, the recent review by El-Rayess et al. points out that authors’ COI disclosures in their published works are often at odds with publicly available reports of these relationships; however, the overall degree of nonconcordance is still uncertain. Those looking to complete future analyses of COI disclosure policies may want to use this paper as a roadmap to improving our certainty in the actual magnitude of the issue.


El-Rayess, H., Khamis, A.M., Haddad, S., Ghaddara, H.A., Hakoum, M., Ichkhanian, Y., Bejjani, M., ... & Akl, E.A. 2020. Assessing concordance of financial conflicts of interest disclosures with payments' databases: A systematic survey of the health literature. J Clin Epidemiol 127:19-28.


Manuscript available at the publisher's website here.

Thursday, July 30, 2020

Research Revisited: "Quality of Evidence is a Key Determinant for Making a Strong GRADE Guidelines Recommendation" (2015)

This month in 2015, Djulbegovic and colleagues published a paper that examined the impact of quality of evidence, balance between benefits and harms, patient values and preferences, and resource use (the four GRADE factors) on the strength of resulting clinical recommendations.

The four major GRADE factors that drive clinical recommendations
The authors circulated a survey among 18 members of a guideline panel of the American Association of Blood Banks (AABB) who had recently convened to develop guidelines for the use of prophylactic versus therapeutic platelet transfusion in patients with thrombocytopenia. Using the panel members’ assessments of the GRADE factors for the evidence that had been presented, along with the strong or weak recommendations that resulted, a logistic regression was conducted to examine the relative impact of each of the four GRADE factors.

The guideline panel had reviewed the evidence for ten key questions. Overall, the consistency of judgments across panel members was good (Cronbach’s alpha = 0.86). Questions with a high quality of evidence were 4.5 times more likely to result in a strong recommendation (p < 0.0001), whereas none of the three remaining GRADE factors were significantly associated with the strength of the resulting recommendations. Moreover, the model suggested that when the quality of evidence was high, there was a 90% chance of the resulting recommendation being strong; when the quality of evidence was very low, this chance dropped to 10%.
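
To see how large that gap is, the model's predicted probabilities can be converted to odds. The arithmetic below is ours, not a calculation reported by Djulbegovic et al.

```python
# Quick check: translating the 90%/10% predicted probabilities into odds.
def odds(p):
    return p / (1 - p)

print(odds(0.90))               # 9.0   -> high quality: 9-to-1 odds of a strong recommendation
print(odds(0.10))               # ~0.11 -> very low quality: 1-to-9 odds
print(odds(0.90) / odds(0.10))  # 81.0  -> odds ratio between the two extremes
```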

The figure from Djulbegovic et al. shows the associations between increasing quality of evidence and strength of resulting recommendations.

The authors concluded that the quality of evidence was far and away the most important contributing factor to the resulting strength of recommendations, at least within the studied guideline panel. Patient values and preferences, the balance between benefits and harms, and issues of resource use should all be involved in the process of moving from evidence to decisions, but the relative nebulousness of these considerations and the lack of established structures for eliciting the data they require (such as the use of a patient panel or survey) likely make them less impactful on the overall strength of recommendation.

The Evidence-to-Decision framework, which makes these additional considerations more explicit in the formulation of recommendations, was introduced just a year later. It’s worth wondering whether the relative impact of the remaining three GRADE factors has changed since the introduction and adoption of this framework – perhaps presenting an opportunity to revisit this research.

Djulbegovic, B., Kumar, A., Kaufman, R.M., Tobian, A., & Guyatt, G.H. 2015. Quality of evidence is a key determinant for making a strong GRADE guidelines recommendation. J Clin Epidemiol 68(7):727-732.

Manuscript is available at the publisher's website here. 

Thursday, July 23, 2020

Systematic Review Updates Often Improve Precision of Estimate, but not Overall Findings

When done well, systematic reviews with meta-analysis provide a comprehensive view of the current state of the evidence on a topic as well as the general strength and direction of findings. Sometimes, however, what they find is that little evidence so far exists, or that the findings are relatively heterogeneous. In these cases, the addition of more studies will likely improve precision; thus, future updates are warranted to reanalyze the data with the infusion of newly published studies and grey literature. (Check out our series on living systematic reviews for one way these reviews could be kept continuously up-to-date – Part 1, Part 2, Part 3).

A recent analysis published in the September issue of the Journal of Clinical Epidemiology supports this concept. Analyzing the original and updated versions of 30 meta-analyses published between 1994 and 2018 across 19 countries, Gao and colleagues found the following:
  • The average time from original publication to update was 4 years, 8 months.
  • Most of the updates (80%) included more randomized controlled trials than the original. In the majority of cases (73%), an update also led to a higher total number of patients analyzed.
  • The proportion of reviews reporting the methodological quality of included studies was slightly higher among the updates (76.7%) than the original publications (70%).
  • The quality of most included trials was low, and in most cases, the authors did not report the results of the assessment of individual items within their methodological appraisal.
A figure from Gao et al. shows the frequencies of publication years of original systematic reviews analyzed (in blue) and their updates (in green).



Among the 30 pairs of included SRs, the authors identified 130 comparable outcomes in just over half (16) of the reviews that could be analyzed for changes over time; most of these (74.6%) were binary outcomes. Overall, just three (2.3%) of the outcomes changed to a statistically significant degree between versions. In 88.3% of the 94 comparable outcomes that incorporated new evidence, precision improved (i.e., the width of the confidence interval narrowed). Precision was reduced in five outcomes (5.3%) and did not change in the remaining six (6.4%).

The authors conclude that while updating the meta-analyses within systematic reviews often improves the precision of an estimate (as assessed by the width of the confidence interval around it), it rarely changes the estimate itself. In addition, systematic review authors should take care to more thoroughly assess and report the methodological appraisal of individual items within each included study. This may serve as a guide for determining whether a future update may dramatically impact current findings, and how frequently updates should be conducted.

Gao, Y., Yang, K., Cai, Y., Shi, S., Liu, M., Zhang, J., ... & Song, F. Updating systematic reviews can improve the precision of outcomes: A comparative study. J Clin Epidemiol, 2020; 125:108-119.

Manuscript available here. 

Monday, July 20, 2020

New Review Provides Insight into Unique Challenges of Continuous and TTE Outcomes, Potential Solutions

Health Technology Assessments (HTAs) and guidelines often meta-analyze non-binary outcomes, such as continuous and time-to-event outcomes, in order to elucidate the observed effect of a health intervention. However, these types of outcomes may require more sophisticated analysis and modeling techniques, making them more difficult to synthesize for authors with limited statistical knowledge or resources.

A newly published review by Freeman and colleagues aimed to describe the use and presentation of these outcomes and, in doing so, identify potential challenges and facilitators to improving their application in future publications. The study analyzed 25 technology appraisals and 15 guidelines from the UK’s National Institute for Health and Care Excellence (NICE) and 7 HTA reports from the National Institute for Health Research (NIHR), for a total of 47 documents using meta-analyses (MA), network meta-analyses (NMA), or a combination of the two.

About half (51%) of the items reported at least one continuous outcome, while just over half (55%) reported at least one time-to-event outcome. Continuous outcomes were most commonly presented as a mean difference (MD). The most commonly used time-to-event outcomes were overall and progression-free survival, presented as hazard ratios. Notably, no articles reported the methods used to handle multiplicity of either continuous or time-to-event outcomes. Multiple time-points were largely handled by presenting separate meta-analyses for each time-point.

Most of the analyzed documents provided a decision model based on continuous or time-to-event outcomes, but many of these models were informed by the results of only a single trial, despite the fact that meta-analyses were undertaken.

Reporting of Decision Models Across Publications Using Continuous Outcomes.
Reporting of Decision Models Across Publications Using Time-to-Event Outcomes.

The authors present a list of the key challenges faced by authors of meta-analyses using these outcomes, such as continuous outcomes reported on different scales, the multiplicity of related outcomes from the same study or of various time-points in time-to-event outcomes, and nonproportional hazards (hazard ratios that change over time) in time-to-event outcomes. They present the following suggestions for better managing these issues:
  • Increased availability of statistical expertise on MA and NMA teams
  • Development of user-friendly software that allows users to approach more complex statistical techniques – for instance, those that allow multiple outcomes from the same study to be analyzed simultaneously – with the same ease and accessibility as point-and-click software such as RevMan.
  • Increased reporting of outcomes within individual trials, as well as the reporting of individual patient data by trial authors.
Freeman, S.C., Sutton, A.J., & Cooper, N.J. Uptake of methodological advances for synthesis of continuous and time-to-event outcomes would maximize use of the evidence base. J Clin Epidemiol, 2020; 124:94-105.

Manuscript available from publisher's website here. 


Wednesday, July 15, 2020

The Median Cochrane Systematic Review Takes 2 Years to Publish, but This Window May be Lengthening

Recent posts (1, 2, 3, 4) have discussed the use of machine learning, automation, crowdsourcing, and other strategies that can substantially speed up the process of a systematic review. However, for most systematic review authors in the year 2020, a project with a timeline on the order of weeks or months – as opposed to years – remains a far-off futuristic ideal.

In fact, according to a recently published review analyzing over 6,700 systematic reviews published in the Cochrane Database since 1995, the median systematic review took 2 years from publication of the protocol to publication of the resulting review. Reviews published over the past decade alone, however, took a median 2.4 months longer than this, suggesting that the window of time from protocol to publication is increasing. Additionally, the majority (60%) of reviews published over the past five years took longer than two years to complete, compared with just over half (51%) of all reviews analyzed. This is an important finding, as with a longer lag time comes a greater risk of the review becoming out-of-date before it even has the chance to be disseminated.
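
These turnaround times are summarized with Kaplan-Meier methods, which naturally handle protocols whose reviews have not yet been published (censoring). A minimal sketch with invented data, using the lifelines library:

```python
# Sketch: estimating the median protocol-to-publication time with a
# Kaplan-Meier curve, as in Andersen et al.'s figure. Data are made up;
# protocols whose reviews are still unpublished enter as censored.
from lifelines import KaplanMeierFitter

years_to_publication = [0.8, 1.5, 2.0, 2.1, 2.4, 3.0, 3.2, 4.0, 5.5, 6.0]
published            = [1,   1,   1,   1,   1,   1,   0,   1,   0,   1]  # 0 = not yet published

kmf = KaplanMeierFitter()
kmf.fit(years_to_publication, event_observed=published)
print(kmf.median_survival_time_)  # median years from protocol to publication
```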

A figure from Andersen et al. compares the Kaplan-Meier curves of turnaround times for Cochrane reviews published within each 5-year interval, with a marked slowdown for the most recent cohort (2015-2019).
There was also high variability in turnaround time between different Review Groups, with the fastest group publishing reviews 2.6 times faster than the slowest.

As Andersen and colleagues explain, it is worth noting that Cochrane reviews tend to take longer to publish than other systematic reviews, perhaps due to their higher standards of rigor. However, the median publication time appears to have increased over the latter half of the last quarter-century – perhaps reflecting the increasing complexity and reporting rigors of systematic reviews, in addition to the complexity of the studies they are charged with analyzing.

Thus, although technological advancements may someday shorten the timeline of the typical systematic review, the data suggest that the turnaround time has only increased in recent years. Qualitative research comparing the workflow and processes of the different Cochrane review groups may provide insight into best practices for improving the efficiency of a systematic review’s production, and with it, the currency of its findings upon publication.

Andersen, M.Z., Gülen, S., Fonnes, S., Andresen, K., & Rosenberg, J. Half of Cochrane reviews were published more than 2 years after the protocol. J Clin Epidemiol, 2020; 124: 85-93.

Manuscript available from the publisher's website here.

Friday, July 10, 2020

Room for Improvement: Use of Cochrane RoB tool in non-Cochrane Systematic Reviews is Largely Incomplete

The Cochrane Risk of Bias (RoB) tool for randomized controlled trials (RCTs) is commonly used in both Cochrane and non-Cochrane systematic reviews as a standardized way to assess and report the risk of bias within a study or a body of evidence. The tool comprises seven domains, each representing a potential source of bias within the design or execution of an RCT. For each domain (for instance, allocation concealment or selective outcome reporting), the study is judged to have a low, high, or unclear risk of bias from that source.

A new review of non-Cochrane systematic reviews (NCSRs) published in this month’s edition of the Journal of Clinical Epidemiology reports that the use of the Cochrane RoB tool in these reviews is incomplete or inadequate in most cases. Of 508 eligible systematic reviews published through 3 July 2018, the majority (85%) reported assessing risk of bias; within these papers, about half (53%) used the original (2011) Cochrane tool specifically, leaving a total of 269 reviews for further analysis.

A non-negligible minority of studies included in the review by Puljak et al. either did not include certain domains of the Cochrane RoB tool, or did not report which domains were used. Only 40% of the reviews analyzed RoB through all seven domains.

Less than two-thirds (60%) of the 269 included reviews used all seven domains of the Cochrane tool, report Puljak and colleagues, and only 16 of the included reviews (5.9%) reported both a judgment and a comment explaining each judgment, whether within the manuscript or in a supplementary file. Within these 16 reviews, the proportion of inadequate judgments (those in which the comment was not in line with the judgment or in which there was no supporting comment) ranged from 25% (Other Bias domain) to 65% (Selective Reporting Bias domain). The reviews “rarely” included full tables illustrating the RoB judgments for the different domains.

The authors’ findings highlight that both a judgment (low/high/unclear risk of bias) as well as a comment explaining the judgment within each domain should be included in systematic reviews that report use of the Cochrane RoB tool.
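
A simple way to operationalize that recommendation is to record, for every one of the seven 2011 domains, both a judgment and its supporting comment, and to flag anything incomplete. The structure below is illustrative, not from Puljak et al.

```python
# Sketch: a complete RoB assessment pairs a judgment with a supporting
# comment for each of the seven 2011 domains. Wording is illustrative.
DOMAINS = [
    "Random sequence generation",
    "Allocation concealment",
    "Blinding of participants and personnel",
    "Blinding of outcome assessment",
    "Incomplete outcome data",
    "Selective reporting",
    "Other bias",
]

assessment = {
    "Allocation concealment": {
        "judgment": "low",  # one of: "low", "high", "unclear"
        "comment": "Central telephone randomization; allocation not known in advance.",
    },
    # ... one entry per remaining domain
}

# Flag any domain missing either a judgment or a supporting comment
missing = [d for d in DOMAINS
           if d not in assessment
           or not assessment[d].get("judgment")
           or not assessment[d].get("comment")]
print(missing)
```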

Puljak, L., Ramic, I., Naharro, C.A., Brezova, J., Lin, Y.C., Surdila, A.A., ... & Salvado, M.S. Cochrane risk of bias tool was used inadequately in the majority of non-Cochrane systematic reviews. J Clin Epidemiol, 2020; 123:114-119.

Manuscript available from publisher's website here. 

Tuesday, July 7, 2020

Adventures in Protocol Publication Pt. II: Survey of PROSPERO Registrants Finds Peer-Reviewed Protocol Publication is a Mixed Bag

As we discussed in Part I of this series, the registration of a systematic review protocol – for instance, in an online registry such as PROSPERO – is a well-established practice that helps prevent results-driven biases from being introduced into a systematic review and reduces unintentional duplication of effort. The additional publication of such a protocol in a peer-reviewed journal has its own benefits – such as additional citations, saving room in the methods section of the final manuscript, and facilitation of the actual systematic review’s publication. However, it also has its drawbacks, including the potential cost of publishing and the time required from submission to acceptance, which may delay the SR timeline.

In the latest issue of the Journal of Clinical Epidemiology, Rombey and colleagues report the findings of their survey examining the practices and attitudes related to peer-reviewed protocol publication among over 4,000 authors of non-Cochrane systematic review protocols published in PROSPERO in 2018.

In order to identify potential “inhibiting factors” of peer-reviewed protocol publication, respondents who reported publishing their protocol in a peer-reviewed journal in addition to PROSPERO were asked questions related to publication costs, while respondents who had not pursued this option were asked for the reasoning behind their decision.

Nearly half (44.7%) of the 4,054 respondents answered that they had published or planned to publish their protocol in a peer-reviewed journal, while the remaining 55.3% chose not to pursue this option. Among the latter group, the most common reasons given were that publishing the protocol in PROSPERO was deemed sufficient; that it was not an aim or priority of the authors; and that the time required to publish in a peer-reviewed journal would delay the review itself.

A figure from Rombey et al. shows level of agreement with statements related to peer-reviewed protocol publication among >4,000 respondents.

Of those who did report publishing their protocol, about two-thirds (67.9%) indicated that there were no costs associated with publication, while one-quarter (25.4%) indicated that there were costs, and the remaining 6.7% were not sure.

Respondents from Africa, Asia, and South America were twice as likely to report having published a protocol in a peer-reviewed journal than those from Europe, North America, or Oceania; however, likelihood of publication did not appear to vary by gender or experience level. Qualitative analysis of free-response text revealed that some respondents were not aware that publication of protocols in peer-reviewed journals was done at all.

Overall, while cost of publishing a protocol did not appear to be a major inhibiting factor for most respondents, issues related to time from submission to publication as well as opinions regarding the additional value of publishing beyond PROSPERO were the most commonly cited reasons for not pursuing peer-reviewed publication of a systematic review protocol.

Rombey, T., Puljak, L., Allers, K., Ruano, J., & Pieper, D. Inconsistent views among systematic review authors toward publishing protocols as peer-reviewed articles: An international survey. J Clin Epidemiol, 2020; 123:9-17.

Manuscript available from the publisher's website here.

Wednesday, July 1, 2020

A Not-So-Non-Event?: New Systematic Review Finds Exclusion of Studies with No Events from a Meta-Analysis Can Affect Direction and Statistical Significance of Findings

Studies with no events in either arm have been considered non-informative in a meta-analytical context and have thus been left out of such analyses. A new systematic review of 442 such meta-analyses, however, reports that this practice may actually affect the resulting conclusions.

In the July 2020 issue of the Journal of Clinical Epidemiology, Xu and colleagues report their study of meta-analyses of binary outcomes in which at least one included study had no events in either arm. The authors reanalyzed the data from 442 such meta-analyses taken from the Cochrane Database of Systematic Reviews, using modeling to determine the effect of reincorporating the excluded studies.

The authors found that in 8 (1.8%) of the 442 meta-analyses, inclusion of the previously excluded studies changed the direction of the pooled odds ratio (“direction flipping”). In 12 (2.72%) of the meta-analyses, the pooled odds ratio (OR) changed by more than the predetermined threshold of 0.2. Additionally, in 41 (9.28%) of the meta-analyses, the statistical significance of the findings changed when assuming a p = 0.05 threshold (“significance flipping”). In most of these 41 meta-analyses, the excluded (“non-event”) studies made up between 5% and 30% of the total sample size. About half of these alterations widened the confidence interval, while in the other half, the incorporation of non-event studies narrowed it.
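
To see the mechanism, consider a toy meta-analysis pooled by inverse-variance weighting, where a double-zero study is re-included via a 0.5 continuity correction (one simple option; Xu and colleagues use model-based approaches rather than this correction). Re-including the zero-event study contributes a log-OR of 0 with nonzero weight, pulling the pooled OR toward 1:

```python
# Sketch: how re-including a double-zero study can move a pooled OR.
# Toy data; inverse-variance pooling with a 0.5 continuity correction.
import math

# (events_treatment, n_treatment, events_control, n_control)
studies = [(5, 100, 10, 100), (2, 50, 4, 50)]
double_zero = (0, 80, 0, 80)  # dropped under standard practice

def pooled_or(data, cc=0.5):
    num = den = 0.0
    for a, n1, c, n2 in data:
        b, d = n1 - a, n2 - c
        if a == 0 or b == 0 or c == 0 or d == 0:  # apply continuity correction
            a, b, c, d = a + cc, b + cc, c + cc, d + cc
        log_or = math.log((a * d) / (b * c))
        w = 1 / (1/a + 1/b + 1/c + 1/d)           # inverse-variance weight
        num += w * log_or
        den += w
    return math.exp(num / den)

print(pooled_or(studies))                  # ~0.48, excluding the zero-event study
print(pooled_or(studies + [double_zero]))  # ~0.51, including it pulls the OR toward 1
```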

The figure above from Xu et al. shows the proportion of studies reporting no events within the meta-analyses that showed a substantial change in p value when these studies were included. The proportion of the total sample tended to cluster between 5% and 30%.

Post hoc simulation studies confirmed the robustness of these findings, and also found that exclusion of studies with no events preferentially affected the pooled ORs of meta-analyses that found no effect (OR = 1), whereas a large magnitude of effect was protective against these changes. The opposite was found for the effect on the resulting p values (i.e., large magnitudes of effect were more likely to be affected, whereas conclusions of no effect were protected).

In sum, though a common practice in meta-analysis, the exclusion of studies with no events in either arm may affect the direction, magnitude, or statistical significance of the resulting conclusions in a small but non-negligible number of analyses.

Xu, C., Li, L., Lin, L., Chu, H., Thabane, L., Zou, K., & Sun, X. Exclusion of studies with no events in both arms in meta-analysis impacted the conclusions. J Clin Epidemiol, 2020; 123:91-99.

Manuscript available from the publisher's website here. 

Friday, June 26, 2020

CONSORTing with Incorrect Reporting?: Most Publications Aren’t Using Reporting Guidelines Appropriately, New Systematic Review Finds


Reporting guidelines such as PRISMA for systematic reviews and meta-analyses and CONSORT for randomized controlled trials are often touted as a way to improve the thoroughness and transparency of reporting in academic research. However, while these guidelines are intended to guide the reporting of research, a new systematic review of a random sample of different publication types found that in many cases they were cited incorrectly: as a guide for the design and conduct of the research itself, as a means of assessing the quality of published research, or for an unclear purpose.

In the review published earlier this month, Caulley and colleagues worked with an experienced librarian to devise a systematic search strategy that would pick up on any publication citing one of four major reporting guidelines documents from inception to 2018: ARRIVE (used in in vivo animal research), CHEERS (used in health economic evaluations), CONSORT (used in randomized controlled trials) and PRISMA (used in systematic reviews and meta-analyses). Then, a random sample of 50 of each publication type were reviewed independently by two authors for their citation of the reporting guideline.

Overall, only 39% of the 200 reviewed items correctly stated that the guidelines were followed in the reporting of the study, whereas an additional 41% incorrectly cited the guidelines, usually by stating that they informed the design or conduct of the research. Finally, in 20% of the reviewed items, the intended purpose of the cited reporting guidelines was unclear.

Examples of appropriate, inappropriate, and unclear use of reporting guidelines provided by Caulley et al.
Between publication types, RCTs were the most likely to appropriately cite the use of CONSORT guidelines (64%), versus 42% of economic evaluations correctly citing CHEERS, 28% of systematic reviews and meta-analyses appropriately discussing the use of PRISMA, and just 22% of in vivo animal research studies correctly citing ARRIVE.

Appropriate, Inappropriate, and Unclear Use of Reporting Guidelines, by Publication Type.
In addition, the appropriate use of the reporting guidelines did not appear to increase as time elapsed since the publication of those guidelines.

The authors suggest that improved education about the appropriate use of these guidelines – such as the web-based interventions and tools available to those looking to use CONSORT – may improve their correct application in future publications.

Caulley, L., Catalá-López, F., Whelan, J., Khoury, M., Ferraro, J., Cheng, W., ... & Moher, D. Reporting guidelines of health research studies are frequently used inappropriately. J Clin Epidemiol, 2020; 122:87-94.

Manuscript available from the publisher's website here.