Tuesday, September 27, 2022

Only One-Third of a Sample of RCTs Had Made Protocols Publicly Available, New Report Finds

Earlier this year, a study in PLOS Medicine found that nearly one-third (30%) of a sample of randomized controlled trials (RCTs) had been discontinued prematurely, a figure that had not improved over the previous decade. Furthermore, for every 10% increase in adherence to SPIRIT protocol reporting guidelines, the odds of a trial going unpublished fell by 29% (OR 0.71; 95% CI 0.55 to 0.92), and only about 1 in every 5 unpublished trials had been registered.

Now, in this month's issue of the Journal of Clinical Epidemiology, Schönenberger and colleagues have published a study of the availability of RCT protocols from a sample of published trials.

Public availability of study protocols, the authors argue, improves research quality by promoting thoughtful methodological design, reducing selective outcome reporting ("cherry-picking") and the misreporting of results, and promoting ethical compliance. This is especially the case when trial protocols are made available before the publication of study results.

From a random sample of RCTs approved by ethics committees in Switzerland, Germany, Canada, and the United Kingdom in 2012, the authors examined the proportion of studies with publicly available protocols and how those protocols were cited and disseminated. Of the resulting 326 RCTs, 118 (36.2%) had publicly available protocols. Nearly half of these protocols (47.5%) were available as standalone peer-reviewed publications, while 40.7% were available as supplementary material alongside the published results. A smaller proportion (10.2%) were available on a trial registry.

Studies with a sample size greater than 500, or that were investigator- (rather than industry-) sponsored, were more likely to have publicly available protocols. Neither the nature of the intervention (drug versus non-drug) nor the setting (multicenter versus single-center) appeared to affect protocol availability. The majority of protocols (91.8%) were made available after enrollment of the first patient, and just 2.7% were made available after publication of the trial results. Protocols were commonly published shortly before the trial results, at a median of 90% of the elapsed time between the start of the trial and publication of its results.

As this sample comprised only RCTs published in 2012 and by relatively high-income countries, it is unclear whether public protocol availability has improved over time or may be different in other global regions. However, the authors argue, these numbers lend credence to the need for efforts to improve the public availability of RCT protocols, such as through trial registries or requirements by publishing or funding bodies.

Schönenberger, C.M., Griessbach, A., Heravi, A.T., et al. (2022). A meta-research study of randomized controlled trials found infrequent and delayed availability of protocols. J Clin Epidemiol 149:45-52. Manuscript available at the publisher's website here.










Wednesday, September 21, 2022

Second USGN Systematic Review Workshop Features a Record Nine Scholars From Around the Globe

Earlier this month, the U.S. GRADE Network (USGN) held its second two-day, comprehensive, all-virtual systematic review workshop. The workshop allowed participants to learn about each step of the systematic review process, from designing a search strategy to meta-analysis to preparing a manuscript for publication. True to USGN style, it consisted of a mixture of large-group lectures and smaller-group experiential learning sessions in which participants received a tutorial on Rayyan (a free online screening tool), assessed studies for risk of bias, and created risk-of-bias figures and forest plots in Cochrane's Review Manager software.

Uniquely, this workshop featured nine Evidence Foundation scholars who attended free of charge. As part of their scholarship applications, these participants described a proposed or ongoing systematic review project related to health care, with preference given to projects aimed at addressing inequities or focused on underserved populations. The nine accepted applicants brought a diverse array of exciting projects - from HIV pre-exposure prophylaxis to interventions for rabies control to weight-bearing exercise in pregnant patients - and hailed from across the globe, from Canada and Benin to Turkey, Syria, Dubai, and Kazakhstan.


Be the first to hear about these and other trainings by following @USGRADEnet on Twitter or visiting www.systematicreview.org.

Note: applications for scholarships to attend the upcoming GRADE Guideline Development Workshop, held virtually November 30 - December 2, 2022, close September 30. See application details here.












Standardized Mean Difference Estimates Can Vary Widely Depending on the Methods Used, New Review Finds

In meta-analyses of outcomes measured on multiple different scales, a standardized mean difference (SMD) may be used. Randomized controlled trials may also report SMDs to help readers interpret the effect size. Most commonly, the SMD is expressed as Cohen's d, which reports the mean difference in standard deviation units (e.g., an intervention increased or decreased the outcome by x standard deviations, or the two groups differed by x standard deviations on the outcome). It is typically calculated by dividing the difference between groups, or from pretest to posttest in a single group, by some form of standard deviation (e.g., the pooled standard deviation at baseline or posttest, or the standard deviation of change scores). Cohen's d is often used because a general rule of interpretation has been suggested: 0.2 is a small effect, 0.5 a medium effect, and 0.8 a large effect.
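To illustrate, here is a minimal Python sketch (all trial numbers hypothetical) of how the choice of denominator changes the resulting SMD for the same between-group difference:

```python
import numpy as np

def cohens_d(mean_diff, sd):
    """Standardized mean difference: the mean difference in SD units."""
    return mean_diff / sd

def pooled_sd(sd1, n1, sd2, n2):
    """Pooled standard deviation across two groups."""
    return np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))

# Hypothetical two-arm trial: change from baseline on some outcome scale.
n_trt, n_ctl = 50, 50
diff = -8.0 - (-3.0)  # between-group difference in mean change = -5.0

# Three plausible denominators, each yielding a different SMD:
sd_baseline = pooled_sd(12.0, n_trt, 11.0, n_ctl)  # SDs at baseline
sd_posttest = pooled_sd(10.0, n_trt, 14.0, n_ctl)  # SDs at follow-up
sd_change   = pooled_sd(7.0, n_trt, 8.0, n_ctl)    # SDs of change scores

for label, sd in [("baseline", sd_baseline),
                  ("posttest", sd_posttest),
                  ("change score", sd_change)]:
    print(f"SMD using {label} SD: {cohens_d(diff, sd):.2f}")
```

In this toy example, the same 5-point difference yields a d of roughly -0.4 using baseline or posttest standard deviations but about -0.7 using change-score standard deviations - a "small-to-moderate" versus a "moderate-to-large" effect under Cohen's rule of thumb.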

However, there are multiple ways to calculate an SMD, and these can yield differing interpretations of the size of an effect. To investigate further, Luo and colleagues recently published a review of how SMDs are calculated across 161 articles. For each of the 161 randomized controlled trials, published since 2000 and reporting outcomes with some form of SMD, the authors recalculated potential between-group SMDs from the reported data using up to seven different methodological approaches.

Some studies reported more than one type of SMD, so 171 SMDs in total were reported across the 161 studies. Of these, 34 (19.9%) did not describe the chosen calculation method at all, 84 (49.1%) reported the method but in insufficient detail, and 53 (31%) reported the approach in sufficient detail. A confidence interval was reported for only 52 (30.4%) of the SMDs. Of the 161 individual articles, the rule used for interpretation was clearly stated in only 28 (17.4%).

The most common method of calculating the SMD was dividing by the standard deviation of baseline scores, used for 70 (40.9%) of the reported SMDs; 30 (17.5%) used posttest standard deviations and 43 (25.1%) used the standard deviation of change scores.

[Figure: variability of SMD estimates across the 161 included studies.]

Across all of the potential ways to calculate the SMD, the median article's estimates varied by 0.3 - potentially the difference between a "small" and a "moderate" effect, or between a "moderate" and a "large" effect, under Cohen's suggested rule of thumb. The studies with the largest variation tended to have smaller sample sizes and larger reported effect sizes.

This work raises an important point: while no single method for calculating SMDs is considered superior to the others, if the calculation approach is not prespecified, researchers could try different methods until the most impressive effect size is reached. To help prevent this, the authors suggest prespecifying the analytical approach and reporting SMDs together with raw mean differences and standard deviations to aid interpretation and provide context.

Luo, Y., Funada, S., Yoshida, K., et al. (2022). Large variation existed in standardized mean difference estimates using different calculation methods in clinical trials. J Clin Epidemiol 149: 89-97. Manuscript available at the publisher's website here.










Thursday, September 8, 2022

New Study Sheds Light on the Perks of Copublication of Cochrane Systematic Reviews

Since 1994, Cochrane has allowed certain systematic reviews to be published beyond the eponymous Cochrane Database of Systematic Reviews and in the pages of a medical specialty journal, in the hope of increasing dissemination. Often, these copublished reviews include an abridged version of the Cochrane review as well as commentary or other additional features explaining the review and its findings.



A new retrospective cohort study published in this month's issue of the Journal of Clinical Epidemiology highlights the benefits of such an approach. In brief, Zhu and colleagues investigated the citation rates of Cochrane systematic reviews that had been copublished in a second journal versus those published only in the Database. Using a 2:1 ratio, the authors matched two randomly selected noncopublished reviews to each copublished review identified in an indexed journal holding a copublication agreement with Cochrane.

Out of the resulting sample of 101 copublished and 202 noncopublished reviews, the median number of citations over the first five years after publication was higher for reviews that had been copublished (71 versus 32.5, approximately 118% higher). The median was also higher for copublished reviews in the first, second, third, and fifth years individually. In 19% of journals, copublication of a review was followed by an increase in impact factor over the following year; this was true for 27.3% of journals during the second year. There was no clear trend over time in the rate of copublication across journals, though the total number of Cochrane reviews published during the same period generally increased.

Zhu, L., Zhang, Y., Yang, R., et al. (2022). Copublication improved the dissemination of Cochrane reviews and benefited copublishing journals: A retrospective cohort study. J Clin Epidemiol 149:110-117. Manuscript available at publisher's website here. 









Thursday, September 1, 2022

When is Imprecision "Very Serious" for a Systematic Review? New GRADE Guidance for Rating Down Two Levels Using a Minimally Contextualized Approach

Within the GRADE framework, the assessment of imprecision of an effect estimate relies on two major considerations: the Optimal Information Size (OIS), which asks whether the pooled sample size provides adequate power to detect a minimally important difference, and the confidence interval (CI) approach, which requires inspecting the width of the 95% CI and determining whether it crosses a meaningful effect threshold (for instance, ranging from a null or trivial effect at one end to a potentially meaningful effect at the other).
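For readers unfamiliar with the calculation, the OIS is essentially the total sample size from a conventional power analysis. A minimal sketch using statsmodels, with a hypothetical minimally important difference, outcome SD, and event rates:

```python
# Sketch of computing the Optimal Information Size (OIS) as the total
# sample size from a conventional power analysis; all effect values here
# are hypothetical, chosen only for illustration.
from statsmodels.stats.power import TTestIndPower, NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Continuous outcome: a minimally important difference of 5 points on a
# scale with SD 15 gives a standardized effect size of 1/3.
n_per_arm = TTestIndPower().solve_power(effect_size=5 / 15,
                                        alpha=0.05, power=0.80)
print(f"Continuous OIS: {2 * n_per_arm:.0f} participants total")

# Dichotomous outcome: detecting a drop in event rate from 20% to 15%.
es = proportion_effectsize(0.20, 0.15)
n_per_arm = NormalIndPower().solve_power(effect_size=es,
                                         alpha=0.05, power=0.80)
print(f"Dichotomous OIS: {2 * n_per_arm:.0f} participants total")
```

A pooled meta-analytic sample size falling well short of this number would raise an imprecision concern under the OIS criterion.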

GRADE users applying the framework to assess the certainty of evidence in the context of a systematic review, rather than a practice guideline, were instructed in previous guidance to prioritize the OIS over the CI approach, since the latter typically requires some judgment about a meaningful effect threshold, or "contextualization." However, because systematic review authors and readers can also benefit from applying effect thresholds when assessing imprecision, new guidance has been published to help systematic reviewers apply a "minimally contextualized approach" and consider both criteria when rating down by one, two, or three levels as necessary.




In addition to providing multiple examples in which rating down by two levels based on the span of the CI may be warranted, the paper also describes circumstances in which systematic reviewers may rate down by two levels in the absence of concerns about the CI but in the presence of a particularly small information size. Specifically, when the CI does not overlap the threshold(s) of interest and the effect is sufficiently large, authors may still consider rating down for imprecision if the OIS (the sample size needed to detect the effect within one adequately powered trial, as determined by a conventional power analysis) is not met.

For dichotomous outcomes in this case, one may consider rating down by two levels immediately if the ratio of the upper to the lower boundary of the CI exceeds 2.5 for odds ratios or 3.0 for relative risks. If these criteria are not met, authors should still compare the pooled sample size to the calculated OIS. For continuous outcomes, a total sample size that does not exceed 30-50% of the OIS is reason to rate down by two levels to "very serious" imprecision.
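These numeric thresholds lend themselves to a simple screening check. Below is a rough Python sketch of that logic; the thresholds come from the guidance, but the function itself is our own illustrative simplification, not an official GRADE algorithm:

```python
# Rough sketch of the two-level rating rule described above. The numeric
# thresholds are from the guidance; the function structure is illustrative.
def rate_down_two_levels(effect_type, ci_lower, ci_upper, total_n, ois):
    """Suggest whether imprecision may be 'very serious' (two levels)."""
    if effect_type == "OR":
        if ci_upper / ci_lower > 2.5:  # odds ratio criterion
            return True
    elif effect_type == "RR":
        if ci_upper / ci_lower > 3.0:  # relative risk criterion
            return True
    else:
        # Continuous outcome: total sample size not exceeding 30-50% of
        # the OIS; the upper (50%) bound is used here for illustration.
        return total_n <= 0.5 * ois
    # Ratio criterion not met: the guidance says to still compare the
    # pooled sample size to the OIS, which requires further judgment
    # rather than an automatic two-level rating.
    return False

# Example: an odds ratio with 95% CI 0.60 to 1.80 -> ratio 3.0 > 2.5.
print(rate_down_two_levels("OR", 0.60, 1.80, total_n=400, ois=800))  # True
```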

Finally, if a CI is so wide that it causes authors to be very uncertain about any estimate of effect, it may be acceptable to rate down by three levels for imprecision.

Zeng, L., Brignardello-Petersen, R., Hultcrantz, M., et al. (2022). GRADE Guidance 34: update on rating imprecision using a minimally contextualized approach. J Clin Epidemiol, online ahead of print. Manuscript available at the publisher's website here.