Wednesday, April 21, 2021
In Studies of Patients at High Risk of Death, More Explicit Reporting of Functional Outcomes is Needed
- In the methods, rather than only in tables or supplementary material, explicitly state the population in which functional outcomes were analyzed: survivors only, all randomized participants, or some other analytic sample.
- If a survivors-only analysis is used, report the baseline characteristics of the groups actually analyzed and transparently discuss this approach as a limitation in the discussion section.
- If all randomized participants are analyzed regardless of mortality, report the assumptions on which these analyses rest. For instance, if death is one outcome ranked among others in a worst-rank analysis, the justification for the ranking should be given in the methods and the implications of this decision addressed in the discussion section (a minimal sketch of a worst-rank analysis appears after this list).
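To make the worst-rank idea concrete, here is a minimal sketch, not drawn from the post itself: the simulated data, the score scale, and the choice of a Mann-Whitney test are all illustrative assumptions. The key move is that participants who died receive a composite score strictly worse than any observed survivor score, so that no randomized participant is dropped from the analysis.

```python
# Minimal worst-rank sketch with simulated (hypothetical) data.
# Participants who died receive a composite score strictly worse (lower)
# than every observed survivor score, so no one is excluded.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)

# Hypothetical trial arms: higher functional score = better outcome.
treat_func = rng.normal(60, 10, 100)
ctrl_func = rng.normal(55, 10, 100)
treat_died = rng.random(100) < 0.15
ctrl_died = rng.random(100) < 0.25

# Common floor: one point below the worst observed survivor score,
# shared across arms so deaths rank identically in both groups.
survivor_scores = np.concatenate([treat_func[~treat_died], ctrl_func[~ctrl_died]])
floor = survivor_scores.min() - 1.0

# Composite endpoint: functional score if alive, floor value if dead.
treat = np.where(treat_died, floor, treat_func)
ctrl = np.where(ctrl_died, floor, ctrl_func)

# Rank-based comparison; all deaths tie at the worst rank.
stat, p = mannwhitneyu(treat, ctrl, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.0f}, p = {p:.3f}")
```

The common floor is the design choice that the post's recommendation targets: where death is placed relative to poor-but-survived outcomes is a substantive assumption, which is why the ranking deserves explicit justification in the methods.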
Thursday, April 8, 2021
Digging Deeper: 5 Ways to Help Guide Decision-Making When Research Evidence is "Insufficient"
A central tenet of the GRADE framework is that the certainty of the available research evidence should inform clinical decision-making. But what happens when little to no published research exists on which to base a recommendation? At the end of the day, clinicians, patients, policymakers, and others will still need to make a decision, and will look to a guideline for direction. Thankfully, there are other options to pursue within the context of a systematic review or guideline that ensure as much of the available evidence is presented as possible, even if it comes from less traditional or more indirect sources.
A new project conducted by the Evidence-based Practice Center (EPC) Program of the Agency for Healthcare Research and Quality (AHRQ) developed guidance for supplementing a review of evidence when the available research evidence is sparse or insufficient. This guidance was based on a three-pronged approach, including:
- a literature review of articles that have defined and dealt with insufficient evidence,
- a convenience sample of recent systematic reviews conducted by EPCs that included at least one outcome for which the evidence was rated as insufficient, and
- an audit of technical briefs from the EPCs, which tend to be developed when a given topic is expected to yield little to no published evidence and which often contain supplementary sources of information such as grey literature and expert interviews.
From this work, the authors distilled five strategies to consider when the direct evidence is insufficient:
- Reconsider eligible study designs: broaden the search to capture a wider variety of published evidence, such as cohort studies or case series.
- Summarize evidence outside the prespecified review parameters: use indirect evidence that does not perfectly match the PICO of your topic in order to better contextualize the decision being presented.
- Summarize evidence on contextual factors (factors other than benefits/harms): these include key aspects of the GRADE Evidence-to-Decision framework, such as patient values and preferences and the acceptability, feasibility, and cost-effectiveness of a given intervention.
- Consider modeling if appropriate, and if expertise is available: certain types of modeling can help fill in the gaps and make useful predictions for outcomes in lieu of direct empirical research (a brief sketch follows this list).
- Incorporate health system data: "real-world" evidence such as electronic health records and registries can supplement more mechanistic or explanatory RCTs.
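As a hedged illustration of the modeling strategy, the sketch below combines indirect inputs, a baseline risk and a relative effect borrowed from indirect evidence, into a predicted outcome with explicit parameter uncertainty. Every numerical value here is an invented placeholder, not a figure from the AHRQ report.

```python
# Hypothetical decision-model sketch: when no study directly measures the
# outcome of interest, indirect inputs can be combined into a modeled
# estimate, with Monte Carlo draws expressing parameter uncertainty.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000  # simulation draws

# Indirect inputs with assumed uncertainty (all values are placeholders):
p_event_control = rng.beta(20, 80, n)                 # baseline event risk, ~20%
rr_treatment = rng.lognormal(np.log(0.8), 0.1, n)     # relative risk from indirect evidence
utility_event = rng.normal(0.60, 0.05, n)             # quality-of-life weight if event occurs
utility_no_event = rng.normal(0.85, 0.05, n)          # weight if no event occurs

def expected_utility(p_event):
    """Expected utility for a given (per-draw) probability of the event."""
    p = np.clip(p_event, 0.0, 1.0)
    return p * utility_event + (1 - p) * utility_no_event

# Modeled gain from treatment = utility under treated risk minus utility under baseline risk.
gain = expected_utility(p_event_control * rr_treatment) - expected_utility(p_event_control)
lo, hi = np.percentile(gain, [2.5, 97.5])
print(f"Modeled utility gain: {gain.mean():.3f} (95% interval {lo:.3f} to {hi:.3f})")
```

Even a toy model like this makes the report's caveat concrete: the output is only as credible as the indirect inputs and the assumed structure, which is why modeling expertise is a prerequisite.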
Friday, April 2, 2021
New Review of Pragmatic Trials Reveals Insights, Identifies Gaps
As opposed to an "explanatory" or "mechanistic" randomized controlled trial (RCT), which seeks to examine the effect of an intervention under tightly controlled circumstances, "pragmatic" or "naturalistic" trials study interventions and their outcomes in more real-world, generalizable settings. One example of such a study might be the use of registry data to examine interventions and outcomes as they occur in the "real world" of patient care. However, there are currently few standards for identifying, reporting, and discussing the results of such "pragmatic RCTs." A new paper by Nicholls and colleagues provides an overview of the current landscape of this methodological genre.
The authors identified and synthesized 4,337 trials using keywords such as "pragmatic," "real world," "registry based," and "comparative effectiveness" to map how pragmatic trials are presented in the RCT literature. Overall, only about 22% (964) of these trials were explicitly identified as "pragmatic" RCTs in the title, abstract, or full text; just over half of these (55%) used the term in the title or abstract, while the remaining 45% described the work as a pragmatic trial only in the full text.
About 78% (3,368) of the trials indicated that they were registered. However, only about 6% were indexed in PubMed as pragmatic trials, and only 0.5% were labeled with the MeSH topic of Pragmatic Clinical Trial. The median target enrollment of pragmatic trials was 440 participants (interquartile range [IQR]: 244 to 1,200); the median achieved accrual was 414 (IQR: 216 to 1,147). The largest trial included 933,789 participants; the smallest enrolled 60.
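For readers curious how such a mapping exercise might begin, here is a minimal sketch of a keyword-based query assembled in PubMed syntax. This is not the authors' actual search strategy; the combination of terms and the use of the standard [tiab] (title/abstract) and [pt] (publication type) field tags are illustrative assumptions.

```python
# Illustrative (hypothetical) PubMed-style query built from the keywords
# mentioned above; not the strategy used by Nicholls et al.
keywords = ['"pragmatic"', '"real world"', '"registry based"', '"comparative effectiveness"']
keyword_block = " OR ".join(f"{kw}[tiab]" for kw in keywords)  # search title/abstract fields
query = f"({keyword_block}) AND (randomized controlled trial[pt])"
print(query)
# -> ("pragmatic"[tiab] OR "real world"[tiab] OR "registry based"[tiab]
#     OR "comparative effectiveness"[tiab]) AND (randomized controlled trial[pt])
```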
Overall, pragmatic trials were more likely to be centered in North America and Europe and to be funded by non-industry sources. Behavioral interventions, rather than drug- or device-based ones, were most common in these trials. Not infrequently, the trials were mislabeled or contained erroneous data in their registration information. Because only about half of the sample was clearly labeled as "pragmatic," such trials may go undetected by search strategies less sensitive than the one the authors used.
Authors of pragmatic trials can improve the quality of the field by clearly labeling their work as such, registering their trials, and ensuring that registered data are accurate and up to date. The authors also suggest that taking a broader view of what constitutes a "pragmatic RCT" raises questions about proper ethical standards when research is conducted on a large scale with multiple lines of responsibility. Finally, the mechanisms used to obtain consent in these trials should be further examined in light of the finding that many pragmatic trials fail to meet their participant enrollment goals.
The manuscript is available from the publisher's web site.
Nicholls SG, Carroll K, Hey SP, et al. (2021). A review of pragmatic trials found a high degree of diversity in design and scope, deficiencies in reporting and trial registry data, and poor indexing. J Clin Epidemiol (ahead of print).