Monday, October 19, 2020

Existing Tools to Assess the Quality of Prevalence Reviews are Variable, with Some Missing Key Elements

Prevalence studies allow us to better understand the extent and impact of a health issue, guiding priority-setting for health care interventions, research, and clinical guidelines. While established tools for assessing the quality of guidelines, systematic reviews, and original research on interventions exist, no clear option has emerged as a way to assess the quality and risk of bias in prevalence research. The several tools that have been proposed, write the authors of a new systematic review of these instruments, are not without limitations.

Migliavaca and colleagues sifted through a total of 1,690 unique references, ending with 30 tools that were either created for the direct purpose of assessing prevalence studies (n = 8) or were adaptable to this aim (n = 22). In all, 710 items from these tools were combined into 119 items assessing similar constructs under six general domains: Population and Setting, Condition Measurement, Statistics, Manuscript Writing and Reporting, Study Protocols and Methods, and Nonclassified (e.g., importance of the study, applicability of results).

Click to enlarge.


The authors conclude that there was great variability among the tools assessed; further, several tools left out key elements that could affect the quality of a study, such as the representativeness of the sample, the total sample size, or how the condition was assessed. Moreover, some tools failed to distinguish between assessments of whether a measure is valid, reliable, reproducible, or unbiased - differences that the authors of this review argue are important enough to warrant separate items in the development of a new tool. Although the authors suggest that a new, more comprehensive tool would improve the assessment of prevalence studies in the future, they identify the Joanna Briggs Institute Prevalence Critical Appraisal Tool as the best of what's currently available (downloadable from a list of JBI checklists here).

Migliavaca, C.B., Stein, C., Colpani, V., Munn, Z., Falavigna, M., and the Prevalence Estimates Reviews - Systematic Review Methodology Group (PERSyst). (2020). J Clin Epidemiol 127:59-68.

Manuscript available at the publisher's website here.







Tuesday, October 13, 2020

Equity Harms Related to COVID-19 Policies: Slowing the Spread Without Increasing Inequity

Since COVID-19 was first declared a pandemic in March of this year, governments around the world have implemented some degree of lockdown, slashing social events and gatherings, shuttering once-bustling businesses, and changing the face of the global economy. While these lockdowns were likely necessary to reduce the infection rate and the resulting morbidity and mortality associated with the coronavirus, such policies have potentially undesirable consequences for equity. In a new publication, Glover and colleagues present a framework for considering these effects and weighing them against the benefits of slowing the spread.

The work builds on a novel combination of two existing frameworks. First, the Lorenc and Oliver framework lays out five potential harms of public health interventions which require mitigation: direct health harms, psychological harms, equity harms, group and social harms, and opportunity costs. Second, the PROGRESS-Plus health equity framework provides a list of 11 general categories that can affect measures of equity: Place of residence, Race, Occupation, Gender/sex, Religion, Education, Socioeconomic status, Social capital, sexual orientation, age, and disability. Each of the two frameworks' individual components is used as a lens to examine the other. The resulting matrix of 55 potential sources of inequity related to the COVID-19 pandemic and its resulting public health policies provides an exemplary approach to considering all aspects of any large-scale public health intervention and the impact its implementation may have on inequity.
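The crossing of the two frameworks can be sketched in a few lines of code - a minimal illustration, with the category names transcribed from the frameworks as described above:

```python
# Each Lorenc and Oliver harm category is examined through each
# PROGRESS-Plus equity category, yielding the 5 x 11 matrix of 55
# potential sources of inequity.
harms = [
    "direct health harms",
    "psychological harms",
    "equity harms",
    "group and social harms",
    "opportunity costs",
]
progress_plus = [
    "place of residence", "race", "occupation", "gender/sex",
    "religion", "education", "socioeconomic status", "social capital",
    "sexual orientation", "age", "disability",
]

# Each cell is a prompt for policymakers: does this harm fall unevenly
# across this dimension of equity? (None = not yet assessed.)
matrix = {(harm, dim): None for harm in harms for dim in progress_plus}
print(len(matrix))  # one cell per potential source of inequity
```

The matrix itself carries no judgments; as the authors stress, it is the act of working through every cell that guards against overlooking a group harmed by a policy.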

Key to the authors' resulting framework is the concept that both the policy responses to the pandemic and the nature of the pandemic itself are potential sources of inequity. For instance, individuals in lower-income occupations are more likely to be considered essential workers, and are less likely to have a safety net that would allow them to choose not to work or a job that is compatible with working remotely. Existing systemic inequities are thus exacerbated by the fact that these workers are now more likely to be exposed to the virus by continuing to go to work outside the home. However, policymakers can help reduce the impact of their policies on these sources of inequity - as well as on those caused by lockdown policies more directly - by considering mitigation strategies when implementing these policies (for example, by mandating improved sanitation, personal protective equipment, and social distancing for workers in vulnerable occupations). The figure below from the publication provides an overview of the relationship between the pandemic, policy responses and their resulting inequities, and potential points of intervention.

Click to enlarge.

The framework also allows for a more nuanced consideration of context in efforts to reduce the spread of coronavirus. Policies that are highly effective and viable in higher-income countries or areas with greater population density, for instance, may not be as beneficial in low- and middle-income countries and may even result in greater inequity. As with any intervention of any scale, the potential harms must be weighed against the desirable effects, and the context of the given intervention is key. This framework allows for consideration of a wider range of impacts when attempting to reduce illness and mortality in the age of a pandemic.

Glover, R.E., van Schalkwyk, M.C.I., Akl, E.A., Kristjansson, E., Lotfi, T., Petkovic, J., ... & Welch, V. (2020). A framework for identifying and mitigating the equity harms of COVID-19 policy interventions. J Clin Epidemiol 128:35-48.

Manuscript available from the publisher's website here. 







Tuesday, October 6, 2020

New Study Examines the Impact of Abbreviated vs. Comprehensive Search Strategies on Resulting Effect Estimates

It's common practice - indeed, it's widely recommended - for systematic reviewers to search multiple databases in addition to alternative sources of data such as the grey literature to ensure that no relevant studies are left out of the analysis. However, meta-research on whether this theory holds up in practice is mainly limited to examinations of recall - in other words, reporting how many potentially relevant studies are picked up by an abbreviated search method as opposed to one that's more extensive. What's missing from this body of research, write Ewald and colleagues in a newly published study, is that recall studies compare items retrieved in absolute terms without considering the final weight or importance of each individual study - variables that will ultimately affect the direction, magnitude, and precision of the resulting effect estimate. Since larger studies with more cachet are likely to have the greatest impact on the final estimate and certainty of evidence - and these studies are more likely to be picked up even in an abbreviated search - the added value of more extensive search strategies for a meta-analysis remains unclear.

To examine the impact of the extensiveness of a search strategy on resulting findings and certainty of evidence, the authors randomly selected 60 Cochrane reviews from a range of disciplines for which certainty of evidence assessments and summaries of findings were available. Thirteen reviews did not report at least one binary outcome, leaving a total of 47 for analysis. They then replicated these reviews' search strategies and, in addition, conducted 14 abbreviated searches for each review, limiting the search to a single database (e.g., MEDLINE only) or to a combination of two or three (e.g., MEDLINE and Embase). Finally, the meta-analyses were replicated for each of these scenarios, leaving out the studies that would not have been picked up by the various abbreviated search strategies.

Searching only one database led to a loss of at least one trial in half of the reviews, and a loss of two trials in one-quarter of them. As may be expected, the use of additional databases reduced the loss of information. Overall, however, the direction and significance of the resulting effect estimates remained unchanged in a majority of the cases, as shown in Figure 1 from the paper, below.

Click to enlarge.

The use of abbreviated searches did, however, introduce some imprecision, typically increasing the standard error by a factor of around 1.02 to 1.06. Searching multiple databases rather than a single one did not clearly improve precision relative to the comprehensive search.
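To see how leaving trials out of a pooled analysis can leave the estimate essentially unchanged while inflating its standard error, consider a minimal inverse-variance fixed-effect sketch. The trial data below are hypothetical, not taken from the paper:

```python
import math

def pooled(effects, ses):
    """Inverse-variance fixed-effect pooling: returns (estimate, standard error)."""
    weights = [1 / se ** 2 for se in ses]
    est = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    return est, math.sqrt(1 / sum(weights))

# Hypothetical log odds ratios and standard errors for five trials;
# suppose an abbreviated search misses the two smallest trials.
effects = [-0.30, -0.25, -0.40, -0.10, -0.55]
ses = [0.10, 0.12, 0.15, 0.30, 0.35]

full_est, full_se = pooled(effects, ses)          # comprehensive search
abbr_est, abbr_se = pooled(effects[:3], ses[:3])  # abbreviated search

# Dropping the small trials barely moves the pooled estimate but
# inflates its standard error, echoing the modest 1.02- to 1.06-fold
# increases reported in the paper.
print(round(full_est, 3), round(abbr_est, 3), round(abbr_se / full_se, 3))
```

Because small trials carry little inverse-variance weight, losing them shifts the estimate only slightly; their absence shows up mainly as a wider confidence interval.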

The authors note that these findings are particularly applicable to authors of rapid reviews and guidelines, where the trade-off between speed and thoroughness is of great importance. Rapid reviewers should be aware that limiting the search strategy may change the direction of an effect estimate or render it incalculable in up to one in seven instances, but this should be weighed against the benefit of quicker dissemination of findings, especially during emerging health crises where time is of the essence.

Ewald, H., Klerings, I., Wagner, G., Heise, T.L., Dobrescu, A.I., Armijo-Olivo, S., ... & Hemkens, L.G. (2020). Abbreviated and comprehensive literature searches led to identical or very similar effect estimates: A meta-epidemiological study. J Clin Epidemiol 128:1-12.

Manuscript available from publisher's website here.  



Wednesday, September 30, 2020

Four Questions to Ask Before Replicating a Systematic Review

Just as with individual research trials, the replication of a systematic review can shed new light on an existing topic or help further solidify our assessment of the certainty of a body of evidence. However, duplication of effort that is unintentional or done without deliberate consideration of methodology (e.g., how similar or different the new review will be in terms of evidence searching, inclusion, and synthesis) is wasteful. How is one to know when the replication of a systematic review is appropriate and warranted?

A new consensus checklist recently published by Tugwell and colleagues in BMJ provides guidance on when - and when not - to conduct a systematic review replication. Driven by a six-person executive team, the checklist was informed by input from methodologists - including experts in fields ranging from clinical epidemiology to guideline development and health economics - as well as knowledge users: those involved in the funding, commissioning, and development of systematic reviews. Two patients were involved in the development team, and an additional 17 patient and public representatives were consulted for input via survey.

The process culminated in the drafting of the checklist in a face-to-face setting, with an original 12 proposed items solidified into a final four. The items ask whether replication of the systematic review is of high priority (e.g., whether the results would be expected to guide policymakers or be of relevance to stakeholders); whether certain methodological concerns (such as search design or the scope of PICOs) would be clarified or improved by a replication; whether implementation of the replication's findings would be expected to have a sizable positive or negative impact at the population or individual level; and whether the resources (e.g., time, money) spent on replication would not be better spent conducting a new review to answer a novel question.

Click to enlarge.


The ultimate decision of whether or not to replicate should be informed by the answers to these questions, the authors note, and left to contextualized judgment rather than a quantitative threshold. Further, some of the items may be of higher or lower relevance depending on the stakeholders for a specific review topic, and "middle-ground" solutions, such as repeating only the parts of a systematic review in need of replication, should be considered individually. The authors plan to test the usability, acceptability, and usefulness of this newly proposed tool with relevant end-users.

Tugwell, P., Welch, V.A., Karunananthan, S., Maxwell, L.J., Akl, E.A., Avey, M.T., ... & White, H. (2020). When to replicate systematic reviews of interventions: Consensus checklist. BMJ 370:m2864.

Manuscript available from the publisher's website here. 






Thursday, September 24, 2020

Pre-Print of PRISMA 2020 Updated Reporting Guidelines Released

Since their publication in 2009, the PRISMA guidelines have been the standard for reporting in systematic reviews and meta-analyses. Now, 11 years later, the PRISMA checklist has received a facelift for 2020, incorporating the methodological advances that have taken place over the intervening years.

In a recently released pre-print, Page and colleagues describe their approach to designing the new and improved PRISMA. Sixty reporting documents were reviewed to identify any new items deserving of consideration and 110 systematic review methodologists and journal editors were surveyed for feedback. The new PRISMA 2020 draft was then developed based on discussion at an in-person meeting and iteratively revised based on co-author input and a sample of 15 experts.



The result is an expanded, 27-item checklist replete with elaboration of the purpose for each item, a sub-checklist specifically for reporting within the abstract, and revised flow diagram templates for both original and updated systematic reviews. Here are some of the major changes and additions to be aware of:

  • Recommendation to present search strategies for all databases instead of just one.
  • Recommendation that authors list "near-misses," or studies that met many but not all inclusion criteria, in the results section.
  • Recommendation to assess certainty of synthesized evidence.
  • New item for declaration of Conflicts of Interest.
  • New item to indicate whether data, analytic code, or other materials have been made publicly available.

Page, M., McKenzie, J., Bossuyt, P., Boutron, I., Hoffmann, T., Mulrow, C., ... & Moher, D. (2020). The PRISMA 2020 Statement: An updated guideline for reporting systematic reviews.

Pre-print available from MetaArXiv here. 

Friday, September 18, 2020

WHO Guidelines are Considering Health Equity More Frequently, but Reporting of Judgments is Often Incomplete

The GRADE evidence-to-decision (EtD) framework was developed to more explicitly and transparently inform consideration of the implications of clinical recommendations, such as their potential positive or negative impacts on health equity. A new analysis of World Health Organization (WHO) guidelines published between 2014 and 2019 - over half (54%) of which used the EtD framework - examines how health equity was considered in the guidelines' resulting recommendations.

Dewidar and colleagues found that the guidelines utilizing the EtD framework were more likely to be addressing health issues in socially disadvantaged populations (42% of those developed with the EtD versus 24% of those without). What's more, the use of the EtD framework has risen over time, from 10% of guidelines published in 2016 (the year of the EtD's introduction) to 100% of those published within the first four months of 2019. Use of the term "health equity" increased to a similar degree over this period.

Just over one-third (38%) of recommendations were judged to increase or probably increase health equity, while 15% selected the judgment "Don't know/uncertain" and 8% provided no judgment. Just over one-quarter (28%) of the recommendations utilizing the EtD framework provided evidence for the judgment. When detailed judgments were provided, they were more likely to discuss the potential impacts of place of residence and socioeconomic status and less likely to explicitly consider gender, education, race, social capital, occupation, or religion.

Click to enlarge.

The authors conclude that while consideration of the potential impacts of recommendations on health equity has increased considerably in recent years, reporting of these judgments is still often incomplete. Reporting which published research evidence or additional considerations were used to make a judgment, as well as considering the various PROGRESS factors (Place, Race, Occupation, Gender, Religion, Education, Socioeconomic status, and Social capital) will likely improve the transparency of recommendations in future guidelines where health equity impacts are of concern.

Dewidar, O., Tsang, P., León-Garcia, M., Mathew, C., Antequera, A., Baldeh, T., ... & Welch, V. (2020). Over half of WHO guidelines published from 2014 to 2019 explicitly considered health equity issues: A cross-sectional survey. J Clin Epidemiol 127:125-133.

Manuscript available from the publisher's website here.



Monday, September 14, 2020

Timing and Nature of Financial Conflicts of Interest Often Go Unreported, Systematic Survey Finds

The proper disclosure and management of financial conflicts of interest (FCOI) in the context of a published randomized controlled trial is vital to alerting the reader to the sources of funding for the research and to other financial factors that may influence the design, conduct, or reporting of the trial.

A recently published cross-sectional survey by Hakoum and colleagues examining the nature of FCOI reporting in a sample of 108 published trials found that 99% of the trials reported individual author disclosures, while only 6% reported potential sources of FCOI at the institutional level. Individual authors reported a median of 2 FCOIs. Among the 2,972 FCOIs reported by 806 individuals, the greatest proportions came from personal fees other than employment income (50%) and from grants (34%). Further, of the disclosed individual FCOIs, a large majority (85%) were provided by private-for-profit entities. Notably, only one-third (33%) of these disclosures included the timing of the funding in relation to the trial, 17% reported the relationship between the funding source and the trial, and just 1% reported the monetary value.


Click to enlarge.
 

Using a multivariate regression, the authors found that the reporting of FCOI by individual authors was positively associated with nine factors, most strongly with the authors being from an academic institution (OR: 2.981; 95% CI: 2.415 – 3.680), with the funding coming from an entity other than private-for-profit (OR: 2.809; 95% CI: 2.274 – 3.470), and the first author’s affiliation being from a low- or middle-income country (OR: 2.215; 95% CI: 1.512 – 3.246).
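For readers less familiar with regression output, an odds ratio and its confidence interval are obtained by exponentiating the logistic-regression coefficient and its Wald interval bounds. A minimal sketch, using a hypothetical coefficient and standard error chosen to land near the paper's strongest association:

```python
import math

def or_with_ci(beta, se, z=1.96):
    """Exponentiate a logistic-regression coefficient and its Wald
    interval bounds to get an odds ratio with a 95% CI."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# Hypothetical inputs: a coefficient of 1.092 with standard error 0.107
# produces an odds ratio close to the reported association with
# academic-institution authorship (OR 2.981; 95% CI 2.415-3.680).
odds_ratio, lower, upper = or_with_ci(1.092, 0.107)
print(round(odds_ratio, 2), round(lower, 2), round(upper, 2))
```

An interval whose lower bound stays above 1, as in each of the three associations quoted above, indicates that the factor is associated with greater odds of FCOI reporting.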

 

More explicit and complete reporting of FCOIs, the authors conclude, may improve readers’ level of trust in the results of a published trial and in the authors presenting them. To improve the nature and transparency of FCOI reporting, researchers may consider disclosing details related to the funding’s source, including the timing of the funding in relation to the conduct and publication of the trial, the relationship between the funding source and the trial, and the monetary value of the support.

Hakoum, M.B., Noureldine, H., Habib, J.R., Abou-Jaoude, E.A., Raslan, R., Jouni, H., ... & Akl, E.A. (2020). Authors of clinical trials seldom reported details when declaring their individual and institutional financial conflicts of interest: A cross-sectional survey. J Clin Epidemiol 127:49-58.

Manuscript available from the publisher's website here