Why a large hydroxychloroquine study was retracted over statistical abuse
A high-profile HCQ study claiming 16,990 Covid deaths was exposed for misusing data and ignoring dose effects, highlighting why scientific publications need a transparency overhaul.
In a recent study published in the journal Archives of Public Health, researchers examined the methodological practices and findings of Pradelle et al. That study furthered the debate over the use of anti-rheumatic drugs by claiming that HCQ was associated with more than 16,990 deaths during the first wave of the COVID-19 pandemic. The paper by Pradelle et al. was subsequently retracted; however, the process lacked transparency, as detailed explanations for the withdrawal and related correspondence were not made public.
The present study critiques the methodological approach and data-handling practices of Pradelle et al.
Background – The HCQ debate
The unprecedented growth of global Internet access has led to the widespread distribution of scientific knowledge through online social networks and media platforms, often shaping public opinion, individual behavior and, in turn, political decisions. This creates an implicit responsibility for scientists to maintain the highest standards of rigor in their methodological approaches. Despite this, more than 10,000 publications are retracted each year following criticism of their data reliability and accuracy.
Not only do these retractions represent a significant loss of funding and research effort, but their erroneous findings, once disseminated, can be challenging to reverse. This study uses the "Lancetgate" controversy to highlight this point. The discussion centers on a publication in The Lancet about hydroxychloroquine (HCQ), an antimalarial drug that was being tested against coronavirus disease 2019 (COVID-19). While widespread scientific outcry led to its withdrawal, several governments had already cited its findings in shaping their public policy on HCQ use.
The debate escalated to new heights when Pradelle et al. published a meta-analysis estimating the death toll from HCQ use during the first wave of COVID-19. The study, which claimed that 16,990 people may have died after HCQ consumption in Belgium, France, the United States, Spain, Turkey, and Italy, attracted both widespread media coverage and political attention. While the publication was eventually retracted due to a “lack of data” and “questionable assumptions,” the damage was done.
“The aim of this article is to address key concerns about the transparency and integrity of scientific publishing, particularly in the context of the retracted article by Pradelle et al. and the linked papers, and the weaknesses of the current publishing ecosystem in preventing misinformation and maintaining public trust in scientific institutions.”
Methodological shortcomings
The first methodological flaw examined in this critique concerns Pradelle et al.'s estimate of in-hospital mortality. While the publication estimated that more than 16,990 people died from compassionate HCQ use, these results were presented without appropriate sensitivity analyses or dose-subgroup adjustments, undermining the reliability of the data. The odds ratio (OR) used to estimate HCQ-related mortality is also problematic. Pradelle et al. borrowed it from an earlier meta-analysis by Axfors et al. that was derived primarily from high-dose randomized controlled trials, yet they applied the same effect size to all patient groups regardless of the actual dose received, without accounting for the dose dependence of the effect size or performing robustness checks on its validity.
The present critique further addresses the importance of distinguishing between statistical and clinical significance. It highlights the misapplication of effect sizes, the lack of sensitivity analyses, and the absence of subgroup estimates as cumulative factors that invalidate the clinical reliability of Pradelle et al.'s conclusions.
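To make the dose concern concrete, the following Python sketch illustrates the general issue. All numbers, patient counts, and odds ratios here are hypothetical and invented for illustration; they are not taken from Pradelle et al. or Axfors et al. The point is only that applying one pooled OR (derived mainly from high-dose trials) to every exposed patient can yield a larger excess-death figure than a dose-stratified calculation:

```python
# Illustrative sketch (all numbers hypothetical): applying a single pooled
# odds ratio to every exposed patient, regardless of dose, can inflate an
# excess-death estimate compared with dose-stratified odds ratios.

def excess_deaths(n_exposed: int, baseline_mortality: float, odds_ratio: float) -> float:
    """Excess deaths implied by an odds ratio applied to a baseline risk.

    Converts the OR to a risk among the exposed via the standard
    odds -> probability transformation, then subtracts baseline deaths.
    """
    baseline_odds = baseline_mortality / (1 - baseline_mortality)
    exposed_odds = baseline_odds * odds_ratio
    exposed_risk = exposed_odds / (1 + exposed_odds)
    return n_exposed * (exposed_risk - baseline_mortality)

# One pooled OR (derived mainly from high-dose trials) applied to everyone:
pooled = excess_deaths(n_exposed=100_000, baseline_mortality=0.20, odds_ratio=1.11)

# Dose-stratified alternative: most patients received lower doses, for which
# the evidence of harm is weaker (OR near 1.0 in this invented example).
stratified = (
    excess_deaths(n_exposed=80_000, baseline_mortality=0.20, odds_ratio=1.02)   # low dose
    + excess_deaths(n_exposed=20_000, baseline_mortality=0.20, odds_ratio=1.11)  # high dose
)

print(f"single pooled OR: {pooled:.0f} excess deaths")
print(f"dose-stratified:  {stratified:.0f} excess deaths")
```

In this made-up scenario the dose-stratified estimate is roughly a third of the pooled one, which is the kind of gap a sensitivity analysis over dose subgroups would surface.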
The reanalysis found that lower-dose HCQ regimens showed no clear evidence of increased mortality, while only higher doses were associated with a possible increase in risk. Importantly, sensitivity analyses showed that the statistical conclusions depended heavily on a single large study, raising concerns about the robustness of the original findings.
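The kind of leave-one-out sensitivity analysis described above can be sketched in a few lines of Python. The study names, effect sizes, and variances below are invented for illustration, with one dominant low-variance trial standing in for the single large study on which the conclusions hinged:

```python
# Hypothetical sketch of a leave-one-out sensitivity analysis for a
# fixed-effect meta-analysis of log odds ratios: drop each study in turn,
# re-pool, and see whether the conclusion survives.
import math

# (log_or, variance) per study -- invented numbers, one dominant trial.
studies = {
    "trial_A": (0.02, 0.40),
    "trial_B": (0.05, 0.35),
    "large_trial": (0.10, 0.002),  # tiny variance => dominant weight
    "trial_C": (-0.03, 0.50),
}

def pool(items):
    """Inverse-variance fixed-effect pooled log-OR with a 95% CI."""
    weights = {k: 1 / var for k, (_, var) in items.items()}
    total = sum(weights.values())
    est = sum(weights[k] * items[k][0] for k in items) / total
    se = math.sqrt(1 / total)
    return est, est - 1.96 * se, est + 1.96 * se

est, lo, hi = pool(studies)
print(f"all studies: OR={math.exp(est):.3f} CI=({math.exp(lo):.3f}, {math.exp(hi):.3f})")

for left_out in studies:
    subset = {k: v for k, v in studies.items() if k != left_out}
    est, lo, hi = pool(subset)
    verdict = "significant" if lo > 0 else "not significant"
    print(f"without {left_out:12s}: OR={math.exp(est):.3f} -> {verdict}")
```

With these invented inputs, the pooled effect is statistically significant only while the dominant trial is included; removing it collapses the result, which is precisely the fragility the reanalysis flagged.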
“As seen in several countries during the COVID-19 pandemic, the use of HCQ varied greatly in terms of dosage, patient selection, and co-administration with other treatments, underscoring the importance of methodological standards and cautious interpretation of statistical findings in shaping public health policy.”
These results reinforce the need for authors to take responsibility for critically evaluating their data sources and the assumptions embedded in their statistical models. Greater transparency in statistical methodology is needed for science and medicine to advance and for the spread of misinformation to stop.
Beyond methodological criticism, the study highlights broader systemic problems in scholarly publishing, including the rise of fraudulent publishing practices, reviewer fatigue, predatory journals, “paper mills,” and the erosion of trust in scholarly institutions.
Future recommendations
To address ongoing threats to scientific integrity, the study makes recommendations on reproducibility, on reforming eroded peer-review processes, and on increasing transparency and accountability in peer-reviewed science. It highlights the potential of open science practices, particularly those emphasizing transparency and accountability, in developing effective solutions.
“Platforms such as the Open Science Framework (OSF), Zenodo, Dryad and Figshare are examples of robust infrastructures that ensure scientific material remains available for review, reanalysis and further research.
Open peer review models, in which reviewer reports and identities are disclosed, could also improve the quality of reviews and develop a more constructive and accountable review process. “
The article further recommends incentives for peer reviewers, such as Continuing Medical Education (CME) credits, public recognition, and opportunities for professional advancement, as well as the adoption of open data and code sharing to improve reproducibility.
These and other reforms are critical to encouraging reviewer participation, improving the rigorous standards of the peer review process, and improving overall transparency for a safer and healthier tomorrow.
Sources:
- Beaudart, C., Musuamba, F., Locquet, M., et al. Hydroxychloroquine use during the first COVID-19 wave: a case study highlighting the urgent need to enhance research practices within the publication ecosystem. Arch Public Health 83, 115 (2025). DOI: 10.1186/s13690-025-01596-2, https://archpublichealth.biomedcentral.com/articles/10.1186/s13690-025-01596-2