Data become studies.
Studies draw conclusions.
Conclusions create interpretations.
And interpretations lead to clinical decisions.
A process most would hardly have noticed pre-pandemic. But over the past year, we have all become ad hoc epidemiologists, drawing our own conclusions from the clinical studies that formed the foundation for the healthcare policies states used to enact social distancing guidelines and mask mandates.
That heightened scrutiny of the data, and the attention placed upon it, ushered in an unprecedented level of pressure to report data and publish studies.
The faster we needed data, the more we abbreviated the clinical studies. The more abbreviated the studies, the greater the error, a pattern that highlights a fundamental limitation in clinical study design: when the fluency of research, its rate of progress, increases, the basic checks and balances within the studies deteriorate.
We get faster results, but the quality of those results is lower. And when the quality of the results is lower, there is greater error in extrapolating conclusions from those studies, something the journal Nature has been covering extensively.
Nature found that more than 4% of the articles listed in the Dimensions database and around 6% of those listed in the PubMed database were dedicated to COVID-19 in 2020. Dimensions and PubMed are the two most referenced databases for clinical studies across the world.
Perhaps more alarmingly, the pandemic also saw a sharp rise in preprints (articles posted online before peer review). According to Nature, more than 30,000 of the COVID-19 articles published in 2020 were preprints, somewhere between 17% and 30% of the total COVID-19 research papers. More than half of the preprints appeared on one of three sites: medRxiv, SSRN and Research Square. These servers are distinctly different from the more commonly referenced journals, which exercise greater oversight over the quality of what they publish.
Unsurprisingly, this has led to an increase in the number of retractions of clinical studies as well. Typically it would take three years to retract a paper; during COVID-19, it has taken just months, another effect of increased fluency in clinical study development.
This rapid appearance and disappearance of studies has a destructive effect on public policy, not the least of which is damaged credibility. Public policy experts and elected officials make decisions based upon these data, and the quality of their decisions depends upon the quality of the data.
So when we pump out publications with questionable data, we get questionable decision-making.
The most cited COVID-19 study is a publication that examined 41 patients at a hospital in Wuhan, China. The most cited preprint was a study on social distancing measures that had a major impact on public policy in Britain; the latter attracted the most attention on social media, according to the internet monitoring firm Altmetric.
While it is too early to generalize, we see a clear disparity in quality between articles focusing on public policy and articles focusing on other aspects of COVID-19. Articles that focus on public policy, or issues that directly affect the public, tend to be published more quickly and with lower-quality data.
Recently, multiple European countries have discontinued use of the Oxford/AstraZeneca vaccine due to the risk of blood clots among those receiving the vaccine. According to some reports, only five in thirty million recipients developed blood clots, and earlier studies that evaluated the safety of this vaccine found no risk of blood clots.
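To put that figure in perspective, here is a back-of-the-envelope sketch in Python, assuming only the counts reported above (five events among roughly thirty million recipients); it is an illustration of scale, not a safety analysis:

```python
# Back-of-the-envelope sketch: how rare is "5 clots in 30 million recipients"?
# Uses only the figures reported above; not an actual safety analysis.
from scipy.stats import chi2

events = 5
recipients = 30_000_000

rate_per_million = events / recipients * 1_000_000
print(f"Observed rate: {rate_per_million:.2f} events per million recipients")

# Exact (Garwood) 95% confidence interval for a Poisson count
alpha = 0.05
lower = chi2.ppf(alpha / 2, 2 * events) / 2
upper = chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / 2
print(f"95% CI: {lower / recipients * 1e6:.2f} to {upper / recipients * 1e6:.2f} per million")
```

Even the upper bound of that interval works out to well under one event per million recipients, the kind of context a comprehensive evaluation would weigh against the vaccine's benefits.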
“There is currently no indication that [Oxford/AstraZeneca] vaccination has caused these conditions, which are not listed as side effects with this vaccine,” a statement issued by European Medicines Agency (EMA) read. “The vaccine’s benefits continue to outweigh its risks and the vaccine can continue to be administered while investigation of cases of thromboembolic [blood clots] events is ongoing,” it added.
Yet countries have stopped administering the vaccine based upon the anecdotes of select individuals, preventing millions of high-risk people throughout Europe from receiving it just as the continent is immersed in another wave of COVID-19 infections.
When anecdotal evidence trumps statistically significant findings from regulated clinical studies, we find decisions are made more as impromptu reactions than as comprehensive evaluations of the data.
But when the quality of the data is already perceived to be suspect, even robustly designed studies lack credibility. Soon all data points from all sources are given equal consideration, largely because most of the public, and even many health policy experts, lack a nuanced understanding of clinical study design: how to evaluate the quality of the data from the way a study is designed, a critical skill academic physicians devote much of their careers to developing.
This becomes a problem when trying to decipher the results of different clinical studies. Even the clinical studies used to evaluate the many vaccines varied widely across the world. Some used relatively few subjects or had limited endpoints at which they drew their conclusions. Some only included subjects with mild symptoms, while others included a wide range of clinical presentations. The variability in study design led to variable confidence in the studies, rightly or wrongly.
We saw how the Pfizer and Moderna vaccines had different results compared to the Johnson & Johnson vaccine. Much of that was due to study design. And the Indian and Chinese vaccines had even greater variations in study design, as many of those studies used fewer subjects than their Western counterparts.
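Part of the reason is simple statistics: smaller trials produce wider confidence intervals around whatever effect they observe. The sketch below illustrates the trend with hypothetical numbers (the 1% attack rate and the enrollment figures are illustrative assumptions, not drawn from any actual vaccine trial):

```python
# Illustrative sketch: how trial size tightens the uncertainty around an observed rate.
# The attack rate and enrollment numbers are hypothetical, chosen only to show the trend.
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    """Exact (Clopper-Pearson) confidence interval for a binomial proportion."""
    lower = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lower, upper

attack_rate = 0.01  # hypothetical 1% infection rate in one trial arm
for n in (500, 5_000, 50_000):
    k = round(n * attack_rate)  # infections observed at that rate
    lo, hi = clopper_pearson(k, n)
    print(f"n={n:>6}: {k:>3} infections, 95% CI {lo:.4f} to {hi:.4f} (width {hi - lo:.4f})")
```

The specific numbers matter less than the pattern: a study enrolling a few hundred subjects cannot support the same confidence as one enrolling tens of thousands, however the headline results compare.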
What would have been helpful is a uniform approach to modifying study design that expedites the research for each vaccine in a more consistent manner. Adjust the studies, but adjust them similarly.
Rather than allow for variations in the fundamental study design, we should develop novel study designs that can be used in times of heightened fluency of publications.
The pandemic showed that there will be times when clinical studies need to expedite their process. Rather than implementing the same clinical study design models and modifying them in haphazard ways, we should implement adaptive techniques that increase the fluency of clinical study design while preserving the quality of data needed to draw sound interpretations from it.
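One established family of such techniques is the group-sequential design, in which a trial is analyzed at pre-planned interim looks using adjusted significance thresholds, so it can stop early without inflating the false-positive rate. The simulation below is a deliberately simplified sketch, assuming hypothetical effect-free trials and a two-look Pocock-style boundary, meant only to show why the adjustment matters:

```python
# Simplified sketch of a two-look group-sequential design under the null hypothesis
# (no true treatment effect). Compares naive repeated testing at p < 0.05 per look
# with a Pocock-style adjusted boundary (p < 0.0294 per look for two looks).
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_trials = 5_000        # simulated trials, all with zero true effect
n_per_arm = 200         # final enrollment per arm
interim = n_per_arm // 2

naive_hits = 0
pocock_hits = 0
for _ in range(n_trials):
    treat = rng.normal(0, 1, n_per_arm)
    control = rng.normal(0, 1, n_per_arm)

    # Interim look on the first half of the data, final look on all of it
    p_interim = ttest_ind(treat[:interim], control[:interim]).pvalue
    p_final = ttest_ind(treat, control).pvalue

    if p_interim < 0.05 or p_final < 0.05:
        naive_hits += 1          # peeking twice at the usual 0.05 threshold
    if p_interim < 0.0294 or p_final < 0.0294:
        pocock_hits += 1         # Pocock-adjusted threshold per look

print(f"False-positive rate, naive repeated testing: {naive_hits / n_trials:.3f}")
print(f"False-positive rate, Pocock-style boundary:  {pocock_hits / n_trials:.3f}")
```

Real adaptive designs go much further (alpha-spending functions, sample-size re-estimation, pre-specified stopping rules), but even this toy version shows that speed and statistical rigor can coexist when the design anticipates the need for early answers.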
We must also guard against the ecological fallacy of data: the error of extrapolating population-level data to individuals. We fell well short of this during the pandemic, leading to moments of spectacular failure.
In the early days of the pandemic, we needed to understand whether masks were truly effective, and we needed to understand this over the course of weeks, not years. That demand for fluency led to a slew of half-baked clinical studies that were little more than glorified narratives about the value of masks, using numbers as filler to substantiate the authors' preexisting beliefs.
As more such narrative-driven studies appeared, the credibility of all studies, even well-designed ones, fell in the eyes of the public. More importantly, the policy experts citing these studies failed to discern the differing quality of the underlying data, leading to decisions made more out of politics than science, and to revisions that prompted disbelief and disillusionment.
Healthcare should develop rapid clinical study designs that can be completed in a matter of weeks instead of months, yet retain the fidelity of data needed to make medically appropriate decisions.
The very concept of clinical research, meaning how we obtain new information or systematize existing information into clinical decision-making, should change post-pandemic and become more adaptable.
When the fluency of research increases, it should not come at the cost of the quality of data. There is a balance, and it is to be found in appropriately designed clinical studies that account for the heightened fluency of research.
Now that would make for a great clinical study.