The Pandemic Uncovered Ways to Speed Up Science

There doesn't have to be a trade-off between good research and fast research.

The pandemic highlighted broad problems in research: that many studies were hyped, error-ridden, or even fraudulent, and that misinformation could spread rapidly. But it also demonstrated what was possible.

While it usually takes years to test drugs against a new disease, this time it took less than a year to find several effective vaccines and treatments. Once, scientists discovered new strains of viruses only after an outbreak had already happened; this time, they used sewage samples to predict outbreaks in advance.

Not everyone saw the speed of these advances positively: The belief that vaccines were “rushed,” for example, was one of the most common reasons that people delayed taking them. Many people believe that doing science quickly means doing away with standards and producing research that’s sloppy or even dangerous.

But that isn't always true, and the urgency of Covid-19 led many people to adapt, produce, and improve research at a quality and speed that few expected. Not only could we avoid those trade-offs, but we could improve science in ways that make it faster—and the pandemic has shown us how.

Collect Routine Data

Within six months of the outbreak, there were more than 30,000 genome sequences of the coronavirus—whereas in the same amount of time in 2003, scientists were able to get only a single sequence of the SARS virus.

The speed at which coronavirus genomes were sequenced is a success story, but it didn't show us the whole picture. While the UK used a large genomics program to sequence almost 3 million coronavirus genomes, many countries sequenced only a few thousand in total, and some fewer than a hundred.

Disparities like this are common. In many places, across a range of topics, lots of data goes unmeasured or missing: the prevalence of mental illness, national GDP, even registrations of deaths and their causes. Instead, these figures have to be estimated, with wide margins of uncertainty.

It's difficult and expensive for small research groups to collect data on their own, so they tend to collect what's convenient rather than what's comprehensive. In psychology, for example, research is often “WEIRD”—coming from participants who are Western, Educated, Industrialized, Rich, and Democratic. In history, data comes from wherever records happen to be common; in economics, from wherever businesses have kept detailed accounts of their income and spending.

Different researchers measure the same data in different ways. Some people are contacted by multiple research groups looking at the same questions, while others go unseen.

Without data that's measured in a standard way, it's difficult to answer questions about whether things really differ between places and why. For example, is anxiety more common in richer countries, or simply more likely to be detected there? Since the condition goes undiagnosed in many countries and surveys are rare, we don't have a clear answer.

This points to one way to speed up science: Big institutions, such as governments and international organizations, should collect and share data routinely instead of leaving the burden to small research groups. It's a classic example of “economies of scale,” where larger organizations can use their resources to build tools to measure, share, and maintain data more easily and cheaply, and at a scale that smaller groups can't match.

Doing this would help researchers avoid repeating one another’s limited efforts. It would also have large spillover benefits because routine data can be used by many researchers to see trends, make like-for-like comparisons, and be alerted to new problems.

While institutions have the ability to do this, many don't, because they're unaware of the benefits or believe they'd have to build fancy dashboards to present the data. But data doesn't need to be made attractive—it just needs to be accurate, representative, standardized, and widely usable.

Streamline Experiments

Randomized controlled trials—the simple procedure of randomly assigning participants to receive a treatment or a placebo—are very powerful because they prevent various biases from creeping into the data. But running a trial is hard, and different research teams tend to duplicate the effort of setting one up. It usually takes several years, tens of millions of dollars, and thousands of recruited patients to test a single drug or vaccine.

But this doesn't need to be a barrier, because many treatments can be tested in the same RCT. That was the rationale behind the RECOVERY trial, which tested 20 drugs for Covid in a single trial within two years. By June 2020, it had found the first effective treatment against the disease, dexamethasone, which went on to save over a million lives within nine months.

Because the trial was connected to patient registries in the UK's National Health Service, it could quickly enroll tens of thousands of participants from across the country who had the same disease, randomize them to one of many drugs, and look at outcomes that were already routinely measured.
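To make the design concrete, here is a minimal sketch, in Python, of how a multi-arm trial randomizes patients and compares outcomes. Everything in it is hypothetical (the arm names, the sample size, and the survival rates are invented for illustration), and a real platform trial like RECOVERY adds interim analyses, adaptive arms, and formal statistics on top.

```python
import random

# A toy sketch of multi-arm randomization, the design behind platform
# trials like RECOVERY. All names and numbers here are hypothetical.
ARMS = ["usual care", "drug A", "drug B", "drug C"]

def randomize(patient_ids, arms=ARMS, seed=42):
    """Assign each enrolled patient to one arm, uniformly at random."""
    rng = random.Random(seed)
    return {pid: rng.choice(arms) for pid in patient_ids}

# Enroll simulated patients (RECOVERY drew real ones from NHS registries).
patients = [f"patient-{i}" for i in range(10_000)]
assignments = randomize(patients)

# Outcomes in RECOVERY came from routinely collected hospital records;
# here, 28-day survival is simulated with made-up rates for each arm.
TRUE_SURVIVAL = {"usual care": 0.75, "drug A": 0.75, "drug B": 0.82, "drug C": 0.74}
outcome_rng = random.Random(0)
survived = {pid: outcome_rng.random() < TRUE_SURVIVAL[arm]
            for pid, arm in assignments.items()}

# Because assignment was random, a plain comparison of survival rates
# across arms estimates each drug's effect against usual care.
for arm in ARMS:
    group = [pid for pid, a in assignments.items() if a == arm]
    rate = sum(survived[pid] for pid in group) / len(group)
    print(f"{arm}: {rate:.1%} survival across {len(group)} patients")
```

The structural point is what matters: Once enrollment and outcome measurement are shared, testing another drug means adding one more arm, not building a whole new trial.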

This way of running trials is fairly new, and in some countries it doesn't fit traditional funding structures. But new ideas are being developed: For example, researchers could pay to add only their own arm to a shared trial. Setting up these operations to minimize the difficulties that researchers face takes effort and money—but that's the point: They absorb the overhead costs from each group, speeding up further research.

The same kind of “streamlining” might work elsewhere, too, wherever similar data is analyzed by different groups. One example is the UK Biobank, a large dataset with an online platform that researchers can use to quickly analyze data from hundreds of thousands of genome sequences, without needing to store it all on their own servers.

Divide Up Labor in Research

Lots of research doesn't fit the models above: Many researchers work with specific types of data that can't be collected together, or do different types of experiments that can't be streamlined.

Science is vast and has been expanding. As it has grown, it has also become increasingly specialized, with more fields and ever-deeper research within them. This means scientists need to catch up on a larger base of knowledge before they can make new discoveries, which is one reason that researchers believe science is slowing down.

In addition, the scientific endeavor involves many different skills. Scientists are expected to understand theory, design experiments, perform them, maintain data, manage labs, write papers, and present, communicate, and review one another's work. Each of these roles involves a steep learning curve that scientists get little training and time for, and together they can create a bottleneck that limits the quality of research and the speed of discovery.

But it doesn't have to be this way. One way to turn the tide is through the “division of labor”: having people with different skills and backgrounds work together on different parts of a project. People who specialize can use their expertise to spot patterns and develop new techniques that improve their work and speed it up.

Imagine how, with dedicated time, a lab worker might find new ways to run tests and handle samples carefully, or a programmer might learn to write code that makes software run faster and more efficiently. Improving these skills requires time to focus on them, so when different experts work together on different parts, they can make the whole process more productive.

That was how many groups developed global datasets for Covid-19—with teams that included not just academic researchers, but also software engineers, data managers, and journalists, who each brought different skills. The Economist estimated the number of excess deaths around the world; The New York Times tracked all the vaccines being developed; the Financial Times and my colleagues at Our World in Data showed a variety of Covid metrics on interactive dashboards.

Talented people who don't have a background in academia are usually priced out of science, without access to academic research or networks. But attracting them to the scientific community should be a priority in regular times, not just in an emergency.

Make Science “Open Source”

Traditionally, researchers have published their work as papers in academic journals, behind paywalls that are inaccessible to most people around the world.

Most researchers don’t share their data, either. If you’ve ever read the words “data is available upon request” in an academic paper and emailed the authors to request it, the chance that you’d actually receive the data is just 7 percent. The rest of the time, the authors have lost access to their data, changed email addresses, or are too busy or unwilling to share.

Without access to data, it's hard to identify errors in papers that could change their results. And because papers are static documents, errors take a long time to correct even when they are spotted. On average, it takes journals more than a year to retract a paper that has been plagiarized, and around five years to retract one whose data has been fabricated.

Bad science is both harmful and a time sink, as researchers spend years pursuing ideas that were mistaken to begin with. The problem is that we often don't know who will find scientific knowledge valuable, or who might be able to improve it.

By being transparent with research—by sharing the data and code behind it—scientists can turn this around. They can let other researchers and people around the world contribute to the code and spot errors in the data. That's precisely what happened with global data on Covid, because many teams made their work open source by publishing their data and code on platforms like GitHub.

And there are further benefits: People whom the researchers have never met can use the work for their own research and dashboards, with benefits that no one could have anticipated.

Reform Peer Review

Doing science fast can be risky, and there were countless examples during the pandemic of sloppy studies reaching the news before other experts could weigh in. It was often left to individual volunteers to expose fraudulent studies and investigate major conflicts of interest.

This is because our systems of peer review are falling behind. Peer review is usually organized by journals: When journal editors receive a paper, they send it out to a handful of peer researchers for feedback.

Researchers review for many different journals, and those journals operate separately, which means editors struggle to track the availability and interests of the researchers they contact. Many don't respond, or turn down requests because they lack the time or expertise to comment on a study. This creates a huge backlog in publishing research, and it means most research papers receive comments from only two or three other scientists. Papers in the natural sciences take nine months to be published on average; in economics, the average is three years.

Part of this can be sped up by offering reviewers compensation—without compromising on quality—but other methods might work too, such as centralized platforms that track reviewers' availability and interests. Still, this wouldn't be enough: Many scientists already opt to publish their studies as preprints instead, which have been growing in popularity for years. And we can't simply rely on voluntary work.

Institutions could treat peer review like a specialization, investing in people who develop the skills to do this work better: building new tools to check for errors, plagiarism, and fabrication, and new platforms that make critiques from other experts easier to find.

In just the past two years, scientific research has saved millions of lives. Many of these extraordinary efforts have been unusual, with people adapting in ways that they wouldn't in normal times. But what the pandemic has shown us is that great science can be done fast.

Making science better should be one of the world's biggest priorities, and it doesn't need to come at the cost of speed. We can do both.