COVID-19 Should Make Us Grateful for Technology

Blog Post | Health & Medical Care


Imagine a pre-modern pandemic.

“In a way, everything is technology,” noted one of the world’s greatest economic historians, Fernand Braudel, in his monumental study Civilization and Capitalism. “Not only man’s most strenuous endeavors but also his patient and monotonous efforts to make a mark on the external world; not only the rapid changes . . . but also the slow improvements in processes and tools, and those innumerable actions which may have no immediate innovating significance but which are the fruit of accumulated knowledge,” he continued.

Yes, land, labor, and capital (that’s to say, the factors of production) are important components of economic growth. In the end, however, human progress in general and global enrichment in particular are largely dependent on invention and innovation. That is surely even clearer now that humanity’s hopes for the end of the pandemic and for our liberation from the accompanying lockdown rest on further scientific breakthroughs within the pharmaceutical industry. Let’s take a brief look at the impact of technology on health care, food supply, work, and sociality in the time of COVID-19.

Healthcare

The impact of modern technology is surely most keenly felt and anticipated within the sphere of human health care. Consider some of the worst diseases that humanity has had to face in the past. Smallpox, which is estimated to have killed 300 million people in the 20th century alone, originated in either India or Egypt at least 3,000 years ago. Smallpox variolation, it seems, was practiced in China in the tenth century, but it was not until the late 18th century that Edward Jenner vaccinated his first patient against the disease. Smallpox was fully eradicated only in 1980.

Similar stories could be told about other killer diseases. Polio, which can be seen depicted in Egyptian carvings from the 18th dynasty, is of ancient origin. Yet the disease wasn’t properly analyzed until 1789, the year of the French Revolution, with Jonas Salk’s vaccine appearing only in 1955. Today, polio is close to being eradicated (just 95 cases were reported in 2019).

Malaria, probably humanity’s greatest foe, is at least 30 million years old (the parasite has been found in an amber-encased mosquito from the Paleogene period). It was only after the discovery of the New World that knowledge about the fever-reducing benefits of the bark of the cinchona tree spread to Europe and Asia. Quinine was first isolated in 1820, and chloroquine was introduced in 1946. Artemisinin drugs, which we still use, were discovered in the late 1970s. That’s to say that humanity lived with deadly diseases for millennia without fully knowing what they were, how they were transmitted, and how they could be cured. The fate of humanity, our ancestors thought, fluctuated under the extraneous influence of the “wheel of fortune,” and there was nothing that anyone could do about it. One day you were alive, and the next day you were not.

Contrast that glacial pace of progress, and the fatalistic acceptance of disease and death, with our response time to the current pandemic. The Wuhan Municipal Health Commission reported the existence of a cluster of cases of “pneumonia” in Wuhan on December 31. On January 7 the Chinese identified the pathogen (novel coronavirus) responsible for the outbreak. On January 11 China sequenced the genetic code of the virus, and the next day it was publicly available. That enabled the rest of the world to start making diagnostic kits to identify the disease.

To take one example, the first COVID-19 infection in South Korea was identified on January 20. On February 4, the first test kit (made by Kogene Biotech) entered production. On February 7, the test kit was available at 50 locations around the country. Other countries followed suit.

The World Health Organization, which declared COVID-19 a global pandemic on March 11, may have acted too late. Still, it is noteworthy that just two months elapsed between the first sign of trouble and the time when the entire world put measures in place to retard the spread of the disease. In the meantime, we have learned a lot about governmental incompetence and regulatory overreach. But we have also learned a great deal about the spread and symptoms of the disease. Instead of starting from scratch, medical specialists in Europe and America can draw on the expertise of their colleagues in the Far East. Before the telegraph appeared midway through the 19th century, it took up to a month for a ship to carry information from London to New York. Today, we learn about the latest COVID-19 news (good and bad) and research in seconds.

By mid-April, thousands of highly educated and well-funded specialists throughout the world were using supercomputers and artificial intelligence to identify promising paths toward victory over the disease. Some 200 different programs are underway to develop therapies and vaccines to combat the pandemic. They include studies of the effectiveness of existing antiviral drugs, such as Gilead’s remdesivir, Ono’s protease inhibitor, and Fujifilm’s favipiravir. The effectiveness of generic drugs, such as hydroxychloroquine and chloroquine, is also being evaluated. Takeda is hard at work on convalescent plasma (TAK-888) in Japan, while Regeneron works on monoclonal antibodies in the United States. New vaccines, such as Moderna’s mRNA-1273, Inovio’s INO-4800, and BioNTech’s BNT162, are under development.

We don’t know which of these treatments (if any) will work, but here is what we can be sure of: There has never been a better time for humans to face and defeat a global pandemic. The world is richer than ever before, and money is what enables us to sustain a massive pharmaceutical industry and pay for highly sophisticated medical research and development.

The coronavirus may be deadly, but it is not the bubonic plague, which killed roughly half of those it infected. Luckily, this far milder pathogen has reawakened us to the danger posed by communicable diseases. Once the immediate crisis is behind us, researchers will collect billions of data points from dozens of countries and analyze the different governmental responses to the pandemic. That knowledge will be deployed by governments and the private sector to ensure that best practices are adopted, so that next time we are better prepared.

Food

When the Black Death struck Europe in 1347, the disease found the local population ripe for slaughter. Following the close of the Medieval Warm Period at the end of the 13th century, the climate turned cold and rainy. Harvests shrank and famines proliferated. France, for example, saw localized famines in 1304, 1305, 1310, 1315–17, 1330–34, 1349–51, 1358–60, 1371, 1374–75, and 1390. The Europeans, weakened by shortages of food, succumbed to the disease in great numbers.

The people of yore faced at least three interrelated problems. First, the means of transport and the transportation infrastructure were awful. On land, the Europeans used the same haulage methods (carts pulled by donkeys, horses, and oxen) that the ancients had invented. Similarly, much of Europe continued to use roads built by the Romans. Most people never left their native villages or visited the nearest towns. They had no reason to do so, for all that was necessary to sustain their meager day-to-day existence was produced locally.

The second problem was the lack of important information. It could take weeks to raise the alarm about impending food shortages, let alone organize relief for stricken communities.

Third, regional trade was seldom free (France did not have a single internal market until the Revolution) and global trade remained relatively insignificant in economic terms until the second half of the 19th century. Food was both scarce and expensive. In 15th-century England, 80 percent of ordinary people’s private expenditure went for food. Of that amount, 20 percent was spent on bread alone. Under those circumstances, a local crop failure could spell the destruction of an entire community. (Those who think that COVID-19 exposed the fragility of modern society should look up the Great Famine.)

By contrast, as of 2013 only 10 percent of private expenditure in the United States went to food, a figure that is itself inflated by the amount Americans typically spend in restaurants. Speaking of restaurants, while most have been forced to close their doors, many restaurateurs use delivery apps to provide excellent food at reasonable prices. Moreover, months into the COVID-19 pandemic, the shops are, generally, well stocked and regularly replenished by the largely uninterrupted stream of cargo flights, truck hauling, and commercial shipping. Thanks to the miracle of mobile refrigeration, fresh produce continues to be sourced from different parts of the United States and abroad. Shortly before writing this piece, I was able to buy oranges from California, avocados from Mexico, and grapes from Chile in my local supermarket. Globalization may be under pressure from both the left and the right of the U.S. political spectrum, but should the pandemic impair U.S. agricultural production, many will be forced to acknowledge the benefits of the global food supply and our ability to import food from parts of the world unaffected by COVID-19.

This extensive and, at this point, still sturdy supply chain is, of course, a technological marvel. Computers collate information about items on the shelf that are in short supply, adjust the variety and quantity of items shipped between stores, fill new orders, etc. And so, commerce that’s still allowed to go on goes on. So does charity. Feeding America, a network of more than 200 food banks, feeds tens of millions of people through food pantries, soup kitchens, shelters, etc. Since 2005, the organization has been using a computerized internal market to allocate food more rationally. Feeding America uses its own currency, called “shares,” with which individual food banks can bid on the foods that they need the most. Grocery-delivery services bring food to the doorsteps of those who cannot or do not want to leave their homes. The old and the infirm can also use phones, emails, and apps to call upon volunteers to do their shopping and delivery.
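Feeding America’s “shares” system is, at bottom, an auction: each food bank bids the shares it is willing to spend, and each lot of food goes to the highest bidder. The toy sketch below illustrates only the basic idea; the food banks, lots, and bid amounts are all invented, and the real mechanism is considerably more sophisticated.

```python
# Toy version of an internal market in which food banks bid "shares"
# (an artificial currency) on food lots; each lot goes to the highest bidder.
# All names and numbers are hypothetical.

def allocate_lots(bids):
    """bids maps lot name -> {food_bank: shares_bid}; returns lot -> winning bank."""
    winners = {}
    for lot, lot_bids in bids.items():
        # The highest bid wins the lot (ties fall to dict insertion order here).
        winners[lot] = max(lot_bids, key=lot_bids.get)
    return winners

bids = {
    "produce_truckload": {"bank_a": 120, "bank_b": 95},
    "canned_goods_pallet": {"bank_a": 30, "bank_b": 55},
}
print(allocate_lots(bids))
# {'produce_truckload': 'bank_a', 'canned_goods_pallet': 'bank_b'}
```

The point of pricing food in shares is that a bank’s bid reveals how badly it needs a given item, so scarce donations flow to where they are valued most.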

Work

The nature of work has changed a lot over the last 200 years or so. Before the industrial revolution, between 85 percent and 90 percent of the people in the Western world were farm laborers. Their work was excruciatingly difficult, as witnessed by one 18th-century Austrian physician who observed that “in many villages [of the Austrian Empire] the dung has to be carried on human backs up high mountains and the soil has to be scraped in a crouching position; this is the reason why most of the young people are deformed and misshapen.” People lived on the edge of starvation, with both the very young and the very old expected to contribute as much as they could to the economic output of the family (most production in the pre-modern era was based on the family unit, hence the Greek term oikonomia, or household management). In those circumstances, sickness was a catastrophe: It reduced the family unit’s production, and therefore its consumption.

The industrial revolution allowed people to move from farms to factories, where work was better paid, more enjoyable, and less strenuous (which is largely why people in poor countries continue to stream from agricultural employment to manufacturing jobs today). Moreover, wealth exploded (real annual income per person in the United States rose from $1,980 in 1800 to $53,018 in 2016). That allowed for ever-increasing specialization, which included a massive expansion of services catering to the desires of an ever-more-prosperous population.
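The income figures quoted above imply a surprisingly modest annual growth rate; the explosion comes from compounding over two centuries. A quick calculation (the dollar figures are from the text; the rest is arithmetic):

```python
# Implied compound annual growth rate of U.S. real income per person,
# using the figures quoted above: $1,980 in 1800 to $53,018 in 2016.
start_income = 1_980
end_income = 53_018
years = 2016 - 1800  # 216 years

annual_rate = (end_income / start_income) ** (1 / years) - 1
print(f"{annual_rate:.2%}")  # roughly 1.53% per year
```

A rate of about 1.5 percent a year sounds trivial, yet sustained for 216 years it multiplies income nearly 27-fold.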

The service sector today consists of jobs in the information sector, investment services, technical and scientific services, health care, and social-assistance services, as well as in arts, entertainment, and recreation. Most of these jobs are less physically arduous, more intellectually stimulating, and better paid than either agricultural or manufacturing jobs ever were. Crucially, many of these service-sector jobs can be performed remotely. That means that even in the midst of the government-imposed economic shutdown, some work (about a third, estimates suggest) can go on. The economic losses from COVID-19, in other words, will be astronomical, but not total.

My own organization, for example, shut its doors in mid-March. Since then, everyone has been scribbling away at home or appearing on news shows around the world via the Internet. All of us are in regular contact via the phone, Zoom, and Microsoft Teams. Other organizations are doing the same. As we already discussed, a great deal of shopping is taking place online. Shipping and delivery companies are expanding, with Amazon hiring 100,000 additional workers in the United States. Home entertainment, of course, has grown tremendously, with Netflix adding millions of new customers and expanding its offerings with thousands of new films and television shows. With over 30 million American children stuck at home, online learning companies are booming, and educators from high-school teachers to college professors continue to perform their jobs remotely. Telehealth is expanding, allowing patients to see their doctors in a safe and convenient way. Even minor medical procedures, such as eye exams, can be conducted remotely, and multiple companies will deliver your new specs to your front door. Banking and finance are still going on, with many people taking advantage of low interest rates to refinance their mortgages. Finally, the often unfairly maligned pharmaceutical industry is expanding as we all wait and hope for the release of a COVID-19 vaccine or effective therapeutic treatment.

Sociality

Aristotle observed that “man is by nature a social animal” and noted that without friends we would be unhappy. But the role of sociality (that is to say, the tendency to associate in or form social groups) goes much deeper than that. As William von Hippel explained in his 2018 book The Social Leap, sociality is the mechanism by which Homo sapiens came about. When early hominids were forced down from the trees (perhaps as a result of a climatic change that dried up African forests), they became more vulnerable to predators. To cover longer distances between the fast-disappearing trees while maintaining a modicum of protection against other animals, our ancestors developed bipedalism, which allowed them to free their upper body to carry weapons such as sticks and stones.

Even more important was the invention of cooperation. While a stick-wielding ape is slightly better off than an unarmed one, a group of armed apes is much better at dispatching predators. Individuals in more cooperative bands survived to adulthood and bred more often, resulting in more-cooperative species. Furthermore, since living alone was tantamount to a death sentence, selfish apes who didn’t care about being ostracized for not pulling their weight died off, resulting in a desire for communal cooperation and a deep-rooted fear of rejection by the group.

The early hominids had brains more like those of chimps than those of modern humans. That’s because the evolutionary pressures that created the former — such as predation and food scarcity — could be overcome without tremendous intelligence. These pressures to survive were part of the physical landscape — a challenging but static environment that didn’t require a lot of cognitive ability to navigate. The environmental pressure that resulted in modern humans was the social system itself. The social landscape is much more dynamic than the physical one. Once they had banded together in groups, our ancestors were forced to forge relationships with, and avoid being exploited by, individuals with divergent and constantly shifting interests. Those who couldn’t keep up with the increasingly complex social game either died or were unable to mate.

This new pressure created a positive evolutionary cycle: Banding together created more complex social systems, which required bigger brains; bigger brains needed to be fed; and the best way to get more food was more cooperation and a more sophisticated social system. The main cognitive development that evolved from this evolutionary cycle is known as the “theory of mind.” In short, the theory of mind is the ability to understand that other minds can have different reasoning, knowledge, and desires from your own. While that seems basic, the theory of mind distinguishes us from all other life on Earth. It allows us to determine whether an affront, for example, was intentional, accidental, or forced. It allows us to feel emotions such as empathy, pride, and guilt — abilities that are keys to a functioning society.

So sociality and human beings are inseparable, as we have all been clearly reminded by the sudden restrictions on our ability to interact with others. As we sit at home, working away on our computers or watching television, most of us feel a tremendous sense of isolation (“social distancing”) from our family, friends, and colleagues. The urge to be around others is innate to us. It is who we are.

Dissatisfied with impersonal modes of communication, such as email and texting, we have rediscovered the need for a face-to-face interaction with our fellow humans. To that end, we utilize digital platforms such as Zoom, Google Hangouts, Facebook Live, and FaceTime to catch up on the latest news in other people’s lives, or simply to complain about the misery of loneliness and the pathetic inadequacy of our public officials (of both parties). Throughout the nation, people engage in virtual happy hours, dinners, book clubs, fitness classes, religious services, and group meditation. As my Cato Institute colleague Chelsea Follett recently wrote, “Technology has made it easier than ever to hold a physically-distanced ‘watch party’ synchronized so that viewers in different locations see the same part of a movie at the same time. For those who like to discuss movies as they watch, technology also enables a running group commentary of each scene in real time.” In the saddest of cases, technology enables people to say goodbye to dying friends and relatives. In a very real sense, therefore, technology keeps us sane (or, at the very least, saner).

Technology, then, allows us to cope with the challenges of the pandemic in ways that our ancestors could not even dream about. More important, technology allows our species to face the virus with grounds for rational optimism. In these dark days, remember all the scientists who are utilizing the accumulated store of human knowledge to defeat COVID-19 in record time and all the marvelous (not to say miraculous) ways the modern world keeps us well fed, psychologically semi-balanced, and (in many cases) productively engaged.

This originally appeared in National Review.

Blog Post | Economic Growth

Economic Growth Is More Important than You Think

Growth is a saving grace for the world's poorest people, and also has a major impact on the daily lives of Americans and the rest of the developed world.

This article first appeared in CapX.

What is economic growth, and why should it matter to ordinary people? Those questions are hard to answer in a hysterical world where once-dry academic matters are now politicized without fail. Recently, commentators from all sides have taken to dismissing growth as a golden idol of narrow-minded capitalists. Likewise, many people see the pursuit of growth as an alternative, not a complement, to the pursuit of social needs like public health and sustainability.

These narratives are understandable, considering the misinformed and tone-deaf ways in which many public figures have attempted to advocate the importance of growth and economic activity, particularly during the current pandemic. But the narratives themselves could not be more misleading. Economic growth affects the lives of ordinary people in many crucial ways, not just in the West, but importantly in countless developing nations too. In fact, growth is generally the greatest source of improvement in global living standards.

If we visualize the economy as a pie, then growth can be visualized as the pie getting bigger. Most economists measure growth using a metric called Gross Domestic Product (GDP), which defines the pie’s “ingredients” as consumption, investment, government spending and net exports. In developing countries, growth is largely driven by investment, while wealthier countries tend to rely on innovation to continue growing.
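The “ingredients” named above form the standard expenditure identity, GDP = C + I + G + (X − M). A minimal numerical sketch follows; every figure in it is invented purely for illustration.

```python
# Expenditure approach to GDP: GDP = C + I + G + (X - M).
# All figures are hypothetical, in billions of dollars.
consumption = 700.0          # C: household spending
investment = 180.0           # I: business and residential investment
government_spending = 150.0  # G: government purchases
exports = 120.0              # X
imports = 150.0              # M
net_exports = exports - imports  # a trade deficit makes this negative

gdp = consumption + investment + government_spending + net_exports
print(gdp)  # 1000.0
```

Growth, in these terms, simply means that the sum comes out larger this year than last, whether the extra output comes from more consumption, more investment, or more exports.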

These working definitions, while highly simplified, are better than nothing. They are important because they can make it easier to understand how GDP correlates with countless key metrics of living standards.

In sub-Saharan Africa, for instance, real average GDP per capita grew by 42% between 1990 and 2018. That growth corresponded to major decreases in extreme poverty, infant mortality, and undernourishment.
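For readers who prefer an annualized figure, the 42 percent cumulative growth over 1990–2018 works out to roughly 1.3 percent per year. The conversion is a one-line calculation (the 42 percent figure comes from the text; the rest is arithmetic):

```python
# Convert cumulative growth into a compound annual growth rate (CAGR):
# CAGR = (1 + total_growth) ** (1 / years) - 1
total_growth = 0.42   # 42% rise in real GDP per capita, 1990-2018
years = 2018 - 1990   # 28 years

cagr = (1 + total_growth) ** (1 / years) - 1
print(f"{cagr:.2%}")  # about 1.26% per year
```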

Growth also increases access to resources that make people safer and healthier. A 2019 paper shows that, while disaster-related fatality rates fell for all global income groups between 1980 and 2016, developing countries in the early stages of growth experienced the greatest improvements. That is because those countries made the greatest relative advances in infrastructure and safety measures—advances facilitated by growth.

Growth is a saving grace for the world’s poorest people, but it also has a major impact on the daily lives of Americans and the rest of the developed world, and that impact is especially important in the age of coronavirus. For example, continuous growth has led to lifesaving breakthroughs in medical technology and research, which has allowed humanity to fight COVID-19 more quickly and effectively than we ever could have in the past. Vaccines for certain ailments took decades to develop as late as the mid-20th century, but it is quite possible that a vaccine for COVID-19 will be widely available just one year after the virus’s initial outbreak.

To many supposedly environmentally conscious critics, it seems intuitive that growth is not sustainable. However, sustainability-based criticisms of growth tend to ignore the reality that growth leads to green innovations that help the planet. Labor-augmenting technologies allow us to produce more while conserving resources and protecting the environment. Moreover, wealthier countries are better equipped to develop and adopt green technologies.

MIT scientist Andrew McAfee has documented many of the concrete environmental benefits of growth in his recent book, More From Less. McAfee notes that increases in America’s population and productive activity in recent decades have coincided with significant decreases in air and water pollution, along with gross reductions in the uses of water, fertilizer, minerals and other resources—all because economic growth and market coordination led to improvements in manufacturing and technology. For facilitating this process, which McAfee calls “dematerialization,” growth should be seen as a key to sustainability, not a barrier.

In a broader sense, growth has made our lives more convenient, dynamic and entertaining via developments in consumer technologies and other innovations. Imagine quarantining for five months (and counting) without the internet, PCs or smartphones. Many people would have no way of doing their jobs. Even for those that could, life would be much more difficult, not to mention dull.

Indeed, if one thing could be said to summarize the impact of growth around the world, it would be that growth makes everyone’s life easier. For instance, the amount of labor needed for average workers to purchase countless basic goods and services is at an all-time low and decreasing, largely because supply chains have grown and become more efficient. The result is that ordinary people, especially those in lower income groups with relatively greater reliance on basic goods, are better off.
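The “amount of labor needed for average workers to purchase” a good is what economists call a time price: the money price divided by the hourly wage. A minimal sketch of the calculation (the prices and wages below are invented for illustration):

```python
# Time price: hours of work needed to buy a good = money price / hourly wage.
# A falling time price means the good is getting cheaper in terms of labor,
# even if its money price rises. All figures are hypothetical.

def time_price(money_price, hourly_wage):
    return money_price / hourly_wage

# A good costing $2 when wages were $4/hour took half an hour of work;
# at $3 with wages of $24/hour, it takes 7.5 minutes.
hours_then = time_price(2.00, 4.00)   # 0.5 hours
hours_now = time_price(3.00, 24.00)   # 0.125 hours
print(hours_then, hours_now)
```

Measuring abundance in hours of work rather than dollars is what makes the claim about lower income groups concrete: when time prices fall, the same workweek buys more of the basics.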

The story of economic growth is in many ways the story of how cooperation and exchange can defeat poverty and scarcity. The better we understand that, the more likely we will be to support policies which allow resources to flow into areas that need them the most. Broadly speaking, no political idea has been more effective in this regard than free trade.

Knowing the importance of innovation to human well-being should also encourage us to embrace new technology instead of fearing it. We must therefore be wary of overbearing regulations and fiscal policies that prevent ideas from flourishing.

Most importantly, we should not listen to those who claim that economic growth is a pointless, abstract goal that only benefits the rich and leaves ordinary people behind. Growth is a vital driver of progress in modern society and should be taken seriously for the sake of humanity and the planet.

Blog Post | Science & Technology

Vasquez Reviews Ridley's “How Innovation Works”

Innovation requires trial and error. It requires the possibility to experiment and to fail. Only then can innovation provide the path to success and human progress.

The dog is an innovation. That innovation took place at least 20,000 years ago, when a group of humans domesticated wolves and subsequently began to develop different breeds. The light bulb, wheeled baggage, and the computer are also innovations.

Since prehistoric times, innovation has changed the course of our lives and is, according to science writer Matt Ridley, “The most important fact about the modern world, but one of the least well understood.” Ridley is the author of the new eye-opening book, How Innovation Works: And Why It Flourishes in Freedom. In it, he tells the story of dozens of innovations, to illustrate how that phenomenon is the main cause of the enormous progress humanity has seen in the past few centuries. He derives the following lessons from his study.

Innovation almost always happens gradually and not suddenly. It is “not an individual phenomenon, but a collective, incremental and messy network phenomenon.” Ridley asks, “Who invented the computer?” To answer the question, we would have to go back more than 200 years to the Jacquard loom and then review countless succeeding contributions and innovators. The same can be said about the car or virtually all other innovations, even though we sometimes identify them with individuals like Henry Ford who discovered a way of making the automobile widely accessible to the public.

Innovation is thus collaborative, and the same ideas often independently occur to different individuals at the same time. The telegraph, the thermometer, photography, and the hypodermic needle are examples of simultaneous inventions. Twenty-one people invented the light bulb at about the same time. Had Thomas Edison or the Wright brothers never existed, we would still enjoy artificial light and the wonder of airplane travel.

Innovation is not the same thing as invention. One can invent something novel, but the people who make a difference are the innovators who figure out the way that that new idea can be useful to society, typically by improving on it and lowering its costs. Innovations exist due to our growing knowledge and a demand for the innovative product. As Ridley observes, “The light bulb emerged inexorably from the combined technologies of the day. It was bound to appear when it did, given the progress of other technologies.”

It’s not possible to plan innovation. Not even the innovators can do so. Innovations more often than not come about because of chance events or unexpected discoveries. Sheer luck explains Alexander Fleming’s discovery of penicillin. The founders of Google did not “set out in search of search engines. The founders of Instagram were trying to make a gaming app. The founders of Twitter were trying to invent a way for people to find podcasts.” Innovation is unpredictable.

That unpredictability also helps explain why the so-called “entrepreneurial state” has not effectively promoted innovation and why we can’t expect it to do so. For example, Ridley explains that contrary to those who advocate in favor of publicly funded innovation, the U.S. government did not intend to create a global internet. Only when the internet “escaped the clutches” of the government – that is, when it was essentially privatized in the 1990s – did the private sector and universities begin to transform the internet into what we use today.

Ridley observes that the widely held view that science leads to technology and innovation—frequently canvassed to justify public subsidies for science—is only partially correct. It is equally true that scientific knowledge is the product of technological improvements and attempts to understand the latter. The first inoculations were conducted without a good understanding as to how and why they worked. Attempts to resolve problems in the yogurt industry contributed to the development of the revolutionary gene-editing method known as CRISPR (which may yet help us find a treatment for COVID-19).

Innovation requires trial and error. It requires the possibility to experiment and to fail. Only then can innovation provide the path to success and human progress. Or, as Ridley puts it, “Innovation is the child of freedom and the parent of prosperity.”

Blog Post | Science & Technology

Our Technological Renaissance

Claims of stagnation are not persuasive.

I put on a record today.

Well, I didn’t put on a record, so much as I put on a . . . well, a what? It wasn’t a vinyl plate or a spool of tape or even a piece of shiny circular plastic. Indeed, whatever physical medium was being used to store the music I was listening to wasn’t available to me at all. It simply came in through the air—like lightning. From the comfort of my chair, I picked up my iPhone, chose the album I wanted from the million-strong list that loaded instantly before my eyes, and directed the sound to the speakers in my vicinity, all of which started to play my choice within a few milliseconds. And then, when I tired of it, I shushed it with my voice.

I think about this sometimes when I hear people complain that the bright technological future we were all promised has steadfastly failed to appear. How, I wonder, would I even begin to explain Spotify and Sonos to my grandfather, who died in 1994? A compact disc could be comprehended by the elderly as a better vinyl record, much as the Space Shuttle could be comprehended as a faster airplane. But streaming? If my grandfather came back today, where would I start?

“Okay, so I’m using my telephone, which isn’t really a telephone so much as a supercomputer-cum-Library-of-Alexandria-cum-high-definition-movie-studio, to send a wireless signal to the magical speakers in my home, which, upon my request, will contact a set of servers 3,000 miles away in San Francisco, and request instant access to the closest digital copy of—”

“Wait, what’s a server?”

“—hold on—to the closest digital copy of one of millions of high-quality songs to which I have full and unlimited access, but neither own nor have to store, and—”

It boggles the mind.

It may be tempting to regard this example as a mere bauble or trinket, or even as a sign of decadence. But to do so would represent a disastrous miscalculation of its significance. It is true that some of our advances have slowed since the 1970s. We do not go to the moon on a regular basis, despite the promises of the Apollo program; transatlantic travel has become slower, rather than faster—R.I.P. Concorde; our cars essentially still use the same engines as they always have; and life expectancy is no longer leaping forward. But it is also true that, unlike then, we now enjoy a magnificent worldwide communications network that offers the sum of human knowledge in the blink of an eye and is open to anybody who wishes to join it. If that is “all” we’ve done in the last four decades, I think we should congratulate ourselves rather heartily.

Forget my grandfather for a moment and imagine explaining that to almost any literate person in human history. What do we imagine his reaction would have been? Do we think he would have said, “That sounds like stagnation to me”? Or do we think he would have said, “It sounds as if you have reached the promised land; I hope you are extremely grateful for the bounties you have inherited”? If not the latter, he’d be a fool.

From the desk on which I am writing these words, I have access to all of the great works in history: every song, every play, every book, every poem, every movie, every pamphlet, every piece of art. I can find every translation of the Bible that has ever been compiled and put them side by side for comparison. I can read the missives that were sent during the American Revolution, and examine the patents for the first steam engine, and listen to all of Winston Churchill’s speeches between 1939 and 1945. The world’s recipes are available to me without exception, and, if I desire, I can watch a cornucopia of free-to-use instructional videos in which experts show me how to cook them. At no cost or inconvenience, I can learn how to fix my sink or change my car’s tires or troubleshoot my dishwasher. If I want to know where the “panda ant” lives (Chile), to which genus it belongs (Euspinolia), how long it is (up to 8 millimeters), and whether it’s actually an ant (it’s not, it’s a wasp), I can find this information in seconds. What was on the front page of the Key West Citizen on June 2, 1943? Easy: “City Council Takes Up Incinerator Project with Representative of FWA.” Nearly 2,000 years ago, Pliny the Elder wondered if it might be a good idea to collect all of human knowledge in one place, available to all. That dream has become a reality—and we got to live when it happened. I’d say that’s pretty darn good.

The airplane annihilated distance; the smartphone has annihilated geography altogether. Provided that I have a stable connection to the Internet, it takes me the same amount of time to send a digital photograph to Delhi as it does for me to send it to a person in the house next door. On Saturday mornings I can sit and watch the same soccer games, broadcast live from England, that my dad is watching in England and text him about the developments in real time, as if I were sitting next to him. If I need to keep an eye on the news, it makes no difference whether I am sitting in the headquarters of Reuters or on a beach in Australia. Wherever I am, the information flow is the same. Except by design, there is no longer any such thing as “out of the loop.” As an achievement, this is monumental.

The “Spaceship Earth” attraction at Disney’s Experimental Prototype Community of Tomorrow tells the story of human communication from the days of the Neanderthal to the invention of the computer. I have wondered at times what Disney will substantively add to this story when it comes time to update the show, and I have come to conclude that the answer is almost certainly nothing. One cannot improve on instant worldwide communication that is accessible to every person and in every place. One can tinker around the edges to upgrade its speed, its reliability, its quality, and its durability, one can add some security into the mix for good measure, but, give or take, this is a problem that has now been solved. As the Phoenicians solved the alphabet problem, so have our contemporary engineers solved the transmission problem. The dream has arrived.

Not everyone appreciates this, of course, which is why it is customary for the complaint I am addressing to be amended slightly, from “technology has stagnated” to “technology is frivolously used and may even be bad for us.” But, while the latter proposition is arguably true, it concedes my premise that something dramatic has changed in the way in which we live. It is indeed entirely possible that the volume and speed of information that the I.T. revolution has ushered in have had a destructive effect on individuals or on society. It is possible, too, that, while the benefits are immense, most people choose not to take advantage of them. I would not be the first to lament that the first thing users seem to do with their access to the Internet is to begin arguing with strangers. And yet to contend that the abuse of the personal computer in some way undermines the value of the personal computer would be equivalent to contending that the use of the airplane for bombing renders the significance of its invention questionable.

I suspect that some of our disappointment is the fault of comic books. Riffle through any Bumper Sci-Fi Book for Boys!–style volume that was published between the 1920s and the 1960s and you will see that the physical breakthroughs that were anticipated—spacesuits, rocket ships, jetpacks, flying cars, laser guns, etc.—are featured prominently and enthusiastically, while the less tangible mass communications that were anticipated are set quietly in the background, as if they are inevitable. In story after story, the astronauts communicate from the planet Zog in an instant using video chat, and yet that, evidently, is not the exciting part. The exciting part is that they are on Zog.

I must confess that I do not understand why, for it is not at all obvious to me that exploring Zog is more useful than inventing Wikipedia, or that the ability to get to Zog would represent a greater leap forward than the ability to talk to our friends from it. Certainly, Zog may have some interesting rocks, and the technical feat of sending men there and returning them safely to Earth would be worth celebrating. (I do tend to tear up watching the original Moon landing.) But in comparison to a breakthrough that allows me to enjoy the words, faces, music, food, counsel, art, and research of every other human being on Earth, whether living or dead, it would pale. I have that. In my pocket.

Stagnation? Nope. Renaissance, more like.

This originally appeared in National Review. 


What If the Coronavirus Had Hit Us 25 Years Ago?

A largely tech-free pandemic would have been far tougher than what we are going through.

I promise that this isn’t one of those schmaltzy “Look on the bright side”/”There is a silver lining to all this” articles. Coronavirus has no bright side, and there is no silver lining.

But over the past couple of days, I have become a bit more grateful for the technologies and technology-based services that are making the current situation a lot more bearable. These are, in the main, technologies that we have become so used to that it now feels as if they have been around forever. It is easy to forget just how recent many of them are.

Imagine, for a moment, that the Coronavirus had hit us, say, 25 years earlier. I’m not talking about the Middle Ages, or the Victorian era. Just 25 years – a time that is well within living memory (or at least, I remember it quite well).

What would “Corona ‘95” look like?

For a start, if you are staying indoors, your only connections to the outside world are a landline telephone, TV, radio, and, if you have a subscription, a newspaper. There is this new thing called “the internet”, but unless you are a tech whizz, you will probably not have heard of it yet (I certainly hadn’t). There are so few internet users that nobody even bothers to count them. Records on internet usage begin in 1998, when just 9% of UK households had internet access (today: 93%).

Even if you are one of those few early internet users, there is not much that you can do online. Internet search engines, for example, are still in their early stages. Google does not exist yet. Lots of websites consist of nothing but blinking stars and a sign that says “Under construction” – and they still take ten minutes to load.

Social media exists in rudimentary form, but it is the preserve of a few computer geeks, and will remain so for more than another decade. Today, social media is an easy, low-cost, low-effort way of keeping abreast of the situation, staying in contact with people, or just distracting yourself and giving yourself a break. And, no, it is not true that only young people use social media: more than 40 million people in the UK use at least one social media platform. “Corona ‘95” would have been an extremely isolating experience, in comparison.

If you need to get hold of someone who is not at home, tough luck. Mobile phone usage is about to take off over the coming years, but so far, only one in six households has one (compared to 95% today).

Delivery services are nothing like what they are today. The launch of Amazon is just around the corner, but you will probably not hear that name for another couple of years (if memory serves me right, I placed my first order in 2000), and even then, it will sell only books. The closest thing you have is a mail order catalogue, which is not much help for grocery shopping. Some restaurants offer home delivery, but that is by no means the norm. If you live in a small town or a suburb, you are lucky if you have more than two options to choose from.

The economic impact would have been infinitely worse. With 1995 technology, few people are able to work from home, and certainly not at short notice, or without a sharp drop in productivity.

Computers are already common in the workplace, but far less so at home: only about one in four UK households has a home computer (today: 88%). Even if you have one, transferring your work files to it is a major operation. You need dozens of floppy disks or CDs, as well as photocopies of any material that is not digitised. Even then, the moment you get home and start working, you will probably notice that you have forgotten an important file, or that a floppy disk has been corrupted.

If you need to exchange information with colleagues regularly, your phone bill will go through the roof. Even then, you cannot easily transfer files to one another, so you will waste hours explaining things that you would nowadays solve with a simple e-mail attachment.

In terms of home entertainment, you had better not be particularly picky about TV content, because streaming services such as Netflix, HBO, or Amazon Prime are still more than a decade in the future. For millions of people, “Corona ’95” would, above all, have meant crushing boredom.

Some readers will argue that this is all missing the point. In 1995, we had far less exposure to China, which means that Corona would simply not have happened. There is a bit of truth in that. For example, in the 1990s, goods from China accounted for less than 2% of total UK imports, compared to more than 6% today. However, given how easily the virus spreads, a little exposure is all it takes, and in 1995, China was already a far cry from the hermit kingdom of Chairman Mao’s days. The risk would have been lower then, but it could still have happened.

Thanks to modern technology, we are now far better prepared to deal with the consequences of the pandemic than we ever were. Better never than now – but better now than at any point in the past. It could have been so much worse.

This originally appeared in CapX.