
Blog Post | Environment & Pollution

Matt Ridley: How Innovation Works For the Environment

The very incentives that drive private sector innovation also drive ever-greater economic and environmental efficiency.

It is a truth universally acknowledged that we live in a world of finite resources. However, this rather basic assertion is often misleadingly used by some climate activists. Take the “de-growth” crowd: they presume that indefinite growth, in a world of finite resources, is literally impossible. This is the basis of Extinction Rebellion’s self-proclaimed mission to overthrow capitalism: human progress and prosperity are considered intrinsically evil, and economic growth must first be halted, then reversed. Yet, as the British businessman Michael Liebreich has pointed out, as long as we have both solar and nuclear energy, which are virtually infinite, the supposed fairytale of eternal growth has real scientific backing.

Moreover, a lot of economic growth actually consists of physical shrinkage. We use 68% less land to produce a given amount of food than we did in 1961. We use less aluminum to make a soda can, less steel to make a car, and less energy to build a house than we once did. Our mobile phones include within them a whole desk full of objects that would have consumed far more resources to create in the past: a map, compass, flashlight, diary, address book, phone book, and so on.

More nuanced mainstream environmentalists promote a slightly different interpretation. They recognize that innovation and progress are desirable, but claim that the free market cannot deliver them, because capitalism inevitably turns toward exploitation and environmental abuse. This perspective displays a worrying lack of trust in human creativity and solidarity, although these environmentalists at least accept the importance of innovation, despite their distrust of the free market.

The most common argument against free-market innovation is put forward by scholars such as the Italian-American economist Mariana Mazzucato, who argues in her 2015 book The Entrepreneurial State that state-led research and funding has been the basis of almost all modern innovation. Such state direction, she believes, is required to solve problems such as climate change. However, as one of us has argued in his 2020 book How Innovation Works: And Why It Flourishes in Freedom, this is a misguided approach.

First, Mazzucato ignores the history of private innovation that drove the U.S. and Britain to the forefront of the global economy in the late 19th and early 20th centuries, almost entirely without government subsidy. The fact that government funding has ended up supporting important innovations in the mid-to-late 20th century, especially in a military context, is unsurprising given that public spending quadrupled from 10 percent to 40 percent of national income. If you shoot enough arrows, it is likely that one will eventually hit the target.

Second, given the rather self-evident assessment that we only have access to a limited set of resources at any one point in time, it is equally unsurprising that mass government investment in certain innovations will crowd out other sources of investment. Indeed, Mazzucato acknowledges this risk when she writes that “top pharmaceutical companies are spending decreasing amounts of funds on R&D at the same time that the state is spending more.”

The assumption that all innovation comes from government spending therefore commits a basic economic fallacy: it ignores the opportunity cost of the resources involved. We simply cannot presume to know how those resources would otherwise have been allocated by rational, self-interested, decentralized market actors. It is entirely likely that they would, in fact, have been invested more efficiently by private companies with better local knowledge and incentives.

The evidence is compelling: in 2003, the OECD published a study called Sources of Economic Growth in OECD Countries, which found that between 1971 and 1998 the quantity of private R&D had a direct impact on the rate of economic growth, whereas the quantity of publicly funded research did not. The question thus becomes not whether the state can support innovation, which it clearly can, but whether it is better and more effective at doing so than market forces. Both history and statistical evidence favor the latter.

What, specifically, does that mean then for the environment? In the 2020 edited volume Green Market Revolution: How Market Environmentalism Can Protect Nature and Save the Planet, the Swedish author Johan Norberg argues that some developed economies have in fact reached “peak stuff,” meaning that now they use fewer material resources both per unit of economic output and in absolute terms.

The beauty of the free market system is that efficiency is almost always rewarded, because efficiency directly translates into greater profit and productivity. Indeed, researchers such as Jesse Ausubel of Rockefeller University have found that in 2015, the United States economy was using 40 percent less copper, 32 percent less aluminum, and 15 percent less steel than at their respective peaks in the 1990s. The same principle applies to 66 out of 72 raw resources tracked by the U.S. Geological Survey, as the American researcher Andrew McAfee found in his 2019 book More from Less: The Surprising Story of How We Learned to Prosper Using Fewer Resources―and What Happens Next. According to McAfee, the use of these resources has declined even as economic output has continued to rise. The list of efficiency gains goes on and on, from cropland acreage to water and fertilizers to plastic.

When the private sector has been allowed to innovate, it has delivered tremendous gains for the environment. The shale gas revolution has given America the fastest falling carbon dioxide emissions of any large economy. Genetically modified crops have reduced the use of pesticides by an average of 37 percent. The invention of LED lightbulbs has reduced electricity consumption in lighting by 75 percent for a given output of light. In the latter case, this improvement was almost certainly delayed by the government’s insistence on mandating the use of compact fluorescent bulbs.

In contrast, government regulation tends to stifle innovation, as can be seen in the case of nuclear energy. Clean, reliable, safe, and requiring very few material resources, nuclear power has the potential to meet the modern economy’s energy needs several times over, while hitting our climate targets. Yet, it has been languishing for years, declining from 17.6 percent of global electricity generation in 1996 to 10 percent today.

Apocalyptic environmentalists and over-zealous regulators have all but strangled the trial and development of new nuclear reactors and designs. One study shows that new nuclear reactor regulations introduced in the 1970s increased the quantity of piping per megawatt by 50 percent, steel by 41 percent, electrical cable by 36 percent, and concrete by 27 percent. Regulations have only increased since that time, and the licensing process has lengthened. Nonetheless, the fact that private companies are now leading the charge on Small Modular Reactors (SMRs), the newest generation of nuclear plants, bodes well. These smaller designs allow for greater experimentation, trial and error, and adaptation. But the government must stay out of the way.

Ultimately, where de-growthers view our limited resources as proof that economic progress is bad, and big-government lackeys see them as proof that private companies will not pursue sustainable progress, capitalists consider them the most compelling evidence in favor of allowing markets to innovate our way out of scarcity. Crucially, the very incentives that drive private sector innovation also drive ever-greater economic and environmental efficiency – serving both humanity and the planet well. Free markets are good for innovation, and innovation is good for the environment.

Blog Post | Science & Technology

The 20 Biggest Tech Advances of the Past 20 Years

Humanity has started off the new millennium with some astounding accomplishments.

In just a few days’ time, another decade will be over. With the 2020s nearly upon us, now is the perfect time to reflect on the immense technological advancements that humanity has made since the dawn of the new millennium.

This article explores, in no particular order, 20 of the most significant technological advancements we have made in the last 20 years.

1.     Smartphones: Mobile phones existed before the 21st century. However, in the past 20 years, their capabilities have improved enormously. In June 2007, Apple released the iPhone, the first touchscreen smartphone with mass-market appeal. Many other companies took inspiration from the iPhone. As a consequence, smartphones have become an integral part of day-to-day life for billions of people around the world. Today, we take pictures, navigate without maps, order food, play games, message friends, listen to music, and more, all on our smartphones. Oh, and you can also use them to call people.

2.     Flash Drives: First sold by IBM in 2000, the USB flash drive allows you to easily store files, photos, or videos, with a storage capacity so large that it would have been unfathomable just a few decades ago. Today, a 128GB flash drive, available for less than $20 on Amazon, has more than 80,000 times the storage capacity of a 1.44MB floppy disk, which was the most popular type of storage disk in the 1990s.
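That “more than 80,000 times” figure is easy to verify with a back-of-the-envelope calculation. A quick sketch in Python, assuming the decimal units that drive manufacturers use (1GB = 1,000MB):

```python
# Compare a modern 128 GB flash drive with a 1.44 MB floppy disk.
# Drive makers use decimal units: 1 GB = 1,000 MB.
flash_drive_mb = 128 * 1000   # 128 GB expressed in megabytes
floppy_mb = 1.44              # standard 3.5-inch floppy capacity

ratio = flash_drive_mb / floppy_mb
print(f"One flash drive holds about {ratio:,.0f} floppies' worth of data.")
# Roughly 88,889 – comfortably "more than 80,000 times" the capacity.
```

Using binary units instead (1GB = 1,024MB), the ratio would be larger still, at around 91,000, so the article’s figure is conservative either way.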

3.     Skype: Launched in August 2003, Skype transformed the way that people communicate across borders. Before Skype, calling friends or family abroad cost huge amounts of money. Today, speaking to people on the other side of the world, or even video calling with them, is practically free.

4.     Google: Google’s search engine actually premiered in the late 1990s, but the company went public in 2004, leading to its colossal growth. Google revolutionized the way that people search for information online. Every hour there are more than 228 million Google searches. Today, Google is part of Alphabet Inc. – a company that offers dozens of services, such as translation, Gmail, Docs, the Chrome web browser, and more.

5.     Google Maps: In February 2005, Google launched its mapping service, which changed the way that many people travel. With the app available on virtually all smartphones, Google Maps has made getting lost virtually impossible. It’s easy to forget that just two decades ago, most travel involved extensive route planning, with paper maps nearly always necessary when venturing to unfamiliar places.

6.     Human Genome Project: In April 2003, scientists successfully sequenced the entire human genome. Through the sequencing of our roughly 23,000 genes, the project shed light on many different scientific fields, including disease treatment, human migration, evolution and molecular medicine.

7.     YouTube: In May 2005, the first video was uploaded to what today is the world’s most popular video sharing website. From Harvard University lectures on quantum mechanics and favorite TV episodes, to “how-to” tutorials and funny cat videos, billions of pieces of content can be streamed on YouTube for free.

8.     Graphene: In 2004, researchers at the University of Manchester became the first scientists to isolate graphene. Graphene is an atom-thin carbon allotrope that can be isolated from graphite, the soft, flaky material used in pencil lead. Although humans have been using graphite since the Neolithic era, isolating graphene was previously impossible. With its unique conductive, transparent, and flexible properties, graphene has enormous potential to create more efficient solar panels, water filtration systems and even defenses against mosquitoes.

9.     Bluetooth: While Bluetooth technology was officially unveiled in 1999, it was only in the early 2000s that manufacturers began to adopt Bluetooth for use in computers and mobile phones. Today, Bluetooth is featured in a wide range of devices and has become an integral part of many people’s day-to-day lives.

10.   Facebook: First developed in 2004, Facebook was not the first social media website. Due to its ease of use, however, Facebook quickly overtook existing social networking sites like Friendster and Myspace. With 2.41 billion active users per month (almost a third of the world’s population), Facebook has transformed the way billions of people share news and personal experiences with one another.

11.   Curiosity, the Mars Rover: First launched in November 2011, Curiosity is looking for signs of habitability on Mars. In 2014, the rover uncovered one of the biggest space discoveries of this millennium when it found water under the surface of the red planet. Curiosity’s work could help humans become an interplanetary species in just a few decades’ time.

12.   Electric Cars: Although electric cars are not a 21st century invention, it wasn’t until the 2000s that these vehicles were built on a large scale. Commercially available electric cars, such as the Tesla Roadster or the Nissan Leaf, can be plugged into any electrical socket to charge. They do not require fossil fuels to run. Although still considered a fad by some, electric cars are becoming ever more popular, with more than 1.5 million units sold in 2018.

13.   Driverless Cars: In August 2012, Google announced that its automated vehicles had completed over 300,000 miles of driving, accident-free. Although Google’s self-driving cars are the most popular at the moment, almost all car manufacturers have created or are planning to develop automated cars. Currently, these cars are in testing stages, but provided that the technology is not hindered by overzealous regulations, automated cars will likely be commercially available in the next few years.

14.   The Large Hadron Collider (LHC): With its first test run in 2008, the LHC became the world’s largest and most powerful particle accelerator. It’s also the world’s largest single machine. The LHC allows scientists to run experiments on some of the most complex theories in physics. Its most important finding so far is the Higgs boson, discovered in 2012. The discovery of this particle lends strong support to the “standard model of particle physics,” which describes most of the fundamental forces in the universe.

15.   AbioCor Artificial Heart: In 2001, the AbioCor artificial heart, created by the Massachusetts-based company AbioMed, became the first completely self-contained artificial heart to be implanted in a human. The AbioCor powers itself internally; unlike previous artificial hearts, it doesn’t need intrusive wires through the skin that heighten the likelihood of infection and death.

16.   3D Printing: Although the 3D printing we know today originated in the 1980s, the development of cheaper manufacturing methods and open-source software contributed to a 3D printing revolution over the last two decades. Today, 3D printers are being used to print spare parts, whole houses, medicines, bionic limbs, and even entire human organs.

17.   Amazon Kindle: In November 2007, Amazon released the Kindle. Since then, a plethora of e-readers has changed the way millions of people read. Thanks to e-readers, people don’t need to carry around heavy stacks of books, and independent authors can get their books to an audience of millions of people without going through a publisher.

18.   Stem Cell Research: Previously the stuff of science fiction, stem cells (i.e., basic cells that can become almost any type of cell in the body) are being used to grow, among other things, kidney, lung, brain and heart tissue. This technology will likely save millions of lives in the coming decades as it means that patients will no longer have to wait for donor organs or take harsh medicines to treat their ailments.

19.   Multi-use Rockets: In November and December of 2015, two separate private companies, Blue Origin and SpaceX, successfully landed reusable rockets. This development greatly reduces the cost of getting to space and brings commercial space travel one step closer to reality.

20.   Gene Editing: In 2012, researchers from Harvard University, the University of California at Berkeley and the Broad Institute each independently discovered that a bacterial immune system known as CRISPR could be used as a gene-editing tool to change an organism’s DNA. By cutting out pieces of harmful DNA, gene-editing technology will likely change the future of medicine and could eventually eradicate some major diseases.

However you choose to celebrate this New Year’s Eve, take a moment to think about the immense technological advancements of the last 20 years, and remember that despite what you may read in the newspapers or see on TV, humans continue to reach new heights of prosperity.

Blog Post | Environment & Pollution

Technology, Not Alarmism, Will Help Tackle Climate Change

People are likelier to accept the fact of climate change when told the problem is solvable with innovations.

A new policy report from the Breakthrough National Centre for Climate Restoration warns that “planetary and human systems [are] reaching a ‘point of no return’ by mid-century, in which the prospect of a largely uninhabitable Earth leads to the breakdown of nations and the international order.” This apocalyptic vision of the year 2050 follows a long tradition of counterproductive doomsaying.

Former Vice President and Democratic presidential hopeful Joe Biden has recently placed the “point of no return” even sooner, in just 12 years’ time. “[H]ow we act or fail to act in the next 12 years will determine the very livability of our planet,” he said earlier this week.

Environmental problems are certainly real, but alarmists do a disservice to the cause of tackling those challenges when they use cataclysmic language to describe the near future.

As Harvard University’s Steven Pinker noted in his book Enlightenment Now, psychological research has shown that “people are likelier to accept the fact of global warming when they are told that the problem is solvable by innovations in policy and technology than when they are given dire warnings about how awful it will be.”

But instead of focusing on solutions, like nuclear power, which does not emit CO2, and other technological breakthroughs that have the potential to reduce carbon emissions, some well-meaning people resort to apocalyptic rhetoric. Humanity has reached the “point of no return” many times already, according to past doomsayers.

In 2006, Al Gore warned that unless drastic measures were taken “within the next 10 years,” the world would “reach a point of no return.” That would place “the point of no return” in 2016.

Thirty years ago, in 1989, an unidentified senior U.N. environmental official told the Associated Press that “entire nations could be wiped off the face of the Earth by rising sea levels” if drastic action was not taken by the year 2000. The ocean has not swallowed any nations since his prognostication.

In 1982, executive director of the U.N. Environment Program Mostafa Tolba said that lack of action by the year 2000 would bring “an environmental catastrophe which will witness devastation as complete, as irreversible, as any nuclear holocaust.” His prediction of an environmental “nuclear holocaust” in just 18 years failed to materialize.

Back in 1970, Harvard University biologist George Wald claimed that “civilization will end within 15 or 30 years unless immediate action is taken against problems facing mankind.” His prediction would place the end of civilization sometime between 1985 and 2000.

Also in 1970, North Texas State University philosopher Peter Gunter wrote, “By the year 2000, 30 years from now, the entire world, with the exception of Western Europe, North America, and Australia, will be in famine.”

In 1969, Stanford University biologist Paul Ehrlich said, “If I were a gambler, I would take even money that England will not exist in the year 2000.” It is a good thing he did not put down money on that proposition, or he would have had to pay out 31 years later. (In fact, it would have served his bank account well to stay away from wagers entirely).

The frequency of hyperbolic, failed predictions of catastrophe would be more amusing if they were not so damaging to the public’s perception of real environmental challenges, including climate change.

Fortunately, there are also many environmentalists who hold a less pessimistic and more realistic view. Rockefeller University professor Jesse H. Ausubel, who was integral to setting up the world’s first climate change conference in Geneva in 1979, has shown how technological progress allows nature to rebound. For example, increasing crop yields to produce more food with less land reduces the environmental impact of agriculture. In fact, if farmers worldwide reached the productivity level of the average U.S. farmer, humanity would be able to return a landmass the size of India back to nature.

In addition to technological progress, economic development can also help protect the environment. As people rise out of extreme poverty, they often come to care more about environmental stewardship. The incredible decline in Chinese poverty spurred by economic liberalization, for example, has coincided with better preservation of forests. China had 511,807 more square kilometers of forest in 2015 than it did in 1990. Once a country reaches around $4,500 in GDP per capita, forest area starts to rebound. This is called the “forest transition” or, more broadly, the “environmental Kuznets curve.”

Many other such reasons for optimism exist. Yet the new report’s “2050 scenario finds a world in social breakdown and outright chaos,” David Spratt, the research director at the Breakthrough National Centre for Climate Restoration, told Vice.

Not to be outdone in pessimism, Congresswoman Alexandria Ocasio-Cortez has predicted that “the world is going to end in 12 years” without urgent action, rather than in 31 years’ time.

In the year 353, a bishop called Hilary of Poitiers also predicted that the world would end in just 12 years, in 365. It is a safe bet that Congresswoman Ocasio-Cortez’s forecast will end up as inaccurate as his was.

Environmental challenges should be taken seriously. And just as with so many other problems humanity has faced, environmental problems should be solvable given the right technology and spreading prosperity. The world will still exist a dozen years from now.

This first appeared in CapX.

Blog Post | Innovation

Ridley: Bureaucracies Stifle Innovation and Progress

We need to promote a new regulatory culture based on permissionless innovation.

While the world economy continues to grow at more than 3 per cent a year, mature economies, from Europe to Japan, are coagulating, unable to push growth above a sluggish pace. The reason is that we have more and more vested interests against innovation in the private as well as the public sector.

Continuing prosperity depends on enough people putting money and effort into what the economist Joseph Schumpeter called creative destruction. The normal state of human affairs is what the jurist Sir Henry Maine called a “status” society, in which income is assigned to individuals by authority. The shift to a “contract” society, in which people negotiate their own rewards, was an aberration and it’s fading. I am writing this from Amsterdam and am reminded that we caught the idea off the Dutch, whose impudent prosperity so annoyed the ultimate status king, Louis XIV.

In most western economies, it is once again more rewarding to invest your time and effort in extracting nuggets of status wealth, rather than creating new contract wealth, and it has got worse since the great recession, as zombie firms kept alive by low interest rates prevent the recycling of capital into new ideas. A new book by two economists, Brink Lindsey and Steven Teles, called The Captured Economy: How the Powerful Enrich Themselves, Slow Down Growth, and Increase Inequality, argues that “rent-seeking” behaviour — the technical term for extracting nuggets — explains the slow growth and rising inequality in the US.

They make the case that, in four areas, there is ever more opportunity to live off “rents” from artificial scarcity created by government regulation: financial services, intellectual property, occupational licensing and land use planning: “The rents enjoyed through government favouritism not only misallocate resources in the short term but they also discourage dynamism and growth over the long term.”

Here, too, hidden subsidies ensure that financial services are a lucrative closed shop; patents and copyrights reward the entertainment and pharmaceutical industries with monopolies known as blockbusters; occupational licensing gives those with requisite letters after their name ever more monopoly earning power; and planning laws drive up the prices of properties. Such rent seeking redistributes wealth regressively — that is to say, upwards — by creating barriers to entry and rewarding the haves at the expense of the have-nots. True, the tax and benefit system then redistributes income back downwards just enough to prevent post-tax income inequality from rising. But government is taking back from the rich in tax that which it has given to them in monopoly.

As an author, my future grandchildren will earn (modest) royalties from my books thanks to lobbying by American corporations to extend copyright to an absurd 70 years after I am dead. Yet there is no evidence that patents and copyrights incentivise innovation, except in a very few cases. Indeed, say Lindsey and Teles, the evidence suggests that “rents that now accrue to movie studios, record companies, software producers, pharmaceutical firms, and other [intellectual property] holders amount to a significant drag on innovation and growth, the very opposite of IP law’s stated purpose.”

[Thomas Babington Macaulay MP summarised an early attempt to extend copyright in a debate thus: “The principle of copyright is this. It is a tax on readers for the purpose of giving a bounty to writers. The tax is an exceedingly bad one; it is a tax on one of the most innocent and most salutary of human pleasures; and never let us forget, that a tax on innocent pleasures is a premium on vicious pleasures.” A correspondent sends me the following details of this appalling saga: “Someone noted that there is a divergence in copyright term in the European Union. All the then member states protect works for the life of the author plus fifty years while West Germany alone protects works for the life of the author plus seventy years. Immediately the copyright publishers suggested this as something in need of harmonisation. But instead of harmonising down to the norm, all the member states were lobbied to harmonise up to the unique German standard. As a result, Adolf Hitler’s “Mein Kampf”, which was going out of copyright in 1995, was suddenly revived and protected as a copyrighted work throughout the European Union. Gilbert and Sullivan operettas whose copyright had been controlled by the stultifying hand of the D’Oyly Carte Opera Company found themselves in a position to once again stop anyone else performing Gilbert and Sullivan works or creating anything based upon them. It is not surprising that, following a brief flowering of new creativity when the Gilbert and Sullivan copyrights initially expired (e.g. Joseph Papp’s production of Pirates on Broadway and the West End stage), since their revival by the European Union harmonisation legislation their use has become effectively moribund. A generation of young people are growing up without knowing anything about Gilbert and Sullivan – an art form which, it can be argued, gave birth to the modern American and British musical theatre.”]

As for occupational licensing, Professor Len Shackleton of the University of Buckingham argues that it is mostly a racket to exploit consumers. After centuries of farriers shoeing horses, uniquely in Europe, a private member’s bill in 1975 gave the Farriers Registration Council the right to prosecute those who shod horses without its qualification.

Then there are energy prices. Lobbying by renewable energy interests has resulted in a system in which hefty additions are made to people’s energy bills to reward investors in wind, solar and even carbon dioxide-belching biomass plants. The rewards go mostly to the rich; the costs fall disproportionately on the poor, for whom energy bills are a big part of their budgets.

An example of how crony capitalism stifles innovation: Dyson found that the EU’s energy-labelling standards for vacuum cleaners were rigged in favour of German manufacturers. The European courts initially rebuffed Dyson’s attempts to challenge the rules, but Dyson won on appeal and then used freedom of information requests to uncover correspondence between a group of German manufacturers and the EU, while representations by European consumer groups were ignored.

So deeply have most businesses become embedded in government cronyism that it is hard to draw the line between private, public and charitable entities these days. Is BAE Systems or Carillion really a private enterprise any more than Oxford University, Oxfam, Oxfordshire county council or the NHS? All are heavily dependent on government contracts, favours or subsidies; all are closely regulated; all have well-paid senior managers extracting rent with little risk, and thickets of middle-ranking bureaucrats incentivised to resist change. Disruptive start-ups are rare as pandas; the vast majority work for corporate brontosaurs.

Capitalism and the free market are opposites, not synonyms. Some in the Tory party grasp this. Launching Freer, a new initiative to remind the party of the importance of freedom, two new MPs, Luke Graham and Lee Rowley, not only lambast fossilised socialism and anachronistic unions, but also boardrooms “peppered with oligarchical and monopolist cartels”.

One of the most insightful books of recent years was The Innovation Illusion by Fredrik Erixon and Björn Weigel, which argues that big companies increasingly spend their profits not on innovation but on share buybacks and other “rents”. Far from swashbuckling enterprise, much big business is “increasingly hesitant to invest and innovate”. Like Kodak and Nokia, they resist having to reinvent themselves even unto death. Microsoft “was too afraid of destroying the value of Windows” to go where software was heading.

As a result, globalisation, far from being a spur to change, is an increasingly conservative force. “In several sectors, the growing influence of large and global firms has increasingly had the effect of slowing down market dynamism and reducing the spirit of corporate experimentation”.

The real cause of Trump-Brexit disaffection is not too much change, but too little. We need to “radically reduce the restrictive effect of precautionary regulation” and promote a new regulatory culture based on permissionless innovation, Erixon and Weigel say. “Western economies have developed a near obsession with precautions that simply cannot be married to a culture of experimentation”. Amen.

This first appeared in The Times.