
Blog Post | Economic Growth

Romer and Nordhaus: Worthy Nobel Winners

Nordhaus showed that the price of light collapsed as a result of innovation. Romer showed that human potential for innovation is infinite.

World in the palm of hands of Nobel winners

On Monday, William Nordhaus and Paul Romer were awarded the Nobel Prize in Economics. Nordhaus, the Sterling Professor of Economics at Yale University, is best known for his work on economic modeling and climate change. Romer, who teaches at New York University, is a pioneer of endogenous growth theory, which holds that investments in human capital, innovation, and knowledge are significant contributors to economic growth.

The two American economists’ research is vital in showing the way people underestimate the progress humanity has already made and the likelihood that it will continue well into the future.

In his 1996 paper, Do Real-Output and Real-Wage Measures Capture Reality? The History of Lighting Suggests Not, Nordhaus looked at the economics of light. An open fire, he noted, produced a mere 0.00235 lumens per watt (a lumen is a measure of how much visible light is emitted by a source). Lumens per watt refers to the energy efficiency of lighting. A traditional 60-watt incandescent bulb, for example, produces 860 lumens.

A sesame lamp could produce 0.0597 lumens per watt; a sperm tallow candle, 0.1009; whale oil, 0.1346; and an early town gas lamp, 0.2464. An electric filament lamp, launched in 1883, achieved an unbelievable 2.6 lumens per watt and, thanks to subsequent technological improvements, delivered an astonishing 14.1667 lumens per watt by 1990. Yet that’s small beer compared to the compact fluorescent lightbulb, which delivered 68.2778 lumens per watt when it was launched in 1992.
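For readers who want a sense of the scale of these gains, a quick back-of-the-envelope calculation, using only the lumens-per-watt figures quoted above, shows how many times more efficient each technology was than an open fire:

```python
# Efficiency figures quoted above, in lumens per watt.
open_fire = 0.00235
filament_1883 = 2.6
filament_1990 = 14.1667
cfl_1992 = 68.2778

# How many times more efficient than an open fire?
print(f"1883 filament: {filament_1883 / open_fire:,.0f}x")  # 1,106x
print(f"1990 filament: {filament_1990 / open_fire:,.0f}x")  # 6,028x
print(f"1992 CFL:      {cfl_1992 / open_fire:,.0f}x")       # 29,054x
```

The 1992 compact fluorescent bulb was roughly 29,000 times more efficient than the open fires our ancestors relied on.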

William Nordhaus

Accompanying these amazing improvements in the efficiency of lighting was a collapse in its price, both in absolute terms and in terms of human labour. Nordhaus estimates that the price of light per 1,000 lumens was $785 in 1800. By 1992 that price had dropped to 23 cents (both figures are in 2018 dollars). That amounts to a reduction of 99.97 percent. Today, the monetary cost of lighting per 1,000 lumens is inconsequential.

Now, consider the price of lighting from the perspective of human labour. Prior to the Neolithic revolution, which put an end to our nomadic past and turned our species into agriculturalists, it took more than 50 hours of labor (mostly gathering wood) to “buy” 1,000 lumen hours of light. By 1800, it took about 5.4 hours.

By 1900, it took 0.22 hours. By 1992, 1,000 lumen hours required 0.00012 hours of human labor. That amounts to a reduction of close to 100 percent. As Tyler Cowen of George Mason University noted, Nordhaus found that “GDP figures understate the true extent of growth, and show[ed] that the relative price of bringing light to humans has fallen more rapidly than GDP growth figures alone might indicate.”
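Both headline reductions can be verified from the figures above with two lines of arithmetic:

```python
# Figures quoted in the text.
price_1800, price_1992 = 785.0, 0.23          # dollars per 1,000 lumens (2018 dollars)
hours_neolithic, hours_1992 = 50.0, 0.00012   # labor hours per 1,000 lumen hours

price_drop = (1 - price_1992 / price_1800) * 100
labor_drop = (1 - hours_1992 / hours_neolithic) * 100
print(f"Price reduction: {price_drop:.2f}%")    # 99.97%
print(f"Labor reduction: {labor_drop:.5f}%")    # 99.99976%
```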

Put differently, technological change, which is not fully captured in GDP figures, makes us underappreciate the tremendous advance in standards of living over that of our ancestors. Can that advance be sustained and, even, improved upon? That’s where Romer enters the picture.

Many people worry that rising standards of living are unsustainable. Economic growth, they fear, will lead to exhaustion of natural resources and civilizational collapse. Just last year, Stanford University biology professor Paul R. Ehrlich noted that “You can’t go on growing forever on a finite planet. The biggest problem we face is the continued expansion of the human enterprise … Perpetual growth is the creed of a cancer cell.”

The earth, it is true, is a closed system. One day, we might be able to replenish our resources from outer space by, for example, dragging a mineral-rich asteroid down to Earth. In the meantime, we have to make do with the resources we have. But, what exactly do we have?

Paul Romer

According to Romer, we do not know the full extent of our resources. That’s because what matters is not the total number of atoms on Earth, but the infinite number of ways in which those atoms can be combined and recombined. As he put it in his 2015 article “Economic Growth”:

Every generation has perceived the limits to growth that finite resources and undesirable side effects would pose if no new recipes or ideas were discovered. And every generation has underestimated the potential for finding new recipes and ideas. We consistently fail to grasp how many ideas remain to be discovered.

The difficulty is the same one we have with compounding. Possibilities do not add up. They multiply. … To get some sense of how much scope there is for more such discoveries, we can calculate as follows. The periodic table contains about a hundred different types of atoms. If a recipe is simply an indication of whether an element is included or not, there will be 100 x 99 recipes like the one for bronze or steel that involve only two elements. For recipes that can have four elements, there are 100 x 99 x 98 x 97 recipes, which is more than 94 million. With up to five elements, more than 9 billion. Mathematicians call this increase in the number of combinations ‘combinatorial explosion.’

Once you get to 10 elements, there are more recipes than seconds since the big bang created the universe. As you keep going, it becomes obvious that there have been too few people on earth and too little time since we showed up, for us to have tried more than a minuscule fraction of all the possibilities.
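Romer’s arithmetic is easy to reproduce. The sketch below counts ordered selections of distinct elements, exactly as the 100 x 99 x … products in the quote do, and checks the comparison with the age of the universe, assuming roughly 13.8 billion years since the big bang:

```python
import math

def recipes(n_elements: int, k: int) -> int:
    """Ordered selections of k distinct elements out of n, as in Romer's products."""
    return math.perm(n_elements, k)

print(recipes(100, 2))    # 9,900
print(recipes(100, 4))    # 94,109,400 -> "more than 94 million"
print(recipes(100, 5))    # 9,034,502,400 -> "more than 9 billion"

# Seconds since the big bang, assuming ~13.8 billion years.
seconds_since_big_bang = 13.8e9 * 365.25 * 24 * 3600   # ~4.35e17
print(recipes(100, 10) > seconds_since_big_bang)       # True
```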

Figuring out the availability of resources, therefore, is not about measuring the quantity of resources, as engineers do. It is about looking at the prices of resources, as economists do. In a competitive economy, humanity’s knowledge about the value of something tends to be reflected in its price. As new knowledge emerges, prices change accordingly.

Nordhaus showed that the price of light collapsed as a result of innovation. Romer showed that human potential for innovation is infinite. The two men are apostles of human progress and well deserving of the highest accolade that the discipline of economics can bestow.

Blog Post | Cost of Technology

Appliances Contribute to Human Progress—but Regulations Threaten Their Affordability

The environmentalist regulatory agenda is targeting life-saving home appliances.

Summary: Home appliances have drastically improved human life, from preventing heat-related deaths with air conditioning to making household tasks more efficient with washing machines and refrigerators. Initially luxury items, many appliances have become affordable and accessible to most households thanks to free-market innovation. However, regulations driven by environmentalist ideology now increasingly threaten the affordability and accessibility of these essential devices, particularly for the lower-income families who need them most.


Human Progress has devoted a considerable amount of attention to home appliances—and for good reason, given the tremendous difference they have made in our lives. Whether it is the heat-related deaths averted by air conditioning, the foodborne illness prevented by refrigeration, the improvements in indoor air quality enabled by gas or electric stoves, or the liberation of women worldwide facilitated by washing machines and other labor-saving devices, these appliances have improved the human condition considerably over the past century or so.

Of course, the benefits of home appliances accrue only to those who can afford them, and on that count, the trends have been very positive. Although many appliances started as luxury items within reach of no more than a wealthy few, they didn’t stay that way for long. For example, the first practical refrigerator was introduced in 1927 at a price that was prohibitive for most Americans, but by 1933, the price was already cut in half, and by 1944, market penetration had reached 85 percent of American households.

Other appliances have similarly spread to the majority of households, first in developed nations over the course of the 20th century and now in many developing ones. And the process continues with more recently introduced devices, such as personal computers and cellphones. Cato Institute adjunct scholar Gale Pooley has extensively documented the dramatic cost reductions for appliances over the past several decades. The reductions are especially striking when measured by the declining number of working hours at average wages needed to earn their purchase price. For example, the “time price” of a refrigerator dropped from 217.57 hours in 1956 to 16.44 hours in 2022, a 92.44 percent decline.
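Pooley’s “time price” is simply the money price divided by hourly compensation, so the percentage decline follows directly from the two figures quoted above:

```python
# Time prices quoted from Gale Pooley's analysis, in hours of work per refrigerator.
tp_1956 = 217.57
tp_2022 = 16.44

decline = (1 - tp_2022 / tp_1956) * 100
print(f"Time-price decline: {decline:.2f}%")   # 92.44%
```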

Home appliances are a free-market success story. Virtually every one of them was developed and introduced by the private sector. These same manufacturers also succeeded in bringing prices down over time, all while maintaining and often improving on quality.

If left to the same free-market processes that led to the development and democratization of these appliances, we would expect continued good news. Unfortunately, in the United States and other countries, many appliances are the target of a growing regulatory burden that threatens affordability as well as quality. Much of this is driven by an expansive climate change agenda that often supersedes the best interests of consumers, including regulations in the United States and other nations that could undercut and possibly negate the positive trends on appliances in the years ahead.

Air Conditioners

Many appliances are time-savers, but air conditioning is a lifesaver. According to one study, widespread air conditioning in the United States has averted an estimated 18,000 heat-related deaths annually. Beyond the health benefits, learning and economic productivity also improve substantially when classrooms and workplaces have air-conditioned relief from high temperatures. Yet air conditioning is often denigrated as an unnecessary extravagance that harms the planet through energy use and greenhouse gas emissions. As a result, air conditioning faces a growing list of regulations, the cumulative effect of which threatens to reverse its declining time price.

In particular, the chemicals used as refrigerants in these systems have been subjected to an ever-increasing regulatory gauntlet that has raised their cost. This includes hydrofluorocarbons (HFCs), the class of refrigerants most common in residential central air conditioners. HFCs have been branded as contributors to climate change and are now subject to stringent quotas agreed to at a 2016 United Nations meeting in Kigali, Rwanda. The United States and European Union also have domestic HFC restrictions that mirror the UN ones. These measures have raised the cost of repairing an existing air conditioner as well as the price of a new system.

The regulatory burden continues to grow, including a US Environmental Protection Agency requirement that all new residential air conditioners manufactured after January 1, 2025, use certain agency-approved climate-friendly refrigerants. Equipment makers predict price increases of another 10 percent or more. Installation costs are also likely to rise since the new refrigerants are classified as mildly flammable, which necessitates several precautions when handling them.

Concurrently, new energy efficiency requirements for air conditioners also add to up-front costs. For example, a US Department of Energy rule for central air conditioners that took effect in 2023 has raised prices by between $1,000 and $1,500. This unexpectedly steep increase will almost certainly exceed the value of any marginal energy savings over the life of most of these systems.

The cumulative effect of these measures is particularly burdensome for low-income homeowners and in some cases will make a central air conditioning system prohibitively expensive.

Refrigerators

Refrigerators are technologically similar to air conditioners and thus face many of the same regulatory pressures, including restrictions on the most commonly used refrigerants as well as energy use limits. Fortunately, refrigerators have come down in price so precipitously that the red tape is less likely to impact their near universality in developed-nation households. However, for a developing world where market penetration of residential refrigerators is still expanding, the regulatory burden could prove to be a real impediment.

In addition to environmental measures adding to the cost of new refrigerators, the international community is also targeting used ones. Secondhand refrigerators from wealthy nations are an affordable option for many of the world’s poorest people. For millions of households, a used refrigerator is the only real alternative to not having one at all. However, activists view this trade as an environmental scourge and are taking steps to end it.

Natural Gas-Using Appliances

Several appliances can be powered by natural gas or electricity, particularly heating systems, water heaters, and stoves. The gas versions of these appliances are frequently the most economical to purchase, and they are nearly always less expensive to operate given that natural gas is several times cheaper than electricity on a per unit energy basis. However, natural gas is a so-called fossil fuel and thus a target of climate policymakers who are using regulations to tilt the balance away from gas appliances and toward electric versions. A complete shift to electrification has been estimated to cost a typical American home over $15,000 up-front while raising utility bills by more than $1,000 per year.

The restrictions on gas heating systems are the most worrisome example, especially since extreme cold is even deadlier than extreme heat. Residential gas furnaces have been subjected to a US Department of Energy efficiency regulation that will effectively outlaw the most affordable versions of them. And many European nations have imposed various restrictions on gas heat in favor of electric heat pumps that are far costlier to purchase and install.

There are more examples of home appliances subject to increasing regulatory restrictions. Indeed, almost everything that plugs in or fires up around the home is a target, justified in whole or in part by the need to address climate change. The cumulative effect of these measures poses a real threat to the century-long success story of increased appliance affordability.

Blog Post | Space

Space Is the New Free-Market Frontier

Revisiting a visionary book published in the same year SpaceX was founded.

Summary: The 2002 book Space: The Free-Market Frontier shows how entrepreneurial capitalism can overcome the stagnation of government-led space travel. In retrospect, this collection of forward-thinking papers correctly predicted the vital role of private enterprise in advancing space exploration, as shown by SpaceX’s achievement of drastically reducing the cost of space launches. While some forecasts did not materialize, such as space tourism’s rapid growth, the book accurately anticipated the transformative impact of market-driven innovation on the space industry.


Space: The Free-Market Frontier is an exceptional book that presents a collection of farsighted papers from a Cato Institute conference in March 2001. The book was published in 2002, the same year that Elon Musk founded SpaceX and launched the space travel revolution. It is fascinating to revisit this book 22 years later to see what the renowned authors got right—and what they got wrong.

First, the book’s fundamental thesis has proven to be correct: Private space travel is the cornerstone of the future of space exploration. Entrepreneurial spirit and capitalism have rescued space travel from the cul de sac in which it had become trapped following the conclusion of the Apollo program. The question posed by the 21 contributors to this volume was: “What has happened in the past three decades to delay humankind’s full exploitation of space, and what can be done to change the situation?”

In one paper, Robert W. Poole Jr., founder of the Reason Foundation, identified the main stumbling block: “the central planning approach: the assumption that engineers and government planners can devise the one best way to launch payloads to space . . . and that it is simply a question of pouring enough funding into the chosen model for long enough to make it succeed.”

Buzz Aldrin, the Apollo 11 lunar module pilot for the first human landing on the moon, was equally critical: “The fundamental building block of the US space program is the transportation capability that provides access to space. With the exception of the Space Shuttle, American space access capabilities have changed little in the past four decades and no progress has been made in solving the greatest obstacle to space development—the high costs of space access.”

However, disillusionment had also taken hold with regard to the shuttle program, particularly as promises of cost reductions at the beginning of the program were never fulfilled. Tidal W. McCoy, chairman of the Space Transportation Association, criticized “the enormous cost of maintaining the Shuttle, not to mention the cost of launch alone, which is close to $500 million every time.” That equated to about $10,000 to $12,000 per pound of cargo per launch, which was comparable to the costs associated with the Apollo flights. The transportation of one pound of payload was approximately 10 times more expensive than optimistic forecasts had initially predicted and no less than that of traditional, nonreusable rockets.
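Those per-pound figures are consistent with the quoted launch cost. A rough check, assuming a shuttle payload to low Earth orbit of about 50,000 pounds (our illustrative assumption, not a figure from the book):

```python
launch_cost = 500e6     # dollars per shuttle launch, as quoted by McCoy
payload_lb = 50_000     # assumed shuttle payload to low Earth orbit, in pounds

print(f"${launch_cost / payload_lb:,.0f} per pound")   # $10,000 per pound
```

A smaller real-world payload per flight pushes the figure toward the $12,000 end of the quoted range.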

Following the deaths of seven astronauts in the first shuttle accident in 1986, another seven astronauts lost their lives in a second accident in 2003, just one year after this book was published. The shuttle program was ultimately discontinued in 2011. The subsequent nine years marked a low point in American space exploration, as the United States was forced to depend on outdated Russian spacecraft to transport its astronauts to the International Space Station.

The X-33 and X-34 projects, which had cost over $1 billion, were both canceled in 2001. The X-33 was an experimental spaceplane developed by NASA and Lockheed Martin in the 1990s as a prototype for a reusable space transportation system called VentureStar. The project was abandoned before it ever flew. The X-34 was an unmanned hypersonic aircraft developed by NASA, also in the 1990s, designed to test cost-effective reusable spaceflight technologies, but after successful ground tests and several delays, it was terminated. “X-33 and X-34 both demonstrated that NASA has a less-than-stellar track record in picking the right technologies,” complained Marc Schlather, director of the Senate Space Transportation Roundtable.

What alternatives did the book propose? Robert W. Poole suggested: “Instead of defining in great detail the specifications of a new launch vehicle . . . these government agencies would simply announce their willingness to pay US$X per pound for payloads delivered to, say, low Earth orbit (LEO). In other words, instead of the typical government contracting model, which has failed to change the cost-plus corporate culture of aerospace/defense contractors, NASA and the other government agencies with space transportation needs would purchase launch services.”

This is exactly what happened over the next few years. In 2002, Musk established SpaceX and started to design his own rockets, free of the constraints of NASA’s strict guidelines and specifications. Musk rejected the “cost-plus” business model, which had encouraged companies to inflate costs because that allowed them to maximize their profits. Instead, Musk sold his services to NASA at a fixed price, as had been suggested in this book. This approach incentivized Musk to cut costs, a goal he achieved. While launch costs had remained stagnant for nearly four decades, Musk has managed to slash them by an impressive 80 percent so far, and it looks as if he will succeed in achieving further dramatic cost reductions in years to come.

This was precisely what Dana Rohrabacher, chair of the House Subcommittee on Space and Aeronautics, predicted in his paper: “We all know that the costs of going into space are very high. We also know that the private sector has proven again and again that it can bring the costs of goods and service down and the quality up. Therefore, an obvious way to reduce the costs of access to and enterprise in space is to involve the private sector as much as possible.”

Doris Hamill, Philip Mongan, and Michael Kearney from the company SpaceHab called for a paradigm shift in their article “Space Commerce: An Entrepreneur’s Angle” and correctly predicted: “This approach to attracting commercial users does not require the space agencies to perform market development activities, to command its contractors to find efficiencies that will undercut the contractor’s revenue stream or to establish limits on how much they will subsidize commercial research. They only need to agree to purchase commercial services that meet their research needs within their budgets. The rest will happen by itself.” And that is exactly how it happened.

Of course, in addition to many accurate forecasts, the volume also contains predictions that did not come to fruition. For example, Aldrin predicted that the number of satellites launched into space would not increase significantly and that space tourism would emerge as a major industry. We know 22 years later that things turned out differently, but as space travel expert Eugen Reichl points out, “If you take SpaceX out of the equation, then Aldrin was not all that far off the mark. SpaceX is in a league of its own, far ahead of other countries’ and manufacturers’ space operations. Today, SpaceX launches roughly two-thirds to three-quarters of all satellites worldwide, and they are mostly Starlinks. SpaceX currently sends more than 2,000 satellites into orbit and beyond every year. As far as Aldrin’s perspective on space tourism is concerned, its time is yet to come. Richard Branson led the industry into a dead end with SpaceShipTwo, which used the only partially scalable hybrid engine of SpaceShipOne. It was simply the wrong concept. There were also two serious accidents with a total of four fatalities.” Nevertheless, the arguments put forward in Space: The Free-Market Frontier in favor of private space travel as an attractive business sector are fundamentally convincing.

It is certainly possible that some of the predictions outlined in the book are still on their way. Overall, the volume shows that the paradigm shift initiated with the founding of SpaceX was correctly predicted even before the company’s inception. “What the United States needs,” wrote Poole, “is a policy toward space that is consistent with free markets and limited government.”

Blog Post | Cost of Services

Vision Abundance Doubles on the LASIK Eye Surgery Market

The time price of LASIK eye surgery fell by over 50 percent since 1998.

Summary: Time price calculations show that LASIK surgery costs have fallen significantly since 1998. Advancements in LASIK technology, such as the transition to bladeless methods and personalized treatments, have enhanced both safety and efficacy. Dr. Gholam A. Peyman’s pivotal patent in 1988 laid the foundation for LASIK innovation, contributing to its increased affordability and accessibility, especially in countries like China and India.


According to Market Scope, the typical cost for LASIK surgery in 2023 was $4,492. This is up slightly from the 1998 price of $4,360. Let’s calculate and compare the time prices to see the true price difference. Unskilled hourly compensation in 1998 was around $7.75, indicating a time price of 562.6 hours. Unskilled hourly compensation is closer to $16.51 today, indicating a time price of 272.1 hours. The time price has fallen 51.6 percent. You get 2.07 eyes corrected today for the time it took to earn the money to correct one in 1998. LASIK has become 107 percent more abundant.
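The abundance arithmetic works like this: a time price is the money price divided by hourly compensation, and the abundance multiplier is the ratio of the old time price to the new one. Using the two time prices above:

```python
# Time prices quoted in the text, in hours of unskilled work per procedure.
tp_1998 = 562.6
tp_2023 = 272.1

fall = (1 - tp_2023 / tp_1998) * 100
abundance = tp_1998 / tp_2023

print(f"Time price fell {fall:.1f}%")                  # 51.6%
print(f"Abundance multiplier: {abundance:.2f}")        # 2.07
print(f"{(abundance - 1) * 100:.0f}% more abundant")   # 107% more abundant
```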

LASIK is the acronym for laser-assisted in situ keratomileusis. Keratomileusis is the medical term for corneal reshaping. Clearsight.com reports:

LASIK technology has significantly advanced since its inception. The initial blade-based approach has been replaced by the bladeless method, using femtosecond lasers for increased precision. Wavefront and topography-guided technology now allow for personalized treatment, while sophisticated eye-tracking systems enhance the surgery’s accuracy and safety. The remarkable advancements have not only improved visual acuity but also enhanced the overall quality of visual perception, offering patients the ability to see the world around them more clearly and vividly.

While thousands of ophthalmologists and researchers from all over the world have been involved in advancing the technology, Iranian-born immigrant to the United States Dr. Gholam A. Peyman was awarded the key patent in 1988. He holds over 200 US patents, including for novel medical devices, intraocular drug delivery, surgical techniques, and new methods of diagnosis and treatment. In 2011, President Barack Obama awarded Peyman the National Medal of Technology and Innovation.

Continuous innovation in LASIK technology is making vision correction safer, faster, more precise, and more affordable. If you want to save some money and take a bit more risk, the procedure is around $1,600 in China and under $1,000 in India. China performs the most vision correction procedures on the planet.

Remember, the learning curve ordains that with every doubling of production, costs per unit fall between 20 percent and 30 percent. This is because we discover valuable new knowledge every time we perform the procedure.
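As a sketch of how that compounding works (assuming, for illustration, a 25 percent cost decline per doubling):

```python
def unit_cost(initial_cost: float, doublings: int, rate: float) -> float:
    """Unit cost after a number of doublings of cumulative output,
    given a learning rate (fractional cost decline per doubling)."""
    return initial_cost * (1 - rate) ** doublings

# After 3 doublings (8x cumulative output) at a 25% learning rate,
# a $100 procedure cost falls to $42.19.
print(unit_cost(100.0, 3, 0.25))   # 42.1875
```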

This graph shows the level of abundance of LASIK in the US compared to the rest of the world.

As noted, since 1998, LASIK has become 107 percent more abundant in the United States, in contrast to hospital services, which have become 37.7 percent less abundant. Why the huge difference? LASIK has been relatively free to innovate. Perhaps more important, health insurance does not pay for this procedure, and LASIK is globally competitive. We also note that elective procedures have enjoyed much greater abundance growth than insurance-covered surgeries.

When entrepreneurs are free to innovate and compete, prices fall and quality increases. The opposite happens when governments and bureaucrats step in to protect the status quo. Imagine where we would be today if the manufacturers of eyeglasses had prevented the innovation of contact lenses, or if the contact lens industry had prevented LASIK.

This article was published at Gale Winds on 2/28/2024.

Blog Post | Adoption of Technology

Bitcoin Brought Electricity to Countries in the Global South

It won’t be the United Nations or rich philanthropists that electrifies Africa.

Summary: Energy is indispensable for societal progress and well-being, yet many regions, particularly in the Global South, lack reliable electricity access. Traditional approaches to electrification, often reliant on charity or government aid, have struggled to address these issues effectively. However, a unique solution is emerging through bitcoin mining, where miners leverage excess energy to power their operations. This approach bypasses traditional barriers to energy access, offering a decentralized and financially sustainable solution.


Energy is life. For the world and its inhabitants to live better lives—freer, richer, safer, nicer, and more comfortable lives—the world needs more energy, not less. There are no rich, low-energy countries and no poor, high-energy countries.

“Energy is the only universal currency; it is necessary for getting anything done,” in Canadian-Czech energy theorist Vaclav Smil’s iconic words.

In an October 2023 report for the Alliance for Responsible Citizenship on how to bring electricity to the world’s poorest 800 million people, Robert Bryce, author of A Question of Power: Electricity and the Wealth of Nations, sums it up as follows:

Electricity matters because it is the ultimate poverty killer. No matter where you look, as electricity use has increased, so has economic growth. Having electricity does not guarantee wealth. But its absence almost always means poverty. Indeed, electricity and economic growth go hand in hand.

To supply electricity on demand to many of those people, especially in the Global South, grids need to be built in the first place and then have enough extra capacity to ramp up production when needed. That requires overbuilding, which is expensive and wasteful, and the many consumers of the Global South are poor.

Adding to the trouble are the abysmal formal institutions of property rights and rule of law in many African countries, and the lay of the land becomes familiar: corruption and fickle property rights make foreign, long-term investments basically impossible; poor populations mean that local purchasing power is low and usually not worth the investment risk.

What’s left are slow-moving charity and bureaucratic government development aid, both of which suffer from terrible incentives, lack of ownership, and running into their own sort of self-serving corruption.

In “Stranded,” a long-read for Bitcoin Magazine, the Human Rights Foundation’s Alex Gladstein recounted his journey into the mushrooming electricity grids of sub-Saharan Africa: “Africa remains largely unable to harness these natural resources for its economic growth. A river might run through it, but human development in the region has been painfully reliant on charity or expensive foreign borrowing.”

Stable supply of electricity requires overbuilding; overbuilding requires stable property rights and rich enough consumers over which to spread out the costs and financially recoup the investment over time. Such conditions are rare. Thus, the electricity-generating capacity won’t be built in the first place, and most of Africa becomes dark when the sun sets.

Gladstein reports that a small hydro plant in the foothills of Mount Mulanje in Malawi, even though it was built and financed by the Scottish government, still supplies exorbitantly expensive electricity—around 90 cents per kilowatt hour—with most of its electricity-generating capacity going to waste.

What if there were an electricity user, a consumer-of-last-resort, that could scoop up any excess electricity and disengage at a moment’s notice if the population needed that power for lights and heating and cooking? A consumer that could co-locate with the power plants and thus avoid having to build out miles of transmission lines.

With that kind of support consumer—guaranteeing revenue by swallowing any excess generation, even before any local homes have been connected—the financial viability of the power plants could make the construction actually happen. It pays for itself right off the bat, regardless of transmissions or the disposable income of nearby consumers.

If so, we could bootstrap an electricity grid in the poorest areas of the world where neither capitalism nor central planning, neither charity worker nor industrialist, has managed to go. That consumer of last resort could accelerate electrification of the world’s poorest and monetize their energy resilience. That is why Gladstein went to Africa: to investigate the burgeoning industry of bitcoin miners electrifying the continent.

Bitcoin Saves the World: Energy-Poverty Edition

Africa is used to large enterprises digging for minerals. The bitcoin miners springing forth all over the continent are different. They don’t need to move massive amounts of land and soil and don’t pollute nearby rivers. They operate by running machines that guess large numbers, which is the cryptographic method that secures bitcoin and confirms its transaction blocks. All they need to operate is electricity and an internet connection.
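For the curious, here is a toy sketch of that guessing game in Python. It is a deliberate simplification: real bitcoin mining double-hashes an 80-byte block header against a numeric difficulty target, and the function name and sample data below are made up for illustration:

```python
import hashlib

def mine(block_data: bytes, difficulty_zeros: int) -> int:
    """Guess nonces until the SHA-256 hash starts with enough zero hex digits."""
    prefix = "0" * difficulty_zeros
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce
        nonce += 1

# A low difficulty so the search finishes in a fraction of a second;
# each extra required zero multiplies the expected work by 16.
nonce = mine(b"example block", 4)
print(nonce)
```

The only inputs the process needs are electricity and a network connection, which is why it can run anywhere power is generated.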

By co-locating with electricity generation, bitcoin miners remove some major obstacles to bringing power to the world’s poorest billion. In the rural area of Malawi that Gladstein visited, there was nowhere to offload the expensive hydro power and no financing to connect more households or build transmission lines to faraway urban areas: “The excess electricity couldn’t be sold, so the power stations built machines that existed solely to suck up the unused power.”

Bitcoin miners are in a globally competitive race to unlock patches of unused energy everywhere, so in came Gridless, an off-grid bitcoin miner with facilities in Kenya and Malawi. Any excess power generation in these regions is now comfortably eaten up by the company’s onsite mining machines—the utility company receiving its profit share straight into a bitcoin wallet under its own control, with no banks or governments blocking or delaying international payments and no surprise government currency devaluations undercutting its purchasing power.

No aid, no government, no charity; just profit-seeking bitcoiners trying to soak up underused energy. Gladstein observes:

One night during my visit to Bondo, Carl asked me to pause as the sunset was fading, to look at the hills around us: the lights were all turning on, all across the foothills of Mt. Mulanje. It was a powerful sight to see, and staggering to think that Bitcoin is helping to make it happen as it converts wasted energy into human progress. . . .

Bitcoin is often framed by critics as a waste of energy. But in Bondo, like in so many other places around the world, it becomes blazingly clear that if you aren’t mining Bitcoin, you are wasting energy. What was once a pitfall is now an opportunity.

For decades, our central-planning mindset had us “help” the Global South by directing resources there—building things we thought Africans needed, sending money to (mostly) corrupt leaders in the hopes that schools would be built or economic growth would be kick-started. We squandered billions on goodhearted nongovernmental organization projects.

Even for an energy commentator as astute and serious as Bryce, not once in his 40-page report on how to electrify the Global South did it occur to him that bitcoin miners—the very people who are turning the lights on for the poorest in the world—could play a crucial role in achieving that.

It’s so counterintuitive and yet, once you see it, so obvious. In the end, says Gladstein, it won’t be the United Nations or rich philanthropists that electrifies Africa “but an open-source software network, with no known inventor, and controlled by no company or government.”