
Blog Post | Leisure

Centers of Progress, Pt. 34: Kyoto (The Novel)

The courtly competition in Kyoto produced groundbreaking artistic innovations, including the world's first novel.

Today marks the thirty-fourth installment in a series of articles by HumanProgress.org called Centers of Progress. Where does progress happen? The story of civilization is in many ways the story of the city. It is the city that has helped to create and define the modern world. This bi-weekly column will give a short overview of urban centers that were the sites of pivotal advances in culture, economics, politics, technology, etc.

The thirty-fourth Center of Progress is Kyoto during the Heian (meaning “peace”) period (794–1185 AD), a golden age of Japanese history that saw the rise of a distinctive high culture devoted to aesthetic refinement and the emergence of many enduring artistic styles. As the home of the imperial court, Kyoto was the political battleground where noble families vied for prestige by patronizing the best artists. This courtly competition produced groundbreaking innovations in many areas, including literature, and birthed a new literary form that would redefine fiction-writing: the novel.

Today, Kyoto remains the cultural heart of Japan. Its well-preserved Buddhist temples, Shinto shrines, and royal palaces attract tourists from around the world, and its Zen gardens have had a profound influence on the art of landscaping. Several of its historic sites together comprise a UNESCO World Heritage Site. Traditional crafts represent an important part of the city’s economy, with kimono-weavers, sake-brewers, and many other renowned local artisans continuing to produce goods using heritage techniques.

In other ways, Kyoto is on the cutting edge. The city is a hub of the information technology and electronics industries, houses the headquarters of the video game company Nintendo, and contains some 40 institutions of higher education, including the prestigious Kyoto University. The population of Kyoto now exceeds 1.45 million people, and the broader metropolitan region, including Osaka and Kobe, is the second-most-populous area in Japan.

Surrounded on three sides by mountains, Kyoto has been renowned for its natural beauty since ancient times, from the famous Sagano Bamboo Grove in the city’s west to the blossoming cherry trees along the banks of the Kamo River. That natural beauty helped earn the city its nickname, “Hana no Miyako,” the City of Flowers.

Archeological evidence suggests that humans have lived in the area since the Paleolithic period. While few relics remain from the city’s beginnings, some of Kyoto’s architecture, such as the Shinto Shimogamo Shrine, dates to the 6th century AD. Japanese architecture relies heavily on wood, which deteriorates quickly, so the original building materials have not survived. However, the longstanding Japanese tradition of continuously revitalizing wooden structures with rigorous respect for their initial form “has ensured that what is visible today conforms in almost every detail with the original structures.” The most famous example of this architectural renewal is the Shinto shrine in Ise, 80 miles to Kyoto’s southeast, which has been completely dismantled and rebuilt every two decades for more than a millennium. During the Heian era, that shrine became known for imperial patronage, with the emperor often sending messengers from Kyoto to pay respects to the sacred site.

Kyoto was officially established in the year 794. Emperor Kanmu (735–806 AD), likely feeling threatened by the growing power of Buddhist religious leaders, moved his court away from the great monasteries in the old capital of Nara. Initially, in AD 784, he moved the capital to Nagaoka-kyō, but a series of disasters struck after the move, including the assassination of a key imperial advisor, the death of the emperor’s mother and three of his wives (including the empress), drought alternating with flooding, earthquakes, famine, a smallpox epidemic, and a severe illness that sickened the crown prince. The government’s official Divination Bureau blamed that last misfortune on the vengeful ghost of the emperor’s half-brother Sawara, who had starved himself to death after a politically motivated imprisonment.

While a popular narrative holds that Kanmu abandoned Nagaoka-kyō to flee the purported ghost, there may be a less spooky explanation. In AD 793, the emperor’s advisor Wake no Kiyomaro (733–799 AD), perhaps one of the best hydraulic engineers of the 8th century, may have convinced the emperor that flood-proofing Nagaoka-kyō would be more expensive than starting from scratch in a less flood-prone location.

Whatever the reason, in AD 794, Kanmu moved the capital again, erecting a new city along a grid pattern modeled after the illustrious Chinese Tang-dynasty (618–907 AD) capital of Chang’an. The lavish new capital cost a staggering three-fifths of Japan’s national budget at the time. Its layout strictly conformed to Chinese feng shui or geomancy, a pseudoscience that seeks to align manmade structures with the cardinal directions of north, south, east, and west in a precise way thought to bring good fortune. The imperial palace compound, enclosed by a large rectangular outer wall (the daidairi), was built in the city’s north and faced south. Fires presented a constant problem to the predominantly wooden complex, and, although rebuilt many times, the Heian Palace no longer exists. (The present Kyoto Imperial Palace, modeled on the Heian-period style, occupies a nearby location.)

From the Heian Palace’s main entrance emanated a large central thoroughfare, the monumental Suzaku Avenue. Over 260 feet wide, Suzaku Avenue ran through the center of the city to the enormous Rashōmon gate in the city’s south. That gate lent its name to Akira Kurosawa’s famous 1950 film about a contested murder, set at the end of the Heian era. In the north of the city, close to the imperial compound, substantial Chinese-style homes housed the nobility. The emperor named his pricey metropolis Heiankyō, meaning “Capital of Peace and Tranquility”; it is now known simply as Kyōto, meaning “Capital City.” (It retains that name although Tokyo succeeded it as Japan’s capital in 1868.)

The Heian period of Japanese history derives its name from the era’s capital city. The age lived up to its name, remaining relatively conflict-free until a civil war (the Genpei War of 1180–1185 AD) brought the period to a close. This long peace allowed the court to develop a culture devoted to aesthetic refinement.

For centuries, the aristocratic Fujiwara family not only dominated the politics of the court at Kyoto (marrying into the imperial line and producing many emperors) but also sought to steer the city’s culture, prioritizing art and courtly sophistication. The nobility competed to fund all manner of artworks, gaining prestige from association with the era’s greatest innovators in areas such as calligraphy, theater, song, sculpture, landscaping, puppetry (bunraku), dance, and painting.

The nobility also produced art themselves. “[T]he best poets were courtiers of middling rank,” noted Princeton University Japanese literature professor Earl Roy Miner. “The Ariwara family (or ‘clan’), the Ono family, and the Ki family produced many of the best poets” despite the Fujiwara family’s greater wealth and influence. The poet Ono no Michikaze (894–966 AD), for example, is credited with founding Japanese-style calligraphy.

It was in Kyoto that the court gradually stopped emulating Chinese society and developed uniquely Japanese traditions. For example, the Japanese Yamato-e painting tradition, noted for its use of aerial perspective and clouds to obscure parts of the depicted scene, competed with the Chinese-inspired kara-e painting tradition.

Perhaps above all, the Heian courtiers prized poetic and literary achievement. According to Amy Vladeck Heinrich, who directs the East Asia Library at Columbia University, “a person’s skill in poetry was a major criterion in determining his or her standing in society, even influencing political positions.” That was for good reason, as poetry played a large role in both courtly romance and diplomacy, with formal poetry exchanges strengthening ties between potential paramours as well as with other kingdoms.

The chief poetic form was the waka, from which the now better-known haiku was derived. Waka consist of thirty-one syllables, arranged in five lines, usually containing five, seven, five, seven, and seven syllables, respectively. One of the era’s greatest poets was the Kyoto courtier Ki no Tsurayuki (872–945 AD), co-compiler of the first imperially-sponsored poetry anthology and author of the first critical essay on waka. “The poetry of Japan has its roots in the human heart and flourishes in the countless leaves of words,” he wrote. “Because human beings possess interests of so many kinds it is in poetry that they give expression to the meditations of their hearts in terms of the sights appearing before their eyes and the sounds coming to their ears. Hearing the warbler sing among the blossoms and the frog in his fresh waters — is there any living being not given to song!” (The Japanese word for song can also mean poem.)
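The 5-7-5-7-7 pattern is concrete enough to check mechanically. Below is a minimal, illustrative Python sketch of such a check; the function names and the simplified mora-counting rule are assumptions made for illustration, not anything drawn from the Heian sources:

```python
# Illustrative sketch: testing a five-line poem against the
# 5-7-5-7-7 waka pattern. Simplified rule: each kana counts as one
# mora, except the small ya/yu/yo glides, which merge with the kana
# before them. (Real Japanese prosody has subtleties this ignores.)

SMALL_GLIDES = set("ゃゅょャュョ")

def count_morae(kana: str) -> int:
    """Count morae in a kana string under the simplified rule above."""
    return sum(1 for ch in kana if ch not in SMALL_GLIDES)

def is_waka(lines: list[str]) -> bool:
    """True if a five-line poem fits the 5-7-5-7-7 waka pattern."""
    return [count_morae(line) for line in lines] == [5, 7, 5, 7, 7]

# "Kyōto" in hiragana (きょうと) is three morae: kyo-u-to.
assert count_morae("きょうと") == 3
```

Heian poets did sometimes exceed a line’s count by a syllable (an accepted irregularity known as ji-amari), so a strict checker like this one would flag some classic poems.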

A favorite subject for Kyoto’s artists and writers was nature, especially as it changed with the seasons. As the Metropolitan Museum of Art puts it, “Kyoto residents were deeply moved by the subtle seasonal changes that colored the hills and mountains surrounding them and regulated the patterns of daily life.” 

Another recurrent theme was the impermanence of beauty and transience of life. Life in Kyoto was, after all, despite its relative opulence, extremely short. The Japanese historian Kiyoyuki Higuchi has written, “actual living conditions in and around the imperial court were, by today’s standards, unimaginably unsanitary and unnatural. According to books on the history of epidemic disease and medical treatment, aristocratic women, on average, died at age 27 or 28, while men died at age 32 or 33. In addition to the infant mortality rate being extremely high, the rate of women dying at childbirth was also high … Looking at the specific causes of death at the time, tuberculosis (possibly including pneumonia cases) accounted for 54 percent, beriberi for 20 percent, and diseases of the skin (including smallpox) for 10 percent.”

One of the period’s most iconic poems, by Ono no Komachi (c. 825–c. 900 AD), a courtier famed for her beauty, focuses on the fleeting nature of her looks:

花の色は          Hana no iro wa        The flowers’ color
うつりにけりな    utsuri ni keri na     already faded away
いたづらに        itazura ni            so meaninglessly
わが身世にふる    waga mi yo ni furu    I’ve aged, passing through the world
ながめせしまに    nagame seshi ma ni    gazing blankly at the rain

The poem exemplifies the era’s love of wordplay, and its multiple puns make it impossible to translate precisely: the verb furu can mean either “to age” or “to rain,” and the word nagame can mean either “lengthy rain” or “vacant gaze.”

When Kyoto was founded, Japanese was usually written using the Chinese writing system, which was not ideal: Chinese characters could not easily convey aspects of the Japanese language that were not present in Chinese. But in the 9th century, in Kyoto, the court women, who were discouraged from studying Chinese, developed a simplified phonetic syllabary better suited to the nuances of the Japanese language. Their system, hiragana, not only helped to spread female literacy but also gave writers far more flexibility, and much of the era’s best writing came to be done by women. Today, Japanese is written using a combination of Chinese characters (kanji), hiragana, and katakana (another simplified syllabary, developed by monks).

Perhaps the best example of the feminine influence on Heian-period Japanese literature is the competition between two of Emperor Ichijō’s (980–1011 AD) wives, Empress Teishi (977–1001 AD) and Empress Shōshi (988–1074 AD), who each sought to outdo the other and place her own son on the throne. They fought not with violence but with the arts: each tried to fill her household with superior poets and artists, thus heightening her relative prestige at court. 

These dueling empresses brought about a literary rivalry for the ages between two noblewomen in their service, who went by the pen names Sei Shōnagon (c. 966–c. 1025 AD) and Murasaki Shikibu (c. 978–c. 1014 AD). Shōnagon was a lady-in-waiting to Empress Teishi, and Murasaki was a lady-in-waiting to Empress Shōshi. Each may have been summoned to serve her respective empress specifically because of her literary talent.

In the year 1002, Shōnagon completed The Pillow Book, a compilation of poetry, observations, and musings now deemed a masterpiece of classical Japanese literature and among the best sources of information on Heian court life. Murasaki fired back with a masterpiece of her own and wrote scathing critiques of Shōnagon’s writing and personality. By the year 1008, at least part of Murasaki’s The Tale of Genji was in circulation among Kyoto’s aristocracy.

The Tale of Genji, which chronicles the youth, romances, and eventual death of a handsome and frequently lovestruck prince, is often considered the world’s first novel. The Encyclopedia Britannica notes that The Tale of Genji remains “the finest work not only of the Heian period but of all Japanese literature and merits being called the first important novel written anywhere in the world.” 

The Tale of Genji contains many of the elements that define novels to this day: it is a lengthy work of prose fiction with a central character and minor characters, narrative events, parallel plots, and, of course, conflict. The novel also features around 800 waka, which the characters often use to communicate. The story became an immediate hit among the nobility, inspiring numerous paintings of its scenes.

While the novel’s focus is an idealized vision of courtly love, it also contains untimely deaths and other unpleasant details that would have been all too familiar to Kyoto’s courtiers. For example, there is no mention of bathing in The Tale of Genji, which sadly reflected Kyoto’s state of hygiene. As Higuchi points out:

[T]he custom of bathing was not widespread among the nobility of that time … Although beyond the imagination of people today, if a Heian noblewoman were to approach you, her body odor would likely be powerful. Moreover, whenever they caught colds, they would chew on raw garlic, increasing the odor level even more. A passage in Genji clearly illustrates this point: a woman writing a reply to a man asks that he please not stop by tonight since she reeks from eating garlic.

Kyoto’s greatest literary feud had a decisive victor. Shōnagon remains relatively unknown outside of Japan, and the empress she served died in childbirth in her early twenties. Murasaki’s writing has gone down in history, and the empress she served lived to see two of her sons become emperors. Today, an entire museum dedicated to The Tale of Genji stands in Uji just outside Kyoto.

The Heian period came to a close with the rise of samurai (hereditary military nobility) culture, and de facto rule of Japan passed from Kyoto’s refined albeit unbathed courtiers to warring military generals called shoguns.

To this day, the Japanese Imperial family still runs an annual poetry-writing contest. But whereas in the Heian era, typically only the nobility and monks had the time and education to compose poetry or prose, today, amateur writing is a popular pastime throughout Japan and the rest of the developed world.

Kyoto, Japan old town skyline in the Higashiyama District in the afternoon.

Many centuries after Kyoto’s era of literary brilliance, in 1905, the American professor of English Selden Lincoln Whitcomb opined, “The novel is the most comprehensive form of representative art that man has discovered.” For being at the center of the novel’s invention, a turning point in the history of the literary arts, and its numerous other achievements in art and poetry, Heian-era Kyoto is rightly our thirty-fourth Center of Progress.

Blog Post | Wellbeing

Is This the Best Time to Be Alive?

Overwhelming evidence shows that we are richer, healthier, better fed, better educated, and even more humane than ever before.

Imagine, if you will, the following scenario. It is 1723, and you are invited to dinner in the bucolic New England countryside, unspoiled by the ravages of the Industrial Revolution. There, you encounter a family of English settlers who left the Old World to start a new life in North America. The father, muscles bulging after a vigorous day of work on the farm, sits at the head of the table, reading from the Bible. His beautiful wife, dressed in rustic finery, is putting the finishing touches on a pot of hearty stew. The son, a strapping lad of 17, has just returned from an invigorating horse ride, while the daughter, aged 12, is playing with her dolls. Aside from the antiquated gender roles, what’s not to like?

As an idealized depiction of pre-industrial life, the setting is easily recognizable to anyone familiar with Romantic writing or films such as Gone with the Wind or the Lord of the Rings trilogy. As a description of reality, however, it is rubbish; balderdash; nonsense and humbug. More likely than not, the father is in agonizing and chronic pain from decades of hard labor. His wife’s lungs, destroyed by years of indoor pollution, make her cough blood. Soon, she will be dead. The daughter, the family being too poor to afford a dowry, will spend her life as a spinster, shunned by her peers. And the son, having recently visited a prostitute, is suffering from a mysterious ailment that will make him blind in five years and kill him before he is 30.

For most of human history, life was very difficult for most people. They lacked basic medicines and died relatively young. They had no painkillers, and people with ailments spent much of their lives in agonizing pain. Entire families lived in bug-infested dwellings that offered neither comfort nor privacy. They worked in the fields from sunrise to sunset, yet hunger and famines were common. Transportation was primitive, and most people never traveled beyond their native villages or nearest towns. Ignorance and illiteracy were rife. The “good old days” were, by and large, very bad for the great majority of humankind. Since then, humanity has made enormous progress—especially over the course of the last two centuries.

How much progress?

Life expectancy before the modern era (that is, before the last 200 years or so) was between 25 and 30 years. Today, the global average is 73 years. It is 78 in the United States and 85 in Hong Kong.

In the mid-18th century, 40 percent of children died before their 15th birthday in Sweden and 50 percent in Bavaria. That was not unusual. The average child mortality among hunter-gatherers was 49 percent. Today, global child mortality is 4 percent. It is 0.3 percent in the Nordic nations and Japan.

Most of the people who survived into adulthood lived on the equivalent of $2 per day—a permanent state of penury that lasted from the start of the agricultural revolution 10,000 years ago until the 1800s. Today, the global average is $35 per day, adjusted for inflation. Put differently, the average inhabitant of the world is roughly 18 times better off.
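The multiple is simple arithmetic on those two inflation-adjusted figures; a quick sketch (the variable names here are illustrative, not the author’s):

```python
# Back-of-the-envelope arithmetic behind the "18 times better off" claim.
historical_income = 2   # inflation-adjusted dollars per person per day
modern_income = 35      # inflation-adjusted dollars per person per day

print(modern_income / historical_income)  # 17.5, i.e., roughly 18x
```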

With rising incomes came a massive reduction in absolute poverty, which fell from 90 percent in the early 19th century to 40 percent in 1980 to less than 10 percent today. As scholars from the Brookings Institution put it, “Poverty reduction of this magnitude is unparalleled in history.”

As absolute poverty fell, so did hunger. Famines were once common, and average food consumption in France did not reach 2,000 calories per person per day until the 1820s. Today, the global average is approaching 3,000 calories, and obesity is an increasing problem—even in sub-Saharan Africa.

Almost 90 percent of people worldwide in 1820 were illiterate. Today, over 90 percent of humanity is literate. As late as 1870, the total length of schooling at all levels of education for people between the ages of 24 and 65 was 0.5 years. Today, it is nine years.

These are the basics, but don’t forget other conveniences of modern life, such as antibiotics. President Calvin Coolidge’s son died from an infected blister, which he developed while playing tennis at the White House in 1924. Four years later, Alexander Fleming discovered penicillin. Or think of air conditioning, the arrival of which increased productivity and, therefore, standards of living in the American South and ensured that New Yorkers didn’t have to sleep on outside staircases during the summer to keep cool.

So far, I have chiefly focused only on material improvements. Technological change, which drives material progress forward, is cumulative. But the unprecedented prosperity that most people enjoy today isn’t the most remarkable aspect of modern life. That must be the gradual improvement in our treatment of one another and of the natural world around us—a fact that’s even more remarkable given that human nature is largely unchanging.

Let’s start with the most obvious. Slavery can be traced back to Sumer, a Middle Eastern civilization that flourished between 4500 BC and 1900 BC. Over the succeeding 4,000 years, every civilization at one point or another practiced chattel slavery. Today, it is banned in every country on Earth.

In ancient Greece and many other cultures, women were the property of men. They were deliberately kept confined and ignorant. And while it is true that the status of women ranged widely throughout history, it was not until 1893, in New Zealand, that women first obtained the right to vote. Today, the only place where women have no vote is the papal election at the Vatican.

A similar story can be told about gays and lesbians. It is a myth that the equality gays and lesbians enjoy in the West today is merely a return to a happy ancient past. The Greeks tolerated (and highly regulated) sexual encounters among men, but lesbianism (women being the property of men) was unacceptable. The same was true of relationships between adult males. In the end, all men were expected to marry and produce children for the military.

Similarly, it is a mistake to create a dichotomy between men and everyone else. Most men in history never had political power. The United States was the first country on Earth where most free men could vote, starting in the early 1800s. Prior to that, men formed the backbone of an oppressed peasantry whose job was to feed the aristocrats and die in their wars.

Strange though it may sound, given the Russian barbarism in Ukraine and Hamas’s in Israel, data suggests that humans are more peaceful than they used to be. Five hundred years ago, great powers were at war 100 percent of the time. Every springtime, armies moved, invaded the neighbor’s territory, and fought until wintertime. War was the norm. Today, it is peace. In fact, this year marks 70 years since the last war between great powers. No comparable period of peace exists in the historical record.

Homicides are also down. At the time of Leonardo da Vinci, some 73 out of every 100,000 Italians were murdered each year. Today, it is less than one. Something similar has happened in Belgium, the Netherlands, Switzerland, Germany, Scandinavia, and many other places on Earth.

Human sacrifice, cannibalism, eunuchs, harems, dueling, foot-binding, heretic and witch burning, public torture and executions, infanticide, freak shows and laughing at the insane, as Harvard University’s Steven Pinker has documented, are all gone or linger only in the worst of the planet’s backwaters.

Finally, we are also more mindful of nonhumans. Lowering cats into a fire to make them scream was a popular spectacle in 16th century Paris. Ditto bearbaiting, a blood sport in which a chained bear and one or more dogs were forced to fight. Speaking of dogs, some were used as foot warmers while others were bred to run on a wheel, called a turnspit or dog wheel, to turn the meat in the kitchen. Whaling was also common.

Overwhelming evidence from across the academic disciplines clearly shows that we are richer, live longer, are better fed, and are better educated. Most of all, evidence shows that we are more humane. My point, therefore, is a simple one: this is the best time to be alive.

Blog Post | Literacy

When the Mac “Ruined” Writing

When the Macintosh was Blamed for Ruining Student Writing

This article was originally published on Pessimists Archive.

Quills were once the default writing tool. When pens rose to prominence, their impact on writing became a hot debate in the literary world, one that repeated when typewriters started to replace pens, and once more when word processors displaced typewriters.

A Macintosh computer next to an old computer system.

Just as people were coming to terms with word processors in the late 1970s and early 1980s, another evolution in writing tools took place: in 1984, Apple introduced the Macintosh, the first commercially successful computer with a graphical user interface (GUI). This new paradigm of computing garnered intrigue.

Newspaper clipping from The New York Times showing a cartoon with the text “Electronic notebook” and “Computer programs that mimic a desk, even a messy one”

As these new devices made their way onto college campuses, some academics wondered how they might affect student work. At the University of Delaware, a member of the English Department, Marcia Peoples Halio, conducted a study on the matter, culminating in a 1990 paper titled “Student Writing: Can the Machine Maim the Message?”

Newspaper clipping from The Pittsburgh Press with the title “Students using IBMs do better than Apple users, study says”

Her conclusion? Students using a Macintosh produced inferior work compared with their counterparts on the austere command line (30% of Macintosh writers used complex sentences, whereas 50% of the IBM writers did). “Never before in 12 years of teaching had I seen such a sloppy bunch of papers,” Halio quipped in the paper about the work created on the Macintosh.

An article from Rory J. O'Connor with the title "Do our students pen failure with colorful computers?"

The study caught the attention of the San Jose Mercury News, which summarized her findings: “The same icons, mouse, fonts, and graphics that make the machine easy to use may well turn off the brain’s creative-writing abilities.” The story was picked up by numerous outlets, a few of them international, and eventually garnered a mention in The New York Times, which noted in 1992 that no study had yet refuted her findings, while also noting the criticism the study drew from other academics.

An article from Ex Machina by Peter H. Lewis titled "Computer words: Less perfect?" with a cartoon version of William Shakespeare writing on a personal computer.

The most popular critique concerned Halio’s methodology, namely the glaring absence of control groups. An Apple representative swiftly pointed out the study’s flaws. Halio retorted, acknowledging some imperfections in her study while staunchly comparing giving a Macintosh to an inexperienced writer to handing a brand-new sports car to a gleeful 16-year-old. The Los Angeles Times admitted that “the idea that our minds are somehow warped by our word processors” was too compelling to ignore but was generally critical of the study.

An article on Innovation by Michael Schrage titled "Quill or Computer? Makes No Difference"

The academic Steven Youra published a response to the paper the same year in “Computers and Composition,” criticizing the study’s inadequate design, flawed logic, and limited understanding of the Macintosh’s capabilities.

The author’s central point is that students view the Macintosh as a toy, and therefore when they write with it, their language is less formal than that of IBM users, who associate their machine with high seriousness.

Steven Youra, “Computers and Student Writing: Maiming the Macintosh (A Response)”

He refuted the portrayal of the Macintosh as fostering immature writing, emphasizing its potential to encourage playful, engaged writing experiences. In 1995, the same journal published a study that sought to address the original study’s flaws, producing results that refuted Halio’s findings.

The Chicago Tribune syndicated a column in 1991, co-written by a young Brit Hume, which concluded the debate was moot since GUIs would soon be everywhere, while noting the study likely suffered from sample bias because of the types of students who would opt for a Macintosh in the first place (less academic, more creative types).

Article by T.R. Reid and Brit Hume titled “Scholar finds big gap between Mac, PC users,” and another paper by John Marrs titled “Fancy Typewriters”

As we face new evolutions in writing tools, it is worth remembering this little debate and those that preceded it, going all the way back to the original literary technology: writing itself.

Socrates critiquing writing:

If men learn this, it will implant forgetfulness in their souls. They will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks.

Plato quoting Socrates in the Phaedrus, 370 BCE

Blog Post | Economics

India, a Story of Progress

The world should take note of which principles brought freedom and prosperity to India.

The 76-year story of modern India is one of the greatest stories of progress in history. At the time of its independence in 1947, it was a mostly agricultural economy of 340 million people with a literacy rate of only 12 percent and a life expectancy of only 32 years. Today, it has the fifth-largest economy by nominal gross domestic product (GDP) and the third-largest by purchasing power parity. In his book “Enlightenment Now: The Case for Reason, Science, Humanism, and Progress,” Steven Pinker highlights six key areas of progress: life, health, wealth, safety, literacy, and sustenance. In every one of these metrics, life in India has significantly improved over the years.

Self-Sufficiency Is Self-Destructive

After independence in 1947, India suffered the consequences of socialist ideals. In a quest for self-sufficiency, the government played a heavy role in the economy. Under Prime Minister Jawaharlal Nehru, India pursued Soviet-style “Five Year Plans,” intending to turn India into an industrialized economy. From 1947 to 1991, the government owned most key industries, including steel, coal, telecommunications, banking, and heavy industry. India’s economy was closed to foreign competition, with high tariffs and restrictions on foreign investment. For example, the import tariff for cars was around 125 percent in 1960. The policy of import substitution aimed to produce goods domestically instead of importing them from abroad. In reality, it produced massive waste and inefficiency, as Indian businesses were protected from international competition.

Furthermore, India’s private sector was heavily constrained. Overregulation and corruption stifled the business environment, and subsidies and price controls disincentivized production, leading to market distortions and fiscal deficits. The government required industrial licenses for the establishment, expansion, or modernization of industries, creating bureaucratic barriers and corruption. This environment tended to favor large corporations at the expense of small businesses, as large corporations could better cope with the complex bureaucracy. The period is often referred to as the License Raj, comparing the extent of control exercised through industrial licenses to that of direct rule by the British Empire before Indian independence.

Sustenance, Health, and Life

In his 2016 book, “Progress: Ten Reasons to Look Forward to the Future,” Johan Norberg showed how these problems impacted daily life. When Norman Borlaug developed new high-yield wheat, India was facing a threat of mass starvation. Despite that, Indian state monopolies lobbied against both food and fertilizer imports. Fortunately, Borlaug was able to push his innovations through. In 1965, wheat yields in India rose by 70 percent.

From 1948 to 2018, the number of calories per person increased by about 60 percent, growing from 1,570 to 2,533. For reference, the recommended healthy number of calories per person is 2,000 for a woman and 2,500 for a man. The average Indian no longer suffers from undernourishment.

This achievement is even more remarkable when one considers the growth of the Indian population, which added a billion new citizens between 1948 and 2018. Indians not only grew in number but also began living longer, with life expectancy more than doubling between 1947 and 2022. Furthermore, fewer children were dying—infant mortality fell dramatically between 1960 and 2022. Many children had previously suffered from malnutrition; now, parents could watch their children grow up and have children of their own.

Wealth, Safety, and Literacy

However, problems in India remained. The License Raj continued to strangle the Indian economy in the name of protectionism. In 1978, the economist Raj Krishna coined the term “Hindu rate of growth” to refer to the slow economic growth of around 4 percent per year that prevailed in India from the 1950s to the 1980s. But Krishna was incorrect. The slow rate of growth had nothing to do with Hinduism or factors unique to India. Instead, India’s growth was low because of the restrictive policies of the socialist government. As soon as India removed the restrictions on competition and commerce, it began reaching growth rates of between 6 percent and 9 percent each year.

The economic liberalization of India was prompted by an economic crisis in 1990. India, having borrowed heavily from international lenders to finance infrastructure projects, was facing a balance-of-payments crisis and had only two weeks until it would default on its debt. A new government under Prime Minister P. V. Narasimha Rao abolished the License Raj, removing restrictions for most industries and on foreign investment into Indian companies. Restrictions on foreign technology and imports were scrapped, as were subsidies for fertilizer and sugar. India flung open its doors to the world, embracing competition in both imports and exports. Indian companies now faced foreign competition in the domestic market but also had the entire world market to sell to.

New industries sprang up, with India developing competitive sectors in telecommunications, software, pharmaceuticals, biotechnology, research and development, and professional services.

The result was a dramatic increase in the standard of living for ordinary Indians. The economy flourished as foreign investment flooded in, and the innovating spirit of ordinary Indians was unleashed. Between 1993 and 2021, access to electricity went from 50 percent of the population to 99.6 percent, and the literacy rate improved from 48.2 percent to 74.4 percent. This is even more remarkable considering that India added an extra 600 million people during that period.

A microwave, refrigeration, and electric lighting are all amenities that we take for granted, but these conveniences are relatively recent arrivals for the average Indian. A virtuous cycle of more educated, well-fed citizens creates greater innovation and prosperity. It is also correlated with less violence, with the homicide rate falling by 48 percent between 1991 and 2020.

Absolute poverty also has been falling. In 1987, half of the Indian population lived in extreme poverty. By 2019, this figure had fallen to 10 percent. Granted, there are still issues in India. Millions of people live in slums, and poverty remains a problem. However, it is worth appreciating just how far India has come.

As the Indian economist Gurcharan Das says about his country’s progress in the documentary “India Awakes,” “The principles that brought so much prosperity and freedom to the West are being affirmed in a country that is in the East.”

These principles are those of a market economy, openness to innovation, and a favorable attitude to commerce.

Life, health, education, and sustenance have all measurably improved. Violence and poverty have declined. Progress has occurred, and the world should take note.

Build For Tomorrow | Ep. 61

The Greatest Myth about Learning

For decades, people have been told they have a certain “learning style.” Maybe you think you’re a visual learner, for example, or a reading/writing learner. But new research is upending all that. Here’s what we got wrong — and how we can become truly better learners.