Blog Post | Leisure

Centers of Progress, Pt. 26: Los Angeles (Cinema)

Introducing the city that invented modern filmmaking.

Today marks the twenty-sixth installment in a series of articles by HumanProgress.org called Centers of Progress. Where does progress happen? The story of civilization is in many ways the story of the city. It is the city that has helped to create and define the modern world. This bi-weekly column will give a short overview of urban centers that were the sites of pivotal advances in culture, economics, politics, technology, etc.

Our twenty-sixth Center of Progress is Los Angeles during the Golden Age of Hollywood (1910s–1960s). The city pioneered new filmmaking styles that were soon adopted globally, giving the world some of its most iconic and beloved films in the process. Los Angeles’s Hollywood neighborhood is synonymous with filmmaking, representing the city’s unparalleled cinematic contributions.

With some four million inhabitants, Los Angeles is only the second most populous city in the United States. However, it may well be the most glamorous, with many celebrities and movie stars calling Los Angeles home. The city is also known for its impressive sports centers and music venues, shopping and nightlife, pleasant Mediterranean climate, terrible traffic, beautiful beaches, and easy-going atmosphere. Two famous landmarks are Disneyland and Universal Studios Hollywood, film-related theme parks that attract around 18 million and 9 million annual visitors, respectively.

The site where Los Angeles now stands was first inhabited by native tribes, including the Chumash and Tongva. The first European to explore the area was Juan Rodríguez Cabrillo, who arrived in 1542. Los Angeles’s Cabrillo Beach still bears his name. Spanish settlers founded a small ranching community at the site in 1781, calling it El Pueblo de Nuestra Señora la Reina de los Ángeles, meaning “the Town of Our Lady the Queen of the Angels.” The name was soon shortened to Pueblo de los Ángeles.

The Mexican War of Independence transferred control of the town from Spain to Mexico in 1821. Then, after the conclusion of the Mexican-American War (1846-1848), the future state of California was ceded to the United States.

That same year, gold was discovered in California. Hopeful miners poured into the area, and when California gained statehood in 1850, the migration intensified. True to its ranching roots, Los Angeles soon boasted the largest cattle herds in the state. The town gained a reputation as the “Queen of the Cow Counties” for supplying beef and dairy products to feed the growing population of gold-miners in the north.

While most of Los Angeles County was cattle ranchland, there were also a number of farms devoted to growing vegetables and citrus fruits. (To this day, the Los Angeles area remains a top producer of the nation’s broccoli, spinach, tomatoes, and avocados). As the local food industry prospered, the city proper began to grow, from around sixteen hundred inhabitants in 1850 to almost six thousand people by 1870.

In 1883, Harvey Wilcox, a politician and real estate developer, and Daeida, his significantly younger second wife, moved to town. The pair wanted to try their hand at ranching and bought over a hundred acres of apricot and fig groves. When their ranch failed, they used the land to build a community of upscale homes. They named the new subdivision “Hollywood.”

One story claims that Daeida was inspired by an estate with the same name in Illinois or by a town of the same name in Ohio. Others theorize that the Wilcoxes drew inspiration from a native shrub with red berries called toyon, or “California holly,” which grows abundantly in the area. In tribute to that theory, the Los Angeles City Council named toyon the city’s “official native plant” in 2012. While the true origin of the name “Hollywood” remains disputed, Daeida has been nicknamed the “mother of Hollywood” for her role in the story. (Ironically, she envisioned Hollywood as a Christian “temperance community” free of alcohol, gambling, and the like).

In any case, Hollywood started as a small but wealthy enclave that by 1900 boasted a post office, a hotel, a livery stable, and even a streetcar. A banker and real estate magnate named H.J. Whitley moved into the subdivision in 1902. He further developed the area, building more luxury homes and bringing electricity, gas, and telephone lines to town. He has been nicknamed the “father of Hollywood.”

Hollywood was officially incorporated in 1903. Unable to independently handle its sewage and water needs, Hollywood merged with the city of Los Angeles in 1910. By then, Los Angeles had around 300,000 residents. The population would top a million by 1930 and reach 2.5 million by 1960.

The city’s explosive growth can be traced to one industry.

The first film to be completed in Hollywood was The Count of Monte Cristo in 1908. The medium of film was still young, and The Count of Monte Cristo was one of the first films to convey a fictional narrative. Filming began in our previous Center of Progress, Chicago, but by wrapping up production in Los Angeles, the film crew made history. Two years later came the first film produced start-to-finish in Hollywood, called In Old California. The first Los Angeles film studio appeared on Sunset Boulevard in 1911. Others followed suit, and what began as a trickle soon turned into a flood.

What led so many filmmakers to relocate to Los Angeles? The climate allowed outdoor filming year-round, the terrain was varied enough to provide a multitude of settings, the land and labor were cheap—and, most importantly, it was far away from the state of New Jersey, where the prolific inventor Thomas Edison lived.

With exclusive control of many of the technologies needed to make films and operate movie theaters, Edison’s Motion Picture Patents Company had secured a near-monopoly on the industry. Edison held over a thousand different patents and was notoriously litigious. Moreover, Edison’s company was infamous for employing mobsters to extort and punish those who violated his film-related patents.

California was the perfect place to flee from Edison’s wrath. Not only was it far from the East-Coast mafia, but many California judges were hesitant to enforce Edison’s intellectual property claims.

The courts eventually weighed in: in 1915, a federal court ruled that Edison’s company had engaged in illegal anti-competitive behavior that was strangling the film industry. But by then, and certainly by the time that Edison’s film-related patents had all expired, the cinema industry was already firmly planted in California. Edison has been called “the unintentional founder of Hollywood” for his role in driving the country’s filmmakers to the West Coast.

Hollywood became the world leader in narrative silent films and continued to lead after the commercialization of “talkies,” or films with sound, in the mid-to-late 1920s. At first, such films were exclusively shorts. Then, in 1927, Hollywood produced The Jazz Singer, the first feature-length movie to include the actors’ voices. It was a hit. More and more aspiring actors and film producers flocked to Los Angeles to join the burgeoning industry.

In the 1930s, Los Angeles studios competed to wow audiences with innovative films. The Academy Awards, or Oscars, were first presented at a private dinner in a Los Angeles hotel in 1929 and first broadcast via radio in 1930. They remain the most prestigious awards in the entertainment industry to this day. Distinct movie genres soon emerged, including romantic comedies (such as the beloved It Happened One Night, which swept the Oscars and boasts a near-perfect score on Rotten Tomatoes), musicals, westerns, and horror films, among others.

The innovations of that era continue to influence movies today. King Kong premiered in 1933. In 2021, its namesake giant ape appeared in his twelfth feature film, this time battling Godzilla. Hollywood gave the world its first full-length animated feature film in 1937 with Walt Disney’s Snow White and the Seven Dwarfs. In 1939, Hollywood popularized color productions with the release of The Wizard of Oz. While it was not the first color film, it was among the most influential in promoting the technology’s widespread adoption.

In the 1940s, the iconic “Hollywood sign” first appeared in its current incarnation, replacing a sign reading Hollywoodland erected in 1923. The next few decades saw the production of some of history’s best-loved classic films. Those include Citizen Kane (1941), Casablanca (1942), It’s a Wonderful Life (1946), Singin’ in the Rain (1952), Rear Window (1954), 12 Angry Men (1957), Vertigo (1958), Psycho (1960), Breakfast at Tiffany’s (1961), and The Good, the Bad and the Ugly (1966). Many remain top-rated productions, beating decades of more recent movies to appear in the Internet Movie Database’s top 100 films sorted by user rating.

As it transformed from a humble cattle town into the geographic center of filmmaking, Los Angeles came to define a new art form. Movies enrich humanity by providing entertainment, inspiration, laughter, and thrills. Moreover, films create cultural experiences that can bring people together, act as an artistic outlet, and even shift worldviews. Hollywood created modern cinema. Thus, every person who has ever enjoyed a movie, even one produced elsewhere, owes a debt of gratitude to Los Angeles. It is for those reasons that Los Angeles is our 26th Center of Progress.

Blog Post | Adoption of Technology

Why Steve Wozniak Dismissed PCs and Steve Jobs Didn’t

In March 1976, Apple co-founder Steve Wozniak presented a circuit board design for a home PC at the now-legendary “Homebrew Computer Club” – a meeting of early personal computing pioneers.

In attendance was a young Steve Jobs, who – so struck by the project’s potential – convinced Wozniak to keep the designs private and co-found Apple to commercialize them. The rest is history.

Often forgotten in this Silicon Valley legend is the doubt that one of the Steves (Wozniak) cast on the very idea of personal computing, just as the company’s magnum opus – the Macintosh – was released to the world.

In a 1985 interview Wozniak posited: “The home computer may be going the way of video games, which are a dying fad” – alluding to the 1983 crash in the video game market. Wozniak continued:

for most personal tasks, such as balancing a check book, consulting airline schedules, writing a modest number of letters, paper works just as well as a computer, and costs less.

Even in the realm of education Wozniak doubted the value of computers, saying that after leaving Apple and enrolling in college:

I spent all my time using the computer, not learning the subject I was supposed to learn. I was just as efficient when I used a typewriter.

He seemed well aware of the heretical nature of his statements, telling a reporter: “Nobody at Apple is going to like hearing this, but as a general device for everyone, computers have been oversold” and that “Steve Jobs is going to kill me when he hears that.”

Many of his critiques were not uncommon: the same year, The New York Times ran a story titled “Home Computer is Out in the Cold” exploring the failed promise of computers becoming as ubiquitous as television and dishwashers in the home. In the piece, Silicon Valley luminary Esther Dyson joked, “What the world needs is a home computer that does windows” – she meant housework, not the operating system that would launch eight months later.

Apple CEO John Sculley – who’d infamously force Steve Jobs out of his own company later that same year – would conclude:

A lot of myths about computers were exposed in 1984. One of them is that there is such a thing as the home computer market. It doesn’t exist. People use computers in the home, of course, but for education and running a small business. There are not uses in the home itself.

Another person quoted in the piece, Dan Bricklin – co-inventor of spreadsheet software (VisiCalc) – said, “What everyone is missing is that it has to be both convenient and cheap,” alluding to computer pioneer Alan Kay’s 1972 vision for the “Dynabook” – an iPad-like device connected to central databases (what we now call the internet). Bricklin would quip:

You are not going to go upstairs just to type in a quick query and get back an answer.

John Sculley would eventually concur with Bricklin and Alan Kay. In the following years, Apple began work on handheld connected computers – first via a group of Macintosh engineers who spun off to form the company “General Magic,” then via the “Newton,” an early PDA and precursor to the iPhone.

General Magic/Sony Magic Link Credit: Josh Carter • Apple Newton Image Credit: Old-Computers.net • 1972 Dynabook Illustration, Alan Kay

It turns out even Steve Jobs agreed with the broader assessments of the limited appeal of the PC market – but, crucially, only in the micro. In a 1985 Playboy interview, Jobs admitted the home PC market was “more of a conceptual market than a real market.”

However, Jobs insisted that something big was coming – alluding to a nascent “nationwide communications network” that would be “a truly remarkable breakthrough for most people, as big as the telephone” and make computers “essential in most homes.”

Playboy: What will change?

Steve Jobs: The most compelling reason for most people to buy a computer for the home will be to link it into a nationwide communications network. We’re just in the beginning stages of what will be a truly remarkable breakthrough for most people—as remarkable as the telephone.

Jobs was right. You’re using such a network right now on a personal computing device. It turned out people would be willing to “go upstairs just to type in a quick query and get back an answer” if the world’s knowledge was at their fingertips.

Macro Pessimism vs Macro Optimism

What separated the two Steves is that Wozniak was a pessimist in both the micro and the macro, while Jobs was staunchly optimistic in the macro – about the larger idea of personal computing – and only doubted its mass appeal in its current form, much like Alan Kay, Dan Bricklin, and eventually John Sculley.

The problem with macro optimism? It is speculative in nature – vague and pie-in-the-sky sounding (and much of it inevitably is) – whereas micro pessimism is specific and grounded in current reality. Steve Jobs articulated this in response to Playboy probing whether he was expecting people to act on pure faith when investing in a home PC:

Playboy: Then for now, aren’t you asking home-computer buyers to invest $3000 in what is essentially an act of faith?

Steve Jobs: …the hard part of what we’re up against now is that people ask you about specifics and you can’t tell them. A hundred years ago, if somebody had asked Alexander Graham Bell, “What are you going to be able to do with a telephone?” he wouldn’t have been able to tell him the ways the telephone would affect the world. He didn’t know that people would use the telephone to call up and find out what movies were playing that night or to order some groceries or call a relative on the other side of the globe.

Jobs would go on to cite the early days of the telegraph, when short-term optimists predicted a “telegraph on every desk” – overlooking the impracticality of learning Morse code. He’d point out that personal telegraphy machines didn’t have mass consumer appeal, not because electronic messaging was unappealing to consumers but because its early form factor was unappealing.

His broader point: being pessimistic about the mass consumer appeal of home-based telecommunication in the form of telegraph machines was accurate in the micro; applying that pessimism to telecommunication in the macro, however, would have been a mistake, because the telephone, cellphones, and texting would eventually emerge. He’d posit that the recent advent of the graphical user interface via the Macintosh was a similar leap in usability and would similarly increase consumer demand.

Playboy wasn’t convinced:

Playboy: Is that really significant or is it simply a novelty? The Macintosh has been called “the world’s most expensive Etch A Sketch” by at least one critic.

Jobs: It’s as significant as the difference between the telephone and the telegraph.

Early iterations of breakthrough technologies are often extremely limited, and it is easy to think those limits will persist; our previous dive into early reactions to the Wright brothers’ Flyer is a good example (we called it “beta bias”).

Extrapolating something like the jumbo jet in response to critiques of the Flyer would have sounded Pollyannaish to critics of that early implementation of manned flight. So would prognosticating image-based user interfaces at the advent of command-line computers, or an information network at the advent of graphical user interfaces, as Jobs did in Playboy. When asked to elaborate on that particular prediction, Jobs would say:

I can only begin to speculate. We see that a lot in our industry: You don’t know exactly what’s going to result, but you know it’s something very big and very good.

What he was describing was macro optimism.

Beta Bias

The broader lesson here is not to judge emerging technologies – especially breakthrough ones – on early form factors and limitations. Micro pessimism is often grounded in reality, but reality changes – especially in the context of technological development. While no one can be sure how new technologies will improve and evolve, we can be sure that they will thanks to that invisible force driving technological progress: human intelligence, ingenuity and creativity.

This article was published at Pessimists Archive on 5/6/2024.

New Atlas | Energy & Natural Resources

Lithium-Free Sodium Batteries Enter US Production

“Two years ago, sodium-ion battery pioneer Natron Energy was busy preparing its specially formulated sodium batteries for mass production. The company slipped a little past its 2023 kickoff plans, but it didn’t fall too far behind as far as mass battery production goes. It officially commenced production of its rapid-charging, long-life lithium-free sodium batteries this week, bringing to market an intriguing new alternative in the energy storage game.

Not only is sodium somewhere between 500 to 1,000 times more abundant than lithium on the planet we call Earth, sourcing it doesn’t necessitate the same type of earth-scarring extraction. Even moving beyond the sodium vs lithium surname comparison, Natron says its sodium-ion batteries are made entirely from abundantly available commodity materials that also include aluminum, iron and manganese.”

From New Atlas.

The Human Progress Podcast | Ep. 49

Jay Richards: Human Work in the Age of Artificial Intelligence

Jay Richards, a senior research fellow and center director at the Heritage Foundation, joins Chelsea Follett to discuss why robots and artificial intelligence won't lead to widespread unemployment.

Blog Post | Science & Technology

Human Work in the Age of Artificial Intelligence | Podcast Highlights

Chelsea Follett interviews Jay Richards about why robots and artificial intelligence won't lead to widespread unemployment.

Listen to the podcast or read the full transcript here.

Your book, The Human Advantage, is a couple of years old now, but it feels more relevant than ever with ChatGPT, DALL-E 2, and all of these new technologies. People are more afraid than ever of the threat of technological unemployment.

There’s something that economists call the lump of labor fallacy. It’s this idea that there’s a fixed amount of work that needs to be done, and if some new technology makes a third of the population’s work obsolete, then those people won’t have anything to do. Of course, if that were a good argument, it would have been a good argument at the time of the American founding, when almost everyone was living and working on farms. You move forward to, say, 1900, and maybe half the population was still on farms. Well, here we are in 2022, and less than 2 percent of us work on farms. If the lump of labor fallacy were true, we’d almost all be unemployed.

In reality, there’s no fixed amount of work to be done. There are people providing goods and services. More efficient work makes stuff less expensive, giving people more income to spend on more things, creating more work. But a lot of smart people think that advancements in high technology, especially in robotics and artificial intelligence, make our present situation different.

Is this time different?

I don’t think so.

Ultimately, the claim that machines will replace us relies on the assumption that machines and humans are relevantly alike. I do not buy that premise. These machines replace ways in which we do things, but there is no reason to think that they’re literally going to replace us.

A lot of us hear the term artificial intelligence and imagine what we’ve seen in science fiction. But that term is almost all marketing hype. These are sorting algorithms that run statistics. They aren’t intelligent in the sense that we are not dealing with agents with wills or self-consciousness or first-person perspective or anything like that. And there’s no reason beyond a metaphysical temptation to think that these are going to be agents. If I make a good enough tractor, it won’t become an ox. And just because I developed a computer that can run statistical algorithms well doesn’t mean it will wake up and be my girlfriend.

The economy is about buying and selling real goods and services, but it’s also about creating value. Valuable information is not just meaningless bits, it has to be meaningful. Where does meaningful information come from? Well, it comes from agents. It comes from people acting with a purpose, trying to meet their needs and the needs of others. In that way, the information economy, rather than alienating us and replacing us, is actually the economy that is most suited to our properties as human beings.

You’ve said that the real challenge of the information economy is not that workers will be replaced but that the pace of change and disruption could speed up. Could you elaborate on that? 

This is a manifestation of so-called Moore’s Law. Moore’s Law is based on the observation that engineers could roughly double the number of transistors they put on an integrated circuit every two years. Thanks to this rapid suffusion of computational power, the economy is changing much faster than in earlier periods.
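To make that doubling arithmetic concrete, here is a minimal editorial sketch (not part of the interview) that projects transistor counts under an idealized two-year doubling, assuming a 1971 baseline of roughly 2,300 transistors, about the count of the Intel 4004; real chips deviate from this clean curve.

# Idealized Moore's Law sketch: assume transistor counts double every two
# years from an assumed 1971 baseline of ~2,300 (roughly the Intel 4004).
def transistors(year, base_year=1971, base_count=2_300):
    """Projected transistor count under a clean two-year doubling."""
    return base_count * 2 ** ((year - base_year) / 2)

for year in range(1971, 2032, 10):
    print(f"{year}: ~{transistors(year):,.0f} transistors")

Run as written, this idealized curve climbs from about 2,300 transistors in 1971 to roughly 75 million in 2001 and on the order of 77 billion in 2021 – the kind of compounding Richards points to when he says the economy now changes much faster than in earlier periods.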

Take the transition from the agrarian to the industrial economy. In 1750, or around the time of the American founding, 90 percent of the population lived and worked on farms. In 1900, it was about half that. By 1950, it halved again. Today, it’s a very small percentage of the population. That’s amazingly fast in the whole sweep of history, but it took a few hundred years, a few generations.

Well, in my lifetime alone, I listened to vinyl records, 8-track tapes, cassette tapes, CDs, and then MP3 files that you had to download. Nobody even does that today. We stream them. We moved from the world of molecules to the world of bits, from matter to information.

There were whole industries built around 8-track tapes: making the tapes, making the machines, and repairing them. That has completely disappeared. We don’t sit around saying, “Too bad we didn’t have a government subsidy for those 8-track tape factories,” but this is an illustration of how quickly things can change.

That’s where we need to focus our attention. There can be massive disruptions that happen quickly, where you have whole industries that employ hundreds of thousands of people disappear. You can say, “I know you just lost your job and don’t know how to pay your mortgage, but two years from now, there will be more jobs.” That could be true. It still doesn’t solve the problem. If we’re panicking about Skynet and the robots waking up, we’re not focusing on the right thing, and we’re likely to implement policies that will make things worse rather than better.

Could you talk a bit about the idea of a government provided universal basic income and how that relates to this vision of mass unemployment? 

I have a whole chunk of a chapter at the end of the book critiquing this idea of universal basic income. The argument is that if technology is going to replace what everyone is doing, then, one, they’re not going to have a source of income, and that’s a problem; and two, people, in general, need to work in the sense that we need to be doing something useful to be happy.

I think there are two problems with that argument. One is that it’s based on this false assumption of permanent technological unemployment, which is not new. In the book, I quote a letter from a group of scientists writing to the president of the United States warning about what they call a “cybernetic revolution” and saying that these machines are going to take all the jobs and we need a massive government program to pay for it. The letter is from the 1960s, and the recipient was Lyndon Baines Johnson. This is one of the justifications for his Great Society programs. Well, that was a long time ago. It’s exactly the same argument. It wasn’t true then. I don’t think it’s true now.

The second point is that just giving people cash payments misses the point entirely. First, it pays people to not work. Disruption is a social problem, but the last thing you want to do is to discourage people from finding new, innovative things to do.

Entrepreneurs find new things to do, new types of work. They put their wealth at risk, and they need people that are willing to work for them. And so you want to create the conditions where they can do that. You don’t want to incentivize people not to do that.

Let’s talk a bit about digitalization. How did rival and non-rival goods relate to this idea of digitalization? 

So, a banana is a rival good. If I eat a banana, you can’t have it. In fact, I can’t have it anymore. I’ve eaten it, and now it’s gone. Lots of digital goods aren’t like this at all. Think of that mp3 file. If I download a song for $1.29 on iTunes, I haven’t depleted the stock by one. The song is simply copied onto my computer. That’s how information, in general, is. If I teach you a skill, I haven’t lost the skill; it was non-rival. More and more of our economy is dealing in these non-rival spaces. It’s exciting because rather than dealing in a world of scarcity, we’re dealing in a world of abundance.

It also means that the person who gets there first can get fabulously wealthy because of network effects. For instance, it’s really hard to replicate Facebook because once you get a few billion people on a network, the fact that billions of people are on that network becomes the most relevant fact about it. There’s a winner-take-all element to it. But, in a sense, that’s fine. Facebook is not like the robber baron who takes all the shoreline property, leaving none for anyone else. It’s not like that in the digital world. There are always going to be opportunities for other people to produce new things that were not there before.

And then there’s hyper-connectivity. You’ve said that this is something you don’t think gets enough attention; for the first time, a growing share—soon all of humanity probably—will be connected at roughly the speed of light to the internet. Can you elaborate on that? 

Yeah, this is absolutely amazing.

Half of Adam Smith’s argument was about the division of labor and comparative advantage. When people specialize, the whole becomes greater than the sum of its parts. In the global market, we can produce everything from a pencil to an iPhone, even though no one person or even one hundred people in the network knows how. Together, following price signals, we can produce things that none of us could do on our own. Now, imagine that everyone is able to connect more or less in real time. There will be lots of cooperative things that we can do together, of course, that we could not do otherwise. 

A lot of people imagine that everybody’s going to have to be a computer engineer or a coder or something like that, but in a hyper-connected world, interpersonal skills are going to end up fetching a higher premium. In fact, I think some of the work that coders are doing is more likely to be replaced.

Do you worry about creative work, like writing, being taken over by AI? 

Algorithms can already produce, say, stock market news. But the reality is that stock market news is easily systematized and submitted to algorithms. That kind of low-level writing is going to be replaced just as certain kinds of low-level, repetitive labor were replaced. On the other hand, highly complex labor, such as artisanal craft work, is not only going to be hard to automate, but it’s also something we don’t necessarily want to automate. I might value having hand-made shoes, even if I could get cheaper machine-made shoes.

To sum up, how do you think people can best react to mass automation and advances in AI? 

I think the best way to adapt to this is to develop broad human skills, so a genuine liberal arts education is still a really good thing. Become literate, numerate, and logical, and then develop technical skills on the side, such as social media management or coding. The reality is that, unlike their parents and grandparents, who may have just done one or two jobs, young people today are likely to do five or six different things in their adult careers. They need to develop skills that allow them to adapt quickly. Sure, pick one or two specialized skills as a side gig, but don’t assume that that’s what you’re going to do forever. But if you know how to read, if you know how to write, if you are numerate and punctual, you’re still going to be really competitive in the 21st century economy.

Get Jay Richards’s book, The Human Advantage: The Future of American Work in an Age of Smart Machines, here.