
Blog Post | Adoption of Technology

Why Steve Wozniak Dismissed PCs and Steve Jobs Didn’t

In March 1976, Apple co-founder Steve Wozniak presented a circuit board design for a home PC at the now legendary “Homebrew Computer Club” – a meeting of early personal computing pioneers.

In attendance was a young Steve Jobs who – so struck by the project’s potential – convinced Wozniak to keep the designs private and co-found Apple to commercialize them. The rest is history.

Often forgotten in this Silicon Valley legend is the doubt one of the Steves (Wozniak) cast on the very idea of personal computing, just as the company’s magnum opus – the Macintosh – was released to the world.

In a 1985 interview, Wozniak posited: “The home computer may be going the way of video games, which are a dying fad” – alluding to the 1983 crash of the video game market. Wozniak continued:

for most personal tasks, such as balancing a check book, consulting airline schedules, writing a modest number of letters, paper works just as well as a computer, and costs less.

Even in the realm of education Wozniak doubted the value of computers, saying that after leaving Apple and enrolling in college:

I spent all my time using the computer, not learning the subject I was supposed to learn. I was just as efficient when I used a typewriter.

He seemed well aware of the heretical nature of his statements, telling a reporter: “Nobody at Apple is going to like hearing this, but as a general device for everyone, computers have been oversold” and that “Steve Jobs is going to kill me when he hears that.”

His critiques were not uncommon: the same year, The New York Times ran a story titled “Home Computer is Out in the Cold” exploring the failed promise of computers becoming as ubiquitous in the home as televisions and dishwashers. In the piece, Silicon Valley luminary Esther Dyson joked, “What the world needs is a home computer that does windows” – she meant housework, not the operating system that would launch eight months later.

Apple CEO John Sculley – who’d infamously force Steve Jobs out of his own company later that same year – would conclude:

A lot of myths about computers were exposed in 1984. One of them is that there is such a thing as the home computer market. It doesn’t exist. People use computers in the home, of course, but for education and running a small business. There are not uses in the home itself.

Another person quoted in the piece, Dan Bricklin – co-inventor of spreadsheet software (VisiCalc) – said, “What everyone is missing is that it has to be both convenient and cheap” – alluding to computer pioneer Alan Kay’s 1972 vision for the “Dynabook,” an iPad-like device connected to central databases (what we now call the internet). Bricklin would quip:

You are not going to go upstairs just to type in a quick query and get back an answer.

John Sculley would eventually concur with Bricklin and Alan Kay: in the following years, Apple began work on handheld connected computers – first via a group of Macintosh engineers who would spin off to form the company General Magic, then via the Newton, an early PDA and precursor to the iPhone.

General Magic/Sony Magic Link Credit: Josh Carter • Apple Newton Image Credit: Old-Computers.net • 1972 Dynabook Illustration, Alan Kay

It turns out even Steve Jobs agreed with broader assessments of the limited appeal of the PC market – but, crucially, only in the micro. In a 1985 Playboy interview, Jobs admitted the home PC market was “more of a conceptual market than a real market.”

Credit: Playboy, 1985

However, Jobs insisted that something big was coming – alluding to a nascent “nationwide communications network” that would be “a truly remarkable breakthrough for most people, as big as the telephone” and make computers “essential in most homes.”

Playboy: What will change?

Steve Jobs: The most compelling reason for most people to buy a computer for the home will be to link it into a nationwide communications network. We’re just in the beginning stages of what will be a truly remarkable breakthrough for most people—as remarkable as the telephone.

Jobs was right. You’re using such a network right now on a personal computing device. It turned out people would be willing to “go upstairs just to type in a quick query and get back an answer” if the world’s knowledge was at their fingertips.

Macro Pessimism vs Macro Optimism

What separated the two Steves is that Wozniak was a pessimist in both the micro and the macro, while Jobs was staunchly optimistic in the macro – about the larger idea of personal computing – and only doubted its mass appeal in its current form, much like Alan Kay, Dan Bricklin and, eventually, John Sculley.

The problem with macro optimism? It is speculative by nature; it is vague and sounds pie-in-the-sky (and much of it inevitably is). Micro pessimism, on the other hand, is specific and grounded in current reality. Steve Jobs articulated this when Playboy probed whether he was expecting people to act on pure faith when investing in a home PC:

Playboy: Then for now, aren’t you asking home-computer buyers to invest $3000 in what is essentially an act of faith?

Steve Jobs: …the hard part of what we’re up against now is that people ask you about specifics and you can’t tell them. A hundred years ago, if somebody had asked Alexander Graham Bell, “What are you going to be able to do with a telephone?” he wouldn’t have been able to tell him the ways the telephone would affect the world. He didn’t know that people would use the telephone to call up and find out what movies were playing that night or to order some groceries or call a relative on the other side of the globe.

Jobs went on to cite the early days of the telegraph, when short-term optimists predicted a “telegraph on every desk” – overlooking the impracticality of learning Morse code. He pointed out that personal telegraph machines lacked mass consumer appeal not because electronic messaging was unappealing to consumers, but because its early form factor was.

His broader point: being pessimistic about the mass consumer appeal of home-based telecommunication in the form of telegraph machines was accurate – in the micro – but applying that pessimism to telecommunication in the macro would have been a mistake, because eventually the telephone, cellphones and texting would emerge. He posited that the recent advent of the graphical user interface via the Macintosh was a similar leap in usability and would similarly increase consumer demand.

Playboy wasn’t convinced:

Playboy: Is that really significant or is it simply a novelty? The Macintosh has been called “the world’s most expensive Etch A Sketch” by at least one critic.

Jobs: It’s as significant as the difference between the telephone and the telegraph.

Early iterations of breakthrough technologies are often extremely limited, and it is easy to think those limits will persist. Our previous dive into early reactions to the Wright brothers’ Flyer is a good example (we called it “beta bias”).

Extrapolating something like the jumbo jet in response to critiques of the Flyer would have sounded Pollyannaish to critics of that early implementation of manned flight. So would prognosticating image-based user interfaces at the advent of command-line computers, or an information network at the advent of graphical user interfaces, as Jobs did in Playboy. When asked to elaborate on that particular prediction, Jobs said:

I can only begin to speculate. We see that a lot in our industry: You don’t know exactly what’s going to result, but you know it’s something very big and very good.

What he was describing was macro optimism.

Beta Bias

The broader lesson here is not to judge emerging technologies – especially breakthrough ones – on early form factors and limitations. Micro pessimism is often grounded in reality, but reality changes – especially in the context of technological development. While no one can be sure how new technologies will improve and evolve, we can be sure that they will, thanks to that invisible force driving technological progress: human intelligence, ingenuity and creativity.

This article was published at Pessimists Archive on 5/6/2024.

New Atlas | Energy & Natural Resources

Lithium-Free Sodium Batteries Enter US Production

“Two years ago, sodium-ion battery pioneer Natron Energy was busy preparing its specially formulated sodium batteries for mass production. The company slipped a little past its 2023 kickoff plans, but it didn’t fall too far behind as far as mass battery production goes. It officially commenced production of its rapid-charging, long-life lithium-free sodium batteries this week, bringing to market an intriguing new alternative in the energy storage game.

Not only is sodium somewhere between 500 to 1,000 times more abundant than lithium on the planet we call Earth, sourcing it doesn’t necessitate the same type of earth-scarring extraction. Even moving beyond the sodium vs lithium surname comparison, Natron says its sodium-ion batteries are made entirely from abundantly available commodity materials that also include aluminum, iron and manganese.”

From New Atlas.

The Human Progress Podcast | Ep. 49

Jay Richards: Human Work in the Age of Artificial Intelligence

Jay Richards, a senior research fellow and center director at the Heritage Foundation, joins Chelsea Follett to discuss why robots and artificial intelligence won't lead to widespread unemployment.

Blog Post | Science & Technology

Human Work in the Age of Artificial Intelligence | Podcast Highlights

Chelsea Follett interviews Jay Richards about why robots and artificial intelligence won't lead to widespread unemployment.

Listen to the podcast or read the full transcript here.

Your book, The Human Advantage, is a couple of years old now, but it feels more relevant than ever with ChatGPT, DALL-E 2, and all of these new technologies. People are more afraid than ever of the threat of technological unemployment.

There’s something that economists call the lump of labor fallacy. It’s this idea that there’s a fixed amount of work that needs to be done, and if some new technology makes a third of the population’s work obsolete, then those people won’t have anything to do. Of course, if that were a good argument, it would have been a good argument at the time of the American founding, when almost everyone was living and working on farms. You move forward to, say, 1900, and maybe half the population was still on farms. Well, here we are in 2022, and less than 2 percent of us work on farms. If the lump of labor fallacy were true, we’d almost all be unemployed.

In reality, there’s no fixed amount of work to be done. There are people providing goods and services. More efficient work makes stuff less expensive, giving people more income to spend on more things, creating more work. But a lot of smart people think that advancements in high technology, especially in robotics and artificial intelligence, make our present situation different.

Is this time different?

I don’t think so.

Ultimately, the claim that machines will replace us relies on the assumption that machines and humans are relevantly alike. I do not buy that premise. These machines replace ways in which we do things, but there is no reason to think that they’re literally going to replace us.

A lot of us hear the term artificial intelligence and imagine what we’ve seen in science fiction. But that term is almost all marketing hype. These are sorting algorithms that run statistics. They aren’t intelligent in the sense that we are not dealing with agents with wills or self-consciousness or first-person perspective or anything like that. And there’s no reason beyond a metaphysical temptation to think that these are going to be agents. If I make a good enough tractor, it won’t become an ox. And just because I developed a computer that can run statistical algorithms well doesn’t mean it will wake up and be my girlfriend.

The economy is about buying and selling real goods and services, but it’s also about creating value. Valuable information is not just meaningless bits, it has to be meaningful. Where does meaningful information come from? Well, it comes from agents. It comes from people acting with a purpose, trying to meet their needs and the needs of others. In that way, the information economy, rather than alienating us and replacing us, is actually the economy that is most suited to our properties as human beings.

You’ve said that the real challenge of the information economy is not that workers will be replaced but that the pace of change and disruption could speed up. Could you elaborate on that? 

This is a manifestation of the so-called Moore’s Law. Moore’s Law is based on the observation that engineers could roughly double the number of transistors they put on an integrated circuit every two years. Thanks to this rapid suffusion of computational power, the economy is changing much faster than in earlier periods.
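
(As an illustration of the doubling curve described above, here is a minimal Python sketch. The starting point – roughly 2,300 transistors in 1971, the scale of the Intel 4004 – is an assumption added for the example, not a figure from the interview.)

```python
# Minimal sketch of Moore's Law as described above: transistor counts
# roughly double every two years. The 1971 starting count (~2,300,
# Intel 4004-era scale) is an illustrative assumption.
def projected_transistors(start_count: int, start_year: int, year: int) -> int:
    """Project a transistor count assuming one doubling every two years."""
    doublings = (year - start_year) / 2
    return int(start_count * 2 ** doublings)

for y in (1971, 1981, 1991, 2001, 2011):
    print(y, projected_transistors(2_300, 1971, y))
# Forty years of doubling turns ~2,300 transistors into ~2.4 billion.
```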

Take the transition from the agrarian to the industrial economy. In 1750, or around the time of the American founding, 90 percent of the population lived and worked on farms. In 1900, it was about half that. By 1950, it halved again. Today, it’s a very small percentage of the population. That’s amazingly fast in the whole sweep of history, but it took a few hundred years, a few generations.

Well, in my lifetime alone, I listened to vinyl records, 8-track tapes, cassette tapes, CDs, and then MP3 files that you had to download. Nobody even does that today. We stream them. We moved from the world of molecules to the world of bits, from matter to information.

There were whole industries built around the 8-track tape: making the tapes, making the machines, and repairing them. That has completely disappeared. We don’t sit around saying, “Too bad we didn’t have a government subsidy for those 8-track tape factories,” but this is an illustration of how quickly things can change.

That’s where we need to focus our attention. There can be massive disruptions that happen quickly, where you have whole industries that employ hundreds of thousands of people disappear. You can say, “I know you just lost your job and don’t know how to pay your mortgage, but two years from now, there will be more jobs.” That could be true. It still doesn’t solve the problem. If we’re panicking about Skynet and the robots waking up, we’re not focusing on the right thing, and we’re likely to implement policies that will make things worse rather than better.

Could you talk a bit about the idea of a government provided universal basic income and how that relates to this vision of mass unemployment? 

I have a whole chunk of a chapter at the end of the book critiquing this idea of universal basic income. The argument is that if technology is going to replace what everyone is doing, one, they’re not going to have a source of income, and that’s a problem. People, in general, need to work in the sense that we need to be doing something useful to be happy.

I think there are two problems with that argument. One is that it’s based on this false assumption of permanent technological unemployment that is not new. In the book, I quote a letter from a group of scientists writing to the president of the United States warning about what they call a “cybernetic revolution” and saying that these machines are going to take all the jobs and we need a massive government program to pay for it. The letter is from the 1960s, and the recipient was Lyndon Baines Johnson. This is one of the justifications for his great society programs. Well, that was a long time ago. It’s exactly the same argument. It wasn’t true then. I don’t think it’s true now.

The second point is that just giving people cash payments misses the point entirely. First, it pays people to not work. Disruption is a social problem, but the last thing you want to do is to discourage people from finding new, innovative things to do.

Entrepreneurs find new things to do, new types of work. They put their wealth at risk, and they need people that are willing to work for them. And so you want to create the conditions where they can do that. You don’t want to incentivize people not to do that.

Let’s talk a bit about digitalization. How did rival and non-rival goods relate to this idea of digitalization? 

So, a banana is a rival good. If I eat a banana, you can’t have it. In fact, I can’t have it anymore. I’ve eaten it, and now it’s gone. Lots of digital goods aren’t like this at all. Think of that mp3 file. If I download a song for $1.29 on iTunes, I haven’t depleted the stock by one. The song is simply copied onto my computer. That’s how information, in general, is. If I teach you a skill, I haven’t lost the skill; it was non-rival. More and more of our economy is dealing in these non-rival spaces. It’s exciting because rather than dealing in a world of scarcity, we’re dealing in a world of abundance.

It also means that the person who gets there first can get fabulously wealthy because of network effects. For instance, it’s really hard to replicate Facebook because once you get a few billion people on a network, the fact that billions of people are on that network becomes the most relevant fact about it. There’s a winner-take-all element to it. But, in a sense, that’s fine. Facebook is not like the robber baron who takes all the shoreline property, leaving none for anyone else. It’s not like that in the digital world. There are always going to be opportunities for other people to produce new things that were not there before.

And then there’s hyper-connectivity. You’ve said that this is something you don’t think gets enough attention; for the first time, a growing share—soon all of humanity probably—will be connected at roughly the speed of light to the internet. Can you elaborate on that? 

Yeah, this is absolutely amazing.

Half of Adam Smith’s argument was about the division of labor and comparative advantage. When people specialize, the whole becomes greater than the sum of its parts. In the global market, we can produce everything from a pencil to an iPhone, even though no one person or even one hundred people in the network knows how. Together, following price signals, we can produce things that none of us could do on our own. Now, imagine that everyone is able to connect more or less in real time. There will be lots of cooperative things that we can do together, of course, that we could not do otherwise. 

A lot of people imagine that everybody’s going to have to be a computer engineer or a coder or something like that, but in a hyper-connected world, interpersonal skills are going to end up fetching a higher premium. In fact, I think some of the work that coders are doing is more likely to be replaced.

Do you worry about creative work, like writing, being taken over by AI? 

Algorithms can already produce, say, stock market news. But the reality is that stock market news is easily systematized and submitted to algorithms. That kind of low-level writing is going to be replaced just as certain kinds of low-level, repetitive labor were replaced. On the other hand, highly complex labor, such as artisanal craft work, is not only going to be hard to automate, but it’s also something we don’t necessarily want to automate. I might value having hand-made shoes, even if I could get cheaper machine-made shoes.

To sum up, how do you think people can best react to mass automation and advances in AI? 

I think the best way to adapt to this is to develop broad human skills, so a genuine liberal arts education is still a really good thing. Become literate, numerate, and logical, and then develop technical skills on the side, such as social media management or coding. The reality is that, unlike their parents and grandparents, who may have just done one or two jobs, young people today are likely to do five or six different things in their adult careers. They need to develop skills that allow them to adapt quickly. Sure, pick one or two specialized skills as a side gig, but don’t assume that that’s what you’re going to do forever. But if you know how to read, if you know how to write, if you are numerate and punctual, you’re still going to be really competitive in the 21st century economy.

Get Jay Richards’s book, The Human Advantage: The Future of American Work in an Age of Smart Machines, here.