Chelsea Follett: Joining me today is Dr. Jay Richards, a senior research fellow and center director at the Heritage Foundation. Previously, he was a business professor at Catholic University, and his research has focused on a wide variety of topics in culture, economics, and public policy. Jay’s articles and essays have been published in the Harvard Business Review, the Wall Street Journal, Barron’s, the Washington Post, the New York Post, Newsweek, Forbes, and many other outlets. He has appeared on CNN, CBS, MSNBC, and hundreds of radio and television programs. And he’s the author or editor of more than a dozen books, including The Human Advantage: The Future of American Work in an Age of Smart Machines. And he joins the podcast today to discuss why he thinks robots and artificial intelligence won’t lead to widespread technological unemployment. How are you, Jay?

Jay Richards: Just fine, thanks. That’s actually a really good summary of the argument in my book.

Chelsea Follett: Well, its title is very self-explanatory in that sense. So, your book, The Human Advantage, is a couple years old now, but it feels more relevant than ever with ChatGPT, DALL-E 2, all of these new technologies. People are more afraid than ever of the threat of technological unemployment. So can you tell me a bit about the book, what led you to write it, and how you became interested in this topic?

Jay Richards: Well, I’m sort of a shameless generalist. I’m a philosopher who’s really interested in kind of anything having to do with humans, and economics has a lot to do with humans. I got sort of transfixed by this idea that humans are supposedly going to be replaced by machines. I had actually written at an earlier point in my career on strong artificial intelligence, and had debated Ray Kurzweil, who is a leading advocate of something called transhumanism, this idea, essentially, that technology is going to move so quickly that the best we can do is upload ourselves to a future internet and leave this mortal coil behind. And so there are all these kinds of economic and technological and philosophical and anthropological assumptions that go into this, and honestly, it was all those things that made me interested in it. But it’s also just a kind of mundane economic point, what economists often call the lump of labor fallacy. It’s just this idea that, well, there’s a fixed amount of work that needs to be done.

So let’s say we have full employment at the moment, so we have just enough work that needs to be done that we’re able to employ everyone who’s here. But then what happens if some new technology comes along that makes a third of the population’s current work obsolete? Well, then they won’t have anything to do, because the amount of work that needs to be done will already be filled. You hear this argument in a current context, and it has this aura of plausibility because it’s sort of hard to imagine, okay, what would those people do? But, of course, anyone who knows anything about history knows that if that were a good argument, it would have been a good argument since the first person invented the wheel or chipped a couple of flint rocks together and did something slightly more efficiently than someone else did.

Because, of course, the reality is that there’s no fixed amount of work to be done. There are people providing goods and services. And the fact that someone might develop a technology that makes one way of working more efficient doesn’t mean that there aren’t going to be a bunch of other things that can be done. In fact, more efficient work makes the stuff produced that way less expensive, so people, in theory, have more disposable income to spend on other things. And just as a historical point, if this idea that you’re going to have long-term technological unemployment were a good argument, it would have been a good argument at the time of the American founding, when almost everyone was living and working on farms. Move forward to, say, 1900, and maybe half the population was still on farms. Well, here we are in 2022, and less than 2% of us are working on farms. If the lump of labor fallacy were true, we’d almost all be unemployed, because there’s no employment available for us on farms.

Right? So there’s got to be something wrong with the argument. But a lot of officially smart people think that somehow high technology, especially robotics and artificial intelligence, makes our situation different from any previous situation, and that, in fact, something about new technology is really going to make a large segment of us, at least, obsolete. And this idea, frankly, has worked its way into popular culture, and a lot of people worry about it.

Chelsea Follett: Absolutely. So it’s true that in the past, people have found new things to do; new sorts of jobs have been created. But is this time different? That’s what people worry about now, because machines are getting smarter, more intelligent. We have advances in artificial intelligence.

Jay Richards: Yeah, that’s the worry. And what’s funny about it is that people in artificial intelligence research know about this thing called Moravec’s paradox, named after Hans Moravec, a prominent researcher in this area. The idea of Moravec’s paradox is that whereas you’d imagine our technology would be good at doing the simple stuff, like manual labor, and that the intellectual stuff would be hard, in fact we’re really good at creating machines that can replicate intellectual stuff. So, remember, IBM developed a computer that beat the greatest champion in chess back in 1997, but we still don’t have Rosie, the robotic housekeeper, because that requires a robot that can move in three-dimensional space. It requires all these kinds of coordination. When I pick up a glass, I know how heavy it is ahead of time. I get feedback from my eyes to my fingers. I can coordinate in a certain way.

You need a whole lot of computational resources to be able to do the stuff that any 3-year-old can do in time and space. Whereas if you have an intellectual problem that can be reduced to an algorithm, well, then a fast enough computer is just going to solve that problem. So what Moravec’s paradox points out is that some of this physical stuff is actually harder to automate than some of the mental stuff. The worry, though, is that robotics is going to take care of the physical stuff, artificial intelligence is going to take care of the intellectual stuff, and so there’s going to be this massive swath of different kinds of jobs that are going to be taken care of. The argument is that whereas a tractor enhances human labor, AI and robotics replace human labor, so they replace us in some qualitatively different way than previous technologies did. Or at least that’s the argument, and that’s the impression, and I think that’s what scares people. And then you add on top of that the fact that we’ve all been fed a diet of movies in which Skynet wakes up in Terminator, or the robots become conscious.

And it’s very easy to merge what we think we’re seeing before us with this kind of Sci-Fi scenario. And it almost always seems to lead to panic for some reason.

Chelsea Follett: Right. So why do people think that instead of making us more productive, robots will instead replace us? And why do you disagree with that?

Jay Richards: Yeah, I disagree with it for a couple of reasons. One is honestly just looking at what machines actually do. First of all, on the one hand, I don’t want to say that we’re not going to get automated cars that are going to replace long-haul trucking or something like that. In fact, if anything, in the book I try to be extremely optimistic about the prospects for that, because I don’t want to make it easy on myself. And so I say, look, maybe 10 years from now, it will be nothing but automated Tesla trucks on the highways. Those are actual jobs, right? With actual people, that will disappear. That’s on the one side. But on the other, there’s a heck of a lot of hype behind this stuff, and it’s actually much, much harder to do some of the final sorts of things than you would imagine. It’s going to be much easier to get a 90% automated truck than a 100% automated truck, for one thing. But the other thing is that artificial intelligence is almost all marketing hype. I mean, what if we called this technology statistical algorithms? Well, that’s what these are. These are literally sorting algorithms that run statistics. They’re designed by designers and modelers, and they don’t actually do anything beyond what the designers initially intended them to do.

These aren’t intelligent in the sense that we are. We’re not dealing with agents. We’re not dealing with things with wills or self-consciousness or a first-person perspective or anything like that. And there’s absolutely no reason beyond a kind of metaphysical temptation to think that these are going to be agents, any more than, look, if I make a good enough tractor, it’s not going to become an ox. And just because I developed a computer that can run statistical algorithms well doesn’t mean it’s going to wake up and be my girlfriend. That’s Sci-Fi. And so I just don’t buy that. I just do not buy this [inaudible] leap from weak artificial intelligence, which is what we’re actually dealing with, that’s a Google search, to strong artificial intelligence, which says that, well, these machines, once they get, I don’t know, fast enough or something, they’re going to somehow become conscious. I think it’s just a kind of simple metaphysical mistake that is easy to make because we’ve been fed this diet of Sci-Fi, but anyone who knows anything about statistical modeling knows what’s actually happening under the hood. So I think part of this is that a lot of us hear the words artificial intelligence, we imagine things are happening that aren’t happening, and so we panic. And honestly, I blame some of the hypers for some of this, because they’ve hyped it so much that people start to believe it.

But there were always two possible responses, right? There was the utopian response, which was, okay, the machines are going to do all of our manual labor and all the drudgery, and we’ll just party all night and sleep all day. That’s the utopian interpretation. But there’s the dystopian interpretation, which is they’re going to wake up and hate us and take all our jobs and leave us with nothing to do, and we’re all going to be super depressed. Both of those share an assumption, though, which is that we are relevantly alike, that these machines will be relevantly like us, that they could literally replace us. I just do not buy that premise. And I think if you actually look at the details of what these machines do, what you’ll find is that they replace ways in which we do things, but there is no reason to think that they’re literally going to replace us.

Chelsea Follett: Could you talk a bit about the idea of a government provided universal basic income and how that relates to this vision of mass unemployment?

Jay Richards: Well, it relates directly. In fact, when I wrote the book, my editor and I put a whole chunk of a chapter at the end critiquing this idea of universal basic income. The reason is that with almost everyone who argues about the rise of the robots (Rise of the Robots is, in fact, the name of one book I critique), you think they’re arguing about the future of technology, when in fact it’s just their way of arguing for a universal basic income. The idea of universal basic income is, of course: if it’s universal, it means everybody gets it. It’s a cash payment directly from the government, as opposed to a means-tested welfare program, so you get it simply by virtue of, say, being an American, rather than by being poor or something like that. And it’s basic, so it’s not going to be the equivalent of a $100,000-a-year job, but the idea is that the income would be high enough to provide for your basic needs or something like that.

Now, the argument is that, well, if technology is going to replace what everyone is doing, then, one, they’re not going to have a source of income, so that’s a problem. And two, they’re going to be depressed, because if you look at happiness surveys, the reality is that people in general need to work, in the sense that we need to be doing something in which we’re creating value. And so this is going to be a really severe human catastrophe if, say, 15 years from now, half the population not only doesn’t have a job, but is never going to have a job, because they’re not relevantly skilled or the machines are doing what they could otherwise do. That’s the basic argument. I think there are two problems with it. One is that it’s based on this false assumption of permanent technological unemployment, and that assumption is not new. In fact, in the book, I quote a letter from a group of scientists writing to the president of the United States warning about this thing that they called a cybernetic revolution and saying essentially what we’ve been describing: that these machines are going to take all the jobs and we need a massive government program to pay for it.

The letter is from the 1960s, and the recipient was Lyndon Baines Johnson. It was one of the justifications for his Great Society programs. Well, that was a long time ago, and it’s exactly the same argument. It wasn’t true then, and I don’t think it’s true now. So I don’t think the argument’s well motivated. That’s the first point. But the second point is that this idea that just giving people cash payments is gonna somehow solve the problem, I think, misses the point entirely. First, it pays people not to work, and how that’s going to help people feel motivated to work, I don’t really get the logic. It would also, I think, prevent the very thing that actually needs to happen. Yes, disruption is a social problem that we need to figure out how to take care of. But the last thing you want to do is discourage people from finding new, innovative things to do, because the reality is, let’s say an entire industry actually does disappear because it becomes obsolete.

The best solution for that is going to be what’s always happened under the best of circumstances: entrepreneurs find new things to do, new types of work. They put their wealth at risk, and they need people who are willing to work for them, right? And so that’s what I think the solution to this is going to be. It’s going to be as it was in the past. It’s going to be human beings, as I believe, actually made in the image of the creative God, creating value that wasn’t there before. And so you want to create the conditions where they can do that. You don’t want to incentivize people not to do that. And all of these universal basic income schemes, though I think in many cases well meaning, would essentially incentivize people to continue doing the thing that they shouldn’t be doing.

Chelsea Follett: You’ve said that the real challenge of the information economy is not that workers will be replaced, but that the pace of change and disruption could speed up. Could you elaborate on that?

Jay Richards: I think that’s definitely happening. And I think this is a manifestation of so-called Moore’s law. Moore’s law was an observation by Gordon Moore, one of the founders of Intel. In 1965, he wrote a famous paper in an obscure engineering journal noting that he had observed that engineers could roughly double the number of transistors they put on an integrated circuit about every 18 to 24 months. That’s the gist of it. So the idea is that you can increase the amount of computational power in the brains of a computer, doubling it every year and a half or two years, so that you get essentially a type of exponential growth. It’s not exactly exponential growth, but it’s significant growth, and if you keep doubling, things speed up. That’s why you can get to a situation where all the computational power used in the Apollo moon missions, for instance, probably costs us a penny now. That’s mostly because of Moore’s law. Now, apply that to technology as a whole, in which you get this rapid suffusion of computational power and of networking power, which is a different sort of exponential growth.
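[Editor’s note: the doubling Moore described compounds into exactly the kind of growth described above. A minimal sketch of the arithmetic; the starting year, starting count, and 24-month period are illustrative assumptions, not figures from the conversation:]

```python
# Illustrative sketch of Moore's-law-style doubling.
# start_year and start_count are placeholder figures, not historical data.
def transistors(year, start_year=1971, start_count=2300, months_per_doubling=24):
    """Project a transistor count forward by repeated doubling."""
    doublings = (year - start_year) * 12 / months_per_doubling
    return start_count * 2 ** doublings

# Ten doublings over twenty years is about a thousandfold increase,
# which is why a fixed doubling cadence compounds so dramatically.
ratio = transistors(1991) / transistors(1971)
print(ratio)  # 1024.0
```

Changing `months_per_doubling` from 24 to 18 shows how sensitive the compounding is to the cadence.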

And with all this growth in the information parts of the economy, we’re moving much, much faster than in earlier periods. So, imagine the transition from the agrarian to the industrial economy, which took place mostly within the history of our country, that shift from the agricultural to the urban and industrial era. In, say, 1750, or around the time of the American founding, something like 90% of the population was living and working on farms or connected to agriculture. By 1900, you get to about half that. By 1950, it halves again. Now it’s a very small percentage of the population. That’s amazingly fast if you take the whole sweep of history. Still, it was a few hundred years, right? It happened over a few generations. Well, in my lifetime: I grew up as a little kid listening to my mom’s LPs, to her vinyl records. My dad then invested heavily in 8-track tape technology, which no one watching or listening to this probably even knows what that is.

It was a very bad music-transfer technology. Then it moved to cassette tapes, then to CDs, then to iTunes, MP3 files that you had to download. Well, nobody even does that now. I haven’t bought an iTunes song in, I don’t know, 8 or 10 years. We stream these things. And what is the music now? Well, there’s a physical infrastructure, but these are MP3 files. They’re digital entities. They’re files. As they sort of put it in the field, we’ve moved from the world of molecules to the world of bits, from atoms to bits, from matter to information. Just in my lifetime, what is that? Five or six different forms of storing and listening to music, just in the last few decades. Now, there were whole industries, I assume, built around 8-track tape: making the tapes, making the machines, people who knew how to repair them. That has completely disappeared. But we don’t sit around saying, well, too bad we didn’t have a government subsidy for those 8-track tape factories and the machine makers.

But I think this is an illustration of how quickly things can change, and that’s actually where we need to focus our attention. Not on this idea that we’re all going to end up unemployed, but on the fact that there can be massive disruptions that happen quickly, where a whole industry may give gainful employment to hundreds of thousands of people, and then suddenly, one innovation later, the whole thing disappears. Now, it’s one thing to say, well, okay, I know you just lost your job and don’t know how to pay your mortgage, but two years from now there are going to be more jobs. That could be true. It still doesn’t solve your problem. And so that’s why I think the disruption is the real social problem we ought to take seriously. By focusing on this panic about Skynet and the robots waking up, we’re not even focusing on the right thing, and we’re likely to implement policies that will make things worse rather than better.

Chelsea Follett: So what does that quickening pace of change and disruption mean for us? What makes the information economy unique? What are its features?

Jay Richards: Well, in the book I talk about, and I’m hoping I can remember these off the top of my head, the very particular features that information economies have in common. The biggest one is the production of new kinds of meaningful information. That’s the main thing that makes it different. I mean, we’re always dealing with the infusion of information; if somebody figures out how to take a stone and make a wheel, that’s actually an informational act. But you can think of it as moving more and more away from the world of atoms to the world of bits, in which information becomes a more predominant element of our economy. Now, we still live in the world of atoms, so we have to figure out how to solve those problems. But certainly disruption, or the rapidity of disruption, is a key thing. Then there’s the increase of networking, so we are much more hyper-connected than we were. There’s the movement called digitization, which is the movement from the world of molecules to the world of bits. And there’s the fact that the economy is ever more informational.

I think those are the things that make this different from the other major economic shifts you could think of. There’s the one we only know vaguely, the shift from the hunter-gatherer stage to the agrarian and agricultural stage, which lasted thousands of years. There’s the industrial stage, which Marx thought was the last one before you get socialism, but we’re way past that. In many places we’re past the service economy. I would say we’re now in the information economy, in which the production of information becomes the primary source of new wealth creation. You can see that when you look at the highest-market-cap companies, right? They tend to be information companies, even though things like energy are, of course, profoundly important. All of those things do make this somewhat different, and it also means that we’re gonna need to develop skills that are appropriate to that kind of economy. If you’re thinking of it as an individual person, okay, what’s the self-help way of thinking about this? Well, you’re gonna want to optimize yourself for an economy that’s highly informational, highly digitized, and highly connected. And in fact, I think the connectivity of the economy is something people pay too little attention to. I think that’s one of its most important facets.

Chelsea Follett: Right, so those features are disruption, exponential growth, digitization, and hyper-connectivity. You’ve talked a little bit already about the disruption and the exponential growth. So let’s talk a bit about digitization. How do rival and non-rival goods relate to this idea of digitization?

Jay Richards: Yeah. This is something that, again, I don’t think is all that well understood, though if you read in the growing field of information economics, they talk about this. A rival good is just any kind of good that’s a zero-sum game. A banana is a rival good: if I eat a banana, you can’t have it. In fact, I can’t have it anymore, right? I’ve eaten it, and now it’s gone. And that’s true of basically almost all physical goods. Take a piece of real estate: if I own a plot of land, somebody else can’t also own and occupy it fully in the same way that I do. Those are rival goods. And we tend to think, and economists tend to think, in terms of, okay, economics is about economizing, so we’re looking at the most efficient way of allocating scarce resources. Scarcity implies that a good or service is rival. But lots of digital goods aren’t like this at all. Think of that MP3 file. If there’s a song on iTunes, let’s just use the iTunes case, where I assume we’re still buying those songs, and I download it for $1.29, is there a warehouse somewhere where these files are being stored, and I’ve now just depleted the store by one?

That’s how it would be with records or cassettes or CDs, but that’s not how it is in this case. These are digital files that can be copied exactly. Now, yeah, there are physical limits in terms of bandwidth, and they’ve got to be stored on a hard drive somewhere, but strictly speaking, these are basically non-rival goods. In fact, that’s how information in general is: if I teach you a skill, I’ve not lost the skill. It’s non-rival. And that’s generally the nature of information. It’s an amazing thing, because as more and more of our economy deals in these kinds of non-rival spaces, it’s sort of exciting: rather than dealing in a world of scarcity, we’re dealing in a world of abundance. That, I think, opens up different economic possibilities. But it also means that the person who gets there first can get fabulously wealthy, and so we have these network effects. Netscape didn’t quite work, right, and Myspace didn’t quite work, but it’s actually really hard to replicate Facebook. Once you get a few billion people on a network, the fact that billions of people are on that network is the most relevant fact about it, so there’s a bit of a winner-takes-all element to it. But in a sense that’s fine, as long as people treat their power appropriately, because it’s not like they’ve extracted that from someone else. It’s not like the robber baron who supposedly takes all of the shoreline, leaving no shore property for anyone else. It’s not like that in the digital world. You’ve got all of these non-rival goods. And so that’s exciting.
It’s also exciting because it means that as people figure out how to produce these valuable non-rival goods, there are always gonna be alternatives for other people to produce new things that were not there before. And so, again, it gets a little weird and metaphysical, but you get to this point where we’re asking: okay, what are humans like? What do we actually value? What’s the economy like? Well, the economy is about buying and selling real goods and services, but it’s also about creating value, and value creation is often a subjective thing. I think that’s just the nature of the economy. That’s where things are going, and that’s why, honestly, once you look at the details, I think we need to focus more on the promise of these kinds of developments rather than the peril. We spend a little too much time on the peril.
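[Editor’s note: the rival/non-rival contrast drawn above can be sketched in a few lines of toy code. The class names are hypothetical, purely to illustrate the zero-sum versus copy-without-depletion distinction:]

```python
class RecordStore:
    """Rival good: selling a physical record removes it from the shelf."""
    def __init__(self, stock):
        self.stock = stock

    def sell(self):
        self.stock -= 1  # one fewer record for everyone else
        return "record"


class DigitalStore:
    """Non-rival good: a download is a copy; the master is never depleted."""
    def __init__(self, master):
        self.master = master

    def download(self):
        return bytes(self.master)  # hand out a copy, keep the original


records = RecordStore(stock=3)
records.sell()
print(records.stock)  # 2

songs = DigitalStore(master=b"mp3 bytes")
copies = [songs.download() for _ in range(1000)]
# a thousand downloads later, the good is undiminished
print(len(copies), songs.master == b"mp3 bytes")  # 1000 True
```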

Chelsea Follett: And then there’s hyper-connectivity. You’ve said that this is something you don’t think gets enough attention, and you cited Kevin Kelly’s book, The Inevitable, about how, for the first time, a growing share of humanity, soon probably all of it, will be connected at roughly the speed of light to one thing, the internet. Can you elaborate on that?

Jay Richards: Yeah, this is absolutely amazing. Of course, we know that half of Adam Smith’s argument was about the division of labor and comparative advantage, the idea being that we actually benefit in a non-zero-sum way, that the whole is greater than the sum of its parts. That’s why you can have a firm or a company in which people can specialize; they can do things that they could not do otherwise. Then you get to the nature of a global market, in which you can produce everything from a pencil to an iPhone, where no one person, or even a hundred people, in the network knows how to do it, and yet together, following price signals, we can produce things that none of us could do on our own. Now imagine that everyone is able to connect more or less in real time. In principle, there are gonna be lots of cooperative things that we can do together that we could not do otherwise. And I actually think this is important, because a lot of what people imagine when you have these conversations is, “oh man, everybody’s gonna have to be a computer engineer or a coder or something like that.” But I can tell you from teaching in a business school that some of the most prized skills in business are so-called soft skills. I don’t know why they’re called soft, but what soft means is that you have high emotional intelligence, you know how to treat people, you know how to get people to work together well. These are the jobs of the managers, right?
And those are skills that, trust me, the machines aren’t doing any time soon. Those are human skills. So I actually think the so-called soft skills, that is, the human skills, the interpersonal skills, are gonna end up fetching a premium in a hyper-connected world, and I continue to think that’s the case, because more and more people are gonna be connecting in more and more different ways, so there are gonna need to be connectors, people with those skills. In fact, I think some of the work that coders are doing is more likely to be replaced than the people who generally have the soft skills.

Chelsea Follett: Let’s talk more about what people can do to adapt and prepare for these changes and this disruption. You’ve talked about adaptability, anti-fragility, altruism, and creative freedom as some of the factors really needed to help people react to these changes. Could you elaborate on each of those?

Jay Richards: Yeah. Basically, as I said, there are these properties that you have in an information economy: disruption, exponential growth, digitization, hyper-connectivity, increasing amounts of information. And there are virtues that correspond to those. To take the obvious one, hyper-connectivity corresponds to the gift, or virtue, of collaboration, and that ends up being a virtue you can cultivate. Or take information. What is information? In this case, it’s not just meaningless bits. In information theory, because you’re just trying to figure out how to measure the length of a sequence or something, you can ignore the meaning, but the information that matters to people is meaningful information.
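[Editor’s note: the aside above about information theory measuring a sequence while ignoring its meaning can be made concrete with Shannon’s entropy measure. This is a minimal sketch added for illustration, not an example from the book:]

```python
from collections import Counter
from math import log2

def entropy_bits_per_symbol(seq):
    """Shannon entropy: depends only on symbol frequencies, never on meaning."""
    counts = Counter(seq)
    n = len(seq)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# A meaningful word and its scramble carry identical "information" by this
# measure, which is exactly the sense in which the formal theory ignores meaning.
print(entropy_bits_per_symbol("listen") == entropy_bits_per_symbol("silent"))  # True

# A sequence with no variety carries no information at all in this sense.
assert entropy_bits_per_symbol("aaaa") == 0.0
```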

So where does that come from? Where does meaningful information come from? Well, it comes from agents. It comes from people acting for purposes, attending to the needs of others, trying to meet their needs. That’s a creation of information that can only come from agents, from people who have the gift of creative freedom. Okay, so what constrains us? Well, we’re gonna constrain ourselves based upon, one, the things we’re able to do, obviously the things that maybe we’re good at, maybe sometimes the things we’re passionate about. But very often we’re gonna constrain ourselves based upon what we think might be valuable to other people, so that we try to produce things valuable to other people and do it better than our competitors, right?

That’s a type of self-restraint, but it’s in service of the exercise of a higher freedom. And so I think the answer to this in any particular economy is that you wanna exercise that kind of creative freedom. In a world in which information is the coin of the realm, that’s actually really good news, because the information economy, rather than being alienating to us and replacing us, is actually the economy that in some ways is most suited to our properties as human beings. In the agricultural economy, we were doing a lot of things that animals could sort of substitute for. Cows aren’t going to substitute for the creation of meaningful information in the information economy.

Chelsea Follett: Hopefully not. But with some of these advances in artificial intelligence, do you worry about more creative work, some of those soft skills like writing? As machines become better at that, should people worry that those skills are something AI might take over? Should they be looking at developing different skills? How would you react to that?

Jay Richards: Yeah. So I’ll say I’ve already seen that some algorithms can produce, say, stock market news, right? But the reality is that stock market news is something that’s easily handed over to algorithms: you just have a system, you plug in all the stock numbers and basic rules of grammar, and so it’s kind of low-level stuff. I suspect some of that kind of writing is gonna be replaced, just as certain kinds of low-level labor get replaced, like highly repetitive work in factories. I think we need to spend less time focusing on how to get all these factory jobs back. Working on an assembly line in a factory is an artifact of the 20th century; it’s a particular way of doing labor. There’s no reason to assume that we’re gonna get those jobs back, or that it even makes sense to get them back. On the other hand, highly complex labor, artisanal craft work, for instance, is not only gonna be hard to automate, it’s also something we don’t necessarily want to automate. I might actually value having handmade shoes, even if I could get machine-made shoes.

And so I honestly think that to realize what’s going on here, you gotta dig more deeply into what it means to be a machine and what it means to be human, how those things are alike and how they’re different. Look, if you’re gonna talk about these things, you ultimately are gonna press up against philosophical questions about the nature of the human person, and I’m perfectly happy to do that because I’m into philosophy. But I really do think the problem is that so often in these AI debates, tacit assumptions about what machines and humans are get submerged. If you assume that humans are basically machines made of meat, that assumption is gonna lead you to a particular conclusion. If you think human beings are this kind of amazing hybrid of the material and the spiritual, that we find ourselves together here and we have purposes and values and goals and the capacity for creation, then you’re gonna have a different view of these questions. And so whether you like it or not, submerged assumptions about whether humans are machines work their way out in particular ways.

Chelsea Follett: So to sum up, how do you think people can best react to this massive cultural shift of mass automation and advances in AI?

Jay Richards: What I would say is that the best way to adapt to this is first to develop the kinds of broadly human skills, so I actually think a genuine liberal arts education is still a really good thing.

I don’t mean your gender studies program at the local state school; that’s not what I mean by liberal arts. I mean a genuine liberal arts education in which you’re doing deep readings of texts and grappling with them. That kind of thing, I think, is actually really good. I honestly think the ideal education a person could have is a richly humanizing liberal arts education in which you become literate, numerate, and logical, and then you also develop side skills that are highly technical, whether it’s social media management or maybe a couple of coding classes, so that you do both. Because the reality is that, unlike our parents and grandparents, who maybe did just one or two jobs, most of us, and I’m a Gen X-er, but especially if you’re a Millennial or a Gen Z-er, are likely to do five or six totally different things in our adult careers. So if you knew that ahead of time, what would you do? Well, you’d wanna develop the skills that allow you to adapt quickly, and then sure, pick one or two specialized ones that you can learn as a kind of side gig, but don’t assume that that’s what you’re gonna do forever.

But if you know how to read, if you know how to write, if you know how to construct a sentence, if you’re numerate and you’re punctual, all of that, you’re still gonna be really competitive, I think, in the 21st-century economy.

Chelsea Follett: This has been a great conversation. Thank you so much for speaking with me, Jay.

Jay Richards: Great to be with you. Thanks for having me.