Jason Feifer: This is Pessimists Archive, a show about how change happens. I’m Jason Feifer. Like many people in the past few years, Justin E. H. Smith was getting concerned about social media. He deleted his Facebook account, and although he did keep Twitter, he used it with deep reservation. He felt like something was just wrong about the way these platforms were being used, or rather, how the platforms were using us. Rather than being places of free-flowing expression, he saw them as places stuffed full of addictive triggers and manipulations, which corrupted the very way we communicate with each other.

And Justin wasn’t alone, of course, in thinking this. Politicians and pundits, and even some scientists, were saying much the same thing. It became part of a broader movement known as the Techlash, which cast a deep suspicion over big tech companies and the tools they built. But Justin came at this from a different angle. He is not a politician or a pundit; he’s a professor of the history and philosophy of science at the University of Paris, and he has spent a long time studying the centuries-long history of people trying to devise instantaneous, long-distance communication. It was the dream of so many people who yearned to close great physical distances, and the tools we have today are the end result of that work. But Justin felt like something had recently gone off course.

Justin E. H. Smith: I had been one of the most ardent Techlashers, most concerned about all of these things we’re very familiar with now: the way the impulse to free expression and cultivation of the self is deformed and mutated by the hidden algorithms of social media.

Jason Feifer: He felt like he had to do something to help people see the problem.

Justin E. H. Smith: This then led me to a current book project, which is now under contract with Princeton University Press, called Against the Algorithm. And the full title is something like Against the Algorithm: Human Freedom in a Data-Driven World.

Jason Feifer: For a while now, Justin had been hard at work writing this book and building his case up against the internet. And then a funny thing happened, COVID-19 took over the world, blowing up many things in its path. And one of them was Justin’s own feelings about the internet.

Justin E. H. Smith: The month of March 2020 happens, and we’re locked down at home. And immediately, almost instinctively, I, like everyone else in the world, turned to the internet for what I can only describe as some kind of genuine continuation of the human interactions that I was now physically cut off from. And it quickly became very hard for me to remember what it was that had me so angry about internet-facilitated communication and expression.

Jason Feifer: Justin was writing a book about how bad the internet was, and then he discovered how good the internet was. And he was not the only one going through this transformation. Many surveys conducted during this pandemic have found the same. For example, the global insights firm National Research Group surveyed more than 1,000 people in early April and found that 88% of Americans said they have a better appreciation for technology now. In mid-July, another survey of 2,000 people, this one conducted by OnePoll on behalf of SunGard Availability Services, found that most people had significantly increased their use of digital services, and that seven in 10 respondents plan to continue using these new services after this is all over.

Meanwhile, prominent tech critics started to change their tone. For example, the New York Times reporter Nellie Bowles, who normally chronicles the dark side of Silicon Valley culture, wrote a piece in March called Coronavirus Ended the Screen Time Debate: Screens Won. In it, she admitted that she had, quote, “thrown off the shackles of screen time guilt,” end quote, and discovered a new world of connectivity and interests as a result. Across Twitter for months, people who work in tech and tech policy had been watching this and tweeting the same four words in response: “The Techlash is over.” And is it true? Is the Techlash over? I mean, I guess that depends on the definition of the word over. When you look back at the history of people fearing innovation, you see that fear happens in cycles. We worried that novels would destroy children’s brains; then novels were okay, but radio was destroying children’s brains. Then radio was okay, but video games were destroying children’s brains, though I guess we haven’t entirely gotten past that one.

Laura Ingraham: Because some of these games are really wild. You know them, Call to Duty, Call of Duty? I don’t do any of these, but-

Jason Feifer: Good expert analysis there by Laura Ingraham. But anyway, history shows that fear does have an endpoint, and video games will one day go the way of the novel or the radio. At some point it stops: the thing that was once so controversial becomes so uncontroversial that we have no idea it was ever controversial to begin with. And so, it’s worth wondering: is this, right now, what the end looks like?

Matt Ridley: I wish that were the case, but I’m somewhat pessimistic about that.

Jason Feifer: And to be clear, you just heard from a guy who wrote a best-selling book called The Rational Optimist. He’s supposed to be optimistic. That’s Matt Ridley, whose most recent book is called How Innovation Works. He’s also a member of the UK House of Lords.

Matt Ridley: If you look at the history of pessimism, it’s just come around again and again and again: people resist change, and they find all sorts of excuses as to why it’s rational to do so. I think we’ll be lucky if we come out of this with everybody saying, “I’m terribly sorry, I was wrong. I’m never going to cut costs on vaccines, or whatever, in the future.”

Jason Feifer: So, maybe we’re not done with fear, but what if we’re done with one cycle of fear? What if, I don’t know, we’re at least done worrying about screen time and ready to move on to worrying about something else? And if that’s the case, well, what are we going to fear next? That is what we’re going to devote this episode of Pessimists Archive to answering, and here’s how we’re going to do it: first, we’re going to dig into exactly why we keep repeating ourselves, like why do we keep applying the same old fears to brand-new technologies without ever seeming to learn from what came before? The answer is not what you’d think. And after that, we’re going to turn to two people who have their eyes on the future in very different ways. We have Kevin Roose, technology columnist for the New York Times and author of a book called Futureproof.

Kevin Roose: I do think it serves a purpose, to question emerging technologies and what could go wrong with them.

Jason Feifer: And we’ve got Peter Diamandis, founder and executive chairman of the XPRIZE Foundation, and executive founder of Singularity University, and more.

Peter Diamandis: Change means instability and causes fear. It’s human nature.

Jason Feifer: And they are going to tell us what nobody’s freaking out about now but, give it a few years, will get very freaky. Someone go tell Laura Ingraham that we have got her next batch of things to barely understand, ready to go. And it’s all coming up after the break.

Okay, we’re back. So like I said a minute ago, our two big questions of this episode are: what causes a cycle to repeat, and what is coming next? To answer that first question, I want to tell you the story of a woman named Amy Orben. Much like the author Justin E. H. Smith, whom you met earlier, Amy also had an unexpected experience that changed the course of her research.

Amy Orben: I was doing my PhD at the University of Oxford. I had come into the PhD wanting to look at how social connection changes through digital means. Does a like equate to a minute talking to someone, or an hour or not at all?

Jason Feifer: It turns out that those questions are kind of unanswerable. There just isn’t the data. This was back in 2017, and at the time the world was having a big panic about social media. And so Amy thought, maybe that’s what she should study. She could look into these big, important subjects that were grabbing headlines around the world, like social media use among teenagers and how it was impacting their mental health, and come away with insights that could help improve lives.

Amy Orben: It felt so urgent, and it felt like every minute mattered.

Jason Feifer: So, Amy spent the next few years researching this for her thesis. And at some point she thought, you know, it would be nice to kick this paper off with a historical anecdote, something that contextualizes the dangers of social media. So, she went to the library and came across a 1941 article in the Journal of Pediatrics, about what radio does to children. The author was a doctor named Mary Preston, and she wrote this.

Speaker 7: The average rich child radio addict starts lapping up his fascinating crime at about four o’clock in the afternoon, and continues for much of the time until sent to bed. The spoiled children listen until around 10 o’clock, the less indulged until around nine o’clock.

Jason Feifer: The article goes on and on like that, detailing the thoughts and behaviors and habits, of children who according to the report, had become addicted to radio. Medically addicted. The report described how radio was harming children’s mental health, and how they began to value it over everything else. And Amy is in the library reading this, and it gives her a kind of existential crisis.

Amy Orben: It felt like it was exactly the same conversation I had been having for three years, just 80 years before. It was more a sense of self-reflection: what am I doing? What do I want to do for the next 30, 40 years of my life?

Jason Feifer: Amy completed her thesis and became a college research fellow at the University of Cambridge, where she had planned to continue her research. But now she was thinking, do I really want to devote my life to repeating the same argument used against radio, just now against social media? So, Amy started looking back at her research and all the other studies she’d found on social media, but now with this new lens. She reanalyzed the data from past studies, and mind you, these are studies that had gotten a lot of attention over the years. Studies that had been used as the foundation for so many books, and articles, and political hand-wringing about how social media is seriously impacting our children. And she found, well, first of all, that the reporting methods are ridiculous. The data is mostly self-reported.

Amy Orben: So, we go and ask a teenager, “How much do you use social media on a normal weekday?” And I wouldn’t be able to answer that question.

Jason Feifer: And also, when you really look at the research that had been done with this data…

Amy Orben: The research was flawed. It doesn’t really tell us a lot about whether there’s a causal impact of social media on depression. We’re mainly talking about correlations, and they were very, very small.

Jason Feifer: Amy and a coauthor put their findings together into a paper that was published in the journal Nature Human Behaviour in January of 2019. And well, let’s quote Scientific American’s coverage of it, which says that Amy’s study…

Speaker 7: …used data on more than 350,000 adolescents to show persuasively that, at a population level, technology use has a nearly negligible effect on adolescent psychological well-being. Technology use tilts the needle less than half a percent away from feeling emotionally sound. For context, eating potatoes is associated with nearly the same degree of effect, and wearing glasses has a more negative impact on adolescent mental health.

Jason Feifer: The internet is just as bad for a young mind as a potato. Though that should come as no surprise to potato historians, because when the potato spread through Europe around the 1500s, well?

Speaker 9: It was sometimes called the Devil’s Apple, and some said it was used by witches to make flying ointment.

Jason Feifer: That was a historian on YouTube called The History Guy. And people said that in part because the potato isn’t mentioned in the Bible. You say potato, I say potato to hell.

Speaker 10: It’s Mr. Potato Head.

Jason Feifer: So, okay. How is this possible? How could older studies find an alarming connection between social media use and young people’s mental health, and then how could Amy and her colleague look at the same data and find almost no problem at all? The answer is, in part, a question of how scientists analyze and understand data. Most of these old studies were based on analyses of large, publicly available datasets, an approach that, according to Scientific American, is very susceptible to researcher bias. To prove that point, Amy and her coauthor on the paper found 600 million possible ways to analyze that data. Then they used what’s called specification curve analysis, which, to oversimplify, means looking at the data in many, many ways, and then evaluating the conclusions across all of those analyses.

As Scientific American explained, this method is quote, “The statistical equivalent of seeing the forest for the trees,” end quote. And doing this helps take into account all the other possible things that could impact a young person’s mental health.
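To make that concrete, here is a toy sketch in Python of what a specification curve analysis looks like in spirit. Everything in it is invented for illustration; the handful of variables and specifications below stand in for the hundreds of millions of specifications that the real analysis ran over large survey datasets.

```python
# Toy specification curve analysis (illustrative only).
# The idea: estimate the same effect under every reasonable
# combination of analytic choices, then look at the whole
# distribution of estimates instead of cherry-picking one.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

# Invented, standardized survey-style variables.
tech_use  = rng.normal(size=n)
parenting = rng.normal(size=n)   # a possible "third factor"
sleep     = rng.normal(size=n)   # another possible control
wellbeing_a = 0.02 * tech_use + 0.5 * parenting + rng.normal(size=n)
wellbeing_b = 0.01 * tech_use + 0.4 * sleep + rng.normal(size=n)

outcomes = {"wellbeing_scale_a": wellbeing_a, "wellbeing_scale_b": wellbeing_b}
controls = {"parenting": parenting, "sleep": sleep}

effects = []
for _name, y in outcomes.items():
    # Every subset of the control variables is one "specification."
    for r in range(len(controls) + 1):
        for subset in itertools.combinations(controls, r):
            X = np.column_stack([np.ones(n), tech_use] + [controls[c] for c in subset])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            effects.append(beta[1])  # the coefficient on tech use

effects = np.array(effects)
print(f"{len(effects)} specifications run")
print(f"median effect of tech use: {np.median(effects):+.3f}")
print(f"range of estimates: {effects.min():+.3f} to {effects.max():+.3f}")
```

The last three lines are the whole idea: you report the forest, the full spread of estimates, rather than whichever single tree happens to look most alarming.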

Amy Orben: The connection between things like well-being and technologies is inherently complicated. We often think of it as a one-way street, technology affects us, but actually the way we feel also impacts the way we use technologies. And other third factors, especially for children and teenagers, like their background, their parents, their motivation, all impact this very complicated network.

Jason Feifer: Which is actually quite different from how we tend to think. Instead of looking at the world as a complex series of slow-moving social and economic factors, we tend to see it as being impacted the most by whatever is newest and loudest. There’s a great term for this kind of thinking, in fact.

Speaker 11: Technological determinism.

Jason Feifer: Technological determinism is the belief that our technologies are the primary thing that shape our society. And when the technologies change, we change as a result.

Amy Orben: We feel like technologies affect us, but we cannot affect them.

Jason Feifer: So, now consider the ramifications of this point of view. If we are predisposed to believe that social media is the primary driver of influence on children, then scientists might start to see that pattern in the data. And if they do, and they spot a correlation between depression and Facebook use, they might see it as causation: that Facebook is causing the depression. And if they do that and publish their work, then that research will create a lot of news.

Speaker 12: Tonight, as if being a teenager wasn’t hard enough, doctors are now warning teens, their Facebook obsession could lead to depression.

Jason Feifer: And on segments like this one from ABC Action News, a snappy term for the problem will be used, like…

Speaker 12: Our John Thomas joins us now with what doctors are calling Facebook Depression.

Jason Feifer: And parents and educators might become alarmed about Facebook Depression, and believe that the best way to help a depressed teenager is to stop them from using Facebook, which would be a reasonable reaction to this news, right? Take away the thing that’s causing the depression. But what if it’s wrong? What if scientists saw a correlation, not a causation? And what if that means that Facebook doesn’t make teenagers depressed, but instead depressed teenagers tend to be more frequent users of Facebook, maybe because they found a community there? Now you are going to take Facebook away from them based on a misunderstanding of the situation, which means you’ve severed them from a community that may be valuable to them, so your solution creates more problems than it solves.
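To see how easy that mistake is to make, here is a small, purely hypothetical simulation in Python. In this invented world, Facebook use has zero causal effect on depression; depressed teenagers simply use the platform more. The naive correlation still comes out positive, which is exactly the pattern that could be misread as causation.

```python
# Hypothetical simulation: reverse causation producing a correlation.
# By construction, Facebook use has NO causal effect on depression;
# depressed teens just use Facebook more (say, to find community).
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Depression is driven by everything else in a teenager's life.
depression = rng.normal(size=n)

# Facebook use is partly driven BY depression, plus noise.
facebook_hours = 1.0 + 0.3 * depression + rng.normal(scale=0.5, size=n)

r = np.corrcoef(facebook_hours, depression)[0, 1]
print(f"correlation(Facebook use, depression) = {r:.2f}")  # clearly positive

# The naive intervention: take Facebook away from the heaviest users.
# Their depression is untouched (Facebook never caused it), but the
# teens we "treated" were disproportionately the depressed ones.
heavy = facebook_hours > np.quantile(facebook_hours, 0.9)
print(f"mean depression among heavy users: {depression[heavy].mean():+.2f}")
print(f"mean depression overall:           {depression.mean():+.2f}")
```

Run it and the correlation is real, around 0.5, even though the causal arrow points entirely the other way.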

And this is why the fear of the new can be so counterproductive. So, once Amy sees all this, she starts thinking bigger. Instead of just looking at the issue of social media, she starts to think about the bigger phenomenon: why did social media researchers like herself not learn anything from the last time this happened, with radio or television or whatever? Why are we in a constant loop of fear, repeating a previous generation’s fears while never seeming to carry over that older generation’s eventual learnings? Because, you know, radio did not, in fact, turn children into helpless addicts. Where did that lesson go? So, Amy digs in, and she comes up with a theory.

Amy Orben: I called my cycle the Sisyphean Cycle of Technology Panics.

Jason Feifer: It’s named for Sisyphus, the character in Greek mythology who was forced to roll a giant boulder up a hill, and then just as it neared the top, the boulder would roll back down, and he’d have to do the same thing again and again for eternity. So, if the boulder is our fear of the new, then the question is, how does the boulder go up and down the hill?

Amy Orben: What we have is different stages of the cycle.

Jason Feifer: The first few stages may feel familiar. A new technology is introduced, and its adoption starts to create widespread change in behavior, particularly in populations that are seen as vulnerable, like children. So, let’s say that kids become obsessed with playing this new game called pinball, and now they’re no longer playing ball in the street like they used to. Then this becomes linked to some larger concern in society, like say, how there was a panic in the 1950s about widespread juvenile delinquency, and pinball then became identified as one of the causes.

Amy Orben: And all of a sudden we have people really concerned, the media is concerned. And then it goes into a political realm, really, where the electorate is now putting pressure on politicians to do something.

Jason Feifer: And politicians love this, because complex problems just became simple. You don’t need to address structural inequality, when you can just point to a pinball manufacturer in the 1950s, or a Silicon Valley company in 2020, and say, “That. That is the problem, and I will stand up to it.”

Amy Orben: Because you’re not blaming your voters, you’re not blaming your own policy, and it’s a very easy story.

Jason Feifer: And of course, you’ll be rewarded with even more media attention, which is how you get Mayor LaGuardia of New York City smashing pinball machines with a sledgehammer, and throwing them into the river. But it doesn’t just end there. Now, here’s the part that really made me sit up, because Amy has a personal front row seat for what happens next after the politicians smell opportunity.

Amy Orben: People want scientific evidence, funding starts flowing and interest starts flowing, and so all of a sudden scientists are like, “Oh, I’ve got to study social media,” or, “I’ve got to study video games.”

Jason Feifer: Or as Matt Damon would say on Mars…

Matt Damon: I’m going to have to science the shit out of this.

Jason Feifer: Because in the past few decades, Amy says, our culture has increasingly turned to science for answers to everything.

Amy Orben: People are seeking science as a sort of advice-giver, where previously that might have been religion, or closer family units, or stronger communities. How to raise your child wasn’t a question for science for a really long time; it was more a question of family traditions, or religious traditions, or community traditions.

Jason Feifer: And it’s not like science doesn’t have a lot to offer here, science does. But there are two big problems: the first is that according to Amy, scientists aren’t really trained to take long-term social learnings into account. They’re trained to treat each phenomenon as new, just as she was. And that means the lessons of past technology scares don’t get factored into new research. And here’s problem number two: for as much as society at large now turns to science for answers, society at large also doesn’t really understand science, and that creates a lot of confusion. So, let’s break this down. If high profile people like politicians have a question, they want answers from scientists. And scientists are eager to formulate an answer.

Amy Orben: It’s interesting, and it kind of helps our careers, if we research something people actually care about.

Jason Feifer: And politicians, they want answers now. But science isn’t actually very good with now.

Amy Orben: We have, all of a sudden, scientists trying to figure out an incredibly fast-moving target. Technologies develop at ever-increasing rates, and research is really slow. We are years behind actual technological development, I always say between five and seven years. What we’re figuring out now, we should have figured out five years ago.

Jason Feifer: Why? Because good science takes time and drives toward consensus. Good science means looking at a dataset, drawing a conclusion, publishing it, and then other scientists looking at that and saying, “I don’t know,” and then doing their own research. Like when Amy looked back and discovered the many errors in previous social media research, that is how the process is supposed to work. But who has the time for that? Nobody.

Amy Orben: We actually tweet our research out to people on a day-to-day basis; we talk to journalists on a day-to-day basis. And so, people are getting the research very quickly, and there’s not enough time to actually weed out what is good evidence and what needs to be revised.

Jason Feifer: And once scientists are delivering answers, we don’t just ask for their findings. We ask them how to apply their findings to our lives. Amy discovered this when she researched social media’s impact on children.

Amy Orben: I was amazed, over multiple years, that I would have a lot of authors who were writing books about parenting call me up to ask me for parenting advice. And I felt I was the least qualified to give parenting advice around social media. Naturally, I’m doing some of the forefront work on how social media relates to well-being, but the way you need to do this research is to average across so many different factors. We average all kids of a certain age range, over thousands of different data points.

Jason Feifer: Amy can tell you about 350,000 children at once, but she can’t tell you what to do with your kid. So, when you call someone like her for parenting advice, you’ve basically taken this…

Matt Damon: I’m going to have to science the shit out of this…

Jason Feifer: And you’ve turned it into…

Ed Helms: So, are you sure you’re qualified to be taking care of that baby?

Zach Galifianakis: What are you talking about? I’ve found a baby before.

Ed Helms: You found a baby before?

Zach Galifianakis: Yeah.

Jason Feifer: So, this brings us to the final phase of the Sisyphean Cycle of Tech Panic, which is when the early drafts of science get picked up by the media and by politicians, there is no consensus on anything, and it becomes a toxic free-for-all. And what does that look like? Well, let’s say you’re an ambitious senator named Estes Kefauver in the 1950s. Everyone’s worried about juvenile delinquency and you’re positioning yourself for a run for president, so you hype a Senate investigation into the harmful effects of comic books. And you call as your star witness a scientist: a psychologist named Fredric Wertham, who has done a seemingly scientific review of the problem and can tell the entire nation about it.

Fredric Wertham: It is my opinion, without any reasonable doubt and without any reservation, that comic books are an important contributing factor in many cases of juvenile delinquency.

Jason Feifer: Never mind that Wertham’s work would eventually be revealed to be full of holes and total fabrications, or that future studies would find massive benefits to reading comic books, from improved literacy to fostering children’s healthy imaginations. That process was too slow. So, what we got instead was terrifying first-draft science presented as the final word, which birthed a censorship authority that squelched the comic book industry for decades. We see this today too, of course, with people like Senator Josh Hawley, who is talking about this imperfect science when he uses the word evidence in a sentence like this:

Josh Hawley: My thesis is, I think the evidence is more and more strongly suggesting that there is something that is deeply troubling, maybe even deeply wrong with the entire social media economy.

Jason Feifer: And what is the final result? According to Amy, it’s mostly a lot of fuss. Science is too slow, policymakers make a lot of noise, very little is achieved, and then something else comes along. Some newer technology, or I don’t know, a pandemic, and this entire system from culture, to media, to politics, to science, just lurches over to the next thing, and that is Sisyphus’s boulder rolling back down the mountain, having drawn no final conclusion and having learned very little.

Amy Orben: What happens, is that actually quite quickly, the conversation dies down. There’s no more funding, there’s no more interest, scientists don’t get their media interviews. Policy-makers don’t have the pressure to do something anymore.

Jason Feifer: Which made me wonder, is there a way out of this? Amy said there might be, but it means science has to play a different role in the cycle. It has to be proactive. If researchers need five years to truly begin to understand something, then that five-year process should not start while everyone’s hyped up, it should start before anyone cares.

Amy Orben: If we know that a new panic is coming in maybe five or 10 years, what we should be doing now is putting our feelers out and trying to figure out what that might be, and start collecting data the moment we think there’s an inkling of a panic starting.

Jason Feifer: So, where is there an inkling right now? What isn’t on the average person’s radar, but that in five to 10 years could be the next boulder we push up the hill? That is why I called Kevin Roose and Peter Diamandis, who I told you about at the beginning of the show. Their answers are coming up, after this break.

Jason Feifer: All right, we’re back. So before the break, we heard tech researcher Amy Orben say that if we want to head off the next cycle of tech panic, we need to start researching, right now, the things that people will be afraid of in five to 10 years. And what will those things be? I wanted to put the question to two people who look at the future from very different perspectives. So, let’s meet our guides.

Peter Diamandis: I normally wouldn’t enter this kind of a conversation, because I’m the person who’s really focused not on what we fear, but on how we solve anything that we fear.

Jason Feifer: This is Peter Diamandis, founder and executive chairman of the XPRIZE Foundation, executive founder of Singularity University, and author of a bunch of best-selling books, including, most recently, one called The Future Is Faster Than You Think. The way he sees it, history is a steady march of progress.

Peter Diamandis: There is no problem we cannot solve. We now have access to more capital, more knowledge, more computational power, more exponential technologies. And if you look at historical data across most all of human time, the world has gotten better at an extraordinary rate.

Jason Feifer: So, that is Peter the optimist. And on the other side, we have New York Times technology columnist Kevin Roose.

How personally concerned are you about these kinds of things?

Kevin Roose: Very. That’s my job: to be the person thinking and talking about how this could go wrong.

Jason Feifer: Kevin has a book coming out in January of 2021 called Futureproof: Nine Rules for Humans in the Age of Automation. And when he agreed to talk to me about what people would be panicking about in the future, he half jokingly said that he would be playing the role of the panicker.

Kevin Roose: I do think technology always brings good things and bad things into our lives, but I think the position of, as you would say, the pessimist, or I would say the critic, is an important one. Because it can help us anticipate problems with future technologies and avoid them. So, the fact that we spent the past several years talking about what was wrong with platforms like Facebook, and Twitter and YouTube, means that the next generation of social networks will start thinking about things like content moderation and hate speech much earlier than they would have if none of this had been part of the discourse at all.

Jason Feifer: So, okay. I asked Peter and Kevin the same question, which was: what is the average person not aware of today that in five to 10 years will seem very alarming? They offered different answers, but interestingly, their first answers were the same: artificial intelligence. Here’s Kevin.

Kevin Roose: I think we’re getting close to the point where artificial intelligence is capable of very realistic simulated conversation. So, this morning I was playing around with this new AI that an organization called OpenAI just built, called GPT-3. And it’s sort of the next version of their GPT machine learning model, which basically takes text input and completes the text based on the initial prompt. So, I was doing, quote unquote, interviews this morning, where it asks, who do you want to talk to? And I would say Mark Zuckerberg, and then I would start asking questions, and GPT-3 would spit back answers. And they were not only coherent, full sentences; I think they would pass the Turing Test.
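For a sense of what Kevin is describing, here is a minimal sketch of that kind of prompt-completion loop, written against the OpenAI Python client as it existed during the 2020 GPT-3 beta. The API key is a placeholder, and the interview-style prompt is an invented example, not Kevin’s actual setup.

```python
# Minimal GPT-3 "interview" sketch (2020-era OpenAI beta API).
# The prompt below is an invented example of the kind of exchange
# Kevin describes; you supply a prefix, the model completes it.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

prompt = (
    "The following is an interview with Mark Zuckerberg.\n"
    "Q: What do you think social media does to teenagers?\n"
    "A:"
)

response = openai.Completion.create(
    engine="davinci",      # the original GPT-3 base model
    prompt=prompt,
    max_tokens=60,
    temperature=0.7,
    stop=["\nQ:"],         # stop before the model writes the next question
)

print(response["choices"][0]["text"].strip())
```

Nothing in the model “knows” it is Zuckerberg; it is simply continuing the text in the most statistically plausible way, which is exactly why the answers can read as eerily coherent.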

Jason Feifer: Let’s be honest. There’s not that much difference between how a robot talks and how Mark Zuckerberg talks. He said it himself on NBC News.

Mark Zuckerberg: I just come across as robotic.

Jason Feifer: But still, point taken. GPT-3 is powerful and impressive. Kevin guesses that most people will interact with it first in customer service, when they open a chat window with Verizon or whatever, and may not realize they’re talking to a bot. But that’s the innocent stuff.

Kevin Roose: You can just imagine the many ways that that could be used in an election, or in a period of civil unrest.

Jason Feifer: We think that we have a problem with misinformation now, but you ain’t seen nothing yet, Kevin says. And now here is Peter on AI.

Peter Diamandis: Artificial intelligence is just entering the knee of the curve, where AI is going to become involved in everything we do, whether you’re a lawyer, whether you’re a doctor, any profession. There’s going to be a point in the next five years where it is malpractice not to make a diagnosis in partnership with AI, because no human doctor can really keep track of all of the journals, all of the data that’s impacting a patient. So, we’re going to see this change, and it’s going to impact every single company. I seriously and jokingly say that there are two kinds of companies at the end of this decade: those that are fully using AI, and those that are bankrupt.

Jason Feifer: And it’s interesting to hear Peter say this, because you know, he’s an optimist, but he’s not blind to the disruption that this will cause. He compares the introduction of exponential technologies to an asteroid strike.

Peter Diamandis: 65 million years ago, an asteroid hits the earth, and the environment changes so fast that the lumbering, slow dinosaurs die off. They can’t evolve, and the furry little mammals eventually evolve into you and me. Exponential tech is that same asteroid strike: the slow-moving companies are unable to change, and they go bankrupt. And then we have these furry mammals of Apple, and SpaceX, and Tesla and Amazon.

Jason Feifer: And it’ll happen again. This is real change, this is people losing their jobs. And yeah, AI may go on to create many new jobs and industries that don’t even exist yet, but that may not happen immediately. When I asked Kevin and Peter about what else people will be concerned about in five to 10 years, their answers were different, but they followed similar themes. They both talked about ways in which computers get to know us: how we look, how we think, or more. We’ll start with Kevin, who brought up facial recognition technologies. A colleague of his at the Times recently wrote about Clearview AI, which has been selling law enforcement agencies a tool that was described as Shazam for faces.

Kevin Roose: It’s like you can just point it at someone and it will tell you who that person is. And that’s a really terrifying piece of technology, in my opinion. And there was a big debate for a few days within the AI community about whether things like that should be allowed. To some people, that might feel like panic; to me, it feels like a healthy airing out of potential problems, and getting ahead of some of the issues with facial recognition. We’re now seeing entire cities banning the use of facial recognition by law enforcement, so I think that will continue to accelerate.

Jason Feifer: Kevin also talked about automation, which we think of now in a straightforward way, like someone did a job at a factory, and now a robot does that job. But what if automation enters our personal lives? For example, recommendation engines.

Kevin Roose: Where do I want to go on vacation? What brand of dog food do I want to buy on Amazon? What Netflix show do I want to watch? We’re giving over choices to these machines, and trusting that they will make the right choices for us that will produce the best outcomes, but that’s a kind of automation.

Jason Feifer: And if you extrapolate that out the way Kevin does, then we begin replacing more and more of our own thoughts with those of a thing that thinks for us. That could be concerning. Now, facial recognition and automation are things we encounter in some form already, but Peter offered something more futuristic. Interestingly, though, it still speaks to the same concerns about privacy and independence that Kevin had raised. So, here’s Peter.

Peter Diamandis: An area that seems absolutely science fiction, but we can see the roots for it solidly today, which is the whole concept of brain computer interface. How do we connect the hundred trillion neurons in your brain to the cloud? How do we connect you so that as you think and query, your mind is Googling the answers out there?

Jason Feifer: I asked Peter how he thinks people will adapt to this. And his answer was fascinating, because many people will hear it and think it sounds crazy and dystopian. But it reminds me that the things we think of as normal today were also considered completely beyond the bounds of what people of past generations would accept. So, okay. Ready? Here’s Peter’s answer.

Peter Diamandis: One of the things I ask people is, do you really, truly, still believe you have privacy? Honestly, do you? Because we’re heading towards a world where we’ll have, this year, a trillion sensors and 20 billion connected devices, my Oura ring, my Apple Watch, my iPhone, all of these things, and by 2030, a hundred trillion sensors. Imaging, sensing, listening to everything. We’re heading towards a world where anything you want to know, you can know. You can ask a question and the data is out there for your AI to gather an answer.

One of the benefits of sensors everywhere is that when something is done wrong, people see it. So there’s a higher degree of supervision of moral or ethical action, because people know that, hey, if I do something wrong, someone’s going to see it. When a dictator or a despot is oppressing his or her people, and there’s a CNN camera watching, they behave differently. And then ultimately, the ability to connect the brain to eight billion other brains on this planet: does that cause a new level of empathy, where I care because I feel what other people feel?

Jason Feifer: Now, maybe that sounds comforting, maybe that sounds horrifying. What do I think? I think that what I think in the year 2020 is not relevant to the year 2030. Because even if we can predict the technology, we can’t predict the context in which it’ll be experienced, or the needs it’ll fulfill, or the expectations it’ll meet or shift. Go back to the mid-1800s, when people were complaining about how the telegraph was increasing the speed of their lives, and tell them about cell phones, and they would think it was a form of torture. They couldn’t have imagined that we’d want these things, or even be able to survive these things. We tend to imagine a new technology as an object that collides with us, creating change like a wrecking ball. That, I think, is what so often propels fear.

But that’s not what actually happens with innovation. We are more fluid than that. We change and we create change, but sometimes the greatest realization we have is how little we changed. We realize that the old and familiar is still with us, that children still play and explore, that we still form communities and connect; it just looks different. And if you want to hear that mindset shift in action, well, let me take you back to Justin E. H. Smith, the philosophy professor we met at the beginning of the show. To remind you, he was writing a book about the dangers of the internet, and then the lockdowns changed his relationship with the internet, and then he felt he could no longer be so passionate about his argument.

Justin E. H. Smith: Fortunately, my editor at Princeton is a very understanding guy. He’d said, “Don’t worry, we can rethink this book project.”

Jason Feifer: Justin’s original premise was that the internet challenges our sense of self. It’s a place that allows for only a certain kind of self, and that creates a real and lasting shift in us. We stop being ourselves, and instead become the thing that the algorithms reward.

Justin E. H. Smith: The impulse to free expression and cultivation of the self is deformed and mutated by the hidden algorithms of social media, which then give us this distorted and reduced version of ourself, one that’s expressing itself only in the way that the channeling algorithms pressure it to express itself.

Jason Feifer: I’ve heard versions of this argument before, and I am sure you have too, but as I listened to Justin make the case, and then talk about the fuller experience that he was now having with the internet once COVID began, I got to thinking about the disconnect. What was he missing before? And then something occurred to me, so I said, “I want to run a theory by you,” and then I explained my own evolution on social media. About four years ago, I became the editor in chief of Entrepreneur Magazine, which put me in a role of prominence in the world of entrepreneurship. And suddenly, I had an audience of people who expected something of me personally. I had never had that before. And frankly, at first I didn’t know what to do with it.

So, I looked at some famous entrepreneurial personalities, people like Tim Ferriss and Gary Vaynerchuk, and I studied what they were putting on social media, and I came to a realization: they are not putting their whole selves out there. Not even close. They are playing characters, characters whose names are Tim Ferriss and Gary Vaynerchuk, and these characters are simple and on-message, and they’ve tuned into what their audience wants. The audience wants inspiration, or insights, or motivation or whatever it is, so the characters of Tim and Gary deliver on that on repeat, and only that.

So, I started doing this too. I experimented a lot with what people wanted from me, and then I created a character named Jason Feifer, and it’s probably five percent of who I really am, but I made it a hundred percent of what people see. I made it repeatable, and predictable and interactive. I gave people what they want, and they started coming back for more. And a funny thing happened as a result. Here’s me telling it to Justin.

I’m very responsive on social media, and when people reach out to me, sometimes I will actually meet them or talk to them. And they always tell me that they feel like they know me so well, which is so interesting, because they don’t. But what they do know is the entirety of the character that I have presented to them, a simplified version. And so, it is my hypothesis that everybody, even if they’re not consciously doing it, and it’s not a part of their job in the way that it’s a part of my job, has internalized some version of that. And that’s not bad, because that’s just the nature of the tool. Social media is not a large enough venue for your whole self; it is a narrow venue for a part of yourself. And that doesn’t mean that you lose the rest of yourself, it’s just that it isn’t reflected in this particular space.

But now it’s different. And it won’t be different forever, probably, but it’s different now, because now our digital connectivity tools actually have to be where our whole person is, because we have nowhere else to put our whole person. So, Zoom is becoming a place that people don’t just chat in, but squeeze their entire lives into. When I’m talking to my colleagues, we used to be embarrassed if something went wrong or if a kid burst into the room or something, and now nobody cares. The pants are down, and everybody’s just showing whatever. And that, I think, just shows you that social media and these digital connectivity tools don’t limit us and don’t shape us; it’s just that we put what we need into them, and what we need changes depending on our circumstances. What do you think of that?

Justin E. H. Smith: Wow. I can honestly say that’s the most compelling account of the situation, both before and after March 2020, that I’ve yet heard.

Jason Feifer: Oh, thanks.

Justin E. H. Smith: I always had a kind of sense that whatever self I’m going to present to the world by electronic means is going to be as good a reflection of the whole person as I can possibly produce. And that might have something to do with a particularly stubborn character refusing to acknowledge the limitations of the medium, but also with a concern that however much people like you are using it lucidly and maturely, there are also so many people for whom it came to be the only kind of venue for the cultivation of the self. And that might partially explain why my old critiques now seem so irrelevant to me: this limited or restricted domain of self-presentation, which was the internet prior to March 2020, is now just blown wide open to include all of human experience. And it might, for that reason, start to be the place where people are able to cultivate themselves in those most noble ways to which I’ve alluded, and we’re starting to do that now. We’ll see, we’ll see.

Jason Feifer: I could be wrong about all this, of course; I accept that. But let me tell you one more thing: last year, the Washington Post ran a piece with the headline, “Twitter is eroding your intelligence. Now there’s data to prove it.” And the piece was about a study done by economists in Italy, where they had 1,500 students study a particular Italian novel. Half the students studied it with a teacher in a classroom, and the other half studied it by posting thoughts and quotes on Twitter. Then the students were tested, and the results showed that the group who learned about the novel on Twitter did 25 to 40% worse than the average result. This is why the Washington Post headline claims that Twitter erodes our intelligence: because the students learned worse on Twitter.

But that is nonsense. The study tested something that Twitter was never meant to do. Nobody said Twitter is a replacement for classroom education. That’s like saying Yelp harms our taste buds, because we can’t taste food on Yelp. But the thing is, this study is instructive. Not for its stupid results, but because it tells us how people misunderstand a common tool. They see a space for a certain kind of communication, and they expect it to serve all communication perfectly. When it doesn’t, they are alarmed.

So, what does that tell us? What does this all tell us, what does it mean for one cycle of fear to end? I think it means that once the cloud of panic is gone, we begin to understand what a new technology is. And maybe even more importantly, what it is not. We begin to see the benefits, and we begin to forget our fears. Our lives just begin to feel more normal again, and time marches on. Time will bring new innovations, new concerns, new changes.

And our great challenge, which we never exactly meet, but that we should never stop striving for, is to remember where we came from. To go forward, and yet also learn from this moment. This moment right now, this time where we found some comfort in the discomfort, when our fears of the unfamiliar dissolved into the comfort of the familiar, where we realized that we are not nearly as fragile as we think, and that when we lose the fight, we sometimes actually win. Can we go forward remembering that we aren’t starting from scratch every time, remembering that the things we’re most comfortable with were once so uncomfortable? I don’t really know, but I do know that we will have a chance to try, because the future is coming. This will happen all over again.

And that’s our episode. But hey, do you want to hear the funniest part about interviewing authors for this show? I have got a little clip to play for you. But first, Pessimists Archive is made because people like you enjoy it. And the more people there are like you, the more we can make. So please, subscribe, tell a friend, and give us a rating and review on Apple Podcasts and stay in touch. You can follow us on Twitter or Instagram @pessimistsarc, Pessimists A-R-C, where we’re constantly sharing the ill-conceived words of pessimists throughout history. You can also reach us by email at [email protected]. And our website, where we have links to many of the things you heard about in this episode, is pessimists.co.

Pessimists Archive is me and Louis Anslow. Sound editing this episode by Alec Bayliss. Our webmaster is James Steward. Our theme music is by Caspar Babypants. Learn more at babypantsmusic.com. The voice you heard reading some articles this episode, was Gia Mora. You can find more about her at giamora.com. And a big shout out to Pen Name Consulting, who is now helping us grow.

Pessimists Archive is supported in part by the Charles Koch Institute. The Charles Koch Institute believes that advances in technology have transformed society for the better, and is looking to support scholars, policy experts, and other projects and creators who focus on embracing innovation, creating a society that fosters innovation, and encouraging people to engineer the next great idea. If that’s you, then get in touch with them. Proposals for projects in law, economics, history, political science, and philosophy are encouraged. To learn more about their partnership criteria, visit cki.org. That is cki.org.

All right, one final amusing note. Earlier in this episode, you heard Justin say the name of the book that he had been writing. I cleaned up the audio a little bit for the show, but here is the original.

Justin E. H. Smith: Called Against the Algorithm. And the full title is something like Against the Algorithm: Human Freedom in a Data-Driven World.

Jason Feifer: And when he stumbled over the book title, I laughed a little to myself, because book authors always struggle to say their subtitles. Like, always. I notice it in every interview. And then I talked to Kevin Roose.

Can you just introduce yourself so I have it on tape, and then I’ll let you go?

Kevin Roose: Yeah, I’m Kevin Roose. I’m a tech columnist at the New York Times. And the author of Futureproof.

Jason Feifer: You’re not going to say whatever the deck is, what’s the deck on that? No author ever knows the subtitle of their book.

Kevin Roose: I don’t know, do people actually pay attention to this?

Jason Feifer: I don’t know.

Kevin Roose: It’s… Wow. It’s Nine Rules for… Nine Rules for Humans in the Age of Automation. That’s what it is.

Jason Feifer: There you go.

This is like a little joy to me, every time I do an episode. Hot tip to authors: memorize the name of your book. One day, I might just ask you for it. All right, that is it for this time. Thanks for listening to Pessimists Archive. I’m Jason Feifer, and we’ll see you in the near future.