Reuters | Health & Medical Care

Musk’s Neuralink Gets FDA’s Breakthrough Device Tag

“Elon Musk’s brain-chip startup Neuralink said on Tuesday its experimental implant aimed at restoring vision received the U.S. Food and Drug Administration’s ‘breakthrough device’ designation.

The FDA’s breakthrough tag is given to certain medical devices that provide treatment or diagnosis of life-threatening conditions. It is aimed at speeding up the development and review of such devices.

The experimental device, known as Blindsight, ‘will enable even those who have lost both eyes and their optic nerve to see,’ Musk said in a post on X.”

From Reuters.

E&E News | Energy Production

BLM Approves Geothermal Project, Moves to Ease Permitting

“The Bureau of Land Management issued a decision record approving the Cape Geothermal Power Project in southwest Utah, which, if fully built, would have the capacity to generate 2,000 megawatts of electricity, enough to power about 2 million homes.

The Interior Department also said it is proposing a new categorical exclusion that would streamline the process to evaluate and approve ‘geothermal resource confirmation operations’ of up to 20 acres. These could include drilling wells that would be used to confirm the existence of a geothermal resource, the agency said.

The goal is to ‘accelerate the discovery of new geothermal resources throughout the West,’ and particularly in Nevada, which the agency says is ‘home to some of the largest undeveloped geothermal potential in the country.’”

From E&E News.
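As a rough sanity check on those figures (our back-of-the-envelope arithmetic, not the E&E News report’s), dividing the project’s full build-out capacity by the number of homes cited gives

\[
\frac{2{,}000\ \text{MW}}{2{,}000{,}000\ \text{homes}} = 1\ \text{kW per home (average)},
\]

which lines up with the common utility rule of thumb of roughly 1 kW of average demand per US household, in the same ballpark as the EIA’s reported average residential consumption of roughly 10,500 kWh per year.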

Axios | Air Transport

Feds OK Rules for US To Begin Electric Air Taxi Service

“The Federal Aviation Administration on Tuesday issued long-awaited rules that will help pave the way for the commercialization of electric air taxis as soon as next year…

Driving the news: FAA Administrator Mike Whitaker announced the final regulation during a speech at a business aviation convention in Las Vegas.

  • It includes qualifications and training requirements for pilots of these new aircraft, which have characteristics of both airplanes and helicopters.
  • The rule also addresses operational requirements, including minimum safe altitudes and required visibility.
  • The rule is ‘the final piece in the puzzle’ for safely introducing these new aircraft to the U.S. airspace, he said.”

From Axios.

Blog Post | Communications

Digital Technology and the Regulatory State | Podcast Highlights

Chelsea Follett interviews Jennifer Huddleston about the benefits of digital technologies as well as how we should think about the risks and problems they pose.

Read the full transcript or listen to the podcast here.

We hear so much about the risks and downsides of technology. What are some areas where you believe digital technologies have improved our lives?

There are so many areas that we’ve seen transformed by technology over the last decade. Think about when we were faced with the COVID-19 pandemic, and so much of our lives shifted to our homes. Now imagine if that same thing had happened in 2010. How different would that have been? How much more limited would the options have been to stay connected to friends and family, entertain yourself at home, and continue your education or your job?

Because the US has maintained a light-touch regulatory approach to the technology sector, we have empowered entrepreneurs to create products that benefit consumers, sometimes in ways that we never could have imagined. I still remember the days when you had to have atlases in your car, and I remember when MapQuest seemed like such a huge deal. Now, if you’re going somewhere new, you often don’t even look it up in advance.

I’m hearing a lot of calls for more regulation of digital technologies. President Biden is saying we need to clamp down on AI, while Nikki Haley has said we must deanonymize social media. What are some of the dangers of over-regulating these technologies?

I’m going to start by asking you a question. How often do you think you use AI?

When it comes to ChatGPT, every few days. But I’m sure that what you’re hinting at is that AI is incorporated into far more than we’re even aware of.

Exactly. Most of us have been using AI for much longer than we realize. Search engines and navigation apps use AI. If you’ve ever tried to do a return and interacted with a chatbot, some of that is possible because of advances in AI. We’ve also benefited from AI in indirect ways. For example, AI can be used to help predict forest fires and to assist in medical research. Because AI is such a general-purpose technology, a lot of the calls for regulation may lead to fewer of those beneficial applications and could even make it harder to use many of the applications we’re already used to.

Oftentimes, people just don’t think about the consequences of regulation. When we think about an issue like anonymous speech, many people immediately jump to their negative experiences with anonymous trolls online. But we should also think about the costs of deanonymizing speech. Think about dissidents trying to communicate with journalists or people trying to alert each other to social problems in authoritarian regimes. Anonymous speech is incredibly valuable to those people, and we have a long-standing tradition of protecting that kind of speech in the US. When we look at creating backdoors or deanonymizing things, that’s not just going to be used for going after the bad guys. It’s also going to be exploited by a whole range of bad actors.

And this country was arguably founded on a tradition of pseudonymous and anonymous speech; think of the Federalist Papers.

Right.

What do you think is driving this distrust of new technologies?

Disruptive new technologies like social media and artificial intelligence are naturally going to make us uncomfortable. They create new ways of doing things and force societal norms to evolve. This is something that happened in the past, for example, with the camera. We’re now used to having cameras everywhere, but we had to develop norms around when, where, and how we can take pictures. With AI, we’re watching that process happen in real time.

The good news is that we’re adapting to new technologies faster than ever. When you look at the level of adoption of technologies like ChatGPT and the comfort level that younger people have with them, innovations seem to be becoming socially acceptable at a much quicker pace than in the past.

The current technology panics are also not unique to the present. We’ve seen a lot of concern about young people and social media recently, but before that, it was young people and video games, and before that, it was magazines and comic books. We even have articles from back in the day of people complaining that young people were reading too many novels.

There’s also this fear of tech companies having too much market share. Can you walk us through that concern and provide your take on it?

I’m sure you’re talking about Myspace’s natural monopoly on social media. Or maybe you’re talking about how Yahoo won the search wars. These were very real headlines 20 years ago with a different set of technology giants. So, my first point is that innovation is our best competition policy.

My second point is that before we implement competition policy, we need to figure out why big companies are popular. If a company is popular because it’s serving its consumers well, that’s not a problem; that’s something we should be applauding. When we think about an incredibly popular product like Amazon’s Prime program, people choose to engage with it because they find it beneficial.

We should really only want to see antitrust or competition policy used if anti-competitive behavior is harming consumers. We don’t want a competition policy that presumes big is bad. And we certainly don’t want to see competition policy that focuses on competitors rather than consumers. We don’t want a world where the government dictates that the Model T can’t put the horseshoe guys out of business.

People of all stripes want to restrict how private companies moderate content. People on the left are concerned about potential misinformation online, while those on the right worry about political bias in content moderation. What’s your take on this issue?

Online content moderation matters for a lot more than social media. We often think about this in the context of, “Did X take down a certain piece of content or leave up a certain piece of content?” But this is actually much bigger. Think about your favorite review site. If you’re traveling somewhere new and looking for somewhere to stay or go to dinner, you’re probably going to check that site rather than read what some famous travel reporter has said.

Review sites allow you to find reviewers who share your needs. Maybe you’re traveling with young children, or someone in your party has dietary restrictions. This is something that only user-generated content can provide. But what about bad or unfair reviews? What happens when someone starts trying to get bad reviews taken down? We want these sites to be able to set rules that keep reviews honest and the tool useful, so they’re not overrun by spam and aren’t afraid of a lawsuit from someone who disagrees with a review.

This is one example of why we should be concerned about these online content moderation policies. When it comes to questions of misinformation, I think it’s important to take a step back and think, “Would I want the person I most disagree with to have the power to dictate what was said on this topic?” Because if we give the government the power to label misinformation and moderate content, the government will have that power whether or not the people you agree with are in charge. So not only do we have First Amendment concerns here in the US from a legal point of view, but we should also have some pretty big first principles concerns regarding some of these proposals.

That’s a good segue into another concern a lot of people have with new technology, which is its effect on young people. What do you make of those concerns?

Youth online safety can mean so many different things. Some people are concerned about how much time their child spends online. Some people are concerned about issues related to online predators. Others are just concerned about particular types of content that they don’t want their children exposed to. The good news is we’ve seen the market respond to a lot of these concerns, and there are a lot of tools and choices available to parents.

The first choice is simply when you allow your child to use certain technology. That’s going to vary from family to family. But even once you’ve decided to allow your child access to a device, you can set time limits or use systems that alert you to how the child is using the device. Here, platforms, device makers, and civil society have responded with a wide range of tools and resources for parents. To reduce harm to children, we should look to education rather than regulation. We need to empower people to make the choices that work best for them, because this isn’t going to be a one-size-fits-all decision, and policy intervention would impose a one-size-fits-all solution.

Many people are also concerned about privacy. Whenever there is a large gathering of data, that data can be leaked to the government or to bad actors. How should we think about data privacy?

When we talk about privacy, I think it’s important to distinguish between the government and private actors. We need very strong privacy protections against government surveillance, not only for consumers but also for the companies themselves, so that they can protect their consumers and keep the promises they’ve made to consumers regarding data privacy.

When it comes to individual companies, we need to think about the fact that there are a lot of choices when it comes to data privacy, some of which we don’t even think are data privacy choices.

One example: if you go to a website and sign up for a newsletter to get a 10 percent off coupon, you’re technically exchanging a bit of data, such as your email address, for that coupon. You get a direct benefit in that moment. That’s a privacy choice you make. If we think about privacy as a choice, we start to see that we make these choices every day. Even where we choose to have a conversation is a data privacy choice.

The other element when it comes to data privacy is that an individual’s data, while we deeply care about it, is not actually that valuable. What’s valuable is how data can be used in the aggregate to improve services. So, when we hear that we should just treat data like any other piece of property, that doesn’t necessarily work, because data often doesn’t act like other forms of property. Not only is the value of the data not tied to a single data point, but the data is often not tied to a single user either. This makes regulating data privacy very complicated. If you and I are in a picture together, whose data is that? Does it belong to the person who took the picture or to the people in it? Or to the location where it was taken? Can you invoke a right to be forgotten that removes the picture? And if so, what does that do to the speech rights of the person who took it? These are not easy questions, and they’re often better solved on an individual basis than with a one-size-fits-all approach.

The Human Progress Podcast | Ep. 53

Jennifer Huddleston: Digital Technology and the Regulatory State

Jennifer Huddleston, a senior fellow in technology policy at the Cato Institute, joins Chelsea Follett to discuss the benefits of digital technologies as well as how we should think about the risks and problems they pose.