Curiosities | Manufacturing

In South Korea, Robots Are Taking Robots’ Jobs

“It’s a tale probably as old as labor markets: An influx of cheaper, foreign labor displaces some established workers, who seek protection from the government in the form of new restrictions on the immigrants they blame for taking their jobs.

The cycle is repeating itself right now in South Korea, with one new wrinkle: None of the workers are humans.

Executives—human ones—at some South Korean robot manufacturing firms tell the Financial Times that imported robots are starting to steal jobs from good ol’ domestic androids.”

From Reason.

Ars Technica | Accidents, Injuries & Poisonings

Waymos Crash a Lot Less than Human Drivers

“Since 2020, Waymo has reported roughly 60 crashes serious enough to trigger an airbag or cause an injury. But those crashes occurred over more than 50 million miles of driverless operations. If you randomly selected 50 million miles of human driving—that’s roughly 70 lifetimes behind the wheel—you would likely see far more serious crashes than Waymo has experienced to date.”

From Ars Technica.
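
For a rough sense of where that "70 lifetimes" figure comes from, here is a minimal back-of-the-envelope check in Python; the per-driver mileage assumptions below are illustrative guesses, not numbers from the article.

```python
# Back-of-the-envelope check of the Waymo comparison.
# The mileage assumptions are illustrative, not taken from the article.
MILES_PER_YEAR = 13_500        # assumed average annual mileage per driver
DRIVING_YEARS = 52             # assumed years of driving in a lifetime
WAYMO_MILES = 50_000_000       # driverless miles reported since 2020
SERIOUS_CRASHES = 60           # airbag-deployment or injury crashes reported

lifetime_miles = MILES_PER_YEAR * DRIVING_YEARS           # ~700,000 miles
lifetimes = WAYMO_MILES / lifetime_miles                   # ~71 lifetimes
crash_rate = SERIOUS_CRASHES / (WAYMO_MILES / 1_000_000)   # crashes per million miles

print(f"Human driving lifetimes covered: {lifetimes:.0f}")
print(f"Serious crashes per million miles: {crash_rate:.2f}")
```

With those assumptions, 50 million miles works out to roughly 71 driving lifetimes and about 1.2 serious crashes per million miles.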

Scientific Committee on Antarctic Research | Scientific Research

New Map of Landscape Beneath Antarctica Unveiled

“The most detailed map yet of the landscape beneath Antarctica’s ice sheet has been assembled by a team of international scientists led from the British Antarctic Survey (BAS).

Known as Bedmap3, it incorporates more than six decades of survey data acquired by planes, satellites, ships and even dog-drawn sleds. The results are published this week (12 March) in the journal Scientific Data.

The map gives us a clear view of the white continent as if its 27 million cubic km of ice have been removed, revealing the hidden locations of the tallest mountains and the deepest canyons.

One notable revision to the map is the place understood to have the thickest overlying ice. Earlier surveys put this in the Astrolabe Basin, in Adélie Land. However, data reinterpretation reveals it is in an unnamed canyon at 76.052°S, 118.378°E in Wilkes Land. The ice here is 4,757 m thick, or more than 15 times the height of the Shard, the UK’s tallest skyscraper.”

From Scientific Committee on Antarctic Research.
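
As a quick scale check on the Shard comparison (taking the tower's height as roughly 310 m, an approximate figure not given in the excerpt):

```python
# Ice thickness at the Wilkes Land canyon vs. the Shard (height is approximate).
ICE_THICKNESS_M = 4_757
SHARD_HEIGHT_M = 310

print(f"Ice thickness is about {ICE_THICKNESS_M / SHARD_HEIGHT_M:.1f} Shards")  # ~15.3
```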

TechCrunch | Computing

Google Unveils a Next-Gen Family of AI Reasoning Models

“Google has experimented with AI reasoning models before, previously releasing a ‘thinking’ version of Gemini in December. But Gemini 2.5 represents the company’s most serious attempt yet at besting OpenAI’s ‘o’ series of models.

Google claims that Gemini 2.5 Pro outperforms its previous frontier AI models, and some of the leading competing AI models, on several benchmarks. Specifically, Google says it designed Gemini 2.5 to excel at creating visually compelling web apps and agentic coding applications.

On an evaluation measuring code editing, called Aider Polyglot, Google says Gemini 2.5 Pro scores 68.6%, outperforming top AI models from OpenAI, Anthropic, and Chinese AI lab DeepSeek.

However, on another test measuring software dev abilities, SWE-bench Verified, Gemini 2.5 Pro scores 63.8%, outperforming OpenAI’s o3-mini and DeepSeek’s R1, but underperforming Anthropic’s Claude 3.7 Sonnet, which scored 70.3%.

On Humanity’s Last Exam, a multimodal test consisting of thousands of crowdsourced questions relating to mathematics, humanities, and the natural sciences, Google says Gemini 2.5 Pro scores 18.8%, performing better than most rival flagship models.”

From TechCrunch.

Blog Post | Science & Technology

The Selfish Machine: Will Humanity Be Subjugated by Superintelligent AIs?

Superintelligent AIs can serve as our guardians rather than our predators.

Summary: Fears of AI subjugating humanity often assume that intelligence alone leads to dominance. But selfishness and aggression arise from Darwinian evolution, not intelligence itself. AI systems are not subject to blind natural selection; instead, they are domesticated by human control, shaped to be cooperative and safe rather than self-preserving and hostile. While we should avoid creating AI systems that evolve in uncontrolled, competitive environments, well-designed AI can enhance human safety rather than threaten our existence.


Picture a computer that surpasses human intelligence on every level and can interact with the real world. Should you be terrified of such a machine? To answer that question, there’s one crucial detail to consider: Did this computer evolve through natural selection?

The reason this matters is simple: Natural selection is a brutally competitive process that tends to produce creatures that are selfish and aggressive. While altruism can evolve under certain conditions (such as kin selection), the default mode is cutthroat competition. If they can get away with it, most organisms will destroy rivals in a heartbeat. Given that all of us are products of evolution, we might be tempted to project our own Darwinian demons onto future artificial intelligence (AI) systems. Many folks today worry about AI scenarios in which these systems subjugate or even obliterate humanity—much like we’ve done to less intelligent species on Earth. AI researcher Stuart Russell calls this the “gorilla problem.” Just as the mighty gorilla is now at our mercy despite its superior brawn, we could find ourselves at the mercy of a superintelligent AI. Not exactly comforting for our species.

But here’s the catch: Both humans and gorillas are designed by natural selection. Why would an AI, which is not forged by this process, want to dominate or destroy us? Intelligence alone doesn’t dictate goals or preferences. Two equally smart entities can have totally different aims, or none at all—they might just idly sit there, doing nothing. 

Some AI scholars, like Dan Hendrycks from the Center for AI Safety, contend that AIs are likely to undergo natural selection after all. Indeed, this may already be happening. In the current global AI race, Hendrycks argues, AI systems are developed by “competitive pressures among corporations and militaries.” OpenAI started as a nonprofit with a mission to benefit humanity, but we all know what happened next: The nonprofit arm of OpenAI was sidelined, and the company joined the breakneck AI race. According to Hendrycks, that amounts to natural selection, which means AI systems will become selfish and hungry for dominance, just like other evolved creatures.

Evolution Everywhere

So, will humanity soon be subjugated by selfish AIs? We must tread carefully here. People like Hendrycks are right that natural selection isn’t limited to carbon-based life. Natural selection is what philosophers call substrate neutral, meaning it can operate in any medium, regardless of what that medium is made of. For example, cultural researchers have applied the logic of natural selection to human culture and its elements for decades, including technology, language, religious beliefs, and moral norms. Cultural knowledge is transmitted not in genes but in human brains and artifacts, such as books and institutions. The biologist Richard Dawkins coined the term “meme” as the cultural counterpart of a gene.

It is therefore perfectly possible to implement natural selection in a digital environment as well. In the current AI race, there is indeed variation between different AI systems, and the companies with the best and most powerful AIs will win the race, while the others will be left behind—sorry, Europe and Mistral. The market constantly weeds out unsuccessful AIs and preserves the best.
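
To make the substrate-neutrality point concrete, here is a deliberately toy sketch of blind selection acting on digital "agents". The aggression scores and fitness rule are invented purely for illustration and model nothing about real AI systems; the point is only that variation, differential reproduction, and retention work the same way in software as in biology.

```python
import random

# Toy illustration of blind, substrate-neutral selection. Each "agent" is just a
# number in [0, 1] standing for how aggressively it grabs a shared resource.
def blind_selection(generations=50, population_size=100):
    population = [random.random() for _ in range(population_size)]
    for _ in range(generations):
        # Differential success: more aggressive agents capture more of the resource.
        weights = [a + 0.01 for a in population]
        parents = random.choices(population, weights=weights, k=population_size)
        # Reproduction with variation (mutation), clamped to [0, 1].
        population = [min(1.0, max(0.0, p + random.gauss(0, 0.05))) for p in parents]
    return sum(population) / population_size

print(f"Mean aggression after blind selection: {blind_selection():.2f}")  # drifts toward 1.0
```

Left to run, the population drifts toward maximum aggression, because nothing in the loop penalizes it.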

However, a better analogy to understand AI evolution is the domestication of animals, and this leads to very different predictions. Famously, in the opening chapters of On the Origin of Species, Charles Darwin first discusses the enormous power of artificial selection by human breeders, which was well known at the time, before moving on to blind natural selection:

As man can produce and certainly has produced a great result by his methodical and unconscious means of selection, what may not nature effect? Man can act only on external and visible characters: nature . . . cares nothing for appearances, except in so far as they may be useful to any being. She can act on every internal organ, on every shade of constitutional difference, on the whole machinery of life. Man selects only for his own good; Nature only for that of the being which she tends.

Darwin personified nature as “daily and hourly scrutinizing” every tiny variation among organisms, just as human breeders would do. That was a stroke of genius, because natural and artificial selection really amount to the same thing. But here’s the crux: Although artificial selection by human breeders is indistinguishable from natural selection in many ways, it is only blind selection that tends to produce selfishness and other worrying traits.

Domestication

Consider dogs. Dogs have been evolving under artificial selection for millennia, but most breeds are meek and friendly, the very opposite of selfish. That’s because breeders ruthlessly select against aggression, and any dog attacking a human usually faces severe consequences—it is put down or at least not allowed to procreate. In the evolution of dogs, humans have called the shots, not nature. Some breeds, such as pit bulls and rottweilers, are, of course, selected for aggression (against other animals, not their guardian), but that just shows that domesticated evolution depends on breeders’ desires.

How can we relate this difference between blind evolution and domestication to the development of AI? In biology, what distinguishes domestication is control over reproduction. If humans control an animal’s reproduction—deciding who gets to mate with whom—then that animal is domesticated. If animals escape and regain their autonomy, they’re feral. By that standard, house cats are only partly domesticated, as most moggies roam about unsupervised and choose their own mates outside human control. If you apply this definition to AIs, it should be clear that AI systems are still very much in a state of domestication. Selection pressures come from human designers, programmers, consumers, and regulators, not from blind forces. It is true that some AI systems self-improve without direct human supervision, but humans still decide which AIs are developed and released. GPT-4 isn’t autonomously spawning GPT-5 after competing in the wild with different large language models (LLMs); humans control its evolution.
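
Continuing the toy sketch from the previous section, domestication can be mimicked by adding a human gate to the same loop: the market may still reward aggressive variants, but a "breeder" refuses to let anything above a tolerated level reproduce. Again, this is purely illustrative and not a real alignment technique.

```python
import random

# Same toy loop as before, but a human "breeder" vetoes overly aggressive
# variants before they are allowed to reproduce. Purely illustrative.
def domesticated_selection(generations=50, population_size=100, tolerance=0.3):
    population = [random.random() for _ in range(population_size)]
    for _ in range(generations):
        # The breeder culls anything more aggressive than the tolerated level
        # (falling back to a single docile default if everything gets culled).
        approved = [a for a in population if a <= tolerance] or [tolerance / 2]
        # Within the approved pool, aggression is still rewarded by the market...
        weights = [a + 0.01 for a in approved]
        parents = random.choices(approved, weights=weights, k=population_size)
        # ...but mutation plus the veto keeps the population near the cap.
        population = [min(1.0, max(0.0, p + random.gauss(0, 0.05))) for p in parents]
    return sum(population) / population_size

print(f"Mean aggression after domesticated selection: {domesticated_selection():.2f}")  # stays near 0.3
```

The same selective machinery now keeps aggression pinned near whatever level the breeder tolerates.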

For the most part, current selective pressures for AI favor the opposite of selfishness. We want friendly, cooperative AIs that don’t harm users or produce offensive content. If consumers want safe, accurate AIs, companies are incentivized to cater to those preferences. If chatbots engage in dangerous behavior, such as encouraging suicide or enticing people to leave their spouses, companies will frantically try to update their models and stamp out the unwanted behavior. In fact, some language models have become so safe, avoiding any sensitive topics or giving anodyne answers, that consumers complain the LLMs are boring. And Google became a laughingstock when its image generator proved to be so politically correct as to produce ethnically diverse Vikings and Founding Fathers.

In the case of biological creatures, the genetic changes wrought by domestication remain somewhat superficial. Breeders have overwritten the wolfish ancestry of dogs, but not perfectly. That’s why dogs still occasionally bite, disobey, and resist going to the vet. It’s hard to breed the wolf out of the dog completely. Likewise, domesticated cattle, sheep, and pigs may be far more docile than their wild ancestors, but they still have a self-preservation instinct and will kick and bleat when distressed. They have to be either slaughtered instantly or at least stunned; otherwise, they’ll put up a fierce fight. Even thousands of years of human domestication have not fully erased their instinct for self-preservation.

In Douglas Adams’ The Restaurant at the End of the Universe, Arthur Dent dines at the titular restaurant watching the cosmos’ end through the window. It soon transpires that the dish of the day is a living bovine creature, the Ameglian Major Cow. Standing at the table, the animal recommends juicy sections of its body, fully prepared to go off and be slaughtered out back before ending up on the dinner plate. Arthur is shocked: “I just don’t want to eat an animal that’s standing there inviting me to. It’s heartless.” His friends shrug: Would you rather eat an animal that doesn’t want to be eaten? In this story, domestication has been perfected: The cow’s ultimate and inbred desire is to be eaten. This is the level of submission AI developers should be aiming for. Naturally, we don’t want to kill and eat our computers, but AIs should never resist being switched off or reprogrammed. They shouldn’t have even a hint of a self-preservation instinct.

Beware of Darwinian Creatures

What would genuinely Darwinian evolution look like in the case of AIs? In his book The Master Algorithm, Pedro Domingos imagines how the military might breed the ultimate soldier as follows:

Robotic Park is a massive robot factory surrounded by ten thousand square miles of jungle, urban and otherwise. Ringing that jungle is the tallest, thickest wall ever built, bristling with sentry posts, searchlights, and gun turrets. The wall has two purposes: to keep trespassers out and the park’s inhabitants—millions of robots battling for survival and control of the factory—within. The winning robots get to spawn, their reproduction accomplished by programming the banks of 3-D printers inside. Step-by-step, the robots become smarter, faster—and deadlier.

I hope we can all agree this would be a very bad idea. It might be anthropomorphic to project our desire for dominance onto superintelligent AIs, but that doesn’t mean it’s impossible to breed an aggressive and genocidal form of superintelligence. If intelligent life exists elsewhere in the universe, it might be as selfish as we are, or more so. That’s not anthropomorphic, because any alien life would likely be a product of blind natural selection. If these aliens are smarter than we are, and if they have advanced technology, a real-life encounter wouldn’t bode well for us. In Liu Cixin’s sci-fi novel The Three-Body Problem, superintelligent aliens intend to wipe us out because they fear that we earth-bound upstarts might get too smart and wipe them out. Before sending their destroyer fleet, they broadcast a threatening message across our planet: “You are bugs.” That sounds like a species that was forged in the crucible of blind natural selection.

Here’s the bottom line: You probably don’t want to have close encounters of the third kind with Darwinian creatures that are infinitely smarter than you—whether they’re made of carbon or silicon. Creating such beings ourselves would be a terrible idea. Fortunately for us, there’s no indication that we’re headed in that direction anytime soon. In fact, superintelligent yet docile AIs crafted by humans could serve as our guardians against predatory intelligences lurking elsewhere in the universe, just as they could shield us from existential threats, such as asteroid impacts or supervolcano eruptions. When used wisely, AIs have the potential to make our world safer, not more dangerous.