
The Verge | Computing

China Begins Assembling Its Supercomputer in Space

“China has launched the first 12 satellites of a planned 2,800-strong orbital supercomputer satellite network, reports Space News. The satellites, created by ADA Space, Zhijiang Laboratory, and Neijiang High-Tech Zone, will be able to process the data they collect themselves, rather than relying on terrestrial stations to do it for them…

Each of the 12 satellites has an onboard eight-billion parameter AI model and is capable of 744 tera operations per second (TOPS) — a measure of their AI processing grunt — and, collectively, ADA Space says they can manage five peta operations per second, or POPS. That’s quite a bit more than, say, the 40 TOPS required for a Microsoft Copilot PC. The eventual goal is to have a network of thousands of satellites that achieve 1,000 POPS, according to the Chinese government.
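As a back-of-the-envelope check on those figures (a sketch only, using just the numbers quoted above; the naive sum need not match ADA Space's quoted aggregate, which likely reflects usable rather than peak throughput):

```python
# Naive aggregation of the per-satellite compute figures quoted above.
TOPS_PER_SATELLITE = 744         # tera operations per second, per satellite
SATELLITES_LAUNCHED = 12
TARGET_POPS = 1_000              # stated long-term goal for the full network

total_tops = TOPS_PER_SATELLITE * SATELLITES_LAUNCHED
total_pops = total_tops / 1_000  # 1 POPS = 1,000 TOPS

# Satellites needed to hit the 1,000 POPS target at 744 TOPS each.
needed = TARGET_POPS * 1_000 / TOPS_PER_SATELLITE

print(f"{total_tops} TOPS ≈ {total_pops:.1f} POPS from 12 satellites")
print(f"≈ {needed:.0f} satellites for {TARGET_POPS} POPS")
```

The naive sum (≈8.9 POPS) exceeds the 5 POPS ADA Space quotes for the first batch, and ≈1,344 such satellites would naively suffice for 1,000 POPS against the 2,800 planned; both gaps presumably reflect the difference between peak and deliverable throughput.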

The satellites communicate with each other at up to 100 Gbps using lasers, and share 30 terabytes of storage between them.”
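Those two figures can be put together for a rough sense of scale (an illustration only, assuming decimal units and a single fully utilised 100 Gbps laser link):

```python
# Time to move the fleet's full shared storage over one inter-satellite link.
STORAGE_TB = 30                  # shared storage, decimal terabytes
LINK_GBPS = 100                  # laser link rate, gigabits per second

bits_total = STORAGE_TB * 1e12 * 8        # TB -> bytes -> bits
seconds = bits_total / (LINK_GBPS * 1e9)  # Gbps -> bits per second

print(f"{seconds:.0f} s ≈ {seconds / 60:.0f} minutes")  # 2400 s ≈ 40 minutes
```

In other words, one saturated link could shuttle the entire shared store in well under an orbit, which is consistent with the design goal of processing data on board rather than downlinking it.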

From The Verge.

UCL | Communications

UK Neuralink Patient Uses Thought to Control Computer

“A patient with motor neurone disease was able to control a computer just by using his thoughts following the UK’s first Neuralink implant surgery in a study led by UCL and UCLH clinical researchers.

The surgery is part of the GB-PRIME study evaluating the safety and functionality of Neuralink’s robotically implanted brain-computer interface (BCI), which aims to improve independence for people who are paralysed. 

The surgery, which took place at UCLH’s National Hospital for Neurology and Neurosurgery (NHNN) in October 2025, went as planned. On the day following the procedure, the patient was able to begin using their BCI implant to move a computer cursor with their thoughts, and was able to return home from the hospital.”

From UCL.

New York Times | Computing

Google’s Quantum Computer Makes a Big Technical Leap

“On Wednesday, Dr. Devoret and his colleagues at a Google lab near Santa Barbara, Calif., said their quantum computer had successfully run a new algorithm capable of accelerating advances in drug discovery, the design of new building materials and other fields.

Leveraging the counterintuitive powers of quantum mechanics, Google’s machine ran this algorithm 13,000 times as fast as a top supercomputer executing similar code in the realm of classical physics, according to a paper written by the Google researchers in the scientific journal Nature…

In another paper published on Wednesday on the research site arXiv, the company showed that its algorithm could help improve what is called nuclear magnetic resonance, or N.M.R., which is a technique used to understand the structure of tiny molecules and how they interact with one another.

N.M.R. is a vital part of the effort to develop new medicines for fighting disease and new materials for building everything from cars to buildings. It can help understand Alzheimer’s disease or drive the creation of entirely new metals, said Ashok Ajoy, an assistant professor of chemistry at Berkeley who specializes in N.M.R. and worked with Google’s researchers on the new paper.”

From The New York Times.

Nature | Science & Technology

OpenAI’s GPT-5 Hallucinates Less than Previous Models Do

“In one literature-review benchmark known as ScholarQA-CS, GPT-5 ‘performs well’ when it is allowed to access the web, says Akari Asai, an AI researcher at the Allen Institute for Artificial Intelligence, based in Seattle, Washington, who ran the tests for Nature. In producing answers to open-ended computer-science questions, for example, the model performed marginally better than human experts did, with a correctness score of 55% (based on measures such as how well its statements are supported by citations) compared with 54% for scientists, but just behind a version of the institute’s own LLM-based system for literature review, OpenScholar, which achieved 57%.

However, GPT-5 suffered when the model was unable to get online, says Asai. The ability to cross-check with academic databases is a key feature of most AI-powered systems designed to help with literature reviews. Without Internet access, GPT-5 fabricated or muddled half the number of citations that one of its predecessors, GPT-4o, did. But it still got them wrong 39% of the time, she says.

On the LongFact benchmark, which tests accuracy in long-form responses to prompts, OpenAI reported that GPT-5 hallucinated 0.8% of claims in responses about people or places when it was allowed to browse the web, compared with 5.1% for OpenAI’s reasoning model o3. Performance dropped when browsing was not permitted, with GPT-5’s error rate climbing to 1.4% compared with 7.9% for o3. Both models showed worse performance than did the non-reasoning model GPT-4o, which had an error rate of 1.1% when offline.”
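The LongFact numbers quoted above can be lined up to show the relative gaps (a simple tabulation; the model names and rates are exactly those reported in the excerpt):

```python
# Reported LongFact hallucination rates (% of claims), from the excerpt above.
rates = {
    "GPT-5":  {"browsing": 0.8, "offline": 1.4},
    "o3":     {"browsing": 5.1, "offline": 7.9},
    "GPT-4o": {"offline": 1.1},  # non-reasoning baseline, offline only
}

for mode in ("browsing", "offline"):
    factor = rates["o3"][mode] / rates["GPT-5"][mode]
    print(f"{mode}: o3's error rate is {factor:.1f}x GPT-5's")

# Offline, both reasoning models (1.4% and 7.9%) trail GPT-4o's 1.1%.
```

The ratios (roughly 6.4× with browsing, 5.6× without) show GPT-5's improvement over o3 holds in both modes, while the final comment restates the excerpt's caveat about the GPT-4o baseline.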

From Nature.

Wired | Science & Technology

OpenAI Just Released Its First Open-Weight Models Since GPT-2

“OpenAI just dropped its first open-weight models in over five years. The two language models, gpt-oss-120b and gpt-oss-20b, can run locally on consumer devices and be fine-tuned for specific purposes. For OpenAI, they represent a shift away from its recent strategy of focusing on proprietary releases, as the company moves towards a wider, and more open, group of AI models that are available for users…

What sets apart an open-weight model is the fact that its ‘weights’ are publicly available, meaning that anyone can peek at the internal parameters to get an idea of how it processes information. Rather than undercutting OpenAI’s proprietary models with a free option, cofounder Greg Brockman sees this release as ‘complementary’ to the company’s paid services, like the application programming interface currently used by many developers. ‘Open-weight models have a very different set of strengths,’ said Brockman in a briefing with reporters. Unlike ChatGPT, you can run a gpt-oss model without a connection to the internet and behind a firewall.”

From Wired.