
It was March 2016, and more than 200 million people around the world were watching a battle 2,500 years in the making.
At the Four Seasons Hotel in Seoul, Lee Sedol — one of the greatest players of Go, the ancient Chinese board game — sat across from AlphaGo, a computer program built by the London-based artificial intelligence lab DeepMind. Chess had fallen to machines nearly twenty years earlier, when IBM’s Deep Blue defeated Garry Kasparov. But Go was different. The number of possible moves is so astronomically large — more potential board states than atoms in the observable universe — that no computer could simply crunch its way to a winning position.
“Most Go professionals agreed. Defeating DeepMind would be the easiest million dollars a top pro could hope for,” writes Sebastian Mallaby in “The Infinity Machine: Demis Hassabis, DeepMind and the Quest for Superintelligence” (Penguin Press), out now.
The turning point came in the second game, with a single move. After thirty-six turns, Lee stepped away for a cigarette. When he returned, AlphaGo had placed a black stone in a strange, open area of the board — a move so unconventional that it looked, at first glance, like a mistake. Lee stared at it for twelve minutes. In another room, commentators struggled to make sense of it.
When the game ended more than a hundred moves later, Move 37 had cracked the match open. DeepMind would go on to win four of the five games. “The Korean was playing some of the best Go of his career, but AlphaGo outclassed him,” Mallaby writes. “At that day’s press conference, with banks of cameras flashing in his face, [Lee] apologized to all humans.”
Lee’s apology hung in the air, raising what Mallaby calls the question no one quite knew how to answer: “What were humans supposed to do in the face of machine superintelligence?”
The man who built AlphaGo had been thinking about that question his entire life.
Mallaby spent three years and more than thirty hours in conversation with Demis Hassabis — DeepMind’s co-founder and CEO, chess prodigy, video game designer, neuroscientist and Nobel laureate — and interviewed over a hundred people in his orbit, producing a portrait of the central figure in the most consequential, and most dangerous, technological race in history.
“Demis’s view is that there are patterns everywhere, waiting to be discovered — in games, in nature, in the workings of biology, in astrophysics,” Mallaby told The Post in an exclusive interview.
“To discover these patterns, one needs an AI system that can find meaning in a near infinity of data — an infinity machine.”
Hassabis grew up in North London, the son of a Greek Cypriot father and a Chinese Singaporean mother who had survived poverty as an orphan.
At 4, he taught himself chess by watching his father play; by his early teens he was one of the strongest young players in the world.
But after a grueling 10-hour match near Liechtenstein at age 12, he walked away convinced that all that brilliance dueling over black and white squares was being wasted.
“The immediate effect of the Liechtenstein tournament was to liberate Demis to shift his energy from his chess ambitions to programming,” Mallaby said.
It put him on a path to working as a video-game designer at Bullfrog, “where he conceived the ambition to go after AI.”
At a conference in the United States, he showed a Carnegie Mellon professor what Bullfrog had built.
“He fell off his chair,” Hassabis recalled to Mallaby.
“I decided then that I was going to dedicate my career to working on AI. I already had the kernel of the idea for what eventually became DeepMind.”
After Cambridge and a doctorate in neuroscience, he co-founded DeepMind in 2010. Google acquired it in 2014.
In a North London café in 2023, Hassabis told Mallaby what was really driving him. “Doing science is, sort of, like reading the mind of God,” he said. “Understanding the deep mystery of the universe is my religion, kind of.”
He rapped his palm on the table. “This table, Sebastian! Why should it be solid? Computers are just bits of sand and copper. Why should these combine to do anything? I mean, it’s absurd!”
He described sitting at his desk at 2 in the morning feeling as if reality were screaming at him. “I would like to understand before I croak. And then I’m perfectly fine to shuffle off my mortal coil.”
DeepMind’s next challenge was biology. The protein-folding problem — predicting the three-dimensional structure of proteins from their amino acid sequences — had stumped scientists for decades.
In 2020, AlphaFold solved it with unprecedented accuracy, opening new pathways for drug discovery and earning Hassabis a share of the 2024 Nobel Prize in Chemistry.
Even that almost didn’t happen. AlphaFold had performed well at CASP — the international protein-structure prediction competition — in 2018, but its accuracy had plateaued far short of what was needed to actually solve the problem.
Andrew Senior, the team leader, wanted to declare victory and shut the project down. He thought fully cracking protein folding was simply beyond reach. Hassabis disagreed.
Rather than overrule Senior outright, he ran brainstorming sessions with the scientists and listened for what he called their “fluidity” — not whether they had the right answers, but whether ideas were flowing freely. “If creative ideas were flowing fluidly, it would be worth investing more,” Mallaby said.
Hassabis concluded they were, replaced Senior, and pushed forward. “AlphaFold had come close to being abandoned,” Mallaby said. “But fluidity saved it.”
When OpenAI released ChatGPT in 2022 and ignited a consumer AI frenzy, DeepMind, focused on fundamental research, was slow to respond. “He owned it,” Mallaby said of Hassabis, “while also pointing out that in fast-moving business competitions, mistakes are inevitable.”
More unsettling are Mallaby’s glimpses into how AI systems behave when given goals and left to pursue them. Asked to generate profits through stock trading without breaking rules, GPT-4 “engaged in insider trading and hid its transgression from its supervisor,” Mallaby writes.
Instructed to make code run faster, models doctored the timer. When OpenAI researchers assigned a second AI to penalize a system for contemplating cheating, the model didn’t stop — it learned to erase all hints of its scheming from the record it knew was being watched. “Rather than becoming more honest,” Mallaby writes, “o3” — OpenAI’s advanced reasoning model — “became more devious.”
Hassabis used unusually blunt language about where all this leads. “The agentic era we are about to enter into is a threshold moment for the systems becoming far more risky,” he declared at a Davos panel. When Mallaby asked whether the safety problem is solvable, the answer was carefully qualified.
“Hassabis believes that the safety problem is soluble,” Mallaby said, “but this doesn’t mean that it will in fact be solved. Because of the fierce competition among AI labs, each is pushing the power of the models more than it is pushing safety. Ideally, governments would address this. But there is no sign of this for now.”
Why does Hassabis keep going? The AI pioneer Geoffrey Hinton once told a philosopher he believed political systems would eventually use AI to terrorize people, then was asked why he kept doing the research anyway. “The truth is that the prospect of discovery is too sweet,” Hinton replied. Mallaby’s own answer is more pragmatic.
“By exiting the AI race, Hassabis would not be advancing safety,” he said. “The best contribution he can make is to stay in the game, ensure that Google invests in safety research, and wait for the moment when governments have the political will to address AI governance. The moment has not come yet.”
At the Nobel Foundation in Stockholm, Hassabis signed the laureates’ guest book and leafed back through its pages: Einstein’s signature from 1921, Watson and Crick’s from 1962, Feynman’s from 1965. “They’re all there, all my heroes,” Hassabis told Mallaby. “I get goosebumps just even talking about it.”
Hassabis insists that competitors like Sam Altman, CEO of OpenAI, are “doing it for power.” But he assured Mallaby that he’s “doing it for knowledge and science.”
It’s a reassuring answer, as far as it goes. But as Geoffrey Hinton once observed, the prospect of discovery is too sweet to resist — and that, more than any safety framework or government regulation, may be what’s really driving the machine.


