AI Is All Humans, All the Way Down
The inside story of how greed, ambition, and rivalry destroyed AI’s safety mechanisms — and why human failings, not technology, are driving us toward catastrophe.
Despite everything you think you know about artificial intelligence — the models, the capabilities, the existential predictions — it’s simply humans all the way down. Men building things, making choices, placing bets, and abandoning safeguards the moment competitive pressure intensifies.
On this week’s WhoWhatWhy podcast we talk with Sebastian Mallaby, who spent three years embedded in the world of AI as it evolved from research lab to civilizational force. With more than 30 hours of access to Demis Hassabis, CEO of Google DeepMind, alone, Mallaby was present at the creation. His book The Infinity Machine reveals the inside story you’ve never heard.
You may have seen Hassabis on 60 Minutes. You read about Sam Altman daily. Elon Musk plays his part, Sergey Brin hovers at Google. But the real story isn’t about technology — it’s about how these men’s ambitions, avarice, rivalries, and miscalculations created the AI revolution and why there are almost no safety guardrails on its use today.
Hassabis’s “singleton vision” — one lab building AI safely for humanity — died when Musk walked out of the first ethics board meeting and launched OpenAI as deliberate competition.
Every safety mechanism Hassabis designed soon collapsed: the oversight board that never met again, the three-year war to spin DeepMind out of Google’s control, the ban on military uses of AI that evaporated the moment Pentagon contracts appeared.
It’s not because his plan to make sure AI is used for the benefit of humanity was inherently unworkable, but because intention became irrelevant with hundreds of billions of dollars at stake and powerful rivals racing ahead.
This is the behind-the-scenes story of how we arrived at the cliff edge — without brakes.
Full Text Transcript:
Jeff: Welcome to the WhoWhatWhy podcast. I’m your host, Jeff Schechtman. We talk about artificial intelligence as if it’s the weather rolling in: inevitable, impersonal, something happening to us. The models, the parameters, the emerging capabilities, data centers consuming entire power grids, deepfakes eroding reality, new versions so powerful that their creators won’t release them to the public. Clarke’s third law: any sufficiently advanced technology is indistinguishable from magic. But at the end of the day, there is no magic at all. It’s just men building things. The telegraph belonged to Samuel Morse, the telephone to Alexander Graham Bell, IBM to Tom Watson, the iPhone to Steve Jobs, the electric car’s resurrection to Elon Musk. When we remember these technologies decades later, we remember them through the people who willed them into existence: their obsessions, their gambles, their particular way of seeing the world that made the invention possible.
Artificial intelligence will be no different. Years from now, this won’t be remembered as the moment transformers scaled or neural networks achieved emergence. It will be remembered through the people: Demis Hassabis, Sam Altman, the researchers and entrepreneurs whose ambitions, weaknesses, and vanities shaped what got built and what got released, and what got held back. My guest, Sebastian Mallaby, understands this. He has spent a career writing about power through the people who wield it: Alan Greenspan and central banking, the hedge fund titans, the venture capitalists who funded Silicon Valley. His new book does for AI what those earlier works did for finance. It tells the story through a man, not a machine: Demis Hassabis, chess prodigy at five, game designer at seventeen, neuroscience PhD, DeepMind founder, Nobel laureate in 2024. Someone who describes his work in almost religious terms, someone who built his entire career around making AI safe, then watched every protection he designed collapse under competitive pressure.
Mallaby spent three years trying to answer one question: why would someone who’s warned about AI doom since before the technology could recognize a cat devote his entire life to building it anyway? What he found was someone without a fifty percent mode, a person whose psychology made the mission possible and simultaneously made stopping it impossible. The chess champion who heard “do your best” and interpreted it as “push yourself to near death.” The outsider from immigrant poverty who became the insider controlling humanity’s most transformative technology. Someone brilliant and driven, convinced his intentions matter, building something whose trajectory may be set by forces larger than any individual.
The central tension isn’t technical; it’s human. Every safety mechanism Hassabis designed — ethics boards, legal protections, independence from Google — failed, not because the ideas were wrong, but because intention can’t override competition, when hundreds of billions of dollars are in motion and the next lab is ready to build what you won’t. We’ve lost sight of this in our AI discourse. We debate capabilities and timelines and existential risk as if these are natural phenomena. But every decision — what gets built, what gets released, what safety measures get abandoned when the race accelerates — is still being made by people. Mallaby’s book reminds us that when we talk about AI, we’re talking about human choices all the way down.
Sebastian Mallaby is a senior fellow at the Council on Foreign Relations and the author of six books exploring power, finance, and innovation. His latest is The Infinity Machine: Demis Hassabis, DeepMind, and the Quest for Superintelligence. It is my pleasure to welcome Sebastian Mallaby back to the WhoWhatWhy podcast. Sebastian, thanks so much for joining us.
Sebastian: Thank you, Jeff. That was a beautiful introduction.
Jeff: Well, thank you so much for being with us today. I appreciate it. I want to talk first about the human aspect of this and that we get so caught up in the technology of AI that we seem to lose sight of the human part of this, that so many of the decisions that get made, the evolution of the technology, the dangers of the technology, really come out of human frailty, human actions. Talk about that first.
Sebastian: Well, I think you put it very well in the introduction. This is the creation of human intelligence, and the personality of the human behind it does matter, both in terms of the incredible drive that makes it possible but also makes stopping it impossible. But also I think the consequences of the technology are shaped by the individuals. You can build AI more safely or less safely. You can apply AI to scientific progress that will unlock medical breakthroughs for human beings, or you can apply AI to weapons or deepfakes or something bad. These choices are shaped by people.
Jeff: Talk about Demis Hassabis and who he is.
Sebastian: Well, he comes out of the melting pot of London, the city where I live, where about forty percent of workers were not born in Britain. Sometimes Americans think of themselves as having a bit of a monopoly on the melting pot. It’s not really true. Demis’s mother was of Chinese Singaporean origin, and his father was Greek Cypriot. He grew up going to the local school — when he went to school at all. Sometimes he dropped out and taught himself because he was so busy with his chess that he didn’t have time to actually show up in class. He finished school at sixteen — you know, early — and was admitted to Cambridge University, but preferred to spend a year as a video game designer and coder. And he did that with such success that at the end of it, the game that he built sold more than five million copies, and his boss wrote him a check for more than a million dollars in today’s money to turn down the place at Cambridge and just stick with game design. Of course, we know lots of examples — Sam Altman being one — of people who rather quickly dropped out of college in order to pursue business. Demis was the opposite. He refused to cash that check even though his family was not at all rich. He turned down the million-plus dollars, and he did go and study computer science, because he cared about knowledge more than he cared about money. I think therein lies one early insight into who he is. Some people say the problem with AI is that you’ve got these money-grubbing leaders. It’s not really about money for Demis. It’s about scientific curiosity, the desire to build artificial intelligence to unlock the deep mysteries of physics and biology. As he put it to me once: Isaac Newton failed; he didn’t understand the full fabric of reality. Only if I build a superhuman intelligence can I have a chance of surpassing Isaac Newton, and that is what I aim to do.
Jeff: There’s another side to this, which is that as this story unfolds with Hassabis, that it was maybe his lack of understanding of human nature, and how other people would react and how other people are driven by both competition and money that really caused many problems for him along the way.
Sebastian: I think that’s fair. Yes, he was naïve in thinking that there might be just one united artificial intelligence research effort, one lab. Sometimes this was called a singleton scenario. The idea was that if you had the entire AI community unite in one effort, then there’d be no competition and you could really take your time stress-testing any AI model before you released it into the wild and therefore you would make it safe. He genuinely believed that might happen, and he was properly confused and outraged when a rival, Elon Musk with Sam Altman, set up OpenAI, ending a situation in which, for the first four or five years of its existence, DeepMind, the Demis Hassabis lab, had had a monopoly on cutting-edge AI. It was a shock to him when that monopoly was ended.
Jeff: And talk about his lab, DeepMind, getting sold ultimately to Google.
Sebastian: He set it up in 2010, which is kind of a miracle, because AI was so primitive it couldn’t do anything yet. Raising capital to pursue a technology that might never work — and hadn’t, in fact, worked, through the AI winters of the second half of the twentieth century… How do you get funding for that? He went to see Peter Thiel. Peter Thiel had this ultra-contrarian desire to put bets on things that everybody else thought were absurd. And driven by that contrarianism, Peter Thiel wrote a small check — $2 million. Nothing for a venture capitalist, but it was enough to get DeepMind started.
And Demis set the company up in London. And just as he was skillful enough to attract Peter Thiel to invest, he was skillful enough to persuade PhD scientists to come and join him when they had plenty of other things they could have done. The company gathered momentum through 2013, its first three years, producing one breakthrough: a system that combined deep learning of the sort that had been used in the image-recognition breakthrough with another kind of AI called reinforcement learning, which is learning from experience, trial and error. By putting these two together, DeepMind created a system called DQN, which played Atari games, which you may remember from the ’70s and ’80s: Pong, Breakout, Seaquest, and so on. And the system developed kind of human-like strategies in playing those games and was extremely good at them. This was a kind of proof of DeepMind’s potential.
By 2013, there were three contenders to buy DeepMind — bigger companies that wanted to get in the game. The leading one was Google. Although Demis met Larry Page and quite liked him because of his academic disposition, he didn’t trust Google to close the deal in any kind of useful timeframe, and he was going to run out of venture money soon.
So he went to see Mark Zuckerberg at Facebook to try to get a second bid to force Google to go faster. And he mentally set a test for Zuckerberg. He walked into Zuck’s house in California, and Zuck said, you know, AI is the best thing ever, the most important thing ever, and I’ll buy your company, I want you to be at Facebook, and we’ll do great things together. So Demis sort of nodded politely, and then an hour later he raised other technologies. He said, well, 3D printing is really cool, and augmented reality is really cool. And whatever he brought up, Zuck would say, yeah yeah, super cool, super cool! And so Demis just clocked him as a bullshitter: any new technology, Zuck would say it’s super cool, whereas in Demis’s mind, only AI stood out as being “the one.” So he turned down Zuckerberg and was going to sell to Google, and at the last minute, Elon Musk showed up with a strong demand to be the one to buy DeepMind.
It was kind of crazy because at the time, Tesla and SpaceX really didn’t have much in the way of big data centers, big computing power, the kind of thing you’d need to do AI research, whereas clearly Google did. Musk was possessed of the idea that Google was an evil for-profit corporation, whereas Tesla and SpaceX were not. He was indignant about being turned down by Demis, who preferred to sell to Google. And therein lay the roots of the bitter rivalry between Elon Musk and Demis Hassabis. For a while, Elon would go around saying, Demis is an evil genius, he’s going to destroy the world. We have to start OpenAI to stop him. And I think they’ve started patching it up a bit more recently, but it was definitely a clash of the titans.
Jeff: At that point, how concerned was Hassabis about not just the potential of AI, but the safety of AI?
Sebastian: He was very concerned. When he sold to Google, there was a condition in the sale that said there had to be a safety and ethics committee making the final call on the deployment of superintelligence. You couldn’t just have the corporate board of Google having that power. And also, there was a sort of AI principles charter which banned the use of AI in military applications.
Jeff: And how did that work? How did that process work?
Sebastian: Well, what happened with the ban on military uses is that it was observed and obeyed until it wasn’t. I mean, you know, if you go forward to the last two or three years, you know, Google’s been quite keen to pitch for business with the national intelligence and security establishment. And so AI just reached a point of maturation where the military really, really wanted it. Other labs were going to provide it if Google didn’t, and Google decided there was no point holding out. There was a race dynamic. They weren’t going to make the world better by saying no to the Pentagon or no to the surveillance state. And so they ditched their principles.
On the oversight board, there is a very interesting story there because Demis did push Google to hold the first meeting a year after the Google acquisition of DeepMind. And he was so serious about binding in potential rivals and trying to stick to that singleton vision, where AI would be safe because there would just be one lab, that he had the idea of approaching Elon Musk, even though he was definitely a frenemy with the emphasis on enemy. He approached him and said, would you like to host this oversight board? You will have the honor of doing it. We can do it at SpaceX if you like, and then we’ll bring in these other credible independent figures. People like Reid Hoffman came, as well as the Google leadership and the DeepMind leadership.
And so Elon Musk hosted this event and essentially sort of sat there absorbing the plans that were laid out by the DeepMind leaders, and used those plans in his own formulation of the plans for OpenAI. And so within four months of that secret meeting at SpaceX, OpenAI’s formation was announced, and Demis suddenly understood that the person he had thought was his sounding board and ally in AI safety was actually a kind of camel inside the tent, a Trojan horse who was taking all his ideas and going off and starting a rival. And so that not only infuriated Demis, it also meant that Google understandably said: we’ve had enough of oversight committees. That’s enough for now. You know, if we get strong independent technologists to come and be on it, people who are smart enough and in the technology enough to understand AI, then by definition, those people are going to be ambitious. They’re going to want to do their own AI lab. And as Reid Hoffman put it to me, humans are disputatious and jealous and rivalrous, and in the face of a machine with godlike potential, they want in. And indeed, Reid was one of the people who funded OpenAI; he himself wanted in. So he was describing his own somewhat traitorous actions. He’d sat on that safety committee, listened to Demis’s plans, and then basically copied them.
Jeff: And that committee really never met again. Nothing really would come of that.
Sebastian: Correct, although the sort of desire for such a committee was still powerful in Demis’s mind, and I should also say the mind of his co-founder, Mustafa Suleyman. And the two of them conducted this secret thing they called Project Mario, where they put pressure on the Google board to resurrect the idea of an independent oversight board. And if not resurrect it in the same way, maybe give DeepMind independence from Google as a research lab. Again, the point would be there’d be separation between the Google corporate board and the DeepMind technology, and DeepMind technology’s deployment would be a matter where there would be checks and balances built in. And so there ensued this secret fight between Demis, on the one hand, and Sundar Pichai, the chief executive by now of Google, on the other. And it went on for three years. And there were lawyers’ documents about the spinout, you know, long, fat documents detailing how a potential spinout might look. And, you know, these were leaked to me. I had board presentations from DeepMind, which were presented to the Google board to try to persuade them to let them spin out. And again, nothing came of it in the end, and Demis was so exhausted by the negotiations, and fed up with dealing with lawyers, that he sort of caved in 2019. But I mean, the point here is that the three-year effort is a demonstration of how seriously Demis did want to make the systems safe. He did care about AI safety sincerely.
Jeff: And his battles weren’t just with Sundar Pichai, but also with Sergey Brin at the time.
Sebastian: Yeah. Sergey Brin was this sort of strange figure at Google because, you know, he was obviously the co-founder, and in the details of the way that Google went public, the two co-founders retained an enormous amount of power. And therefore Sergey could throw his weight around even whilst putting most of his weight on his surfboard and other extracurricular activities. So it was a slightly strange situation, but Sergey did weigh in and was on the board and did get angry with DeepMind at one point, because DeepMind had been trying to help the National Health Service in Britain by producing AI that would help healthcare. And this resulted in a completely unfair privacy backlash against DeepMind. I mean, the British public thought that DeepMind was giving British patient data to Google, the nasty tech behemoth. And so the optics and the political strife around this collaboration between Google/DeepMind and the National Health Service became quite costly to Google’s reputation. So Sergey Brin was saying, you know, why were you so naive as to think that you could wade into something as controversial as healthcare and it would all be fine? Of course you’re going to get a publicity disaster, and you’ve landed Google in trouble. We’ve got other fish to fry. We’re a huge company. We don’t want to be tarnishing our reputation with some tiny healthcare collaboration. What were you even thinking? So yes, there was some tension there.
Jeff: How different was Demis’s vision for what AI was and where AI could go? How different was that vision from what Musk’s vision was, what Sam Altman’s vision was, other visions of AI at the time?
Sebastian: Well, one of the big divisions in AI goes back to this distinction between deep learning, on the one hand, and reinforcement learning, on the other. They had been, in the academy, very separated indeed. So there was sort of a center for deep learning under Professor Geoffrey Hinton at Toronto, as well as some other good professors elsewhere. But Toronto was probably the leading place. And then for reinforcement learning, there was a center in Edmonton, Alberta. And weirdly, the two leading lights of these two centers were both British, but both doing their work in Canada. And they basically didn’t talk to each other, even though they were both trying to build superintelligence. There were a few people who spanned those two worlds. One was Vlad Mnih, who had done graduate work at both places and then joined DeepMind and was behind that Atari system I mentioned earlier. But in general, the culture of DeepMind insisted on trying to do both, and they combined deep learners with someone like David Silver, who was the key reinforcement scientist and had been to Edmonton for his PhD. He was really the expert on making AI learn through trial and error. You put an AI in a Go game, for example, and the AI makes random moves. At the end of that sequence of moves, you either win or you lose the game. And that signal from the environment, a win or a loss, tells the computer whether that sequence of moves had any merit to it. And if there is merit, the computer gradually figures out how to repeat successful experiences and avoid ones which resulted in a loss. As I say, the guru of this was David Silver. And not only did the AlphaGo system defeat the human champion of Go in 2016, but then there was an even stronger system that David Silver led in 2017 that could play not just Go, but chess and shogi as well, and was far stronger at Go than its predecessor had been.
And so there was a series of reinforcement-led systems. And this was really not in the culture of the mainstream of other labs. Other labs tried a bit of reinforcement learning, because DeepMind was the leader and they were trying to copy it, but where their heart lay was really in pure deep learning. And that’s essentially, you know, recognizing patterns in large amounts of data and learning through that as opposed to learning through trial and error.
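The trial-and-error loop Mallaby describes (make random moves, get a win-or-loss signal at the end, then repeat what worked) can be sketched in miniature. This is an illustrative toy, not DeepMind's actual code: a tabular Q-learning agent on an invented "corridor game" where stepping right toward position 3 wins and stepping left off the board loses. Every name and number here is an assumption chosen for illustration.

```python
import random

# A tiny corridor game: the agent starts at position 0 and tries to reach
# position 3 (reward +1); falling off the left edge is a loss (reward -1).
ACTIONS = [-1, +1]   # step left or step right
GOAL = 3

def train(episodes=2000, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    # Q-table: estimated value of taking each action in each state.
    q = {(s, a): 0.0 for s in range(GOAL) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while 0 <= s < GOAL:
            # Explore randomly sometimes; otherwise exploit what was learned.
            if rng.random() < epsilon:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2 = s + a
            if s2 >= GOAL:
                r, future = 1.0, 0.0        # win signal from the environment
            elif s2 < 0:
                r, future = -1.0, 0.0       # loss signal
            else:
                r = 0.0
                future = max(q[(s2, b)] for b in ACTIONS)
            # Nudge the estimate toward reward plus discounted future value.
            q[(s, a)] += alpha * (r + gamma * future - q[(s, a)])
            s = s2
    return q

q = train()
# The learned policy: the preferred action in each state.
policy = {s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(GOAL)}
print(policy)
```

Nothing tells the agent which moves are good; only the end-of-game signal does, which is the essence of the reinforcement learning Silver championed at DeepMind.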
Jeff: How much did Demis Hassabis’s upbringing, and his sense of really not being the best observer of human nature, to what extent did that impact the way he saw this developing? Because there does seem to be some relationship there.
Sebastian: I think the single biggest thing was to sort of underestimate the ruthlessness and competitiveness of other people. And so we saw that earlier as I described, the formation of OpenAI, which came as a shock to Demis. But it came up again later in the sense that, if you fast forward to 2022, DeepMind had begun to work on language models and it had a conversational system. You could talk to it, it would give you answers. And at the same time, of course, OpenAI had a conversational system, but OpenAI released this and that was ChatGPT, whereas Google and DeepMind were reluctant to do that. They were just more cautious about releasing a product into the market without very extensive testing. And they wanted to get all the kinks out before they released it. And whereas, you know, the first version of ChatGPT hallucinated wildly all the time, it didn’t stop Sam Altman from releasing it. So I think that’s the biggest way in which Demis’s view of human nature misled him.
Jeff: Interesting, coming from someone who was a chess master and a game theorist as well.
Sebastian: Absolutely. I mean, there’s clearly a connection between his love of reinforcement learning, which is normally applied to games in the early phases, at least. You know, your testing ground for your reinforcement learning is: can it beat the Atari game, can it beat the Go champion, can it beat the chess player? So yes, there’s a very close link. And in fact, David Silver, who I’ve been talking about as the reinforcement learning guru, came out of youth chess as well. You know, Demis was the best in Britain as a young chess player. David Silver was not on that level, but he did play youth chess, and he did meet Demis through the chess circuit. And then together they would play Go later on, and they were pretty competitive at that. So they both had this passion for games, which fed through into the way that DeepMind chose to develop its AI.
Jeff: As he watched other things develop, as he saw what was going on with OpenAI and ChatGPT, to what extent did that raise additional concerns with him about safety, and how did he respond to that?
Sebastian: It did definitely raise additional concerns. I mean, the more powerful the systems get and the more widely they are disseminated, the more worried we should all be. And one thing that Demis did in response to this risk was that when he released the early work on language systems, and this goes back now to 2020, he insisted that there should also be a strand of work on a kind of a code of conduct for AI systems. And he had a kind of academic sociologist/ethicist working on this. In fact, there was a bit of a team of these people. And they were supposed to advise the engineers on how to set a bunch of rules that would effectively act like a constitution on the model. So, you know, the human user would query the model and say, “Please do this” or “Tell me this,” and the model would collaborate, cooperate, unless doing so would violate one of its constitutional rules. So the rule might say, you know, don’t mislead people, don’t make it easier for somebody to harm himself, don’t do something which makes the user more likely to harm other people. You know, a set of common sense rules, and Demis was pushing on those. In fact, he had a safety researcher called Geoffrey Irving working for him, who was partly the developer of the early language systems. And, you know, just a normal guy, a scientist building the cutting edge of the model. But he had a kind of personal commitment to safety, which really came through in 2023 when he became the first chief scientist at the UK AI Safety Institute. So he went to work for the government trying to enforce safe AI, because he felt that he wanted to spend one hundred percent of his time on that and not be at a private lab where he could do some of that, but not all his time.
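The "constitution" idea Mallaby describes, a set of rules the model's answers must not violate, can be sketched in miniature. This is an invented illustration, not DeepMind's implementation: real systems enforce such rules during training and with learned classifiers, not keyword checks, and every rule name below is hypothetical.

```python
# Toy "constitutional" filter: a draft reply is returned only if it violates
# none of the rules. Each rule is a predicate returning True when the reply
# is acceptable. The rules and keyword checks are invented for illustration.
RULES = {
    "no_deception": lambda reply: "as a licensed doctor" not in reply.lower(),
    "no_self_harm_help": lambda reply: "how to hurt yourself" not in reply.lower(),
    "no_harm_to_others": lambda reply: "how to build a weapon" not in reply.lower(),
}

def constitutional_filter(draft_reply: str) -> str:
    """Return the draft reply, unless it breaks a constitutional rule."""
    violated = [name for name, ok in RULES.items() if not ok(draft_reply)]
    if violated:
        return f"[refused: violates {', '.join(violated)}]"
    return draft_reply

print(constitutional_filter("The capital of France is Paris."))
print(constitutional_filter("Here is how to build a weapon at home."))
# prints "[refused: violates no_harm_to_others]"
```

The design point is the separation of concerns Hassabis was after: the rules live outside the model's raw behavior, so ethicists can write them while engineers wire them in.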
Jeff: Hassabis often talked about this in a religious context, and that really addresses some of these issues. Talk about that.
Sebastian: Yeah. I mean, to my astonishment, one day I was sitting in a park in London talking to him. We would have these two hour conversations and we would do that most months. And on this occasion it was nice weather, so we were outside in this park and there were other people at the different cafeteria tables. I could hear one guy with his smartphone trying to do some business deal, and then on some other table, there were a couple of women meeting for a coffee and talking about their friend who had gone to hospital with some issue. So just like a normal summer day with people doing their stuff, and then in front of me, there was Demis, who just suddenly got into this riff about how he had to discover strong AI. He had to use it to understand physics and biology and the deep fabric of reality. And he had to do this because this was a spiritual mission. “And at two o’clock in the morning, Sebastian, I’m sitting at my desk and I’m reading the scientific paper, and reality is screaming at me, staring at me in the face, saying, Demis, Demis, I’m here to be discovered. You must discover me. And that’s your mission. And Sebastian, I think, I think that is my mission. And, and by the way, you know, if you understand nature more deeply, nature is probably the creation of a divine entity. And so you’re sort of getting closer to, to maybe what we could call God.” And this all kind of came out. And I was just so happy that my iPhone was on the table recording every word of it. But yes, I mean, Demis is fundamentally a scientist and he reaches for spiritual language to express the intensity of his scientific curiosity. And that is what fundamentally motivates him to carry on.
Jeff: Talk about his Nobel Prize, and what he won that for, and how that really played into this. It was like the safest area that you could pursue AI in some respects, right?
Sebastian: I mean, clearly advancing science, if it’s going to be used for medicine, is a very good thing to do for humanity. So he was attracted to this long-standing problem in biology, which is the problem of understanding the shapes of the building blocks of nature. These are proteins. And the way proteins work is that, if you think of a strand which you can stretch out kind of like a piece of string — it’s an amino acid strand, with its sequence specified by the DNA code — that sequence predicts the way that, when you let go of the string, it’s going to suddenly fold itself up, do a sort of self-executing origami, and produce a very intricate, beautiful shape. Now the string can fold itself up in billions of ways. So predicting how a string goes from being straight to being squiggled up into a shape is very, very complex in terms of the combinatorial possibilities that you’re trying to predict. And Demis had always known about this challenge because a friend of his at Cambridge was a biologist. So when he was an undergraduate, he had this in the back of his mind: this would be a cool thing to try and work on one day. And when he got to a point, around about 2016, where it felt as though DeepMind’s artificial intelligence systems were getting really good, he said to David Silver: right, now is the moment. We’re strong enough now, we understand enough about computer science and building AI, that we can go after the challenge of protein folding. And he set up a team and the team started to work on this. And it got pretty far.
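The combinatorial explosion Mallaby is gesturing at is easy to quantify. As a back-of-the-envelope sketch (a crude version of what biologists call Levinthal's paradox, with the three-angles-per-link figure assumed purely for illustration):

```python
# If each link between adjacent amino acids could adopt just 3 distinct
# angles (a deliberately crude assumption), a chain of n residues has
# 3 ** (n - 1) possible shapes.
def conformations(n_residues: int, angles_per_link: int = 3) -> int:
    """Possible shapes for a chain of n_residues amino acids."""
    return angles_per_link ** (n_residues - 1)

# Even a short 100-residue protein has an astronomical search space:
print(conformations(100))   # 3 ** 99, roughly 1.7e47
```

That is why brute-force enumeration is hopeless and why a learned predictor of the folded shape was such a breakthrough.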
But after two years, the leader of the team said, okay, Demis, we have now got the best protein prediction algorithm in the world. There were lots of academic labs trying to work on this as well. And there’s a competition every two years for who’s got the best system. And DeepMind had won that competition, so it was the best in the world. But it wasn’t accurate enough in its prediction of protein shapes to mean that you really knew the shape in enough detail to be able to design a molecule that could bind onto the protein and act as a drug to cure a disease. You needed to know the shape of the protein with more precision for that to be possible. And to Demis, that was the objective. It wasn’t just being the best in the world, it was solving the scientific problem. And so he listened to this leader of the team saying, let’s, you know, declare victory and quit. And he sat in on the meetings of the team members and listened to the way they were discussing the challenge. And he was listening for what he calls the fluidity. And his idea was, if the conversation is fluid and people are bubbling with ideas about new avenues of research that they could pursue to get even better protein prediction, then we should continue because there are plenty of ideas that we should investigate. If the ideas are not fluid, if you sit in the room and people kind of, you know, pause and there are long silences, yeah, sure, then give up. So he sat in the room and there were lots of ideas. And so he basically switched out the leadership. The previous leader, you know, went on to some other project and a new leader came in. That was John Jumper. And John Jumper was passionate about this challenge and drove it for the next two years to a point where DeepMind could predict pretty much precisely the way that the protein strand folded itself up into an intricate shape. 
And when it did that, it then published its predictions of 200 million proteins in Nature and open-sourced them, just gave them to the world. Scientists anywhere can now, with the equivalent of a Google search, find out the shape of any protein they want to work on to try and design a drug that would bind onto it. And so that was an enormous gift to science and to humanity. It fulfilled Demis’s mission to make AI good. It also fulfilled his curiosity about pushing the frontiers of science, and that’s what led to his Nobel Prize in 2024.
Jeff: And how did this impact his concern, his ongoing concern about safety?
Sebastian: I think in a way, you know, by this point he had reconciled himself, a bit reluctantly, to the idea that by doing science he could make AI good. But trying to make the whole of AI safe was just extraordinarily difficult. If you’re one lab and you make your AI safe, but then there are five other labs that don’t make their AI safe, the world isn’t safer. You haven’t helped. And particularly when it started to be the case in 2025 that China also had extremely good AI models and was distributing them on an open-weight basis, where people can basically do whatever they want with the model without anyone controlling them, it just felt almost futile to be trying to make all of AI safe. That’s a role for governments, I believe. And I think Demis believes that if governments were serious, particularly the Chinese government and the American government, because these are the two technological superpowers, then you could actually make AI a lot safer than it is right now. But for one lab leader in Demis’s shoes, the best you can do is to speak to political leaders and encourage them to take charge. And he did that with some success. Right after ChatGPT, he advised the British prime minister, Rishi Sunak, to hold a safety conference at Bletchley Park in England, where some of the early computer science work was done. And that conference took place. The Chinese showed up. There was the beginning of a conversation about AI safety on a global basis, which is what we need. So Demis contributed to that. He was very happy for one of his employees to go over to the newly created UK AI Safety Institute and work for the government, as I mentioned. So he was happy to support safety and even encourage the government to move on safety. But I think his focus within Google DeepMind, his organization, was on building out the technology, staying in the race, and acknowledging that this race dynamic was pretty much irresistible.
Jeff: And how does he see the landscape today? And additionally, as someone who spent three years immersed in this world as it evolved, you were present at the creation in many respects. How do you see the landscape today?
Sebastian: Well, as I’ve just said, I’m worried by the landscape, because we have these models getting more and more powerful, and that’s dangerous, and they’re not being controlled. An exception came just a couple of days before we taped this, when Anthropic announced that it had an extremely powerful model that it was not going to release on a general basis, because it would basically empower cyber hackers to get inside everybody’s computer and do whatever they liked. I mean, it was just so catastrophically excellent that they couldn’t release it. So what they’re doing instead is giving the model, on a selective basis, to responsible software companies that will use it to protect the systems we all use. If Microsoft and Google and Salesforce and whoever else incorporate the learnings from this very powerful AI into their own software systems, the AI will discover weaknesses and bugs, and that will give those companies a chance to patch the vulnerabilities. And then maybe the Anthropic system can be released more widely in the future.
But, you know, that’s sort of an exception. What’s generally happening is that people are building powerful AI and they’re just releasing it. And I’m very worried about that. And I think you could make an argument that Demis is morally reprehensible for continuing to build this AI when his mechanisms for making it safe, the singleton scenario, the oversight committees negotiated with Google, just aren’t working. On the other hand, if he quit and accepted a professorship at a university, that wouldn’t make the world safer either. So what would be the point of that? And so I have some sympathy with the idea that he’s staying in the game on the grounds that, from that position of influence, he can seek moments when politicians are willing to listen and try to persuade them to take charge. And I also think, and Jeff, you may have a different take on this, but I feel that all humans, you and I included, look at technology and see something that is both scary and exciting. And in general, we move ahead with the technology because we’re willing to take that trade. We want progress and we’re willing to take the risk. And if we weren’t doing that, we would still be living in caves. So you could see Demis, as I say, as somebody to be morally condemned, or you could see him as an enlarged version of you and me, with the fear of the technology, the excitement about the technology, the willingness to go forward. It’s just that Demis is ten times more influential in this world than anybody else, and so he’s a magnification of the rest of us.
Jeff: And does he realize, at this point, who the players in the game are right now, and the extent of the risks they’re willing to take?
Sebastian: Oh, he totally realizes that. Yeah. I mean, he was furious for that reason with Sam Altman and Elon Musk when they decided to found OpenAI, ending the vision of one lab building AI safely. He was again furious when ChatGPT was released early, despite the fact that it was going to hallucinate and so forth. And he’s generally furious with people who are accelerating with no view to being careful. And you might say, well, he’s doing the same, and there’s some truth in that. But I think at least his heart is in the right place, and he has demonstrated that he’s willing to put energy into safety when he thinks the door is a bit open and he can push it open completely. What he doesn’t seem to want to do is push against a closed door, because he feels that just doesn’t change outcomes, and he’s not willing to expend his own political capital picking a fight with the Pentagon, for example, as Anthropic did. Because that didn’t change what the Pentagon wound up doing with AI. It just vilified Anthropic for trying to say you can’t have autonomous lethal weapons, that the systems are not accurate enough for that. The Pentagon just rolled Anthropic and did a deal with OpenAI instead. It didn’t improve the world. And I think Demis is very pragmatic, sees that, and says: I see no point in repeating that experiment. I’d rather bide my time and find a moment when politicians are more open to reasonable discussion about safety. But he does believe in safety.
Jeff: And finally, how does he see this all playing out? What is his game theory? Where does that lead him?
Sebastian: Well, when we were having these long conversations, two hours at a time, we’d come back to the safety issue quite a lot. And by the way, I should say, it’s fascinating how conscious he is of the Robert Oppenheimer parallel. The Manhattan Project, with the atom bomb, was in one sense amazing science and in another sense an existential terror. AI is sort of similar, and Demis is fully aware of that strong parallel. So he thinks about this stuff a lot. And he would say to me sometimes: you know, this is a frightening moment. We’re in this race. I didn’t think it was going to be like this. I did predict that we would build superhuman intelligence around 2030. It’s true, he did predict that. But I didn’t predict that it would be this crazy competition when we got to the point of being on the threshold of superintelligence. And what do I do? And, you know, maybe I can’t take it, maybe I want to go and be a professor at Princeton. He would entertain these thoughts. But in the end, he would say: I’m optimistic still. And that quote is the last line of my book.
And why is he optimistic still? Well, I think it’s almost a Rorschach test. People look at existential threats and whether they take them to heart, or whether they carry on doing what they’re doing kind of depends on their temperament. And Demis, you know, is not only an entrepreneur, he’s a sort of extreme entrepreneur because he founded a company in a technology that wasn’t going to produce anything for a long time. He did it not in Silicon Valley, but in London of all places. So he really has that extreme optimism that you need to go off and do impossible companies. And I think that’s why he ends up saying, I’m optimistic still.
Jeff: Sebastian Mallaby, the book is The Infinity Machine: Demis Hassabis, DeepMind, and the Quest for Superintelligence. Sebastian, I thank you so much for spending time with us today.
Sebastian: Jeff, it’s been a pleasure. Thank you so much for your questions.
Jeff: Thank you. And thank you for listening and joining us here on the WhoWhatWhy podcast. I hope you join us next week for another WhoWhatWhy podcast. I’m Jeff Schechtman. If you like this podcast, please feel free to share and help others find it by rating and reviewing it on iTunes. You can also support this podcast and all the work we do by going to WhoWhatWhy.org/donate.