We have all been guilty of falling under the foundation model spell of the past year-and-a-half, initiated by OpenAI’s unveiling of ChatGPT to the public.
But large language models (LLMs) such as GPT-4 are not the only area of artificial intelligence where incredible progress has been made. And one company has been behind more impressive milestones than most: DeepMind, acquired by Google in 2014 for a reported £400mn to £650mn.
Speaking at the TED 40th anniversary conference in Vancouver, Canada, on Monday, DeepMind’s CEO and head of Google’s entire AI R&D efforts, Demis Hassabis, confirmed that Google has no intention of slowing down investment in the technology. Quite the opposite.
While Hassabis said Google does not talk about specific numbers, he indicated that the company will surpass the $100 billion that Microsoft and OpenAI plan to invest in their “Stargate” AI supercomputer over the coming years.
“We are investing more than that over time, and that is one of the reasons we teamed up with Google,” Hassabis said. “We knew that in order to get to AGI, we would need a lot of compute and Google had, and still has, the most computers.”
While this sounds like the perfect setup for an artificial intelligence arms race, one that could lead to “rolling the dice” on things like reinforcement learning and AI safety, Hassabis reiterated that such a race must be avoided.
Getting through the “bottleneck” of safe AGI
According to the DeepMind CEO, this is especially important as we come nearer to achieving artificial general intelligence — AI that can match or surpass human cognitive abilities such as reasoning, planning, and remembering.
“This technology is still relatively nascent, and so it’s probably ok what is happening at the moment,” Hassabis said. But as we get closer to AGI we need to “start thinking as a society about the types of architectures that get built.”
“The good news is that most of these scientists who are working on this, we know each other quite well, we talk to each other a lot at conferences,” Hassabis stated. (Raise your hand if you are only mildly reassured by this particular piece of information.)
Hassabis added that learning to build safe AGI architectures is a kind of “bottleneck” that humanity needs to get through in order to emerge on the other side into “a flourishing of many different types of systems”: systems that grow out of those initial architectures and come with mathematical or practical guarantees around what they do.
Era of “radical abundance”
The responsibility for preventing a “runaway race dynamic” from taking hold, Hassabis believes, rests not only with AI industry labs, but also with many other parts of society: governments, civil society, and academia. “If we get this right, we could be in this incredible new era of radical abundance, curing all diseases, spreading consciousness to the stars, and maximum human flourishing.”