DeepMind researcher claims new AI could lead to AGI, says ‘game is over’

According to Dr Nando de Freitas, a lead researcher at Google’s DeepMind, humanity is apparently on the verge of solving artificial general intelligence (AGI) within our lifetimes.

In response to an op-ed written by yours truly, the scientist posted a thread on Twitter that started with what may be the boldest statement we’ve seen from anyone at DeepMind about current progress toward AGI:

My take: It’s all about scale now! The game is over!

Here is the full text of the de Freitas thread:

Someone’s opinion piece. My take: It’s all about scale now! The game is over! It’s about making these models bigger, safer, more computationally efficient, faster at sampling, smarter memory, more modalities, INNOVATIVE DATA, on/offline, … 1/N

Solving these scaling challenges is what will deliver AGI. Research focused on these issues, e.g. S4 for more memory, is needed. Philosophy about symbols is not. Symbols are tools in the world and big nets have no problem creating and manipulating them 2/n

Finally and importantly, [OpenAI co-founder Ilya Sutskever] @ilyasut is right [cat emoji]

Rich Sutton is also right, but the AI lesson is not a bitter one but rather a sweet one. I learned it from [Google researcher Geoffrey Hinton] @geoffreyhinton a decade ago. Geoff predicted what was predictable with uncanny clarity.

There’s a lot to unpack in that thread, but “it’s all about scale now” is about as difficult a statement to misinterpret as they come.

How did we get here?

DeepMind recently released a research paper and published a blog post on its new multimodal AI system. The system, called ‘Gato’, can perform hundreds of different tasks, ranging from controlling a robotic arm to writing poetry.

The company called it a “generalist” system, but stopped short of saying it was in any way capable of general intelligence – you can learn more about what that means here.

It’s easy to confuse something like Gato with AGI. The difference, however, is that a general intelligence could learn to do new things without prior training.

In my op-ed, I compared Gato to a game console:

Gato’s multi-tasking ability is more like a video game console that can store 600 different games than a game that you can play in 600 different ways. It’s not general AI, it’s a bunch of pre-trained, narrow models neatly bundled together.

That’s not a bad thing, if that’s what you’re looking for. But there’s just nothing in Gato’s accompanying research paper to indicate that this is even a glance in the right direction for AGI, let alone a stepping stone.

Dr de Freitas disagrees. That’s not surprising, but what I found shocking was the second tweet in his thread:

The bit above about “philosophy about symbols” could have been written in direct response to my op-ed. But just as surely as Gotham’s criminals know what the Bat-Signal means, those who follow the world of AI know that mentioning symbols and AGI in the same breath is a surefire way to summon Gary Marcus.

Enter Gary

Marcus, a world-renowned scientist, author, and the founder and CEO of Robust.AI, has argued in recent years for a new approach to AGI. He believes the entire field needs to change its core methodology to build AGI, and wrote a bestselling book titled “Rebooting AI” with Ernest Davis.

He has debated and discussed his ideas with everyone from Yann LeCun of Facebook to Yoshua Bengio of the University of Montreal.

And, for the inaugural edition of his newsletter on Substack, Marcus took on de Freitas’ statements in what amounted to a fervent (but respectful) rebuttal.

Marcus calls the hyperscaling of AI models as a perceived path to AGI “Scaling-Über-Alles”, and refers to these systems as attempts at “Alt Intelligence” – as opposed to artificial intelligence that tries to imitate human intelligence.

Of DeepMind’s research, he writes:

There is nothing inherently wrong with pursuing Alt Intelligence.

Alt Intelligence represents an intuition (or rather, a family of intuitions) about building intelligent systems, and since no one yet knows how to build a system that matches the flexibility and ingenuity of human intelligence, it is certainly fair game for people to pursue multiple different hypotheses about how to get there.

Nando de Freitas is about as blunt as possible in defending that hypothesis, which I will call Scaling-Über-Alles. Of course, that name, Scaling-Über-Alles, isn’t entirely fair.

De Freitas knows very well (as I’ll discuss below) that you can’t just make the models bigger and hope for success. People have been scaling up a lot lately, and have had some great successes, but they have also run into some roadblocks.

Marcus goes on to describe the problem of incomprehension that pervades the AI industry’s giant models.

Essentially, Marcus seems to be arguing that no matter how great and amazing systems like OpenAI’s DALL-E (a model that generates custom images from descriptions) or DeepMind’s Gato get, they’re still incredibly brittle.

He writes:

DeepMind’s newest star, just revealed, Gato, is capable of cross-modal feats never seen before in AI, yet if you look at the fine print, it remains stuck in the same land of unreliability: moments of brilliance combined with absolute incomprehension.

Of course, it’s not uncommon for deep learning advocates to make the reasonable point that humans also make mistakes.

But anyone who is candid will recognize that mistakes like these reveal that something is deeply amiss right now. If one of my children routinely made mistakes like these, I would, without exaggeration, drop everything else I was doing and take them to the neurologist immediately.

While that’s definitely worth a laugh, there’s a serious undertone to it. When a DeepMind researcher declares that “the game is over,” it conjures up a picture of the immediate or near future that simply doesn’t make sense.

AGI? Really?

Neither Gato, DALL-E, nor GPT-3 is robust enough for unfettered public consumption. Each of them requires hard filters to keep it from tipping toward bias and, worse, none of them is capable of consistently producing solid results. And not just because we haven’t discovered the secret sauce for coding AGI, but also because human problems are often hard and don’t always have a single, trainable solution.

It’s unclear how scaling, even in combination with groundbreaking logic algorithms, could solve these problems.

That doesn’t mean that giant models aren’t useful or worthy efforts.

What DeepMind, OpenAI and similar labs do is very important. It is science at the cutting edge.

But to declare that the game is over? To insinuate that AGI will emerge from a system whose distinguishing contribution is the way it serves up models? Gato is great, but that feels like a stretch.

There’s nothing in de Freitas’s snappy rebuttal that can change my mind.

The creators of Gato are clearly brilliant. I’m not pessimistic about AGI because Gato isn’t stunning enough. On the contrary.

I fear that AGI is decades away – perhaps centuries – because of Gato, DALL-E and GPT-3. They each show a breakthrough in our ability to manipulate computers.

It’s nothing short of amazing to watch a machine perform Copperfield-esque feats of misdirection and deception, especially when you understand that that machine is no more intelligent than a toaster (and arguably dumber than the dumbest mouse).

It’s clear to me that we need more than just… scale… to take modern AI from the equivalent of “is this your card?” to the Gandalfian sorcery of AGI we’ve been promised.

As Marcus concludes in his newsletter:

If we want to build AGI, we will have to learn something from humans: how they reason and understand the physical world, and how they represent and acquire language and complex concepts.

It is sheer hubris to believe otherwise.