DeepMind Gato and the long, uncertain path to artificial general intelligence - The Wire Science




  • Last month, DeepMind, a subsidiary of technology giant Alphabet, electrified Silicon Valley when it announced Gato, perhaps the most versatile AI model in existence.
  • For some computing experts, this is evidence that the industry is on track to reach a long-awaited, much-hyped milestone: artificial general intelligence (AGI).
  • This would be huge for humanity. Think of everything you could accomplish if you had a machine that could be physically adapted to fit any purpose.
  • But a number of savvy people and researchers have argued that something fundamental is missing in the grand plans to build Gato-like AI into full-fledged AGI machines.

Last month, DeepMind, a subsidiary of technology giant Alphabet, electrified Silicon Valley when it announced Gato, perhaps the most versatile artificial intelligence model in existence. Billed as a “generalist agent”, Gato can perform over 600 different tasks. It can control a robot, caption images, identify objects in pictures and more. It is probably the most advanced AI system on the planet that is not dedicated to a single function. And for some computing experts, it is evidence that the industry is on track to reach a long-awaited, much-hyped milestone: artificial general intelligence.

Unlike regular AI, artificial general intelligence (AGI) would not require large amounts of data to learn a task. While ordinary artificial intelligence must be trained or programmed to solve a specific set of problems, a general intelligence can learn through intuition and experience.

An AGI could in theory learn everything a human being can, if given the same access to information. Basically, if you put an AGI on a chip and then put that chip in a robot, the robot could learn to play tennis the same way you or I do: by swinging a racket around and getting a feel for the game. This does not necessarily mean that the robot would be sentient or capable of cognition. It would not have thoughts or feelings; it would just be really good at learning to do new tasks without human help.

This would be huge for humanity. Think of all that you could accomplish if you had a machine with the intellectual capacity of a human and the loyalty of a trusted canine companion – a machine that could be physically adapted to suit any purpose. That is the promise of AGI. It's C-3PO without the feelings, Commander Data without the curiosity, and Rosey the Robot without the personality. In the hands of the right developers, it could be the epitome of the idea of human-centered AI.

But how close is the dream of AGI really? And is Gato actually moving us closer to that?

For a specific group of researchers and developers (I call this group the “Scaling-Uber-Alles” crowd, adopting a term coined by world-renowned AI expert Gary Marcus), Gato and similar systems based on transformer deep-learning models have already given us the blueprint for building AGI. Essentially, these transformers use huge training datasets and billions or trillions of adjustable parameters to predict what will come next in a sequence.
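To make the prediction objective concrete, here is a deliberately tiny sketch: a toy next-token predictor trained by counting bigram frequencies. This is an illustration of "predict what comes next in a sequence" only; real transformer models like Gato instead learn billions of parameters by gradient descent, and the corpus and function names below are invented for the example.

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus):
    """Count, for each token, how often every other token follows it."""
    counts = defaultdict(Counter)
    for sequence in corpus:
        for current, nxt in zip(sequence, sequence[1:]):
            counts[current][nxt] += 1
    return counts

def predict_next(model, token):
    """Return the most frequent continuation seen in training, or None."""
    followers = model.get(token)
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# A toy "training library" of sequences (hypothetical data).
corpus = [
    ["the", "robot", "plays", "tennis"],
    ["the", "robot", "stacks", "blocks"],
    ["the", "robot", "plays", "chess"],
]
model = train_bigram_model(corpus)
print(predict_next(model, "robot"))  # "plays" (seen twice, vs. "stacks" once)
```

The point of the sketch is the limitation the essay goes on to describe: the predictor can only ever reproduce statistics of the data it was handed. Ask it about a token it has never seen and it returns nothing.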

The Scaling-Uber-Alles crowd, which includes notable names such as OpenAI's Ilya Sutskever and the University of Texas at Austin's Alex Dimakis, believes transformers will inevitably lead to AGI; all that remains is to make them bigger and faster. As Nando de Freitas, a member of the team that created Gato, recently tweeted: “It's all about scale now! The game is over! It's about making these models bigger, safer, compute efficient, faster at sampling, smarter memory …” De Freitas and company understand that they will need to create new algorithms and architectures to support this growth, but they also seem to believe that AGI will emerge on its own if we keep making models like Gato bigger.

Call me old-fashioned, but when a developer tells me that their plan is to wait for an AGI to magically emerge from the vast ocean of big data, like life from the primordial soup, I tend to think they are skipping a few steps. Apparently, I am not alone. A number of savvy people and researchers, including Marcus, have argued that something fundamental is missing in the grand plans to build Gato-like AI into full-fledged generally intelligent machines.

I recently explained my thinking in a trilogy of essays for The Next Web's Neural vertical, where I am an editor. In short, a key premise of AGI is that it should be able to obtain its own data. But deep learning models, such as transformer AIs, are little more than machines designed to draw inferences from the databases already delivered to them. They are librarians, and as such they are only as good as their training libraries.

A general intelligence could theoretically figure things out even if it had only a tiny database. It would intuit the methodology needed to accomplish its task based on nothing more than its ability to choose which external data was and was not important, like a human being deciding where to place their attention.

Gato is impressive, and there is nothing quite like it. But at its core it is a clever package that presents the illusion of a general AI through the expert use of big data. Its gigantic database, for example, probably contains datasets built on the entire contents of websites such as Reddit and Wikipedia. It is amazing that people have managed to do so much with simple algorithms just by forcing them to churn through more data.

In fact, Gato is such an impressive way of faking general intelligence that it makes me wonder if we might be barking up the wrong tree. Many of the tasks Gato is capable of today were once thought to be something only an AGI could do. It feels like the more we accomplish with regular AI, the harder the challenge of building a general agent appears to be.

For these reasons, I am skeptical that deep learning alone is the path to AGI. I believe we will need more than bigger databases and additional parameters to tweak. We will need an entirely new conceptual approach to machine learning.

I believe that humanity will eventually succeed in its quest to build AGI. My best guess is that we will knock on AGI’s door sometime in the early to mid-21st century, and that when we do, we will discover that it looks completely different from what the researchers at DeepMind imagine.

But the beauty of science is that you have to show your work, and right now DeepMind is doing just that. It has every opportunity to prove me and the other naysayers wrong.

I really hope it works out.

Tristan Greene is a futurist who believes in the power of human-centered technology. He is currently the editor of The Next Web’s futurism vertical, Neural.

This article was first published by Undark.

