Dr. Nando de Freitas, Director of Research at Google DeepMind, responded in a tweet to an article published on The Next Web, whose author argued that humanity will never be able to create artificial general intelligence.
It's all about scale now! The game is over!
wrote de Freitas.
We just need to make these models bigger, safer, more compute-efficient, faster at sampling, with smarter memory, more modalities, innovative data, and both online and offline operation... If we solve these challenges, we get artificial general intelligence,
he added.
The confidence is not unfounded: DeepMind has been working toward artificial general intelligence for a decade, and recently unveiled an agent called Gato, which can learn a wide variety of tasks, from playing old Atari games to controlling a robotic arm, and is billed as the most general machine learning system to date.
Current AI systems are mostly capable of performing a single, very narrowly defined task. The significance of our work is that a single model can solve hundreds of tasks,
explained Scott Reed, a researcher at DeepMind.
Gato is not truly proficient yet: of the 604 tasks it can attempt, it matches the performance of a skilled human on 450; on the rest it falls short. In conversation it sometimes gives nonsensical answers and gets simple facts wrong.
Gato is otherwise built much like today's leading AI systems: it is a so-called transformer, the architecture that has been used since 2017 to solve complex tasks. In terms of parameter count it is orders of magnitude smaller than GPT-3, the best-known language model. That smaller size is deliberate, according to its developers, and scaled up, the same design could handle any "task, behavior, or anything interesting."
At the moment, Gato is not only small; it also shares a handicap with the most advanced AI systems: a limited context window, or put more simply, a short memory. GPT-3, for example, can produce entirely original text, but it cannot write a long article or a book, because it simply loses the thread. Forgetting what the current task is about is one of the known weaknesses of machine learning systems.
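The limited-context problem above can be illustrated with a toy sketch (this is not DeepMind's or OpenAI's code; the function name and numbers are made up for illustration). A transformer attends only to the most recent N tokens, so anything pushed out of that window is effectively forgotten:

```python
# Toy illustration of a fixed-size context window.
# A transformer like Gato or GPT-3 only "sees" the last max_context
# tokens of its input; earlier tokens silently fall out of view.

def visible_context(tokens, max_context):
    """Return the suffix of the token sequence the model can still attend to."""
    return tokens[-max_context:]

history = ["the", "task", "is", "to", "summarise", "chapter", "one"]
print(visible_context(history, max_context=4))
# The instruction at the start of the sequence ("the task is...")
# has already been forgotten.
```

This is why a model can lose the thread mid-document: the original instruction is no longer inside the window it conditions on.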
The end is near
Artificial general intelligence is also a philosophical problem. Some scholars of the subject believe that such a thinking machine would be dangerous and could plunge humanity into an existential catastrophe. They fear that an AI could attain superintelligence in one fell swoop, surpass and displace humans as master of the planet, and prevent the downgraded apes from stopping it.
Safety is one of the most important issues, de Freitas said. DeepMind, based in London, is working on a so-called "big red button" that would allow human intervention if the machine's actions led to unintended consequences, meaning the system could be shut down in an emergency.
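The idea of the "big red button" can be sketched in a few lines (a toy sketch only, assuming nothing about DeepMind's actual design; the class and method names here are invented). The point is simply that a human-settable flag takes priority over the agent's own policy:

```python
# Toy sketch of a human-interruptible agent loop.
# Pressing the "big red button" sets a flag that the loop checks
# before every action, so the agent stops immediately.

class InterruptibleAgent:
    def __init__(self):
        self.interrupted = False

    def press_big_red_button(self):
        """Human override: halt the agent."""
        self.interrupted = True

    def run(self, steps):
        actions = []
        for t in range(steps):
            if self.interrupted:
                break  # stop acting as soon as the button is pressed
            actions.append(f"action_{t}")
        return actions

agent = InterruptibleAgent()
agent.press_big_red_button()
print(agent.run(5))  # no actions are taken after the interrupt
```

The hard research problem, which this sketch ignores, is ensuring a learning agent has no incentive to resist or disable the button.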
It is only a question of when such a button will be needed. The Turing test is a conversation that reveals whether a machine can be distinguished from a human respondent. Asked by a colleague how far Gato is from passing the test, de Freitas replied:
still far away.
(Sources: The Next Web, TechCrunch)
(Cover photo: Jason Alden/Bloomberg via Getty Images)