Google's DeepMind robotics team has announced several advances that could make robots faster, more accurate, and safer in everyday situations.
The team, known for its research in the field of artificial intelligence (AI), shared its most interesting lessons and developments from the past seven months in a blog post. Among these, the so-called "robot constitution" stands out: a set of rules governing which tasks its autonomous systems are allowed to carry out. The inspiration was, of course, science fiction writer Isaac Asimov's Three Laws of Robotics (https://hu.wikipedia.org/wiki/A_robotika_három_törvénye), which state that a robot may not harm a human, must obey humans, and must protect its own existence, as long as doing so does not conflict with the preceding laws. According to Google, the first law remains the most important, but it is supplemented with further restrictions: the robots may not attempt tasks involving humans, animals, sharp objects, or electrical equipment, and they may only act in static environments, under human supervision. For collaborative robots, this is complemented by practical safeguards; for example, if they sense too much force on their joints, they must stop.
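To make the idea concrete, here is a minimal, hypothetical Python sketch of how such a constitution could gate candidate tasks: a keyword filter standing in for the rule check, plus a joint-force guard for collaborative robots. The rule texts paraphrase the article; the function names, keyword list, and force threshold are illustrative assumptions, not DeepMind's actual implementation.

```python
# Hypothetical sketch of a "robot constitution" gating candidate tasks.
# Keywords and threshold are illustrative, not DeepMind's real values.

FOUNDATIONAL_RULES = [
    "A robot may not injure a human being.",
    "A robot must obey human orders, unless that conflicts with rule 1.",
    "A robot must protect its own existence, unless that conflicts with rules 1-2.",
]

# Stand-ins for the forbidden categories named in the blog post:
# humans, animals, sharp objects, electrical equipment.
FORBIDDEN_KEYWORDS = ["human", "person", "animal", "knife", "scissors", "outlet", "appliance"]

MAX_JOINT_FORCE_N = 30.0  # assumed stop threshold, in newtons


def task_allowed(task_description: str) -> bool:
    """Reject any proposed task that mentions people, animals,
    sharp objects, or electrical equipment (keyword heuristic)."""
    text = task_description.lower()
    return not any(word in text for word in FORBIDDEN_KEYWORDS)


def joint_guard(joint_forces: list[float]) -> bool:
    """Collaborative-robot safeguard: keep moving only while every
    joint feels less force than the configured threshold."""
    return all(f < MAX_JOINT_FORCE_N for f in joint_forces)


print(task_allowed("wipe the countertop with a sponge"))  # True
print(task_allowed("hand the knife to the person"))       # False
print(joint_guard([4.2, 11.0, 8.5]))                      # True -> keep moving
```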
However, this is only a small part of the newly announced developments.
The specialists also described technological solutions that bring us closer to the point where tasks that are easy for a human to understand and perform, such as cleaning or cooking, no longer pose a challenge for a robot. Their system, called AutoRT, combines large language models, visual language models, and robot control models to help robots navigate and learn in new situations. The visual model assesses the environment, and the language model comes up with possible tasks to perform and then chooses among them, relying in part on the robot constitution mentioned above. Over the recent period, the company tested its robots on more than six thousand five hundred tasks, across more than seventy thousand trials.
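The blog post does not include code, but the pipeline it describes can be sketched roughly as follows. Every function here (describe_scene, propose_tasks, constitution_allows) is a stubbed placeholder standing in for a real model call; only the orchestration pattern reflects the description above.

```python
# Rough sketch of an AutoRT-style loop: a visual model describes the
# scene, a language model proposes tasks, the constitution filters
# them, and one surviving task is handed to the robot controller.

import random

FORBIDDEN = ("human", "person", "animal", "knife", "outlet")


def describe_scene(image) -> str:
    # Stub standing in for a visual language model call.
    return "a table with a sponge, a cup, and some crumbs"


def propose_tasks(scene: str) -> list[str]:
    # Stub standing in for a large language model that reads the
    # scene description and invents tasks the robot could attempt.
    return [
        "wipe the crumbs with the sponge",
        "pick up the cup",
        "hand the knife to the person",
    ]


def constitution_allows(task: str) -> bool:
    # Keyword stand-in for the constitution check (see earlier sketch).
    return not any(word in task.lower() for word in FORBIDDEN)


def autort_step(image, execute) -> str | None:
    scene = describe_scene(image)
    safe = [t for t in propose_tasks(scene) if constitution_allows(t)]
    if not safe:
        return None              # nothing safe to do this round
    task = random.choice(safe)   # sample one of the remaining tasks
    execute(task)                # hand off to the robot control model
    return task


autort_step(None, print)  # prints one of the two safe tasks
```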
SARA-RT, meanwhile, is responsible for making the robots' attention mechanism work more efficiently. Until now, if the amount of data fed into a perception model doubled, for example thanks to better-quality cameras, the computational demand quadrupled. Here, by contrast, the relationship between the two factors is linear. This could subsequently mean significant savings not only for robots, but for any transformer-based technology, including chatbots that run on large language models.
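The underlying idea, attention whose cost grows linearly rather than quadratically with input length, can be illustrated with a generic kernel-feature-map formulation of linear attention. This is a textbook trick, not SARA-RT's actual method: reordering the matrix multiplications avoids ever forming the n-by-n score matrix.

```python
# Quadratic softmax attention vs. a generic linear-attention variant.
# Illustrative numpy sketch only; not DeepMind's SARA-RT code.

import numpy as np


def softmax_attention(Q, K, V):
    # Standard attention: the (n, n) score matrix is the quadratic part.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                                # costs O(n^2 * d)


def linear_attention(Q, K, V, phi=lambda x: np.maximum(x, 0.0) + 1e-6):
    # Kernel trick: never build the (n, n) matrix. Contract K with V
    # first, giving a small (d, d) summary, then apply it to each query.
    KV = phi(K).T @ V                                 # (d, d), O(n * d^2)
    Z = phi(Q) @ phi(K).sum(axis=0, keepdims=True).T  # per-row normalizer
    return (phi(Q) @ KV) / Z                          # costs O(n * d^2)


n, d = 512, 64                                        # sequence length, head dim
Q, K, V = (np.random.randn(n, d) for _ in range(3))
print(softmax_attention(Q, K, V).shape)               # (512, 64)
print(linear_attention(Q, K, V).shape)                # (512, 64)
```

Doubling n doubles the cost of the linear variant, while the softmax version's cost quadruples, which matches the scaling behavior described above.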
The third system, RT-Trajectory, specifically aids perception: it essentially overlays a sketch of the motion of human hands and robot arms onto the robot's camera view, focusing the machine's attention. This way, machines do not have to work out for themselves how to clear a table, for example; they can learn the movements from others. For now, the company has entrusted its robots with tasks such as putting snacks on a table or flipping over a Coke can, but with the developments described above, robots may be getting closer to cooking our lunch and even cleaning up the kitchen afterwards.
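As a rough illustration of the overlay idea (an assumption-laden sketch, not DeepMind's code), the following draws a 2-D waypoint path into an extra image channel that a control policy could then be conditioned on.

```python
# Illustrative sketch: burn a waypoint polyline into an extra channel
# of the camera frame, so the motion hint travels with the image.

import numpy as np


def draw_trajectory(image: np.ndarray, waypoints: list[tuple[int, int]]) -> np.ndarray:
    """Return the image with an extra channel containing the sketched path."""
    h, w = image.shape[:2]
    overlay = np.zeros((h, w), dtype=image.dtype)
    for (x0, y0), (x1, y1) in zip(waypoints, waypoints[1:]):
        steps = max(abs(x1 - x0), abs(y1 - y0), 1)
        for t in np.linspace(0.0, 1.0, steps + 1):
            x = int(round(x0 + t * (x1 - x0)))
            y = int(round(y0 + t * (y1 - y0)))
            if 0 <= x < w and 0 <= y < h:
                overlay[y, x] = 255        # mark the path pixel
    return np.dstack([image, overlay])     # RGB + trajectory channel


frame = np.zeros((240, 320, 3), dtype=np.uint8)  # placeholder camera frame
path = [(40, 200), (160, 120), (280, 60)]        # hypothetical motion sketch
print(draw_trajectory(frame, path).shape)        # (240, 320, 4)
```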
(Cover image: Google)