Google’s big launch is coming, but be careful, because it might lie!

There is a lot of excitement at Google: more than 100 experts are testing its new development, an AI-powered life guidance advisor. The company began research in this field in order to outperform its competitors, Microsoft and the artificial intelligence research laboratory OpenAI.

ChatGPT, an OpenAI development, uses language models to interpret input and carry on ongoing communication with users. ChatGPT has become incredibly pervasive and popular in the past year, so many technology companies, including Google, Facebook, and Snapchat, have also started developing generative AI technology to improve communication with users and provide more human-like answers to queries.

From hallucinations to realities

But many problems have arisen: some are data-protection concerns, and not all AI tools give correct answers. Experts have reported several cases of chatbots presenting so-called “AI hallucinations”, fabricated content, as reality.

One such mistake occurred at a non-profit organization that supports people with eating disorders, where a chatbot gave users harmful dietary advice. Artificial intelligence experts warn that chatbots can be very persuasive when answering questions, yet they are often wrong and do not tell the truth.

Google’s new initiative to use AI as a life advisor also conflicts with the existing guidelines for Bard, Google’s own conversational AI chatbot, which warn users not to rely on AI for medical, legal, financial, or other professional advice, and not to share confidential information with it.

The keyboard betrays us

However, AI can not only manipulate and misuse our data; it can also be used in cyberattacks against us, and in such an attack it is almost 100 percent certain to steal our passwords. Using a new hacking method, the AI monitors what we type on the keyboard based on the distinct sound of each key, and it has already managed to obtain access codes this way.

The scientists also tested the AI’s accuracy during a Zoom call, recording keystrokes with the laptop’s microphone.

The AI can already reproduce what we type with up to 93 percent accuracy, because it recognizes the unique pattern of each keystroke: its sound, intensity, and timing.
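
To make the mechanism concrete, the sketch below shows in broad strokes how an acoustic keystroke classifier of this kind could be built. It is a minimal illustration, not the researchers’ actual pipeline: the sample rate, the spectrogram features, the RandomForest classifier, and the synthetic stand-in recordings are all assumptions made for the example.

# Minimal sketch (illustrative only): classifying keystrokes from short
# audio clips by their spectral "fingerprint". Assumes you already have
# isolated, labeled keystroke recordings; synthetic signals stand in here.
import numpy as np
from scipy.signal import spectrogram
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

SAMPLE_RATE = 44_100   # assumed laptop-microphone sample rate
CLIP_LEN = 4_410       # ~100 ms of audio per keystroke clip

def keystroke_features(clip: np.ndarray) -> np.ndarray:
    """Turn one keystroke clip into a flat spectral feature vector."""
    _, _, spec = spectrogram(clip, fs=SAMPLE_RATE, nperseg=256)
    return np.log1p(spec).mean(axis=1)   # average energy per frequency band

# Stand-in data: pretend each key rings with a slightly different tone.
rng = np.random.default_rng(0)
clips, labels = [], []
for idx, key in enumerate("abcdefgh"):
    base_freq = 800 + 120 * idx
    t = np.arange(CLIP_LEN) / SAMPLE_RATE
    for _ in range(40):
        tone = np.sin(2 * np.pi * base_freq * t)
        clips.append(tone + 0.3 * rng.standard_normal(CLIP_LEN))
        labels.append(key)

X = np.array([keystroke_features(c) for c in clips])
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")

On real recordings the features would come from actual keystroke audio rather than synthetic tones, but the overall shape of the attack, extracting a per-key acoustic signature and training a classifier on it, is the same idea described above.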

However, the AI system does not work identically on every keyboard; in practice it has to be trained separately for almost each keyboard to learn which letter each keystroke corresponds to. We can also defend against such attacks by changing our typing style, by touch typing, which reduced the success of the AI eavesdropping by 24 percent, or by using randomly generated passwords rather than complete words, since large language models such as ChatGPT can predict the missing characters of a word.
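
One of the suggested defenses, randomly generated passwords, is easy to apply in practice. Here is a minimal sketch using Python’s standard secrets module; the password length and character set are illustrative choices, not something prescribed by the researchers.

# Minimal sketch: generate a random password containing no dictionary
# words, so a language model has nothing to "complete" even if a few
# characters leak through the audio. Uses only the standard library.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def random_password(length: int = 16) -> str:
    """Return a cryptographically random password of the given length."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(random_password())   # different every run

Because such a string contains no real words, recovering part of it from keystroke sounds gives an attacker far less to work with than a password built from complete words.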

(Cover photo: Ali Song/Reuters)