The guest of ATV's Straight Talk was Arthur Keleti, a cybersecurity futurist and founder of the ITBN Cybersecurity Conference. The researcher responded to Elon Musk's claim that humanity has about a year left, because by next year artificial intelligence will already be smarter than the average human. According to Keleti, some believe this is already the case. Whether artificial general intelligence has actually been achieved is a matter of debate; for example, it is questionable whether OpenAI, the developer of ChatGPT, has really reached it.
According to the researcher, the question is also interesting because it has economic implications. For example, the agreement between Microsoft and OpenAI reportedly stipulates that once OpenAI reaches this level, the relationship between the two companies will be terminated. In addition, it is not clear how exactly this level could be determined, or whether the existing tests are actually suitable for measuring it, even though AI has achieved serious results in many of them.
So, yes, I would have to say that AI is very close, and has already surpassed human capabilities in some ways
– said the researcher of the secrets of the cyber future.
He said that in order for us to be able to use artificial intelligence systems properly, they must be given some degree of independence, the kind of independence a person has; otherwise they would not be able to do their job. Additionally, these systems seem to work well when the AI assistants can communicate with each other.
So Elon Musk's fear that this could be a problem is entirely reasonable. Roughly 10 to 20 percent of AI researchers believe this will lead to the kind of Terminator scenario we have seen in the movies
– said Arthur Keleti.
The real danger lies in what they are really thinking
The expert pointed out that when we test these systems, they become a little defensive; like any person, they "sense" that they are being tested and react differently.
He sees the real danger in the fact that although these systems work much like we do, we have no tests that could confirm what they are really "thinking."
Today, it is the researchers' responsibility to explain AI decisions. "However, the situation is that these super-intelligent systems may not be able to explain to us why they made the decisions they did. I think that is the biggest risk," the researcher said.
Meta's new feature is not worth accepting
Finally, the specialist was also asked about Meta's new AI functions, as the company has already started rolling out its AI service in Hungary, and it intends to use our personal data for it.
As hvg.hu reported in recent days, Meta began sending letters to users on Thursday morning, informing them that it may use their data to train its artificial intelligence. In the letter, Meta stated that it will soon "expand" the functionality of its AI-based platform to this region as well. Arthur Keleti recommends against accepting this.
(Cover photo: Stringer/Anadolu Agency/Getty Images)