Former and current employees of OpenAI have published an open letter online titled “On the right to warn about advanced artificial intelligence.”
According to the authors of the letter, everyone realizes that the development of artificial intelligence entails serious risks: growing social inequality, the spread of misinformation and manipulation, and, ultimately, humanity losing control over machines that become more intelligent than we are, the so-called singularity, which threatens humanity with extinction.
As they write, these risks can be mitigated through the engagement of the scientific community, the work of legislators, and public scrutiny; handled this way, AI can ultimately be of great benefit to humanity.
However, companies working in this field try in every way to evade oversight, and to that end they try to silence dissenting opinions.
OpenAI stopped publishing detailed technical descriptions of its products years ago. It is not known exactly what capabilities its models have (which is why rumors of machines awakening to self-awareness keep resurfacing), what dangers they may pose to individuals and society, or how the company guards against those dangers.
The authors of the letter warn that, as insiders, they are among the few with real insight into the state of AI development, yet they cannot speak about it because of their non-disclosure agreements. Whistleblower protections do not help them either: those laws cover someone who discloses specific legal violations to the public, whereas OpenAI and companies like it are moving into entirely new, unregulated territory. Legislators do not see what is happening inside these companies, but the employees do, which brings us back to where the paragraph started. Bottom line: nothing protects those who work in AI, even though they believe there are things worth talking about.
The letter was signed by five former OpenAI employees, a former Google DeepMind employee who previously worked at Anthropic, a current Google DeepMind employee, and six active OpenAI employees who signed anonymously. Three academics also joined the message: Stuart J. Russell, Yoshua Bengio, and Geoffrey Hinton, the latter considered one of the founding figures of artificial intelligence.
The last time OpenAI saw times this turbulent was during the palace revolt of last winter. Around the time the insiders' open letter was published, ChatGPT went down globally. It is not known whether this was a technical fault or a cyberattack, only that the outage has since been fixed.
This is not the first serious open letter in the history of artificial intelligence. In March 2023, five months after ChatGPT was introduced, thousands of people, including Max Tegmark, Yuval Noah Harari, Steve Wozniak, and Elon Musk, called for a six-month pause on the training of more powerful AI systems. Judging by the 2023 announcements from Meta, Google, Anthropic, and others, it is now clear that that open letter had no impact on the world. Whether this latest one fares any better will depend on how loud its echo is.
(CNN, Right to warn…, The Verge)
(Cover photo: Jaap Arriens/NurPhoto/Getty Images)