An interesting way to rein in AI has emerged in the US: the FTC may fine OpenAI, the maker of ChatGPT, over reputational damage.
The Washington Post reported that the authority sent OpenAI a twenty-page document requiring the company to demonstrate compliance with consumer protection laws. All this despite the fact that American policymakers have lagged far behind the technology: the Senate will not have a new legislative package on AI ready for several months.
Regardless, the Federal Trade Commission can impose fines if it finds that consumer protection laws are being violated; Meta, Amazon and Twitter have already been penalized.
New forms of fraud and deception
The commission has collected several complaints from US citizens alleging that the language model provided "false, misleading, derogatory, or harmful" information about them. If it can be shown that the company intentionally caused reputational harm, it will have to pay.
The head of the commission, Lina Khan, tried to persuade politicians at her hearing in the House of Representatives a few days ago. According to her, ChatGPT's "bugs" can amount to new forms of fraud and deception, and the commission will use every available means to oppose harmful practices in the field.
This includes, among other things:
- fraud committed with artificial intelligence,
- manipulation of potential customers,
- falsely exaggerating the capabilities of AI products.
The commission also wants to know whether OpenAI has evidence of how reliable users believe the output of its AI tools to be, and whether it handles complaints about false claims.
The Washington Post also reports on a case in which ChatGPT “accused” an American lawyer of sexual harassment.
The article notes that the United States has lagged behind the European Union on data protection, and according to the author it will be difficult to catch up, as lawmakers try to balance the need for innovation with ensuring adequate safeguards around the technology.
This is underscored by the fact that Kamala Harris, Vice President of the United States, recently spoke at a press conference about how to protect consumers and promote technological change at the same time.
If there is harm, someone is responsible
According to Gábor Polyák, there is no doubt that some kind of regulation will be needed, but the lecturer at ELTE's media department is not sure it can be implemented quickly, and even if it is, the solution is unlikely to remain adequate. He cited Facebook as an example: Europe is still only establishing the legal framework, while the platform has lost much of its importance over the past three years.
The university lecturer also explained to Napi.hu where the main problem lies.
When I ask what ChatGPT knows about me, the answer is a lot of bullshit, which I laugh off; but if someone else takes this misinformation as a starting point, it is genuinely unlawful and genuinely harmful
– he said, adding: if there is harm, there is someone responsible, and that is ChatGPT, which uses data from which inappropriate conclusions can be drawn.
Black box
This does not happen because someone entered wrong data or because the algorithm went astray on the internet; this is deep-learning technology. He explained that beyond a certain point, developers have no idea how these layers of knowledge, built on one another, produce the final result visible on the screen. This is the black box: self-learning programs run the system through long chains of algorithms, much like the human brain works.
This is how the head of the PROGmasters programming school and IT training center explained it to Napi.hu.
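The "black box" idea described above can be illustrated with a minimal sketch (a hypothetical toy network, not OpenAI's code): each layer transforms the previous layer's output, and while the final numbers are easy to read off, the contribution of any single internal weight to the answer is not.

```python
import random

random.seed(0)  # fixed seed so the toy example is reproducible

def layer(inputs, weights):
    # each output neuron: weighted sum of inputs, passed through a
    # simple ReLU nonlinearity (negative sums become zero)
    return [max(0.0, sum(w * x for w, x in zip(ws, inputs))) for ws in weights]

# three stacked layers with random weights; each layer feeds the next
w1 = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(8)]
w2 = [[random.uniform(-1, 1) for _ in range(8)] for _ in range(8)]
w3 = [[random.uniform(-1, 1) for _ in range(8)] for _ in range(2)]

x = [0.5, -0.2, 0.1, 0.9]   # input features
h1 = layer(x, w1)           # intermediate representation 1
h2 = layer(h1, w2)          # intermediate representation 2
out = layer(h2, w3)         # the final result "visible on the screen"

# the output is plainly readable; explaining *why* these particular
# numbers emerged from the stacked layers is the black-box problem
print(out)
```

Real deep-learning models differ only in scale: billions of weights instead of dozens, which is exactly why developers cannot trace how a given answer was produced.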
"According to the law, you are responsible for what you have control over, and in the technological sense you have no control over deep learning," Gábor Polyák warned.
"All the user can do is point out to the developer that the information is incorrect. What we can currently expect from the makers of models like ChatGPT is to ensure that offending errors are corrected once they become aware of them, whether through external reports or their own monitoring. But the algorithm cannot be fully known, and it is not at all certain that it will not spread false information next time, even with the best of intentions," said the ELTE lecturer.
According to Gábor Polyák, even if a US court were to hold the developer responsible for every answer, in the end the country will not want to stifle its own industry.
"This is a global competition, and it is clear that China will not be held back by personal rights, so America cannot afford to jeopardize development," the expert said. According to Gábor Polyák, in the online world the chance to remedy every wrong has slipped from our hands, and "it is only going to get worse."
"If a Hungarian citizen tries to get compensation for lies published about him, he can try to sue OpenAI, but it is not even certain that his case would be heard in a Hungarian court under Hungarian law. The same problem arises with Facebook and Google: we are simply in no position to enter into any kind of legal battle with them," Gábor Polyák shared his doubts.
We believe in it
By the way, 73 percent of Hungarian users trust content generated by an algorithm.
Moreover, the demand for the use of artificial intelligence seems to be increasing.
- Nearly half (43 percent) of consumers want organizations to use generative AI in customer relationship management.
- Moreover, for 70 percent of consumers, generative AI tools are already the primary source when looking for recommendations for new products and services.
- The majority of consumers (64 percent) are open to buying based on this advice.
- There is not much difference between age groups: 67 percent of consumers would welcome personalized fashion and interior design advice from generative AI.