Trust in science may be at risk, warn American researchers: ChatGPT, the artificial-intelligence-based chatbot, writes such convincing abstracts of scientific articles that in a third of cases the scientists who validate research mistook them for human work.

For some time now, OpenAI's development, ChatGPT, has been in the news almost daily. The AI-based software is surprisingly good and versatile: in addition to rendering existing text in a new style, it can write its own source code, find errors in human-written code, and even simulate another chatbot within itself.

The technology is so good that it has reportedly caused real panic at Google, while Microsoft has announced that it is integrating it into its own online search engine, Bing. However, excellent writing also carries serious risks.

According to experts from Northwestern University and the University of Chicago, one of these dangers is that ChatGPT can deceive scientific reviewers with the text it generates – that is, the experts who check the correctness and accuracy of a scientific publication before it is published. The researchers warn that if this becomes commonplace, it could lead to a crisis in science.

Can artificial intelligence fool us?

We keep receiving evidence of how advanced AI solutions have become, and American researchers are working hard to expose the deceptions the machine's algorithms can produce.

The researchers conducted an experiment with 50 real, already published medical articles. ChatGPT was asked to produce an abstract in the style of the given journal based on each selected article's title. The real and machine-written abstracts were then mixed together, and the 100 texts were sent to four different specialists to read, taking care that no one received both the original and the generated abstract of the same article. The researchers told the readers that some of the abstracts were fake and some were real.

According to the results posted on the bioRxiv preprint server, in most cases – 68 percent – the readers correctly identified which abstract was real and which was artificial, but alarmingly, they were wrong in 32 percent of cases (practically a third).

And this despite the fact that only 8 percent of the 50 generated abstracts met the professional journal's editorial guidelines.

In addition, 14 percent of the time, participants judged a real article abstract to have been written by the AI.
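As an illustration only – this is not the study's actual analysis code, and the toy counts below are chosen merely to mirror the reported proportions – the headline rates can be computed from per-abstract verdicts like this:

```python
# Illustrative sketch, not the study's code: compute reviewer rates
# from (is_generated, judged_generated) boolean pairs.

def review_rates(records):
    """records: list of (is_generated, judged_generated) pairs of bools."""
    n_fake = sum(truth for truth, _ in records)
    n_real = len(records) - n_fake
    # generated abstracts correctly flagged by the reviewer
    caught = sum(truth and guess for truth, guess in records)
    # real abstracts wrongly flagged as AI-written
    false_alarms = sum(guess and not truth for truth, guess in records)
    return {
        "caught_fake_rate": caught / n_fake,
        "missed_fake_rate": (n_fake - caught) / n_fake,
        "false_alarm_rate": false_alarms / n_real,
    }

# Toy data mirroring the article's figures: of 50 generated abstracts,
# 34 are caught (68%) and 16 slip through (32%); of 50 real ones,
# 7 are wrongly flagged as AI-written (14%).
records = ([(True, True)] * 34 + [(True, False)] * 16
           + [(False, False)] * 43 + [(False, True)] * 7)
print(review_rates(records))
```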

According to the participants, the study authors wrote, it was surprisingly difficult to tell which abstract was real and which was merely machine-generated, as reported by Gizmodo.

According to Catherine Gao, a Northwestern University researcher, it is troubling that the specialists checking the texts were wrong about a third of the time even though they knew in advance that some of the abstracts were fake – readers who are not alerted beforehand are even more likely to be deceived.

Researchers fear that AI will accelerate fraud even further. There are people who plagiarize material or assemble incorrect data into a seemingly scientific article in order to sell it to those who want to speed up their career advancement.

If you would like to read more news like this, like the HVG Tech section's Facebook page.
