Next-generation code generators based on machine learning produce code that is no worse than what human programmers write. There had been fears that such technologies (like the one powering GitHub's Copilot assistant) would write buggier and more dangerous code than real programmers, but humans, it turns out, are unfortunately no better, and may even come off worse in the comparison.
At least, that is the claim of a recently published study examining the output of such code generators, which are based on large language models (LLMs). Because these models are taught to "program" from huge amounts of publicly available code written by people, they tend to reproduce the errors that people make in their own programs.
In addition, code that is perfectly safe on its own can become dangerous if it is thoughtlessly combined with other code, and since AI does not really think, there were fears that the code it produces would contain more errors than code written by people.
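To make that hazard concrete, here is a minimal hypothetical sketch of our own (it is not taken from the study, and the function names and database schema are invented): each routine below is harmless by itself, but feeding the first directly into the second opens a classic SQL injection hole, exactly the kind of pattern a generator trained on human-written code can reproduce.

```python
import sqlite3

def get_user_input() -> str:
    # Safe in isolation: simply returns whatever the user typed.
    return input("Username: ")

def find_user(db: sqlite3.Connection, username: str) -> list:
    # Unsafe when combined with raw user input: string interpolation
    # lets an attacker inject arbitrary SQL (e.g. "' OR '1'='1").
    query = f"SELECT * FROM users WHERE name = '{username}'"  # vulnerable
    return db.execute(query).fetchall()

def find_user_safely(db: sqlite3.Connection, username: str) -> list:
    # The fix: a parameterized query keeps the same combination safe,
    # because the driver treats the input as data, never as SQL.
    return db.execute(
        "SELECT * FROM users WHERE name = ?", (username,)
    ).fetchall()
```

Neither the input routine nor the query routine is buggy on its own; the danger only appears in how they are wired together, which is why thoughtless composition was the worry.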
However, the current research found that AI-generated code does not make programs any less secure. What is more, according to the researchers, "there are indications that the number of errors per line has decreased" in programs built with AI, which does not exactly flatter human programmers.
This, combined with the fact that AI can be used to write more code far faster, means that programs produced purely by human labor may find themselves at a competitive disadvantage in the future.
At the same time, the study's authors note that their experiment used a relatively modest sample: only 58 human programmers, recruited from university software courses, so the results cannot yet be considered statistically significant.