ChatGPT is not only good at programming: it fixes code errors better than programs designed for the task


It has been almost two months since ChatGPT began to attract significant social and media attention, and in that time the artificial intelligence text-generation tool from OpenAI has proved extremely versatile.

By entering a few lines of text as a prompt, ChatGPT could produce philosophical essays worthy of a university student, solutions to math problems that would have racked the brains of more than one reader, and fictitious conversations between two people that could have come straight from your work WhatsApp group.

Since then, the possibilities of the OpenAI tool have been pushed almost to the limit: the Microsoft-backed technology is not just a source of entertainment, but can also be applied to everyday work processes.

In fact, several companies in Spain —such as Jobandtalent— have started using ChatGPT to write notes, reports, and publications, summarize data, or simplify legal texts.

A new study has now shown that it is even capable of finding and fixing bugs in software source code better than a program specifically designed to do so.


The study was carried out by a team of researchers from Johannes Gutenberg University Mainz (Germany) and University College London (UK). As reported by PCMag, the researchers submitted 40 code snippets containing bugs to four different code-fixing systems: ChatGPT, Codex, CoCoNut, and standard APR (automated program repair) techniques.

In the case of the OpenAI tool, they simply asked it, “What’s wrong with this code?” and then copied and pasted the snippet. On a first pass, ChatGPT performed roughly on par with the other programs: it solved 19 problems, the same as CoCoNut, compared with 21 for Codex and 7 for standard APR.
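The setup is easy to picture: a short buggy function plus a one-line question in plain language. A minimal sketch in Python of what such a submission might look like (the snippet, the bug, and the names here are hypothetical illustrations, not taken from the study itself):

```python
# Hypothetical example of the kind of short, buggy snippet a study like
# this might submit, along with the plain-language prompt described above.

def bitcount_buggy(n):
    """Counts set bits in n, but contains a classic one-character bug."""
    count = 0
    while n:
        n ^= n - 1  # BUG: XOR flips the low bits instead of clearing one
        count += 1
    return count

def bitcount_fixed(n):
    """Corrected version: n &= n - 1 clears the lowest set bit each pass."""
    count = 0
    while n:
        n &= n - 1  # clear the lowest set bit
        count += 1
    return count

# The kind of prompt the article describes, with the snippet pasted after it:
prompt = "What's wrong with this code?\n\n" + "def bitcount(n): ..."
```

A one-character change like this is exactly the sort of defect automated repair benchmarks target: easy to overlook by eye, but trivially verifiable once a candidate fix is proposed.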

The researchers note, however, that the ability to keep chatting with ChatGPT after receiving that first answer makes all the difference. The initial 19 correct solutions grew to 31 once the tool had correctly understood what was being asked of it.

“We see that for most of our requests, ChatGPT asks for more information about the bug. Once that information is given, its success rate improves further, solving 31 of 40 bugs and outperforming the best of the other software,” the report says.
