
Google’s artificial intelligence text generator is not the only one that makes mistakes.
This Monday, Dmitri Brereton, a researcher specializing in AI and search engines, pointed out that Microsoft’s new artificial intelligence technology made a series of errors during its public presentation, noting that it is “definitely not ready for release.”
As part of the presentation of the new Microsoft browser, Bing’s AI was asked to list the pros and cons of the three best-selling pet vacuums. The text generator produced a list for the “Bissell Pet Hair Eraser” handheld vacuum, citing as negative points its high noise level and too short a cord.
What was the problem? When Brereton compared the tool’s answer against the article it linked to as a source, he realized that the review nowhere mentioned the vacuum’s noise. Moreover, it was a cordless vacuum cleaner. “I hope Bing AI enjoys being sued for defamation,” the researcher said.
In another example, Bing was asked to draw up a five-day itinerary for a trip to Mexico City, including recommendations on nightlife in the Latin American capital. The artificial intelligence responded with a descriptive list of bars and nightclubs.
After checking the bot’s responses against his own research, Brereton discovered that some of the descriptions offered by the tool were wrong.
In one case, Bing recommended visiting a bar’s website to make a reservation and consult its menu, but on the establishment’s page it was possible to do neither. For two other venues, the bot asserted that there were no reviews on the internet; in fact, both had hundreds.
The most flagrant mistake Bing made during the demo of its text generator, according to Brereton, was falsifying figures when asked for key takeaways from Gap’s third-quarter 2022 financial results.
Microsoft’s technology misidentified quarterly earnings data as gross profit, and other values, such as earnings per share, were “completely made up.”
“I am surprised that the Bing team created this pre-recorded demo riddled with inaccurate information and confidently presented it to the world as if it were good,” the AI researcher concluded in his post.
“We are aware of this study and have reviewed its findings in our effort to improve this experience,” a Microsoft spokesperson told Business Insider. “We recognize that there is still work to be done, and we expect the system to make mistakes during this testing period, so feedback is essential to learn and help improve the models.”
Brereton has not been the only user to detect errors in the “new Bing.” A journalist from The Verge asked the Microsoft search engine to compile a list of films showing in a certain London neighborhood. The bot included Spider-Man: No Way Home and The Matrix Resurrections, relatively old films that are no longer in theaters.
The AI arms race may lead to the spread of disinformation
The assessments of specialized researchers such as Brereton come at a time when large technology companies like Google and Microsoft appear to be entering an artificial intelligence arms race.
Both the Redmond and Mountain View firms made AI-related public presentations last week. Although Microsoft was the first to present its new technology, Google plans to launch its own, Bard, in just a few weeks.
Even before Brereton publicized Bing’s mistakes, Gary Marcus, a former professor of neural science at New York University, noted on his blog how the two companies were handling this battle: Microsoft’s demo “was billed as a revolution,” Marcus assessed, while Google’s was presented as a “disaster.”
Intense pressure to introduce these tools has led some tech industry executives, including John Hennessy, chairman of Google’s parent company Alphabet, to claim that these technologies are being released hastily. Google employees called their own company’s announcement “botched” and “rushed.”
The hasty launch of these text generators could bring with it a significant spread of misinformation, especially if users expect the fast, accurate answers they are accustomed to getting from search engines.
Although Brereton told Business Insider that search engines using generative artificial intelligence, such as the new Bing, can be “quite innovative,” he pointed out that launching them prematurely “could lead to big problems.”
“It is dangerous for a search engine that millions of people trust for correct answers to start confidently making things up,” the researcher concludes. “Bing tries to address this by warning people that its answers may be inaccurate, but they know, and we know, that nobody is going to pay attention to that.”