
SAN FRANCISCO — When Microsoft added a chatbot to its search engine, Bing, this month, people noticed that it was offering up all sorts of false information about the Gap, Mexican nightlife and the singer Billie Eilish.
Then, when journalists and other early test users engaged in lengthy conversations with Microsoft’s artificial intelligence (AI) bot, it began displaying rude and disruptive behavior.
In the days since the Bing bot’s behavior became a global sensation, people have had a hard time understanding the weirdness of this new creation. More often than not, scientists have claimed that much of the blame lies with humans.
However, some mystery remains about what the new chatbot can do and why it behaves the way it does. Its complexity makes it hard to analyze and even harder to predict, and researchers are studying it through a philosophical lens as well as through the hard code of computer science.
Like any other student, an AI system can learn bad information from bad sources. And the strange behavior? It may be a chatbot's distorted reflection of the words and intentions of the people who use it, said Terry Sejnowski, a neuroscientist, psychologist and computer scientist who helped lay the intellectual and technical groundwork for modern artificial intelligence.
“This happens when you go deeper and deeper into these systems,” explained Sejnowski, a professor at the Salk Institute for Biological Studies and the University of California, San Diego, who published a research paper on this phenomenon last month in the scientific journal Neural Computation. “Whatever you are looking for, whatever you desire, they will provide.”
Last month, Google also showed off a new chatbot, Bard, but scientists and journalists immediately noticed that it was writing nonsense about the James Webb Space Telescope. OpenAI, a San Francisco start-up, kickstarted the chatbot boom in November when it introduced ChatGPT, a bot that also doesn’t always tell the truth.
The new chatbots are based on a technology that scientists call large language models, or LLMs. These systems learn by analyzing vast amounts of digital text pulled from the internet, including volumes of false, biased and otherwise toxic material. The text that chatbots learn from is also somewhat out of date, because they have to analyze it for months before the public can use them.
While sifting through the sea of good and bad information all over the internet, an LLM learns to do one thing in particular: guess the next word in a sequence of words.
It works like a giant version of autocomplete technology that suggests the next word for you when you type an email or instant message on your smartphone. In the sequence “Tom Cruise is a ____”, the LLM could guess “actor”.
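To make that concrete, here is a minimal sketch, far simpler than a real LLM and with a toy corpus invented for this illustration, of a model that guesses the next word simply by counting which word most often follows the previous one:

```python
from collections import Counter, defaultdict

# Toy "training corpus" standing in for the internet-scale text an LLM reads.
corpus = (
    "tom cruise is an actor "
    "tom cruise is an actor "
    "tom cruise is an aviator"
).split()

# For every word, count which words follow it and how often.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def guess_next_word(word):
    """Return the word most often seen after `word` in the toy corpus."""
    if word not in next_word_counts:
        return None
    return next_word_counts[word].most_common(1)[0][0]

print(guess_next_word("an"))  # prints 'actor', the most frequent continuation here
```

A real large language model makes the same kind of guess, but from billions of learned patterns rather than simple counts.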
When you talk to a chatbot, it doesn't just draw on everything it has learned from the internet. It draws on everything you have told it and everything it has replied. It isn't just guessing the next word in your sentence; it is guessing the next word in the long block of text that includes both your words and its own.
The longer the conversation becomes, the more influence the user unwittingly has over what the chatbot says. If you goad it into getting angry, it gets angry, Sejnowski said. If you coax it into getting weird, it gets weird.
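As a rough sketch of that point (the general idea, not Bing's actual implementation; the function and labels here are invented for illustration), the text a chatbot conditions on is essentially the whole running transcript, so every earlier turn, yours and its own, shapes the next guess:

```python
def build_prompt(history, user_message):
    """Append the newest user turn and return the full transcript the model will continue."""
    history = history + [("User", user_message)]
    transcript = "\n".join(f"{speaker}: {text}" for speaker, text in history)
    # The model guesses the next words of this entire block, not just the last sentence.
    return history, transcript + "\nAssistant:"

history = []
history, prompt = build_prompt(history, "Tell me something strange.")
print(prompt)
# A real system would feed `prompt` to the language model, then append its reply
# as ("Assistant", reply) so the next turn is predicted from the longer transcript.
```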
The alarmed reactions to the strange behavior of Microsoft's chatbot overshadowed an important point: the chatbot has no personality. It offers instant results that are spit out by an incredibly complex computer algorithm.
Microsoft seemed to curb the weirder behavior when it put a limit on the length of conversations with the Bing chatbot. It was like learning from a test driver that going too fast for too long will burn out the engine. Microsoft's partner OpenAI and Google are also exploring ways to control the behavior of their bots.
But there is a caveat to this reassurance: because chatbots learn from so much material and combine it in such complex ways, researchers are not entirely sure how they produce their results. Researchers watch what the bots do and learn to put limits on that behavior, often after it happens.
Microsoft and OpenAI have decided that the only way to find out what the chatbots will do in the real world is to let them loose and rein them in when they stray. Both companies believe their big public experiment is worth the risk.
Sejnowski compared the behavior of Microsoft's chatbot to the Mirror of Erised, a mystical artifact in J.K. Rowling's Harry Potter novels and the many films based on her inventive world of young wizards.
“Erised” is “desire” spelled backwards. When people discover the mirror, it seems to provide truth and understanding. But it does not: it shows the deepest desires of anyone who stares into it. And some people go mad if they stare at it for too long.
“Because humans and LLMs mirror each other, they will tend toward a common conceptual state over time,” Sejnowski explained.
According to Sejnowski, it shouldn’t be surprising that journalists have started to see strange behavior in the Bing chatbot. Consciously or unconsciously, they were pushing the system in an uncomfortable direction. As chatbots take in our words and feed them back to us, they can reinforce and amplify our beliefs and convince us to believe what they’re telling us.
Sejnowski was part of a tiny group of researchers who in the late 1970s and early 1980s began to seriously explore a type of artificial intelligence called a neural network, which powers today’s chatbots.
A neural network is a mathematical system that learns skills by analyzing digital data. It’s the same technology that allows Siri and Alexa to recognize what you say.
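As a toy illustration (invented for this piece, and nothing like the scale of the systems behind Siri or Alexa), a single artificial "neuron" can learn two numbers, a weight and a bias, by repeatedly nudging them to better fit example data; stacking many such units is, in essence, a neural network:

```python
# Toy data: inputs x and targets y that happen to follow y = 2x + 1.
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]

w, b = 0.0, 0.0           # the neuron's two adjustable parameters
learning_rate = 0.1

for step in range(1000):
    for x, y in data:
        prediction = w * x + b
        error = prediction - y
        # Nudge each parameter slightly in the direction that reduces the error.
        w -= learning_rate * error * x
        b -= learning_rate * error

print(round(w, 2), round(b, 2))  # approaches 2.0 and 1.0
```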
Around 2018, researchers at companies like Google and OpenAI began building neural networks that learned from vast amounts of digital text, including books, Wikipedia articles, chat logs, and other things posted on the internet. By locating billions of patterns in all that text, these LLMs learned to generate text themselves, including tweets, blog posts, speeches, and computer programs. They were even able to hold a conversation.
These systems are a reflection of humanity. They learn their skills by analyzing text that humans have posted on the internet.
But that’s not the only reason chatbots generate problematic language, says Melanie Mitchell, an artificial intelligence researcher at the Santa Fe Institute, an independent laboratory in New Mexico.
When they generate text, these systems do not repeat word for word what is on the Internet. They produce new text on their own by combining billions of patterns.
Even if researchers trained these systems solely on peer-reviewed scientific literature, they could still produce scientifically ridiculous claims. Even if they learned only from truthful text, they could still produce falsehoods. Even if they learned only from wholesome text, they might still produce something creepy.
“There’s nothing stopping them from doing it,” Mitchell says. “They’re just trying to produce something that sounds like human language.”
Artificial intelligence experts have long known that this technology exhibits all sorts of unexpected behaviors. But they don’t always agree on how this behavior should be interpreted or how quickly chatbots will improve.
Because these systems learn from so much more data than we humans could possibly understand, even AI experts can’t understand why they generate a particular text at any given time.
Sejnowski said he believes that, in the long run, the new chatbots have the power to make people more efficient and give them ways to do things better and faster. But there is a caveat for both the companies building these chatbots and the people using them: they can also lead us away from the truth and into dark places.
“This is uncharted territory,” Sejnowski said. “Humans have never experienced it.”
Cade Metz is a technology reporter and author of the book Genius Makers: The Mavericks Who Brought AI to Google, Facebook, and The World. He covers artificial intelligence, autonomous cars, robotics, virtual reality and other emerging areas. @cademetz
Source: NYT Español