
OpenAI released the latest version of the technology behind its text generator, ChatGPT, this week. As the AI chatbot gains new features, concerns are growing that these tools could be used for malicious purposes.
Ilya Sutskever, co-founder of OpenAI, told The Verge that the day will come when artificial intelligence can be easily harnessed for evil. “These models are very powerful and are becoming more and more powerful,” Sutskever said. “At some point it will be quite easy, if one wanted, to cause a great deal of harm with those models.”
The OpenAI co-founder made this assessment while explaining why the Microsoft-backed company no longer discloses detailed information about how it trains these large language models (LLMs).
“As the capabilities grow, it makes sense that you don’t want to disclose them,” Sutskever said. “I hope that in a few years it will be completely obvious to everyone that open-sourcing AI is just not wise.”
This position clashes head-on with that of several experts consulted by Business Insider Spain, who argue for open source precisely so that these companies do not fully control such a powerful technology: “These tools can have a significant impact on society.”
Sam Altman, CEO of OpenAI, has shared similar views in the past.
In an interview in early 2023, Altman noted that while in the best case artificial intelligence will be “so unbelievably good” that “it’s hard to even imagine,” in the worst case “it would be like lights out for all of us.”
In a Twitter thread last month, Altman said he believes AI can help people be more productive, smarter, and healthier, but he also warned that the world may not be “that far from potentially scary” tools, which makes regulating them “fundamental.”