
The race for artificial intelligence has begun. In recent months, giants like Microsoft and Google have deployed new AI-powered search systems with which they hope to revolutionize the Internet of the future, at a time when the technology sector is struggling.
Now Meta (formerly Facebook) has come up with its own AI language model, very similar to OpenAI’s ChatGPT. LLaMA, as Meta has baptized its new system, is an abbreviation of Large Language Model Meta AI. It has been presented to the research community for further development and training over the coming months.
For the moment, it will be available under a non-commercial license to researchers and entities affiliated with public administration, civil society, and academia, according to the company in a statement.
LLaMA, according to its developers, outperforms GPT-3 in most tests and is capable of competing with the Chinchilla-70B and PaLM-540B models, the technology behind Google’s new search system, Bard.
In a year marked by massive layoffs in the technology sector — almost 300,000 people lost their jobs between 2022 and 2023 — it looked like the metaverse would be the big revolution. But after Meta invested hundreds of millions in an area that has not quite taken off, artificial intelligence is now hogging all the attention.
In a post on his Facebook profile, Mark Zuckerberg highlighted that “LLMs (Large Language Models) have shown great promise for generating text, holding conversations, summarizing written material, and more complicated tasks such as solving mathematical theorems or predicting protein structures.”
“Meta is committed to this open research model and we will make our new model available to the AI research community,” he wrote.
One of the great challenges of these systems now involves correcting and fine-tuning their responses: they still frequently get things wrong, provide incorrect data, and even amplify biases present in their training data, leading in some cases to racist responses.
Meta has opened its LLaMA system to the research community to help mitigate these risks of bias, toxic comments, and hallucinations in large language models, the company said in a statement.
According to Meta, LLaMA shows fewer biases and prejudices based on religion, gender, age, or nationality than its rivals OPT and GPT-3. In addition, in its truthfulness tests, LLaMA obtains better results than GPT-3 at giving truthful and informative answers, although “the rate of correct answers is still low,” the company qualifies.