

Meta-formerly-Facebook's new artificial intelligence model, which is meant to rival the likes of OpenAI's GPT-3 model, has been leaked - and naturally, it ended up on 4chan.

As Vice reports, the version of the company's Large Language Model Meta AI (LLaMA) that made its way to the controversial forum wasn't intended for public usage. It was designed to be beta tested by researchers and governments - which means that it was most likely leaked by someone who had been granted early access to it.

This leak, as the report notes, likely marks the first time a proprietary AI has been shared before it was officially released by the company that owns it. What's more, Meta only announced the LLaMA model at the end of February, and in its statement to Vice, Meta did not deny that the language model had been leaked.

But it's trying to get ahead of the problem. According to Clement Delangue, head of the AI firm Hugging Face, Meta has been filing takedown requests in an effort to stem the leak.

Ironically, Meta bragged in its announcement of the model's limited release that the company was "democratizing access" to large language models like LLaMA in part to avoid the kinds of toxic outputs that we've seen from these sorts of AIs in the past. By making its newest AI open-source, the company is hoping to make it more robust and less likely to turn into a monster. Last year, for instance, Meta's own BlenderBot 3 AI chatbot quickly turned racist in a matter of a single weekend.

"Even with all the recent advancements in large language models, full research access to them remains limited because of the resources that are required to train and run such large models," Meta's announcement reads. "This restricted access has limited researchers' ability to understand how and why these large language models work, hindering progress on efforts to improve their robustness and mitigate known issues, such as bias, toxicity, and the potential for generating misinformation."

There might be a silver lining for Meta, given the recent leak.
