The Fundamental AI Research (FAIR) team at Meta, Facebook's parent company, has introduced a new "state-of-the-art" artificial intelligence language model called LLaMA (Large Language Model Meta AI).
According to Meta, LLMs "have shown a lot of promise in generating text, having conversations, summarizing written material, and more complicated tasks like solving math theorems or predicting protein structures." Large language models have become a focus for both big tech companies and startups, with models such as Microsoft's Bing AI, OpenAI's ChatGPT, and Google's as-yet-unreleased Bard helping to underpin applications.
Although larger models have been successful in extending the technology's capabilities, they can be more expensive to operate at the stage known as "inference." OpenAI's GPT-3, for instance, has 175 billion parameters. Meta says: "We trained LLaMA 65B and LLaMA 33B on 1.4 trillion tokens. Our smallest model, LLaMA 7B, is trained on one trillion tokens." Unlike Google's LaMDA and OpenAI's ChatGPT, whose underlying models are private, Meta has also declared that its LLM will be made available to the AI research community.