Llama 2
The next generation of our open source large language model. This release includes model weights and starting code for pretrained and fine-tuned Llama language models — ranging from 7B to 70B parameters.
Llama 2 pretrained models are trained on 2 trillion tokens and have double the context length of Llama 1. The fine-tuned models have been trained on over 1 million human annotations.
Llama 2 outperforms other open source language models on many external benchmarks, including reasoning, coding, proficiency, and knowledge tests.
Llama 2 was pretrained on publicly available online data sources. The fine-tuned model, Llama-2-chat, leverages publicly available instruction datasets and over 1 million human annotations.
We have a broad range of supporters around the world who believe in our open approach to today’s AI — companies that have given early feedback and are excited to build with Llama 2.
Learn more
Pi
Pi is your personal AI, designed to be supportive, smart, and there for you anytime. The name stands for ‘personal intelligence’, because Pi offers knowledge tailored to your unique interests. Pi can be a coach, confidante, creative partner, sounding board, or assistant, and whether your question is big, small, or random, Pi is here for it. It explains even the most complicated ideas in a clear and straightforward way, and no matter what you’re going through, it will talk it over in a kind and compassionate manner. Trying to think of a better phrase, a creative party theme, or a good gift? Pi will help you find inspiration and strengthen your ideas, thinking through the pros and cons and helping you figure out a way forward. Pi can also help you organize your thoughts, make clear plans, and act on them, whether you’re changing jobs, trying to get healthier, or learning a new skill. And when you just want to spice things up, shoot the breeze, explore new interests, or chit-chat, Pi is there for that too.
Learn more
StableCode
StableCode offers a unique way for developers to become more efficient by using three different models to help with their coding. The base model was first trained on a diverse set of programming languages from the Stack dataset (v1.2) from BigCode, then trained further on popular languages such as Python, Go, Java, JavaScript, C, Markdown, and C++. In total, we trained our models on 560B tokens of code on our HPC cluster.
Once the base model had been established, the instruction model was tuned for specific use cases to help solve complex programming tasks. To achieve this, the base model was fine-tuned on roughly 120,000 code instruction/response pairs in Alpaca format.
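The Alpaca format mentioned above pairs a natural-language instruction (with an optional input) against a target response, which is then rendered into a fixed prompt template for fine-tuning. A minimal sketch of one such record and its rendered prompt follows; the field names and template wording follow the original Alpaca release, while the example pair itself is invented:

```python
# One hypothetical instruction/response record in Alpaca format.
record = {
    "instruction": "Write a Python function that returns the nth Fibonacci number.",
    "input": "",
    "output": (
        "def fib(n):\n"
        "    a, b = 0, 1\n"
        "    for _ in range(n):\n"
        "        a, b = b, a + b\n"
        "    return a"
    ),
}

def render_prompt(rec: dict) -> str:
    """Render an Alpaca-style training prompt.

    The "### Input:" section is included only when the record's
    "input" field is non-empty.
    """
    if rec["input"]:
        return (
            "Below is an instruction that describes a task, paired with an "
            "input that provides further context. Write a response that "
            "appropriately completes the request.\n\n"
            f"### Instruction:\n{rec['instruction']}\n\n"
            f"### Input:\n{rec['input']}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response "
        "that appropriately completes the request.\n\n"
        f"### Instruction:\n{rec['instruction']}\n\n"
        "### Response:\n"
    )

prompt = render_prompt(record)
```

During training, the model's target completion (the record's "output" field) is appended after the "### Response:" marker.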
StableCode is an ideal building block for those wanting to learn more about coding, while the long-context-window model is well suited to providing single- and multi-line autocomplete suggestions. This model is built to handle much more code at once.
Learn more
Vicuna
Vicuna-13B is an open-source chatbot trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT. Preliminary evaluation using GPT-4 as a judge shows that Vicuna-13B achieves more than 90%* of the quality of OpenAI’s ChatGPT and Google’s Bard while outperforming other models such as LLaMA and Stanford Alpaca in more than 90%* of cases. The cost of training Vicuna-13B is around $300. The code and weights, along with an online demo, are publicly available for non-commercial use.
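The "GPT-4 as a judge" evaluation described above works by showing a judge model a question together with two candidate answers and asking it to score each. A minimal sketch of how such a pairwise judging prompt could be assembled; the helper name and prompt wording are illustrative, not the exact prompt used by the Vicuna authors:

```python
def build_judge_prompt(question: str, answer_a: str, answer_b: str) -> str:
    """Assemble a pairwise-comparison prompt for a judge model such as GPT-4.

    The judge is asked to rate both answers on a 1-10 scale; the two scores
    can then be parsed from the first line of its reply.
    """
    return (
        "You are a helpful and impartial judge. Rate the two assistant "
        "answers below on helpfulness, relevance, accuracy, and level of "
        "detail.\n\n"
        f"[Question]\n{question}\n\n"
        f"[Assistant A]\n{answer_a}\n\n"
        f"[Assistant B]\n{answer_b}\n\n"
        "First output two scores from 1 to 10 separated by a space, then "
        "explain your reasoning."
    )

prompt = build_judge_prompt(
    "Explain what a context window is.",
    "A context window is the maximum number of tokens a model can attend to.",
    "It is how much text the model can see at once.",
)
```

The assembled prompt would then be sent to the judge model's chat API, with the relative quality figure computed from the scores aggregated over many questions.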
Learn more