StableVicuna
StableVicuna is the first large-scale open-source chatbot trained via reinforcement learning from human feedback (RLHF). StableVicuna is a further instruction-finetuned and RLHF-trained version of Vicuna v0 13b, which is itself an instruction-finetuned LLaMA 13b model.
To achieve StableVicuna’s strong performance, we use Vicuna as the base model and follow the typical three-stage RLHF pipeline outlined by Stiennon et al. and Ouyang et al. Concretely, we further train the base Vicuna model with supervised finetuning (SFT) using a mixture of three datasets:
OpenAssistant Conversations Dataset (OASST1), a human-generated, human-annotated assistant-style conversation corpus comprising 161,443 messages distributed across 66,497 conversation trees, in 35 different languages;
GPT4All Prompt Generations, a dataset of 437,605 prompts and responses generated by GPT-3.5 Turbo;
and Alpaca, a dataset of 52,000 instructions and demonstrations generated by OpenAI's text-davinci-003.
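The SFT mixture described above can be sketched in a few lines. This is a minimal, hypothetical illustration of combining the three corpora into one shuffled training set with source tags; the record schema and function name are assumptions, not the actual training code.

```python
import random

def mix_sft_datasets(oasst1, gpt4all, alpaca, seed=42):
    """Combine three SFT corpora into one shuffled training mixture.

    Each input is a list of {"prompt": ..., "response": ...} records;
    a "source" tag is attached so examples stay traceable to their corpus.
    """
    mixture = (
        [{**ex, "source": "oasst1"} for ex in oasst1]
        + [{**ex, "source": "gpt4all"} for ex in gpt4all]
        + [{**ex, "source": "alpaca"} for ex in alpaca]
    )
    random.Random(seed).shuffle(mixture)  # deterministic shuffle for reproducibility
    return mixture
```

In practice the three corpora differ greatly in size (roughly 161k, 438k, and 52k examples), so a real pipeline would typically also reweight or subsample each source rather than concatenate them directly.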
Learn more
Snowflake Cortex AI
Snowflake Cortex AI is a fully managed, serverless platform that enables organizations to analyze unstructured data and build generative AI applications within the Snowflake ecosystem. It offers access to industry-leading large language models (LLMs) such as Meta's Llama 3 and 4, Mistral, and Reka-Core, facilitating tasks like text summarization, sentiment analysis, translation, and question answering. Cortex AI supports Retrieval-Augmented Generation (RAG) and text-to-SQL functionalities, allowing users to query structured and unstructured data seamlessly. Key features include Cortex Analyst, which enables business users to interact with data using natural language; Cortex Search, a hybrid vector and keyword search engine for document retrieval; and Cortex Fine-Tuning, which allows customization of LLMs for specific use cases.
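Cortex's LLM functions are invoked from SQL; for example, `SNOWFLAKE.CORTEX.COMPLETE(model, prompt)` returns a model completion. The helper below is a small sketch that only builds such a statement (it does not connect to Snowflake); the function name `cortex_complete_sql` is an assumption for illustration.

```python
def cortex_complete_sql(model: str, prompt: str) -> str:
    """Build a Snowflake SQL statement calling the CORTEX.COMPLETE function.

    Single quotes in the prompt are doubled, the standard SQL escape,
    so the prompt can be embedded as a string literal.
    """
    escaped = prompt.replace("'", "''")
    return f"SELECT SNOWFLAKE.CORTEX.COMPLETE('{model}', '{escaped}') AS response;"
```

The resulting string would be executed through a normal Snowflake session (e.g. a `snowflake-connector-python` cursor); in production, bind variables are preferable to string interpolation.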
Learn more
Lightning Rod
Lightning Rod is an AI platform designed to transform messy, unstructured real-world data into verified, production-ready training datasets and domain-specific AI models without requiring manual labeling. It enables users to generate high-quality, citable question–answer pairs from sources such as news articles, financial filings, and internal documents, turning raw historical data into structured datasets that can be used for supervised fine-tuning or reinforcement learning. It operates through an agent-driven workflow where users describe their goal, and the system automatically gathers sources, generates questions, resolves outcomes based on real-world events, and adds contextual grounding before training a model. A key innovation is its “future-as-label” methodology, which uses actual outcomes as training signals, allowing AI systems to learn directly from real-world results at scale instead of relying on synthetic or manually annotated data.
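The "future-as-label" idea can be made concrete with a toy sketch: questions carry a resolution date, and the label is whatever outcome was actually observed once that date passed. Everything here (record schema, function name) is a hypothetical illustration, not Lightning Rod's implementation.

```python
from datetime import date

def resolve_future_labels(questions, outcomes):
    """Label questions with observed real-world outcomes ("future-as-label").

    questions: list of {"id", "question", "resolution_date": date}
    outcomes:  dict mapping question id -> (observed_date, outcome)

    Returns only questions whose outcome was observed on or after the
    resolution date; still-unresolved questions are skipped.
    """
    labeled = []
    for q in questions:
        record = outcomes.get(q["id"])
        if record is None:
            continue  # no real-world outcome observed yet
        observed_date, outcome = record
        if observed_date >= q["resolution_date"]:
            labeled.append({**q, "label": outcome})
    return labeled
```

The point of the technique is that ground truth comes from events that actually happened, so the labeled pairs can feed supervised fine-tuning or reward modeling without manual annotation.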
Learn more
Humiris AI
Humiris AI is a next-generation AI infrastructure platform that enables developers to build advanced applications by integrating multiple Large Language Models (LLMs). It offers a multi-LLM routing and reasoning layer, allowing users to optimize generative AI workflows with a flexible, scalable infrastructure. Humiris AI supports various use cases, including chatbot development, fine-tuning multiple LLMs simultaneously, retrieval-augmented generation, building super reasoning agents, advanced data analysis, and code generation. The platform's unique data format adapts to all foundation models, facilitating seamless integration and optimization. To get started, users can register for an account, create a project, add LLM provider API keys, and define parameters to generate a mixed model tailored to their specific needs. It allows deployment on users' own infrastructure, ensuring full data sovereignty and compliance with internal and external regulations.
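A multi-LLM routing layer, at its simplest, inspects a request and dispatches it to the best-suited model. The keyword router below is a toy stand-in for illustration only; Humiris's actual routing and reasoning logic is not public, and all names here are assumptions.

```python
def route_request(prompt, routes, default_model):
    """Pick a model for a prompt by matching task keywords.

    routes: list of (keywords, model_name) pairs, checked in order;
    the first pair whose keywords appear in the prompt wins.
    Falls back to default_model when nothing matches.
    """
    text = prompt.lower()
    for keywords, model in routes:
        if any(keyword in text for keyword in keywords):
            return model
    return default_model
```

A production router would score candidates on cost, latency, and measured quality per task rather than keywords, but the dispatch structure is the same.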
Learn more