Alice
Alice (formerly ActiveFence) is a security, safety, and trust platform built to protect AI systems and online platforms in the GenAI era. Powered by the world’s largest adversarial intelligence dataset, Alice safeguards over 3 billion users across more than 120 languages. Its Rabbit Hole intelligence engine continuously analyzes billions of toxic and manipulative data samples to detect emerging threats in real time. The WonderSuite platform includes tools like WonderBuild for pre-launch stress testing, WonderFence for runtime guardrails, and WonderCheck for automated red-teaming. By defending against prompt injection, jailbreaks, governance gaps, and harmful AI behavior, Alice enables enterprises and foundation model labs to innovate with confidence.
Learn more
LLM Guard
By offering sanitization, detection of harmful language, prevention of data leakage, and resistance against prompt injection attacks, LLM Guard ensures that your interactions with LLMs remain safe and secure. LLM Guard is designed for easy integration and deployment in production environments. While it's ready to use out-of-the-box, please be informed that we're constantly improving and updating the repository. Base functionality requires a limited number of libraries, as you explore more advanced features, necessary libraries will be automatically installed. We are committed to a transparent development process and highly appreciate any contributions. Whether you are helping us fix bugs, propose new features, improve our documentation, or spread the word, we would love to have you as part of our community.
Learn more
CrowdStrike Falcon AIDR
CrowdStrike Falcon AI Detection and Response (AIDR) is an enterprise security platform designed to protect the rapidly expanding AI attack surface by delivering real-time visibility, detection, and response across AI systems, users, and interactions. It provides unified visibility into how employees and AI agents use generative AI by mapping relationships between users, prompts, models, agents, and supporting infrastructure, while capturing detailed runtime logs for monitoring, compliance, and investigation. It continuously monitors AI activity across endpoints, cloud environments, and applications, enabling organizations to understand how data flows through AI systems and how agents operate within defined boundaries. AIDR detects and blocks AI-specific threats such as prompt injection, jailbreak attempts, malicious entities, harmful outputs, and unauthorized interactions, using behavioral analysis and integrated threat intelligence.
Learn more
LangProtect
LangProtect is an AI-native security and governance platform that protects LLM and Generative AI applications from prompt injection, jailbreaks, sensitive data leakage, and unsafe or non-compliant outputs. Built for production GenAI, it enforces real-time runtime controls at the AI execution layer by inspecting prompts, model responses, and tool/function calls as they happen. This allows teams to block high-risk behavior before it reaches end users, triggers downstream actions, or exposes confidential data.
LangProtect integrates into existing LLM stacks via an API-first approach with minimal latency and supports cloud, hybrid, and on-prem deployments for enterprise security and data residency needs. It also secures modern architectures such as RAG pipelines and agentic workflows with policy-driven enforcement, continuous visibility, and audit-ready governance.
Learn more