Envoy AI Gateway is an open-source gateway designed to manage network traffic between applications and generative AI services using the Envoy proxy ecosystem. The project extends Envoy Gateway to support AI-specific workloads, enabling organizations to route, secure, and scale requests to large language models and other generative AI services.

In a typical deployment, the architecture uses a two-tier gateway model: an outer gateway handles authentication, routing, and global rate limiting, while a second gateway manages traffic to self-hosted model serving clusters. This design lets organizations centralize control over AI service access while keeping flexibility in how backend model infrastructure is deployed. The gateway provides policy enforcement, observability, and routing capabilities designed specifically for AI inference workloads, including intelligent endpoint selection and request optimization.
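As a rough illustration of how routing might be declared, the sketch below shows a Kubernetes-style route resource that matches on a model name and forwards to a self-hosted backend. The API group, resource kinds, and field names here are assumptions drawn from the project's CRD conventions, not an authoritative manifest; consult the project's own API reference for the exact schema.

```yaml
# Hypothetical sketch: route OpenAI-format requests for a given model
# to a self-hosted inference backend. All names and fields are illustrative.
apiVersion: aigateway.envoyproxy.io/v1alpha1
kind: AIGatewayRoute
metadata:
  name: ai-route
  namespace: default
spec:
  schema:
    name: OpenAI          # client-facing API schema the gateway accepts
  parentRefs:
    - name: ai-gateway    # the Gateway this route attaches to
  rules:
    - matches:
        - headers:
            - name: x-ai-eg-model   # header carrying the requested model
              value: llama-3-8b
      backendRefs:
        - name: self-hosted-llama   # an AIServiceBackend for the model cluster
```

In this model, the route translates the client-facing API schema to whatever the selected backend speaks, so applications can target multiple providers through one endpoint.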
Features
- AI-optimized API gateway built on Envoy Proxy and Envoy Gateway
- Two-tier architecture separating global traffic control from model service routing
- Authentication, routing, and rate limiting for generative AI request traffic
- Support for routing requests to multiple model providers or self-hosted inference clusters
- Intelligent endpoint selection and load balancing for AI inference workloads
- Observability and policy management for monitoring and controlling AI traffic
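To make the rate-limiting feature above concrete, a sketch of a per-user global rate limit is shown below using Envoy Gateway's `BackendTrafficPolicy`. The resource kind and `rateLimit` structure follow Envoy Gateway's policy API, but the target names, header key, and limits are illustrative assumptions.

```yaml
# Hypothetical sketch: limit each distinct user (identified by a header)
# to 100 requests per hour at the outer gateway tier. Names are illustrative.
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
  name: model-rate-limit
  namespace: default
spec:
  targetRefs:
    - group: gateway.networking.k8s.io
      kind: Gateway
      name: ai-gateway          # the outer, traffic-control gateway
  rateLimit:
    type: Global
    global:
      rules:
        - clientSelectors:
            - headers:
                - name: x-user-id   # assumed user-identifying header
                  type: Distinct    # track a separate counter per value
          limit:
            requests: 100
            unit: Hour
```

Attaching the policy at the outer gateway keeps quota enforcement centralized, while the inner gateway stays focused on endpoint selection across the model serving cluster.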