BEIR is a benchmark framework for evaluating information retrieval models across various datasets and tasks, including document ranking and question answering.
Features
- Provides a standardized benchmark for IR model evaluation
- Supports multiple datasets and retrieval tasks
- Supports various ranking evaluation metrics
- Works with dense and sparse retrieval models
- Offers plug-and-play integration with transformer-based models
- Includes an easy-to-use API for benchmarking retrieval performance
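BEIR's evaluation reports rank-aware metrics such as nDCG@k. As a minimal sketch (a standalone pure-Python illustration, not BEIR's own implementation), this is how nDCG scores a ranked result list against graded relevance judgments:

```python
import math

def dcg_at_k(relevances, k):
    # Discounted cumulative gain: each relevance grade is discounted
    # by log2 of its rank position (rank 1 -> log2(2), rank 2 -> log2(3), ...).
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(ranked_relevances, k):
    # Normalize by the ideal DCG (relevance grades sorted descending),
    # so a perfect ranking scores 1.0.
    ideal = dcg_at_k(sorted(ranked_relevances, reverse=True), k)
    return dcg_at_k(ranked_relevances, k) / ideal if ideal > 0 else 0.0

# A system that places the relevant document at rank 1 outscores one
# that buries it at rank 3, even though both retrieve the same set.
print(ndcg_at_k([1, 0, 0], 3))  # 1.0 — ideal ordering
print(ndcg_at_k([0, 0, 1], 3))  # 0.5 — relevant document at rank 3
```

Because nDCG rewards placing relevant documents early, it distinguishes between retrieval models that return the same documents in different orders.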
Categories
Natural Language Processing (NLP)
License
Apache License V2.0