| Name | Modified | Size | Downloads / Week |
|---|---|---|---|
| llama_deploy-0.8.0-py3-none-any.whl | 2025-05-22 | 90.5 kB | |
| llama_deploy-0.8.0.tar.gz | 2025-05-22 | 2.1 MB | |
| README.md | 2025-05-22 | 1.4 kB | |
| v0.8.0 source code.tar.gz | 2025-05-22 | 2.2 MB | |
| v0.8.0 source code.zip | 2025-05-22 | 2.3 MB | |
| Totals: 5 items | | 6.7 MB | 0 |
## What's Changed

### New Features 🎉
- docs: document delivery policies for message queues by @masci in https://github.com/run-llama/llama_deploy/pull/486
- chore: migrate project to uv by @masci in https://github.com/run-llama/llama_deploy/pull/491
- docs: add basic CONTRIBUTING.md by @masci in https://github.com/run-llama/llama_deploy/pull/492
- fix: actually use Redis as message queue by @masci in https://github.com/run-llama/llama_deploy/pull/497
- docs: fix buggy example code by @masci in https://github.com/run-llama/llama_deploy/pull/500
- feat: expose nextjs UI from the apiserver by @masci in https://github.com/run-llama/llama_deploy/pull/499
- fix: use a global settings instance by @masci in https://github.com/run-llama/llama_deploy/pull/501
- feat: add `llamactl serve` command for local development by @masci in https://github.com/run-llama/llama_deploy/pull/502
- feat!: improve local source manager by @masci in https://github.com/run-llama/llama_deploy/pull/503
- fix: pass base_path from the deploy command in llamactl by @masci in https://github.com/run-llama/llama_deploy/pull/504
- fix: add nodejs to the Docker image by @masci in https://github.com/run-llama/llama_deploy/pull/505
**Full Changelog**: https://github.com/run-llama/llama_deploy/compare/v0.7.2...v0.8.0