| Name | Modified | Size |
|---|---|---|
| koboldcpp-linux-x64-nocuda | 2024-02-26 | 50.7 MB |
| koboldcpp-linux-x64 | 2024-02-26 | 382.2 MB |
| koboldcpp_nocuda.exe | 2024-02-26 | 32.3 MB |
| koboldcpp.exe | 2024-02-26 | 301.8 MB |
| koboldcpp-1.59.1 source code.tar.gz | 2024-02-26 | 15.8 MB |
| koboldcpp-1.59.1 source code.zip | 2024-02-26 | 16.0 MB |
| README.md | 2024-02-26 | 1.5 kB |
| Totals: 7 Items | | 798.7 MB |
koboldcpp-1.59.1
This is mostly a bugfix release to resolve multiple minor issues.
- Added `--nocertifymode`, which allows you to disable SSL certificate checking on your embedded Horde worker. This can help bypass some SSL certificate errors.
- Fixed pre-gguf models loading with incorrect thread counts. This issue affected the past 2 versions.
- Added build target for Old CPU (NoAVX2) Vulkan support.
- Fixed cloudflare remotetunnel URLs not displaying on runpod.
- Reverted CLBlast back to 1.6.0, pending https://github.com/CNugteren/CLBlast/issues/533 and other correctness fixes.
- Smartcontext toggle is now hidden when contextshift toggle is on.
- Various improvements and bugfixes merged from upstream, including Google Gemma support.
- Bugfixes and updates for Kobold Lite.
Fix for 1.59.1: Changed makefile build flags, fixed tooltips, and merged IQ3_S support.
To use, download and run koboldcpp.exe, which is a one-file pyinstaller build. If you don't need CUDA, you can use koboldcpp_nocuda.exe, which is much smaller. If you're using AMD, you can try YellowRoseCx's koboldcpp_rocm fork.
Run it from the command line with the desired launch parameters (see --help), or manually select the model in the GUI.
Once the model is loaded, you can connect (or use the full KoboldAI client) at:
http://localhost:5001
For more information, be sure to run the program from command line with the --help flag.
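As a rough sketch, a command-line launch of the Linux binary from this release might look like the following. The model filename is a placeholder, and `--model` and `--port` are illustrative launch parameters; consult `--help` on your build for the authoritative list.

```shell
# One-time step: mark the downloaded Linux binary as executable
chmod +x ./koboldcpp-linux-x64-nocuda

# Launch with a local GGUF model; the web UI and API then serve on port 5001
# (model path is a placeholder -- substitute your own file)
./koboldcpp-linux-x64-nocuda --model ./your-model.gguf --port 5001

# Show all available launch parameters
./koboldcpp-linux-x64-nocuda --help
```

Running the binary with no arguments instead opens the GUI launcher, where the model can be selected manually as described above.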