LLama Herder

  • manages multiple llama.cpp instances in the background
  • keeps track of used and available GPU (video) and CPU memory
  • starts and stops llama.cpp instances as needed, so the memory limit is never exceeded (see the sketch below)
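
Below is a minimal sketch of the memory-budgeting idea, not the actual implementation: before a new model is started, running instances are stopped until the new one fits under a fixed VRAM budget. All names, fields, and numbers are illustrative.

```rust
#[derive(Debug)]
struct Instance {
    model: String,
    vram_mb: u64,
    running: bool,
}

/// Stop running instances (here naively, in list order) until `needed_mb`
/// fits under `budget_mb`. Returns the names of the models that were stopped.
fn make_room(instances: &mut [Instance], needed_mb: u64, budget_mb: u64) -> Vec<String> {
    let mut used: u64 = instances.iter().filter(|i| i.running).map(|i| i.vram_mb).sum();
    let mut stopped = Vec::new();

    for inst in instances.iter_mut() {
        if used + needed_mb <= budget_mb {
            break;
        }
        if inst.running {
            // The real tool would terminate the llama.cpp process here.
            inst.running = false;
            used -= inst.vram_mb;
            stopped.push(inst.model.clone());
        }
    }
    stopped
}

fn main() {
    let mut instances = vec![
        Instance { model: "llama-8b".into(), vram_mb: 6_000, running: true },
        Instance { model: "qwen-14b".into(), vram_mb: 10_000, running: true },
    ];
    // Fit a hypothetical 12 GB model into a 24 GB budget: one instance must go.
    let stopped = make_room(&mut instances, 12_000, 24_000);
    println!("stopped: {stopped:?}");
}
```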

Ideas

  • smarter logic to decide what to stop
  • unified API that proxies requests by a model_name parameter to standardized /v1/chat/completions and /completion-like endpoints (illustrated below)
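
As a rough illustration of that idea (not something the proxy provides today): a client would send an OpenAI-style request, and the proxy would route it to the matching llama.cpp instance based on the model field, starting that instance first if needed. The model name and the serde_json crate used here are assumptions for the example.

```rust
use serde_json::json;

fn main() {
    // Hypothetical request body for the proposed unified endpoint; the proxy
    // would read "model" and forward the request to that model's instance.
    let body = json!({
        "model": "llama-8b", // model_name used for routing (illustrative)
        "messages": [
            { "role": "user", "content": "Hello!" }
        ]
    });
    // Would be POSTed to http://<proxy>/v1/chat/completions per the idea above.
    println!("{}", serde_json::to_string_pretty(&body).unwrap());
}
```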