LLama Herder

  • manages multiple llama.cpp instances in the background
  • keeps track of used and available video (GPU) and CPU memory
  • starts and stops llama.cpp instances as needed, so that the memory limit is never exceeded (see the sketch below)
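
A minimal sketch of the start/stop bookkeeping this implies, assuming a hypothetical `Manager`/`Instance` model with per-instance VRAM estimates. None of these names, figures, or the naive eviction order are taken from the actual crate; they only illustrate the idea of stopping instances until a requested model fits under the memory limit.

```rust
// Hypothetical bookkeeping sketch; not the crate's real types.
use std::collections::HashMap;

#[derive(Debug)]
struct Instance {
    vram_mb: u64, // VRAM this llama.cpp process is expected to occupy
    running: bool,
}

struct Manager {
    vram_limit_mb: u64,
    instances: HashMap<String, Instance>,
}

impl Manager {
    /// Sum of VRAM claimed by currently running instances.
    fn vram_used(&self) -> u64 {
        self.instances.values().filter(|i| i.running).map(|i| i.vram_mb).sum()
    }

    /// Ensure `model` is running, stopping other instances (naively, in
    /// arbitrary order) until it fits under the VRAM limit.
    fn ensure_running(&mut self, model: &str) {
        let needed = match self.instances.get(model) {
            Some(i) if i.running => return, // already up
            Some(i) => i.vram_mb,
            None => return, // unknown model; a real manager would error here
        };
        let mut used = self.vram_used();
        let mut to_stop = Vec::new();
        for (name, inst) in &self.instances {
            if used + needed <= self.vram_limit_mb {
                break;
            }
            if inst.running && name != model {
                used -= inst.vram_mb;
                to_stop.push(name.clone());
            }
        }
        for name in to_stop {
            // A real manager would terminate the child process here.
            self.instances.get_mut(&name).unwrap().running = false;
        }
        // A real manager would spawn the llama.cpp process here.
        self.instances.get_mut(model).unwrap().running = true;
    }
}

fn main() {
    let mut mgr = Manager {
        vram_limit_mb: 24_000,
        instances: HashMap::from([
            ("llama-8b".into(), Instance { vram_mb: 9_000, running: true }),
            ("nemotron".into(), Instance { vram_mb: 20_000, running: false }),
        ]),
    };
    mgr.ensure_running("nemotron"); // stops llama-8b first so nemotron fits
    println!("{:?}", mgr.instances);
}
```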

Ideas

  • smarter logic for deciding which instances to stop
  • a unified API that proxies requests based on a model_name parameter, for the standardized /v1/chat/completions and /completion-like endpoints (see the sketch below)
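
A rough sketch of what routing by model name could look like: the proxy would read the `model` field of an incoming OpenAI-style request and forward it to the upstream llama.cpp instance serving that model. The model names, ports, and `upstream_for` helper below are invented for illustration and are not part of the current implementation.

```rust
// Hypothetical routing table for the "unified API" idea above.
use std::collections::HashMap;

/// Resolve the upstream base URL for a given model name, if any.
fn upstream_for<'a>(routes: &HashMap<&str, &'a str>, model: &str) -> Option<&'a str> {
    routes.get(model).copied()
}

fn main() {
    let routes = HashMap::from([
        ("llama-8b", "http://127.0.0.1:8081"),
        ("nemotron", "http://127.0.0.1:8082"),
    ]);

    // e.g. a POST to /v1/chat/completions with {"model": "nemotron", ...}
    // would be forwarded unchanged to the matching upstream's same path.
    for model in ["nemotron", "unknown-model"] {
        match upstream_for(&routes, model) {
            Some(url) => println!("{model} -> {url}/v1/chat/completions"),
            None => println!("{model} -> 404, no instance configured"),
        }
    }
}
```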