- old, alpha-state, webview-based GUI app for chatting with AI
- basic streaming
- manages a llama-server in the background
- currently unmaintained
#### llama-proxy-man:
- proxy which auto starts/stops llama.cpp instances for you
- retries requests in the background and keeps the connection open
- lets you run more models than your VRAM can hold; HTTP API requests work as usual (they'll just be slow when an instance has to be started first to serve them)
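
The core idea above — start the instance a request needs, evict others to free VRAM, and retry while the server loads — can be sketched roughly like this. This is a conceptual illustration, not llama-proxy-man's actual code; the `start`/`stop`/`forward` callables and all parameter names are made up for the sketch:

```python
import time

def proxy_request(model, request, instances, start, stop, forward,
                  max_vram_models=1, retries=5, delay=0.01):
    """Route a request to the llama.cpp instance serving `model`,
    starting it (and stopping others to free VRAM) first if needed.
    `instances` is the set of currently running models; `start`,
    `stop`, and `forward` are injected so the sketch stays abstract."""
    if model not in instances:
        # Evict running instances until the requested model fits in VRAM.
        while len(instances) >= max_vram_models:
            victim = next(iter(instances))
            stop(victim)
            instances.discard(victim)
        start(model)
        instances.add(model)
    # Keep the client's request alive and retry in the background
    # while the freshly started server is still loading weights.
    for _ in range(retries):
        try:
            return forward(model, request)
        except ConnectionError:
            time.sleep(delay)
    raise RuntimeError(f"{model} did not come up after {retries} attempts")
```

From the client's point of view the request simply takes longer on a cold start, since the proxy holds the connection open and retries until the backend answers.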