diff --git a/README.md b/README.md
index de972ce..848f702 100644
--- a/README.md
+++ b/README.md
@@ -2,10 +2,18 @@
 
 ## Current Repos
 
-- llama-forge-rs - old alpha-state webview based GUI app for chatting with ai, manages a llama-server in the background
-- llama-proxy-man - proxy which auto starts/stops llama.cpp instances, while retring requests and keeping the connection open so you can run more models than your VRAM can fit, with all api requests working (they'll just be slow if you have to start instances to make them happen)
+#### llama-forge-rs:
+
+- old alpha-state, webview-based GUI app for chatting with AI
+- basic streaming
+- manages a llama-server in the background
+- currently unmaintained
+
+#### llama-proxy-man:
+
+- proxy which auto starts/stops llama.cpp instances for you
+- retries requests in the background and keeps the connection open
+- lets you run more models than your VRAM can fit, with all HTTP API requests working as usual (they'll just be slow when an instance has to be started to serve them)
 
 ## Ideas