# Redvau.lt AI Monorepo

## Current Repos
### llama-forge-rs
- old, alpha-state, webview-based GUI app for chatting with AI models
- basic response streaming
- manages a llama-server instance in the background
- currently unmaintained
### llama-proxy-man
- proxy that automatically starts and stops llama.cpp instances for you
- retries requests in the background and keeps the client connection open
- lets you serve more models than fit in your VRAM at once; HTTP API requests work as usual, they are just slower when an instance has to be started first
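The core idea behind the proxy can be sketched as an eviction policy: keep running instances within a fixed VRAM budget, and stop the least recently used model before starting a new one. This is a minimal illustrative sketch, not llama-proxy-man's actual implementation; all names, fields, and numbers here are hypothetical.

```rust
use std::collections::VecDeque;

/// Hypothetical sketch of a start/stop policy for model instances.
struct InstanceManager {
    vram_budget_mb: u64,
    // Front = least recently used; entries are (model name, VRAM it occupies).
    running: VecDeque<(String, u64)>,
}

impl InstanceManager {
    fn new(vram_budget_mb: u64) -> Self {
        Self { vram_budget_mb, running: VecDeque::new() }
    }

    fn used_mb(&self) -> u64 {
        self.running.iter().map(|(_, v)| v).sum()
    }

    /// Ensure `model` is running, stopping LRU instances until it fits.
    /// Returns the models that had to be stopped to make room.
    fn ensure_running(&mut self, model: &str, vram_mb: u64) -> Vec<String> {
        // Already running: just mark it most recently used.
        if let Some(i) = self.running.iter().position(|(m, _)| m == model) {
            let entry = self.running.remove(i).unwrap();
            self.running.push_back(entry);
            return Vec::new();
        }
        // Evict from the LRU end until the new instance fits the budget.
        let mut stopped = Vec::new();
        while self.used_mb() + vram_mb > self.vram_budget_mb {
            match self.running.pop_front() {
                Some((m, _)) => stopped.push(m), // real proxy: kill llama-server
                None => break, // model larger than the whole budget
            }
        }
        // Real proxy: spawn a llama-server process here, then forward the request.
        self.running.push_back((model.to_string(), vram_mb));
        stopped
    }
}
```

In the actual proxy the request that triggered the start is held open and retried until the instance answers, so clients see a slow response rather than an error.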
## Ideas
- emacs-qwen-plugin
  - use emacs-module-rs plus llama.cpp/the proxy to get a solid Qwen integration into Emacs
- agent
  - add an experimental RAG + agent framework