# Redvau.lt AI Monorepo
## Short Term Todo
- Prepare proxy man for embedding
- [-] Improve markdown rendering in forge chat
- Embed proxy man & add a simple UI in forge
  - naive version first: just spawn the proxy process on startup
  - view current instances/models/virtual endpoints
  - edit existing entries
  - add new ones (from a configurable model folder)
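The "spawn the proxy process on startup" step could be sketched as a small wrapper that starts a child process and kills it again when the app shuts down. This is an illustrative assumption, not the real forge wiring: the `EmbeddedProxy` type is hypothetical, and `sleep` stands in for the actual `llama_proxy_man` binary.

```rust
use std::process::{Child, Command};

/// Hypothetical wrapper: starts the proxy binary as a child process on
/// startup and kills it when the wrapper is dropped ("dumb embed").
struct EmbeddedProxy {
    child: Child,
}

impl EmbeddedProxy {
    fn spawn(program: &str, args: &[&str]) -> std::io::Result<Self> {
        let child = Command::new(program).args(args).spawn()?;
        Ok(Self { child })
    }
}

impl Drop for EmbeddedProxy {
    fn drop(&mut self) {
        // Best-effort kill; errors are ignored if the child already exited.
        let _ = self.child.kill();
        let _ = self.child.wait();
    }
}

fn main() -> std::io::Result<()> {
    // Stand-in command; in forge this would be the llama_proxy_man binary.
    let proxy = EmbeddedProxy::spawn("sleep", &["30"])?;
    println!("proxy pid: {}", proxy.child.id());
    Ok(())
    // `proxy` is dropped here -> the child process is killed on shutdown.
}
```

Tying the child's lifetime to a `Drop` impl keeps the cleanup automatic even on early returns, which is about as simple as embedding gets before moving to proper supervision.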
## Current Repos
llama-forge-rs:
- old alpha-state, webview-based GUI app for chatting with AI
- basic streaming support
- manages a llama-server in the background
- currently unmaintained
llama-proxy-man:
- proxy that automatically starts/stops llama.cpp instances for you
- retries requests in the background and keeps the connection open while an instance spins up
- lets you run more models than fit in your VRAM, with HTTP API requests working as usual (they will just be slow when an instance has to be started first)
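The retry behavior can be sketched as a generic retry-with-backoff loop. This is a blocking, simplified stand-in for illustration; the real proxy's async implementation and its actual names are not shown here.

```rust
use std::thread::sleep;
use std::time::Duration;

/// Retry `op` up to `max_attempts` times, doubling the delay between
/// attempts (exponential backoff). Returns the last error if all fail.
fn retry_with_backoff<T, E>(
    max_attempts: u32,
    initial_delay: Duration,
    mut op: impl FnMut() -> Result<T, E>,
) -> Result<T, E> {
    let mut delay = initial_delay;
    let mut attempt = 1;
    loop {
        match op() {
            Ok(v) => return Ok(v),
            Err(e) if attempt >= max_attempts => return Err(e),
            Err(_) => {
                sleep(delay);
                delay *= 2; // back off before the next attempt
                attempt += 1;
            }
        }
    }
}

fn main() {
    // Simulated upstream that refuses connections until the "instance"
    // is up (succeeds on the third call).
    let mut calls = 0;
    let result = retry_with_backoff(5, Duration::from_millis(1), || {
        calls += 1;
        if calls < 3 { Err("connection refused") } else { Ok("200 OK") }
    });
    assert_eq!(result, Ok("200 OK"));
    assert_eq!(calls, 3);
    println!("succeeded after {calls} attempts");
}
```

From the client's point of view the request simply takes longer: the proxy holds the connection open and keeps retrying against the upstream until the instance answers or the attempts run out.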
## Ideas
- emacs-qwen-plugin
  - use emacs-module-rs + llama.cpp/proxy to get awesome Qwen integration into Emacs
- agent
  - add an experimental RAG + agent framework