
Redvau.lt AI Monorepo

Current Repos

llama-forge-rs:

  • old alpha-state, webview-based GUI app for chatting with AI models
  • basic response streaming
  • manages a llama-server instance in the background
  • currently unmaintained

llama-proxy-man:

  • proxy that automatically starts and stops llama.cpp instances for you
  • retries requests in the background and keeps the client connection open while an instance starts
  • lets you run more models than fit in your VRAM, with HTTP API requests working as usual (they are just slower when an instance has to be started to serve them)
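The retry behavior described above can be sketched in plain Rust. This is a minimal, hypothetical illustration, not the actual proxy code: `forward_request` is a stand-in for forwarding an HTTP request to a llama.cpp instance, and here it simply fails until the simulated instance has finished starting.

```rust
use std::thread::sleep;
use std::time::Duration;

// Hypothetical stand-in for forwarding an HTTP request to a llama.cpp
// instance; returns Err while the instance is still starting up.
// (Simulated here: the instance becomes ready on the third attempt.)
fn forward_request(attempt: u32) -> Result<String, &'static str> {
    if attempt < 3 {
        Err("instance not ready")
    } else {
        Ok("response body".to_string())
    }
}

// Retry loop: keep the client connection open and retry in the
// background until the freshly started instance answers, instead of
// failing the request immediately.
fn proxy_with_retry(max_attempts: u32) -> Result<String, &'static str> {
    let mut attempt = 0;
    loop {
        match forward_request(attempt) {
            Ok(body) => return Ok(body),
            Err(e) if attempt + 1 >= max_attempts => return Err(e),
            Err(_) => {
                attempt += 1;
                sleep(Duration::from_millis(10)); // back off before retrying
            }
        }
    }
}

fn main() {
    // Succeeds once the simulated instance has "started".
    let body = proxy_with_retry(10).expect("proxy retry failed");
    println!("{}", body);
}
```

The client only sees extra latency on the first request to a cold model; subsequent requests hit the already-running instance directly.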

Ideas

  • emacs-qwen-plugin
    • use emacs-module-rs + llama.cpp/proxy to get tight Qwen integration into Emacs
  • agent
    • add an experimental RAG + agent framework