# Redvau.lt AI Monorepo

## Current Repos

### llama-forge-rs

- older, alpha-state, webview-based GUI app for chatting with AI models
- basic token streaming (sketched below)
- manages a llama-server instance in the background
- currently unmaintained
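
A minimal sketch of the streaming idea, assuming the `ureq` (2.x, with the `json` feature) and `serde_json` crates and a llama-server on port 8080; the endpoint and SSE framing follow llama.cpp's `/completion` API, while the function name and port are illustrative:

```rust
use std::io::{BufRead, BufReader};

/// Stream tokens from a local llama-server and print them as they arrive.
fn stream_completion(prompt: &str) -> Result<(), Box<dyn std::error::Error>> {
    let resp = ureq::post("http://127.0.0.1:8080/completion")
        .send_json(serde_json::json!({ "prompt": prompt, "stream": true }))?;
    // With `"stream": true` the server answers with server-sent events,
    // one `data: {...}` JSON frame per generated chunk.
    for line in BufReader::new(resp.into_reader()).lines() {
        let line = line?;
        if let Some(json) = line.strip_prefix("data: ") {
            let chunk: serde_json::Value = serde_json::from_str(json)?;
            if let Some(token) = chunk["content"].as_str() {
                print!("{token}");
            }
        }
    }
    Ok(())
}
```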

### llama-proxy-man

- proxy that automatically starts and stops llama.cpp instances for you
- retries requests in the background and keeps the client connection open while an instance spins up
- lets you run more models than fit in your VRAM at once; HTTP API requests work as usual, they are just slow when an instance has to be started first (see the sketch below)
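
The core trick, sketched minimally: when a request arrives for a model that is not running, spawn a `llama-server`, poll its `/health` endpoint until the model has loaded, and only then forward the request, so the client sees a slow first response instead of an error. This assumes `llama-server` on `$PATH` and the `ureq` crate; the function name, port handling, and timeout are illustrative:

```rust
use std::process::{Child, Command};
use std::time::Duration;

/// Spawn a llama-server for `model` and block until it is ready to serve.
fn ensure_instance(model: &str, port: u16) -> std::io::Result<Child> {
    let child = Command::new("llama-server")
        .args(["--model", model, "--port", &port.to_string()])
        .spawn()?;
    // Poll the health endpoint: llama-server answers 503 while the model
    // is still loading, which ureq reports as Err; 200 means ready.
    for _ in 0..240 {
        if ureq::get(&format!("http://127.0.0.1:{port}/health"))
            .call()
            .is_ok()
        {
            return Ok(child);
        }
        std::thread::sleep(Duration::from_millis(500));
    }
    Ok(child) // hand the child back even if it never became healthy
}
```

To stay inside a VRAM budget, the proxy would also kill the least recently used instance before spawning a new one; that eviction policy is not shown here.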

## Ideas

- emacs-qwen-plugin
  - use emacs-module-rs + llama.cpp/proxy to get awesome Qwen integration into Emacs (see the sketch after this list)
- agent
  - add an experimental RAG + agent framework
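
A rough sketch of how the emacs-qwen-plugin idea could look with the `emacs` crate (emacs-module-rs): a dynamic module that exposes a completion function to Emacs Lisp and forwards the prompt to a local llama.cpp server or llama-proxy-man. The module name, endpoint, and payload are hypothetical:

```rust
use emacs::{defun, Env, Result};

// Required by Emacs before it will load a dynamic module.
emacs::plugin_is_GPL_compatible!();

#[emacs::module(name = "redvault-qwen")]
fn init(_: &Env) -> Result<()> {
    Ok(())
}

/// Callable from Emacs Lisp, roughly as `redvault-qwen-complete`.
#[defun]
fn complete(prompt: String) -> Result<String> {
    // Forward the prompt to a llama.cpp server (or llama-proxy-man)
    // and hand the completion text back to Emacs as a string.
    let text = ureq::post("http://127.0.0.1:8080/completion")
        .send_json(serde_json::json!({ "prompt": prompt, "n_predict": 128 }))
        .ok()
        .and_then(|resp| resp.into_json::<serde_json::Value>().ok())
        .and_then(|v| v["content"].as_str().map(str::to_string))
        .unwrap_or_default();
    Ok(text)
}
```

Loaded via `module-load`, this would be callable from Emacs Lisp as `(redvault-qwen-complete "...")`, with the proxy taking care of starting the right model on demand.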