Redvau.lt AI Monorepo

Current Repos

llama-forge-rs:

  • old, alpha-state, webview-based GUI app for chatting with AI models
  • basic response streaming
  • manages a llama-server instance in the background
  • currently unmaintained

llama-proxy-man:

  • proxy that automatically starts and stops llama.cpp instances for you
  • retries requests in the background and keeps the client connection open while an instance spins up
  • lets you run more models than fit in your VRAM at once: HTTP API requests work as usual, they are just slow when an instance has to be started first (see the sketch after this list)
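For illustration, here is a minimal Rust sketch of the proxy's core trick, on-demand instance management. This is not llama-proxy-man's actual code: the config shape is an assumption, health checks and request forwarding are omitted, and only the `-m`/`--port` flags are real llama-server options.

```rust
// Core idea reduced to process management: keep at most one llama.cpp
// server alive and start the one a request needs on demand, stopping
// the others first to free VRAM.
use std::collections::HashMap;
use std::io;
use std::process::{Child, Command};

struct InstanceManager {
    /// model name -> (running llama-server process, port it listens on)
    running: HashMap<String, (Child, u16)>,
    /// model name -> path to its GGUF weights (hypothetical config)
    models: HashMap<String, String>,
}

impl InstanceManager {
    /// Ensure a server for `model` is up and return the port to
    /// forward the incoming HTTP request to.
    fn ensure_running(&mut self, model: &str, port: u16) -> io::Result<u16> {
        if let Some((_, p)) = self.running.get(model) {
            return Ok(*p);
        }
        // Stop every other instance first so the new model fits in VRAM.
        for (_, (mut child, _)) in self.running.drain() {
            let _ = child.kill();
            let _ = child.wait();
        }
        let path = self
            .models
            .get(model)
            .ok_or_else(|| io::Error::new(io::ErrorKind::NotFound, "unknown model"))?
            .clone();
        let child = Command::new("llama-server")
            .arg("-m").arg(&path)
            .arg("--port").arg(port.to_string())
            .spawn()?;
        self.running.insert(model.to_string(), (child, port));
        // A real proxy would now poll the port until the server answers,
        // holding the client connection open meanwhile (hence the retries).
        Ok(port)
    }
}
```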

Ideas

  • emacs-qwen-plugin
    • use emacs-module-rs plus llama.cpp/the proxy to get a great Qwen integration into Emacs (sketched below)
  • agent
    • add an experimental RAG + agent framework
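A minimal sketch of what that plugin could look like using the `emacs` crate (emacs-module-rs) with ureq 2.x for the blocking HTTP call. The module name, port, and payload values are assumptions for illustration; llama.cpp's llama-server does expose a native /completion endpoint.

```rust
use emacs::{defun, Env, Result};

emacs::plugin_is_GPL_compatible!();

// Loading the compiled library provides the `redvault-qwen` feature
// (the module name is an assumption for this sketch).
#[emacs::module(name = "redvault-qwen")]
fn init(_: &Env) -> Result<()> {
    Ok(())
}

/// Exposed to Emacs Lisp as `redvault-qwen-complete`.
#[defun]
fn complete(_: &Env, prompt: String) -> Result<String> {
    // Ask the local llama-server (or the proxy in front of it) for a
    // completion; port and n_predict are illustrative assumptions.
    let resp: serde_json::Value = ureq::post("http://127.0.0.1:8080/completion")
        .send_json(serde_json::json!({ "prompt": prompt, "n_predict": 128 }))?
        .into_json()?;
    Ok(resp["content"].as_str().unwrap_or_default().to_string())
}
```

From Emacs, after something like (module-load "/path/to/libredvault_qwen.so"), this would be callable as (redvault-qwen-complete "fn main() {").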