diff --git a/README.org b/README.org
index b6458db..c68c663 100644
--- a/README.org
+++ b/README.org
@@ -4,15 +4,15 @@
 
 GPTel is a simple Large Language Model chat client for Emacs, with support for multiple models/backends.
 
-| LLM Backend | Supports | Requires                |
-|-------------+----------+-------------------------|
-| ChatGPT     | ✓        | [[https://platform.openai.com/account/api-keys][API key]]                 |
-| Azure       | ✓        | Deployment and API key  |
-| Ollama      | ✓        | [[https://ollama.ai/][Ollama running locally]]  |
-| GPT4All     | ✓        | [[https://gpt4all.io/index.html][GPT4All running locally]] |
-| Gemini      | ✓        | [[https://makersuite.google.com/app/apikey][API key]]                 |
-| PrivateGPT  | Planned  | -                       |
-| Llama.cpp   | Planned  | -                       |
+| LLM Backend | Supports | Requires                  |
+|-------------+----------+---------------------------|
+| ChatGPT     | ✓        | [[https://platform.openai.com/account/api-keys][API key]]                   |
+| Azure       | ✓        | Deployment and API key    |
+| Ollama      | ✓        | [[https://ollama.ai/][Ollama running locally]]    |
+| GPT4All     | ✓        | [[https://gpt4all.io/index.html][GPT4All running locally]]   |
+| Gemini      | ✓        | [[https://makersuite.google.com/app/apikey][API key]]                   |
+| Llama.cpp   | ✓        | [[https://github.com/ggerganov/llama.cpp/tree/master/examples/server#quick-start][Llama.cpp running locally]] |
+| PrivateGPT  | Planned  | -                         |
 
 *General usage*: ([[https://www.youtube.com/watch?v=bsRnh_brggM][YouTube Demo]])
 
@@ -46,6 +46,7 @@ GPTel uses Curl if available, but falls back to url-retrieve to work without ext
     - [[#gpt4all][GPT4All]]
     - [[#ollama][Ollama]]
     - [[#gemini][Gemini]]
+    - [[#llamacpp][Llama.cpp]]
   - [[#usage][Usage]]
     - [[#in-any-buffer][In any buffer:]]
     - [[#in-a-dedicated-chat-buffer][In a dedicated chat buffer:]]
@@ -221,6 +222,43 @@ You can pick this backend from the transient menu when using gptel (see Usage),
 
 #+html: </details>
 
+#+html: <details>
+#+html: <summary>
+**** Llama.cpp
+#+html: </summary>
+
+Register a backend with
+#+begin_src emacs-lisp
+(gptel-make-openai          ;Not a typo, same API as OpenAI
+ "llama-cpp"                ;Any name
+ :stream t                  ;Stream responses
+ :protocol "http"
+ :host "localhost:8000"     ;Llama.cpp server location
+ :models '("test"))         ;List of available models
+#+end_src
+These are the required parameters; refer to the documentation of =gptel-make-openai= for more options.
+
+You can pick this backend from the transient menu when using gptel (see Usage), or set this as the default value of =gptel-backend=:
+#+begin_src emacs-lisp
+(setq-default gptel-backend (gptel-make-openai "llama-cpp" ...)
+              gptel-model "test")
+#+end_src
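+
+For reference, here is a sketch that combines the two snippets above into a single form for your init file, using the placeholder server location and model name from the example above; substitute the values for your own llama.cpp setup:
+#+begin_src emacs-lisp
+;; A sketch combining backend registration with the default setting.
+;; "localhost:8000" and "test" are placeholders from the example above.
+(setq-default gptel-backend
+              (gptel-make-openai "llama-cpp"
+                :stream t
+                :protocol "http"
+                :host "localhost:8000"
+                :models '("test"))
+              gptel-model "test")
+#+end_src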
+
+#+html: </details>
+
 ** Usage
 
 (This is also a [[https://www.youtube.com/watch?v=bsRnh_brggM][video demo]] showing various uses of gptel.)