README: Add instructions for Llamafile

* README.org (* Llama.cpp): As it turns out, text-generation
Llamafile models (currently Mistral Instruct and Llava) offer an
OpenAI-compatible API, so we can use them easily from gptel.  Add
instructions for Llamafiles to the Llama section of the README.
Karthik Chikmagalur 2023-12-31 14:37:26 -08:00
parent 48047c0600
commit 3ac5963080
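For reference, a minimal sketch of the registration that the added instructions describe, assuming a server llamafile listening on the default port 8080 (the backend name "llamafile" and the model string "test" are placeholders):

#+begin_src emacs-lisp
;; Sketch only: Llamafile exposes an OpenAI-compatible API, so the
;; OpenAI backend constructor is reused.  Adjust :host to wherever
;; your server llamafile is listening.
(gptel-make-openai "llamafile"   ;any name
  :stream t                      ;stream responses
  :protocol "http"
  :host "localhost:8080"         ;llamafile's default server port
  :models '("test"))             ;the server does not use the model name
#+end_src

Setting the returned backend as the default value of =gptel-backend= then works exactly as in the README snippet in the diff below.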

@@ -6,12 +6,13 @@ GPTel is a simple Large Language Model chat client for Emacs, with support for m
| LLM Backend | Supports | Requires |
|-------------+----------+---------------------------|
-| ChatGPT | ✓ | [[https://platform.openai.com/account/api-keys][API key]] |
-| Azure | ✓ | Deployment and API key |
-| Ollama | ✓ | [[https://ollama.ai/][Ollama running locally]] |
-| GPT4All | ✓ | [[https://gpt4all.io/index.html][GPT4All running locally]] |
-| Gemini | ✓ | [[https://makersuite.google.com/app/apikey][API key]] |
-| Llama.cpp | ✓ | [[https://github.com/ggerganov/llama.cpp/tree/master/examples/server#quick-start][Llama.cpp running locally]] |
+| ChatGPT | ✓ | [[https://platform.openai.com/account/api-keys][API key]] |
+| Azure | ✓ | Deployment and API key |
+| Ollama | ✓ | [[https://ollama.ai/][Ollama running locally]] |
+| GPT4All | ✓ | [[https://gpt4all.io/index.html][GPT4All running locally]] |
+| Gemini | ✓ | [[https://makersuite.google.com/app/apikey][API key]] |
+| Llama.cpp | ✓ | [[https://github.com/ggerganov/llama.cpp/tree/master/examples/server#quick-start][Llama.cpp running locally]] |
+| Llamafile | ✓ | [[https://github.com/Mozilla-Ocho/llamafile#quickstart][Local Llamafile server]] |
| PrivateGPT | Planned | - |
*General usage*: ([[https://www.youtube.com/watch?v=bsRnh_brggM][YouTube Demo]])
@@ -46,7 +47,7 @@ GPTel uses Curl if available, but falls back to url-retrieve to work without ext
- [[#gpt4all][GPT4All]]
- [[#ollama][Ollama]]
- [[#gemini][Gemini]]
-- [[#llamacpp][Llama.cpp]]
+- [[#llamacpp-or-llamafile][Llama.cpp or Llamafile]]
- [[#usage][Usage]]
- [[#in-any-buffer][In any buffer:]]
- [[#in-a-dedicated-chat-buffer][In a dedicated chat buffer:]]
@@ -138,7 +139,7 @@ Register a backend with
#+end_src
Refer to the documentation of =gptel-make-azure= to set more parameters.
-You can pick this backend from the transient menu when using gptel. (See usage)
+You can pick this backend from the menu when using gptel (see [[#usage][Usage]]).
If you want it to be the default, set it as the default value of =gptel-backend=:
#+begin_src emacs-lisp
@@ -163,7 +164,7 @@ Register a backend with
#+end_src
These are the required parameters, refer to the documentation of =gptel-make-gpt4all= for more.
-You can pick this backend from the transient menu when using gptel (see usage), or set this as the default value of =gptel-backend=. Additionally you may want to increase the response token size since GPT4All uses very short (often truncated) responses by default:
+You can pick this backend from the menu when using gptel (see [[#usage][Usage]]), or set this as the default value of =gptel-backend=. Additionally you may want to increase the response token size since GPT4All uses very short (often truncated) responses by default:
#+begin_src emacs-lisp
;; OPTIONAL configuration
@@ -188,7 +189,7 @@ Register a backend with
#+end_src
These are the required parameters, refer to the documentation of =gptel-make-ollama= for more.
-You can pick this backend from the transient menu when using gptel (see Usage), or set this as the default value of =gptel-backend=:
+You can pick this backend from the menu when using gptel (see [[#usage][Usage]]), or set this as the default value of =gptel-backend=:
#+begin_src emacs-lisp
;; OPTIONAL configuration
@@ -212,7 +213,7 @@ Register a backend with
#+end_src
These are the required parameters, refer to the documentation of =gptel-make-gemini= for more.
-You can pick this backend from the transient menu when using gptel (see Usage), or set this as the default value of =gptel-backend=:
+You can pick this backend from the menu when using gptel (see [[#usage][Usage]]), or set this as the default value of =gptel-backend=:
#+begin_src emacs-lisp
;; OPTIONAL configuration
@@ -224,28 +225,29 @@ You can pick this backend from the transient menu when using gptel (see Usage),
#+html: <details>
#+html: <summary>
-**** Llama.cpp
+**** Llama.cpp or Llamafile
#+html: </summary>
+(If using a llamafile, run a [[https://github.com/Mozilla-Ocho/llamafile#other-example-llamafiles][server llamafile]] rather than a "command-line llamafile", and pick a model that supports text generation.)
Register a backend with
#+begin_src emacs-lisp
(gptel-make-openai ;Not a typo, same API as OpenAI
"llama-cpp" ;Any name
:stream t ;Stream responses
:protocol "http"
:host "localhost:8000" ;Llama.cpp server location
:models '("test")) ;List of available models
:host "localhost:8000" ;Llama.cpp server location, typically localhost:8080 for Llamafile
:models '("test")) ;Any names, doesn't matter for Llama
#+end_src
These are the required parameters, refer to the documentation of =gptel-make-openai= for more.
-You can pick this backend from the transient menu when using gptel (see Usage), or set this as the default value of =gptel-backend=:
+You can pick this backend from the menu when using gptel (see [[#usage][Usage]]), or set this as the default value of =gptel-backend=:
#+begin_src emacs-lisp
(setq-default gptel-backend (gptel-make-openai "llama-cpp" ...)
gptel-model "test")
#+end_src
#+html: </details>
** Usage
(This is also a [[https://www.youtube.com/watch?v=bsRnh_brggM][video demo]] showing various uses of gptel.)