README: Update with instructions for perplexity.ai (#204)
README.org: perplexity.ai provides an OpenAI-style API. Also remove some extra whitespace.
parent af5444a2ea
commit d8c604b53b
1 changed file with 32 additions and 3 deletions
README.org (35 lines changed)
@@ -17,6 +17,7 @@ GPTel is a simple Large Language Model chat client for Emacs, with support for m
| Kagi Summarizer | ✓ | [[https://kagi.com/settings?p=api][API key]] |
| together.ai     | ✓ | [[https://api.together.xyz/settings/api-keys][API key]] |
| Anyscale        | ✓ | [[https://docs.endpoints.anyscale.com/][API key]] |
| Perplexity      | ✓ | [[https://docs.perplexity.ai/docs/getting-started][API key]] |

*General usage*: ([[https://www.youtube.com/watch?v=bsRnh_brggM][YouTube Demo]])
@@ -55,6 +56,7 @@ GPTel uses Curl if available, but falls back to url-retrieve to work without ext
- [[#kagi-fastgpt--summarizer][Kagi (FastGPT & Summarizer)]]
- [[#togetherai][together.ai]]
- [[#anyscale][Anyscale]]
- [[#perplexity][Perplexity]]
- [[#usage][Usage]]
- [[#in-any-buffer][In any buffer:]]
- [[#in-a-dedicated-chat-buffer][In a dedicated chat buffer:]]
@@ -166,7 +168,7 @@ If you want it to be the default, set it as the default value of =gptel-backend=
Register a backend with
#+begin_src emacs-lisp
(gptel-make-gpt4all "GPT4All"                 ;Name of your choosing
  :protocol "http"
  :host "localhost:4891"                      ;Where it's running
  :models '("mistral-7b-openorca.Q4_0.gguf")) ;Available models
#+end_src
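To make it the default, as the hunk header above suggests, a sketch following the same =setq-default= pattern the README uses for the other backends:
#+begin_src emacs-lisp
;; OPTIONAL sketch: make GPT4All the default backend and model.  This
;; repeats the registration form above inside setq-default, mirroring
;; the Perplexity and together.ai examples elsewhere in the README.
(setq-default gptel-backend (gptel-make-gpt4all "GPT4All"
                              :protocol "http"
                              :host "localhost:4891"
                              :models '("mistral-7b-openorca.Q4_0.gguf"))
              gptel-model "mistral-7b-openorca.Q4_0.gguf")
#+end_src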
@@ -329,6 +331,33 @@ You can pick this backend from the menu when using gptel (see [[#usage][Usage]])
              gptel-model "mistralai/Mixtral-8x7B-Instruct-v0.1")
#+end_src

#+html: </details>
#+html: <details><summary>
**** Perplexity
#+html: </summary>

Register a backend with
#+begin_src emacs-lisp
;; Perplexity offers an OpenAI compatible API
(gptel-make-openai "Perplexity"         ;Any name you want
  :host "api.perplexity.ai"
  :key "your-api-key"                   ;can be a function that returns the key
  :endpoint "/chat/completions"
  :stream t
  :models '(;; has many more, check perplexity.ai
            "pplx-7b-chat"
            "pplx-70b-chat"
            "pplx-7b-online"
            "pplx-70b-online"))
#+end_src

You can pick this backend from the menu when using gptel (see [[#usage][Usage]]), or set this as the default value of =gptel-backend=:
#+begin_src emacs-lisp
;; OPTIONAL configuration
(setq-default gptel-backend (gptel-make-openai "Perplexity" ...)
              gptel-model "pplx-7b-chat")
#+end_src

#+html: </details>

** Usage
@@ -385,7 +414,7 @@ The default mode is =markdown-mode= if available, else =text-mode=. You can set

**** Save and restore your chat sessions

Saving the file will save the state of the conversation as well. To resume the chat, open the file and turn on =gptel-mode= before editing the buffer.
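A minimal sketch of that resume step (the file name here is only an example):
#+begin_src emacs-lisp
;; Visit a previously saved chat and enable gptel-mode to restore the
;; conversation state before continuing to edit it.
(with-current-buffer (find-file "~/chats/saved-chat.org")
  (gptel-mode 1))
#+end_src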
** FAQ
#+html: <details><summary>
@@ -456,7 +485,7 @@ For more programmable usage, gptel provides a general =gptel-request= function t
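As an illustration of the =gptel-request= usage referenced in the hunk header above (the prompt and messages here are only examples):
#+begin_src emacs-lisp
;; Send a one-off prompt and handle the result in a callback.  The
;; callback receives the response string (nil on failure) and an info
;; plist describing the request.
(gptel-request "Explain what gptel is in one sentence."
  :callback (lambda (response info)
              (if response
                  (message "gptel: %s" response)
                (message "gptel-request failed: %s" (plist-get info :status)))))
#+end_src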
#+html: </details>
#+html: <details><summary>
**** (Doom Emacs) Sending a query from the gptel menu fails because of a key conflict with Org mode
#+html: </summary>

Doom binds ~RET~ in Org mode to =+org/dwim-at-point=, which appears to conflict with gptel's transient menu bindings for some reason.