#+title: GPTel: A simple ChatGPT client for Emacs
[[https://melpa.org/#/gptel][file:https://melpa.org/packages/gptel-badge.svg]]
GPTel is a simple, no-frills ChatGPT client for Emacs.
https://user-images.githubusercontent.com/8607532/230516812-86510a09-a2fb-4cbd-b53f-cc2522d05a13.mp4
https://user-images.githubusercontent.com/8607532/230516816-ae4a613a-4d01-4073-ad3f-b66fa73c6e45.mp4
- Requires an [[https://platform.openai.com/account/api-keys][OpenAI API key]].
- It's async and fast, and streams responses.
- Interact with ChatGPT from anywhere in Emacs (any buffer, shell, minibuffer, wherever)
- ChatGPT's responses are in Markdown or Org markup.
- Supports conversations and multiple independent sessions.
- Save chats as regular Markdown/Org/Text files and resume them later.
- You can go back and edit your previous prompts, or even ChatGPT's previous responses when continuing a conversation. These will be fed back to ChatGPT.
GPTel uses Curl if available, but falls back to the built-in =url-retrieve= to work without external dependencies.
** Contents :toc:
- [[#breaking-changes][Breaking Changes]]
- [[#installation][Installation]]
    - [[#straight][Straight]]
    - [[#manual][Manual]]
    - [[#doom-emacs][Doom Emacs]]
    - [[#spacemacs][Spacemacs]]
- [[#usage][Usage]]
  - [[#in-any-buffer][In any buffer:]]
  - [[#in-a-dedicated-chat-buffer][In a dedicated chat buffer:]]
- [[#using-it-your-way][Using it your way]]
  - [[#extensions-using-gptel][Extensions using GPTel]]
- [[#additional-configuration][Additional Configuration]]
- [[#why-another-chatgpt-client][Why another ChatGPT client?]]
- [[#will-you-add-feature-x][Will you add feature X?]]
- [[#alternatives][Alternatives]]
- [[#acknowledgments][Acknowledgments]]
** Breaking Changes
- =gptel-api-key-from-auth-source= now searches for the API key using the value of =gptel-host=, /i.e./ "api.openai.com" instead of the original "openai.com". You need to update your =~/.authinfo=.
** Installation
GPTel is on MELPA. Install it with =M-x package-install⏎= =gptel=.
(Optional: Install =markdown-mode=.)
**** Straight
#+begin_src emacs-lisp
(straight-use-package 'gptel)
#+end_src
Installing the =markdown-mode= package is optional.
**** Manual
Clone or download this repository and run =M-x package-install-file⏎= on the repository directory.
Installing the =markdown-mode= package is optional.
**** Doom Emacs
In =packages.el=
#+begin_src emacs-lisp
(package! gptel)
#+end_src
In =config.el=
#+begin_src emacs-lisp
(use-package! gptel
:config
(setq! gptel-api-key "your key"))
#+end_src
**** Spacemacs
After installation with =M-x package-install⏎= =gptel=:
- Add =gptel= to =dotspacemacs-additional-packages=
- Add =(require 'gptel)= to =dotspacemacs/user-config=
** Usage
Procure an [[https://platform.openai.com/account/api-keys][OpenAI API key]].
Optional: Set =gptel-api-key= to the key. Alternatively, you may choose a more secure method such as:
- Storing the key in =~/.authinfo=. By default, "api.openai.com" is used as HOST and "apikey" as USER.
#+begin_src authinfo
machine api.openai.com login apikey password TOKEN
#+end_src
- Setting it to a function that returns the key.
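For example, the key can be looked up at request time rather than hard-coded (the environment variable name below is just an illustration):

#+begin_src emacs-lisp
;; Example: fetch the key from an environment variable instead of
;; storing it in your init file.  OPENAI_API_KEY is an assumed name.
(setq gptel-api-key (lambda () (getenv "OPENAI_API_KEY")))
#+end_src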
*** In any buffer:
1. Select a region of text and call =M-x gptel-send=. The response will be inserted below your region.
2. You can select both the original prompt and the response and call =M-x gptel-send= again to continue the conversation.
3. Call =M-x gptel-send= with a prefix argument to
   - set chat parameters (GPT model, directives etc) for this buffer,
   - read the prompt from elsewhere or redirect the response elsewhere,
   - or replace the prompt with the response.
[[https://user-images.githubusercontent.com/8607532/230770018-9ce87644-6c17-44af-bd39-8c899303dce1.png]]
With a region selected, you can also rewrite prose or refactor code from here:
*Code*:
[[https://user-images.githubusercontent.com/8607532/230770162-1a5a496c-ee57-4a67-9c95-d45f238544ae.png]]
*Prose*:
[[https://user-images.githubusercontent.com/8607532/230770352-ee6f45a3-a083-4cf0-b13c-619f7710e9ba.png]]
*** In a dedicated chat buffer:
1. Run =M-x gptel= to start or switch to the ChatGPT buffer. It will ask you for the key if you skipped the previous step. Run it with a prefix argument (=C-u M-x gptel=) to start a new session.
2. In the gptel buffer, send your prompt with =M-x gptel-send= , bound to =C-c RET= .
3. Set chat parameters (GPT model, directives etc) for the session by calling =gptel-send= with a prefix argument (=C-u C-c RET=):
[[https://user-images.githubusercontent.com/8607532/224946059-9b918810-ab8b-46a6-b917-549d50c908f2.png]]
That's it. You can go back and edit previous prompts and responses if you want.
4. Save the chat to a file. To resume, open the file and turn on =gptel-mode= .
The default mode is =markdown-mode= if available, else =text-mode= . You can set =gptel-default-mode= to =org-mode= if desired.
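For example, to use Org mode for dedicated chat buffers:

#+begin_src emacs-lisp
;; Make new gptel chat buffers use Org mode
(setq gptel-default-mode 'org-mode)
#+end_src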
** Using it your way
GPTel's default usage pattern is simple, and will stay this way: Read input in any buffer and insert the response below it.
If you want custom behavior, such as
- reading input from, or sending output to, the echo area,
- displaying the response in pop-up windows,
- sending the current line only, etc,

GPTel provides a general =gptel-request= function that accepts a custom prompt and a callback to act on the response. You can use this to build custom workflows not supported by =gptel-send=. See the documentation of =gptel-request=, and the [[https://github.com/karthink/gptel/wiki][wiki]] for examples.
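As a rough sketch, a command that sends the current line and echoes the reply might look like the following. This assumes the callback receives the response string and an info plist; check the docstring of =gptel-request= for its exact signature.

#+begin_src emacs-lisp
(defun my/gptel-send-line ()
  "Send the current line as a prompt and echo the response."
  (interactive)
  (gptel-request
   ;; The prompt: text of the line at point
   (buffer-substring-no-properties (line-beginning-position)
                                   (line-end-position))
   ;; Act on the response when it arrives
   :callback (lambda (response info)
               (if response
                   (message "Response: %s" response)
                 (message "gptel request failed: %s"
                          (plist-get info :status))))))
#+end_src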
*** Extensions using GPTel
These are packages that depend on GPTel to provide additional functionality:
- [[https://github.com/kamushadenes/gptel-extensions.el][gptel-extensions]]: Extra utility functions for GPTel.
- [[https://github.com/kamushadenes/ai-blog.el][ai-blog.el]]: Streamline generation of blog posts in Hugo.
** Additional Configuration
- =gptel-host=: Overrides the OpenAI API host. This is useful if you access an OpenAI-compatible API through a reverse proxy, a third-party proxy service, or a gateway that adapts the Azure API to the OpenAI format.
- =gptel-proxy= : Path to a proxy to use for GPTel interactions. This is passed to Curl via the =--proxy= argument.
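For example (the host and proxy values below are placeholders, not real endpoints):

#+begin_src emacs-lisp
(setq gptel-host "my-openai-compatible-proxy.example.com" ; placeholder host
      gptel-proxy "socks5://127.0.0.1:9050")              ; passed to Curl's --proxy
#+end_src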
** Why another ChatGPT client?
Other Emacs clients for ChatGPT prescribe the format of the interaction (a comint shell, org-babel blocks, etc). I wanted:
1. Something that is as free-form as possible: query ChatGPT using any text in any buffer, and redirect the response as required. Using a dedicated =gptel= buffer just adds some visual flair to the interaction.
2. Integration with org-mode, not using a walled-off org-babel block, but as regular text. This way ChatGPT can generate code blocks that I can run.
** Will you add feature X?
Maybe. I'd like to experiment a bit more first. Features added since the inception of this package include:
- Curl support (=gptel-use-curl= )
- Streaming responses (=gptel-stream= )
- Cancelling requests in progress (=gptel-abort= )
- General API for writing your own commands (=gptel-request=, [[https://github.com/karthink/gptel/wiki][wiki]])
- Dispatch menus using Transient (=gptel-send= with a prefix arg)
- Specifying the conversation context size
- GPT-4 support
- Response redirection (to the echo area, another buffer, etc)
- A built-in refactor/rewrite prompt
- Limiting conversation context to Org headings using properties (#58)
- Saving and restoring chats (#17)
Features being considered or in the pipeline:
- Fully stateless design (#17)
** Alternatives
Other Emacs clients for ChatGPT include:
- [[https://github.com/xenodium/chatgpt-shell][chatgpt-shell]]: comint-shell based interaction with ChatGPT. Also supports DALL-E, executable code blocks in the responses, and more.
- [[https://github.com/rksm/org-ai][org-ai]]: Interaction through special =#+begin_ai ... #+end_ai= Org-mode blocks. Also supports DALL-E, querying ChatGPT with the contents of project files, and more.
There are several more: [[https://github.com/CarlQLange/chatgpt-arcana.el][chatgpt-arcana]], [[https://github.com/MichaelBurge/leafy-mode][leafy-mode]] and [[https://github.com/iwahbe/chat.el][chat.el]].
** Acknowledgments
- [[https://github.com/algal][Alexis Gallagher]] and [[https://github.com/d1egoaz][Diego Alvarez]] for fixing a nasty multi-byte bug with =url-retrieve=.
- [[https://github.com/tarsius][Jonas Bernoulli]] for the Transient library.
# Local Variables:
# toc-org-max-depth: 4
# End: