#+title: GPTel: A simple ChatGPT client for Emacs

[[https://melpa.org/#/gptel][file:https://melpa.org/packages/gptel-badge.svg]]

GPTel is a simple, no-frills ChatGPT client for Emacs.

https://user-images.githubusercontent.com/8607532/230516812-86510a09-a2fb-4cbd-b53f-cc2522d05a13.mp4

https://user-images.githubusercontent.com/8607532/230516816-ae4a613a-4d01-4073-ad3f-b66fa73c6e45.mp4

- Requires an [[https://platform.openai.com/account/api-keys][OpenAI API key]].
- It's async and fast, and streams responses.
- Interact with ChatGPT from anywhere in Emacs (any buffer, shell, minibuffer, wherever).
- ChatGPT's responses are in Markdown or Org markup.
- Supports conversations and multiple independent sessions.
- You can go back and edit your previous prompts, or even ChatGPT's previous responses, when continuing a conversation. These will be fed back to ChatGPT.

GPTel uses Curl if available, but falls back to =url-retrieve= to work without external dependencies.

** Breaking Changes

1. =gptel-api-key-from-auth-source= now searches for the API key using the value of =gptel-host=, i.e. "api.openai.com" instead of the original "openai.com". You will need to update your =~/.authinfo= accordingly.

** Installation

GPTel is on MELPA. Install it with =M-x package-install⏎= =gptel=.

(Optional: Install =markdown-mode=.)

**** Straight
#+begin_src emacs-lisp
(straight-use-package 'gptel)
#+end_src

Installing the =markdown-mode= package is optional.

**** Manual
Clone or download this repository and run =M-x package-install-file⏎= on the repository directory.

Installing the =markdown-mode= package is optional.

**** Doom Emacs
In =packages.el=
#+begin_src emacs-lisp
(package! gptel)
#+end_src

In =config.el=
#+begin_src emacs-lisp
(use-package! gptel
 :config
 (setq!
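  ;; NOTE: hard-coding the key here is the quickest option, but the
  ;; Usage section below describes safer alternatives (storing it in
  ;; ~/.authinfo, or setting gptel-api-key to a function that returns
  ;; the key).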
  gptel-api-key "your key"))
#+end_src

**** Spacemacs
After installation with =M-x package-install⏎= =gptel=

- Add =gptel= to =dotspacemacs-additional-packages=
- Add =(require 'gptel)= to =dotspacemacs/user-config=

** Usage

Procure an [[https://platform.openai.com/account/api-keys][OpenAI API key]].

Optional: Set =gptel-api-key= to the key. Alternatively, you may choose a more secure method such as:

- Storing it in =~/.authinfo=. By default, "api.openai.com" is used as HOST and "apikey" as USER.
  #+begin_src authinfo
  machine api.openai.com login apikey password TOKEN
  #+end_src
- Setting it to a function that returns the key.

*** In any buffer:

1. Select a region of text and call =M-x gptel-send=. The response will be inserted below your region.
2. You can select both the original prompt and the response and call =M-x gptel-send= again to continue the conversation.
3. Call =M-x gptel-send= with a prefix argument to
   - set chat parameters (GPT model, directives, etc.) for this buffer,
   - read the prompt from elsewhere or redirect the response elsewhere,
   - or replace the prompt with the response.

[[https://user-images.githubusercontent.com/8607532/230770018-9ce87644-6c17-44af-bd39-8c899303dce1.png]]

With a region selected, you can also rewrite prose or refactor code from here:

*Code*:

[[https://user-images.githubusercontent.com/8607532/230770162-1a5a496c-ee57-4a67-9c95-d45f238544ae.png]]

*Prose*:

[[https://user-images.githubusercontent.com/8607532/230770352-ee6f45a3-a083-4cf0-b13c-619f7710e9ba.png]]

*** In a dedicated chat buffer:

1. Run =M-x gptel= to start or switch to the ChatGPT buffer. It will ask you for the key if you skipped the previous step. Run it with a prefix argument (=C-u M-x gptel=) to start a new session.
2. In the gptel buffer, send your prompt with =M-x gptel-send=, bound to =C-c RET=.
3. Set chat parameters (GPT model, directives, etc.) for the session by calling =gptel-send= with a prefix argument (=C-u C-c RET=):

   [[https://user-images.githubusercontent.com/8607532/224946059-9b918810-ab8b-46a6-b917-549d50c908f2.png]]

That's it. You can go back and edit previous prompts and responses if you want.

The default mode is =markdown-mode= if available, else =text-mode=. You can set =gptel-default-mode= to =org-mode= if desired.

** Using it your way

GPTel's default usage pattern is simple, and will stay this way: Read input in any buffer and insert the response below it.

If you want custom behavior, such as
- reading input from or sending output to the echo area,
- or in pop-up windows,
- sending the current line only, etc.,

GPTel provides a general =gptel-request= function that accepts a custom prompt and a callback to act on the response. You can use this to build custom workflows not supported by =gptel-send=. See the documentation of =gptel-request=, and the [[https://github.com/karthink/gptel/wiki][wiki]] for examples.

*** Extensions using GPTel

These are packages that depend on GPTel to provide additional functionality:

- [[https://github.com/kamushadenes/gptel-extensions.el][gptel-extensions]]: Extra utility functions for GPTel.
- [[https://github.com/kamushadenes/ai-blog.el][ai-blog.el]]: Streamline the generation of blog posts in Hugo.

** Additional Configuration

- =gptel-host=: Overrides the OpenAI API host. This is useful if you convert the Azure API to the OpenAI API format, use a reverse proxy, or use a third-party proxy service for the OpenAI API.
- =gptel-proxy=: Path to a proxy to use for GPTel interactions. This is passed to Curl via the =--proxy= argument.

** Why another ChatGPT client?

Other Emacs clients for ChatGPT prescribe the format of the interaction (a comint shell, org-babel blocks, etc.). I wanted:

1. Something that is as free-form as possible: query ChatGPT using any text in any buffer, and redirect the response as required.
   Using a dedicated =gptel= buffer just adds some visual flair to the interaction.
2. Integration with org-mode, not using a walled-off org-babel block, but as regular text. This way ChatGPT can generate code blocks that I can run.

** Will you add feature X?

Maybe, I'd like to experiment a bit more first. Features added since the inception of this package include

- Curl support (=gptel-use-curl=)
- Streaming responses (=gptel-stream=)
- Cancelling requests in progress (=gptel-abort=)
- General API for writing your own commands (=gptel-request=, [[https://github.com/karthink/gptel/wiki][wiki]])
- Dispatch menus using Transient (=gptel-send= with a prefix arg)
- Specifying the conversation context size
- GPT-4 support
- Response redirection (to the echo area, another buffer, etc.)
- A built-in refactor/rewrite prompt

Features being considered or in the pipeline:

- Limiting conversation context to Org headings using properties (#58)
- Stateless design (#17)

** Alternatives

Other Emacs clients for ChatGPT include

- [[https://github.com/xenodium/chatgpt-shell][chatgpt-shell]]: comint-shell based interaction with ChatGPT. Also supports DALL-E, executable code blocks in the responses, and more.
- [[https://github.com/rksm/org-ai][org-ai]]: Interaction through special =#+begin_ai ... #+end_ai= Org-mode blocks. Also supports DALL-E, querying ChatGPT with the contents of project files, and more.

There are several more: [[https://github.com/CarlQLange/chatgpt-arcana.el][chatgpt-arcana]], [[https://github.com/MichaelBurge/leafy-mode][leafy-mode]], [[https://github.com/iwahbe/chat.el][chat.el]]

** Acknowledgments

- [[https://github.com/algal][Alexis Gallagher]] and [[https://github.com/d1egoaz][Diego Alvarez]] for fixing a nasty multi-byte bug with =url-retrieve=.
- [[https://github.com/tarsius][Jonas Bernoulli]] for the Transient library.