
GPTel: A simple ChatGPT client for Emacs

GPTel is a simple, no-frills ChatGPT client for Emacs.

img/gptel.png

  • Requires an OpenAI API key.
  • No external dependencies, only Emacs. Also, it's async.
  • Interact with ChatGPT from any buffer in Emacs.
  • ChatGPT's responses are in Markdown or Org markup (configurable).
  • Supports conversations (not just one-off queries) and multiple independent sessions.
  • You can go back and edit your previous prompts, or even ChatGPT's previous responses when continuing a conversation. These will be fed back to ChatGPT.

Installation

GPTel is on MELPA. Install it with M-x package-install RET gptel.

(Optional: Install markdown-mode.)
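
For example, with use-package (built into Emacs 29+; :ensure t assumes package.el with MELPA in your package-archives) — a minimal sketch:

  (use-package gptel
    :ensure t   ; install from MELPA if not already installed
    :config
    ;; Optional here; see the Usage section below for a more secure option
    (setq gptel-api-key "your key"))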

Straight
  (straight-use-package 'gptel)

Installing the markdown-mode package is optional.

Manual

Clone or download this repository and run M-x package-install-file RET on the repository directory.

Installing the markdown-mode package is optional.

Doom Emacs

In packages.el

(package! gptel)

In config.el

(use-package! gptel
  :config
  (setq! gptel-api-key "your key"))

Spacemacs

After installation with M-x package-install RET gptel:

  • Add gptel to dotspacemacs-additional-packages
  • Add (require 'gptel) to dotspacemacs/user-config (see the sketch below)
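
A minimal sketch of the corresponding dotfile snippets (merge these into your existing dotspacemacs functions rather than pasting them verbatim):

  ;; In dotspacemacs/layers, as part of the existing setq-default:
  (setq-default dotspacemacs-additional-packages '(gptel))

  ;; In dotspacemacs/user-config:
  (require 'gptel)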

Usage

Procure an OpenAI API key.

Optional: Set gptel-api-key to the key or to a function that returns the key (more secure).
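
For example (the auth-source lookup below is only one possibility, and assumes an ~/.authinfo entry like machine api.openai.com login apikey password <your key>; any function that returns the key as a string works):

  ;; Simplest: set the key directly (less secure, it sits in your config)
  (setq gptel-api-key "your key")

  ;; Or look it up at call time, e.g. via auth-source
  (require 'auth-source)
  (setq gptel-api-key
        (lambda ()
          (auth-source-pick-first-password :host "api.openai.com")))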

In a dedicated chat buffer:

  1. Run M-x gptel to start or switch to the ChatGPT buffer. It will ask you for the key if you skipped the previous step. Run it with a prefix-arg (C-u M-x gptel) to start a new session.
  2. In the gptel buffer, send your prompt with M-x gptel-send, bound to C-c RET.
  3. Set chat parameters (GPT model, directives etc) for the session by calling gptel-send with a prefix argument (C-u C-c RET):

https://user-images.githubusercontent.com/8607532/224946059-9b918810-ab8b-46a6-b917-549d50c908f2.png

That's it. You can go back and edit previous prompts and responses if you want.

The default mode is markdown-mode if available, else text-mode. You can set gptel-default-mode to org-mode if desired.
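
For example:

  (setq gptel-default-mode 'org-mode)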

In any buffer:

  1. Select a region of text and call M-x gptel-send (see the keybinding sketch below). The response will be inserted below your region.
  2. You can select both the original prompt and the response and call M-x gptel-send again to continue the conversation.
  3. Call M-x gptel-send with a prefix argument to set chat parameters (GPT model, directives etc) for this buffer, or to start a dedicated session from the selected region:

https://user-images.githubusercontent.com/8607532/224949877-08c44cb4-7bff-4ffc-963a-16fef7a4271f.png
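
Since gptel-send is handy from arbitrary buffers, you may want a global binding for it. A sketch (gptel does not define this key; C-c g is just a suggestion):

  ;; Optional: call gptel-send from anywhere without M-x
  (global-set-key (kbd "C-c g") #'gptel-send)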

Why another ChatGPT client?

Existing Emacs clients don't reliably let me use ChatGPT the simple way I can in the browser. They will get better, but I wanted something for now.

Also, AI-assisted work is a new way of using Emacs, and it's not yet clear what the best Emacs interface to tools like ChatGPT is.

  • Should it be part of CAPF (completion-at-point-functions)?
  • A dispatch menu from anywhere that can act on selected regions?
  • A comint/shell-style REPL?
  • One-off queries in the minibuffer (like shell-command)?
  • A refactoring tool in code buffers?
  • An org-babel interface?

Maybe all of these; I don't know yet. As a start, I wanted to replicate the web-browser usage pattern so I can build from there without having to switch to the browser every time. The code is fairly simple right now.

Will you add feature X?

Maybe, I'd like to experiment a bit first.