Compare commits

...

5 commits

Author, SHA1, message, date:

- Alain M. Lafon · 3f31258e48 · gptel: Add gptel-add-context function · 2024-05-13 12:20:17 +02:00
- Karthik Chikmagalur · 8ccdc31b12 · README: Mention llm and Ellama · 2024-05-03 08:41:10 -07:00
  * gptel.el (header): Add email
  * README.org (Alternatives): Mention llm and Ellama
- Karthik Chikmagalur · 69fb2f09f3 · Merge branch 'elpa/gptel' of https://git.sv.gnu.org/git/emacs/nongnu · 2024-05-03 08:25:35 -07:00
- Karthik Chikmagalur · 533724042e · README: Mention Org features · 2024-05-02 17:11:10 -07:00
  * README.org: Mention gptel's Org features, consult-web and use consistent (lower-)casing for gptel. Add a MELPA stable and a NonGNU ELPA badge.
- Karthik Chikmagalur · f663f3a9db · README: Mention Org features · 2024-05-01 16:10:59 -07:00
  * README.org: Mention gptel's Org features, consult-web and use consistent (lower-)casing for gptel.
2 changed files with 114 additions and 25 deletions

README.org

@@ -1,8 +1,8 @@
#+title: GPTel: A simple LLM client for Emacs
#+title: gptel: A simple LLM client for Emacs
[[https://melpa.org/#/gptel][file:https://melpa.org/packages/gptel-badge.svg]]
[[https://elpa.nongnu.org/nongnu/gptel.svg][file:https://elpa.nongnu.org/nongnu/gptel.svg]] [[https://stable.melpa.org/packages/gptel-badge.svg][file:https://stable.melpa.org/packages/gptel-badge.svg]] [[https://melpa.org/#/gptel][file:https://melpa.org/packages/gptel-badge.svg]]
GPTel is a simple Large Language Model chat client for Emacs, with support for multiple models and backends.
gptel is a simple Large Language Model chat client for Emacs, with support for multiple models and backends.
| LLM Backend | Supports | Requires |
|--------------------+----------+---------------------------|
@@ -40,7 +40,7 @@ https://github-production-user-asset-6210df.s3.amazonaws.com/8607532/278854024-a
- You can go back and edit your previous prompts or LLM responses when continuing a conversation. These will be fed back to the model.
- Don't like gptel's workflow? Use it to create your own for any supported model/backend with a [[https://github.com/karthink/gptel/wiki#defining-custom-gptel-commands][simple API]].
GPTel uses Curl if available, but falls back to url-retrieve to work without external dependencies.
gptel uses Curl if available, but falls back to url-retrieve to work without external dependencies.
** Contents :toc:
- [[#installation][Installation]]
@@ -67,6 +67,7 @@ GPTel uses Curl if available, but falls back to url-retrieve to work without ext
- [[#in-any-buffer][In any buffer:]]
- [[#in-a-dedicated-chat-buffer][In a dedicated chat buffer:]]
- [[#save-and-restore-your-chat-sessions][Save and restore your chat sessions]]
- [[#extra-org-mode-conveniences][Extra Org mode conveniences]]
- [[#faq][FAQ]]
- [[#i-want-the-window-to-scroll-automatically-as-the-response-is-inserted][I want the window to scroll automatically as the response is inserted]]
- [[#i-want-the-cursor-to-move-to-the-next-prompt-after-the-response-is-inserted][I want the cursor to move to the next prompt after the response is inserted]]
@@ -78,13 +79,13 @@ GPTel uses Curl if available, but falls back to url-retrieve to work without ext
- [[#why-another-llm-client][Why another LLM client?]]
- [[#additional-configuration][Additional Configuration]]
- [[#alternatives][Alternatives]]
- [[#extensions-using-gptel][Extensions using GPTel]]
- [[#extensions-using-gptel][Extensions using gptel]]
- [[#breaking-changes][Breaking Changes]]
- [[#acknowledgments][Acknowledgments]]
** Installation
GPTel is on MELPA. Ensure that MELPA is in your list of sources, then install gptel with =M-x package-install⏎= =gptel=.
gptel is on MELPA. Ensure that MELPA is in your list of sources, then install it with =M-x package-install⏎= =gptel=.
(Optional: Install =markdown-mode=.)
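
If you use =use-package=, a minimal sketch (assuming MELPA is already in your =package-archives=) looks like this:

#+begin_src emacs-lisp
;; Minimal sketch: install gptel from MELPA with use-package.
(use-package gptel
  :ensure t)
#+end_src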
@@ -560,16 +561,19 @@ The above code makes the backend available to select. If you want it to be the
(This is also a [[https://www.youtube.com/watch?v=bsRnh_brggM][video demo]] showing various uses of gptel.)
|-------------------+-------------------------------------------------------------------------|
| *Command* | Description |
|-------------------+-------------------------------------------------------------------------|
| =gptel-send= | Send conversation up to =(point)=, or selection if region is active. Works anywhere in Emacs. |
| =gptel= | Create a new dedicated chat buffer. Not required to use gptel. |
| =C-u= =gptel-send= | Transient menu for preferences, input/output redirection etc. |
| =gptel-menu= | /(Same)/ |
|-------------------+-------------------------------------------------------------------------|
| =gptel-set-topic= | /(Org-mode only)/ Limit conversation context to an Org heading |
|-------------------+-------------------------------------------------------------------------|
|-----------------------------+------------------------------------------------------------------------------------------------|
| *Command* | Description |
|-----------------------------+------------------------------------------------------------------------------------------------|
| =gptel-send= | Send conversation up to =(point)=, or selection if region is active. Works anywhere in Emacs. |
| =gptel= | Create a new dedicated chat buffer. Not required to use gptel. |
| =C-u= =gptel-send= | Transient menu for preferences, input/output redirection etc. |
| =gptel-menu= | /(Same)/ |
|-----------------------------+------------------------------------------------------------------------------------------------|
| *Command* /(Org mode only)/ | |
|-----------------------------+------------------------------------------------------------------------------------------------|
| =gptel-org-set-topic= | Limit conversation context to an Org heading |
| =gptel-org-set-properties= | Write gptel configuration as Org properties (for self-contained chat logs) |
|-----------------------------+------------------------------------------------------------------------------------------------|
*** In any buffer:
@@ -612,12 +616,51 @@ The default mode is =markdown-mode= if available, else =text-mode=. You can set
Saving the file will save the state of the conversation as well. To resume the chat, open the file and turn on =gptel-mode= before editing the buffer.
*** Extra Org mode conveniences
gptel offers a few extra conveniences in Org mode.
- You can limit the conversation context to an Org heading with the command =gptel-org-set-topic=.
- You can have branching conversations in Org mode, where each hierarchical outline path through the document is a separate conversation branch. This is also useful for limiting the context size of each query. See the variable =gptel-org-branching-context=.
- You can declare the gptel model, backend, temperature, system message and other parameters as Org properties with the command =gptel-org-set-properties=. gptel queries under the corresponding heading will always use these settings, allowing you to create mostly reproducible LLM chat notebooks, and to have simultaneous chats with different models, model settings and directives under different Org headings.
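
As an illustration, here is a minimal configuration sketch for branching conversations. Only the variable name comes from the list above; the value shown is an assumption, so check its docstring for the accepted settings:

#+begin_src emacs-lisp
;; Sketch: treat each Org outline path as a separate conversation branch.
(setq gptel-org-branching-context t)
#+end_src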
*** Optional: Add more context to your query
You can add contextual information from any Emacs buffer with the command
=gptel-add-context=. Each call appends another snippet to the context, along
with metadata such as the buffer name, line numbers and major mode.
1. Select the text you want to add as context. If no text is selected,
the entire content of the current buffer will be used.
2. =gptel-add-context= adds the selected text or the whole buffer
content to the "*gptel-context*" buffer.
3. Proceed with LLM interactions using =gptel= as usual. The added
context will influence the LLM's responses, making them more
relevant and contextualized.
4. At any point, you can manually edit the "*gptel-context*" buffer to
remove stale information.
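
As a rough sketch of the result (the buffer name and line numbers are made up; the block layout follows the =gptel-add-context= definition added in this change):

#+begin_src emacs-lisp
;; After M-x gptel-add-context with lines 10-20 of a buffer named
;; "init.el" selected, *gptel-context* gains a block of this shape:
;;
;;   #+NAME: init.el:10-20
;;   #+BEGIN_SRC emacs-lisp
;;   ...the selected text...
;;   #+END_SRC
;;
;; The command can also be called from Lisp:
(with-current-buffer "init.el"
  (gptel-add-context))
#+end_src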
**** Practical Applications
- Enhancing code development sessions with relevant documentation or
code snippets as a reference.
- Accumulating research notes or sources while writing papers or
articles to ensure consistency in the narrative or arguments.
- Providing detailed error logs or system information during debugging
sessions to assist in generating more accurate solutions or
suggestions from the LLM.
** FAQ
#+html: <details><summary>
**** I want the window to scroll automatically as the response is inserted
#+html: </summary>
To be minimally annoying, GPTel does not move the cursor by default. Add the following to your configuration to enable auto-scrolling.
To be minimally annoying, gptel does not move the cursor by default. Add the following to your configuration to enable auto-scrolling.
#+begin_src emacs-lisp
(add-hook 'gptel-post-stream-hook 'gptel-auto-scroll)
@@ -628,7 +671,7 @@ To be minimally annoying, GPTel does not move the cursor by default. Add the fo
**** I want the cursor to move to the next prompt after the response is inserted
#+html: </summary>
To be minimally annoying, GPTel does not move the cursor by default. Add the following to your configuration to move the cursor:
To be minimally annoying, gptel does not move the cursor by default. Add the following to your configuration to move the cursor:
#+begin_src emacs-lisp
(add-hook 'gptel-post-response-functions 'gptel-end-of-response)
@@ -675,7 +718,7 @@ Or see this [[https://github.com/karthink/gptel/wiki#save-transient-flags][wiki
**** I want to use gptel in a way that's not supported by =gptel-send= or the options menu
#+html: </summary>
GPTel's default usage pattern is simple, and will stay this way: Read input in any buffer and insert the response below it. Some custom behavior is possible with the transient menu (=C-u M-x gptel-send=).
gptel's default usage pattern is simple, and will stay this way: Read input in any buffer and insert the response below it. Some custom behavior is possible with the transient menu (=C-u M-x gptel-send=).
For more programmable usage, gptel provides a general =gptel-request= function that accepts a custom prompt and a callback to act on the response. You can use this to build custom workflows not supported by =gptel-send=. See the documentation of =gptel-request=, and the [[https://github.com/karthink/gptel/wiki][wiki]] for examples.
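
For instance, a minimal sketch of such a custom command (the command name is made up; the keyword arguments and the =(RESPONSE INFO)= callback convention are the ones described in the =gptel-request= docstring):

#+begin_src emacs-lisp
;; Sketch: ask the LLM to explain the active region and show the reply
;; in the echo area.
(defun my/gptel-explain-region (beg end)
  "Ask the LLM to briefly explain the text between BEG and END."
  (interactive "r")
  (gptel-request
   (buffer-substring-no-properties beg end)
   :system "Explain the following text briefly."
   :callback (lambda (response info)
               (if response
                   (message "gptel: %s" response)
                 (message "gptel request failed: %s" (plist-get info :status))))))
#+end_src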
@@ -782,18 +825,21 @@ Features being considered or in the pipeline:
Other Emacs clients for LLMs include
- [[https://github.com/ahyatt/llm][llm]]: llm provides a uniform API across language model providers for building LLM clients in Emacs, and is intended as a library for use by package authors. For similar scripting purposes, gptel provides the command =gptel-request=, which see.
- [[https://github.com/s-kostyaev/ellama][Ellama]]: A full-fledged LLM client built on llm, that supports many LLM providers (Ollama, Open AI, Vertex, GPT4All and more). Its usage differs from gptel in that it provides separate commands for dozens of common tasks, like general chat, summarizing code/text, refactoring code, improving grammar, translation and so on.
- [[https://github.com/xenodium/chatgpt-shell][chatgpt-shell]]: comint-shell based interaction with ChatGPT. Also supports DALL-E, executable code blocks in the responses, and more.
- [[https://github.com/rksm/org-ai][org-ai]]: Interaction through special =#+begin_ai ... #+end_ai= Org-mode blocks. Also supports DALL-E, querying ChatGPT with the contents of project files, and more.
There are several more: [[https://github.com/CarlQLange/chatgpt-arcana.el][chatgpt-arcana]], [[https://github.com/MichaelBurge/leafy-mode][leafy-mode]], [[https://github.com/iwahbe/chat.el][chat.el]]
*** Extensions using GPTel
*** Extensions using gptel
These are packages that depend on GPTel to provide additional functionality
These are packages that use gptel to provide additional functionality
- [[https://github.com/kamushadenes/gptel-extensions.el][gptel-extensions]]: Extra utility functions for GPTel.
- [[https://github.com/kamushadenes/gptel-extensions.el][gptel-extensions]]: Extra utility functions for gptel.
- [[https://github.com/kamushadenes/ai-blog.el][ai-blog.el]]: Streamline generation of blog posts in Hugo.
- [[https://github.com/douo/magit-gptcommit][magit-gptcommit]]: Generate Commit Messages within magit-status Buffer using GPTel.
- [[https://github.com/douo/magit-gptcommit][magit-gptcommit]]: Generate Commit Messages within magit-status Buffer using gptel.
- [[https://github.com/armindarvish/consult-web][consult-web]]: Provides gptel as a source when querying multiple local and online sources.
** Breaking Changes

gptel.el

@@ -2,7 +2,7 @@
;; Copyright (C) 2023 Karthik Chikmagalur
;; Author: Karthik Chikmagalur
;; Author: Karthik Chikmagalur <karthik.chikmagalur@gmail.com>
;; Version: 0.8.6
;; Package-Requires: ((emacs "27.1") (transient "0.4.0") (compat "29.1.4.1"))
;; Keywords: convenience
@@ -174,6 +174,15 @@
"Use `gptel-make-openai' instead."
"0.5.0")
(defcustom gptel-context-prompt-preamble
  "Here you will find more context for the following user prompts. Key aspects are:
- User inputs are encapsulated within Emacs Org mode src blocks.
- Naming Convention: Each src block is identified using a structured name format '{{name-of-original-buffer}}:{{beginning-line-number}}-{{ending-line-number}}'. This scheme offers insight into the origin and scope of the code or text snippet.
- Mode Indication: The mode of the original file is included within each src block. This detail informs you about the programming language or markup format of the snippet, aiding in accurate interpretation and response."
  "Instructions for the LLM on how to treat the additional context from *gptel-context*."
  :group 'gptel
  :type 'string)

(defcustom gptel-proxy ""
  "Path to a proxy to use for gptel interactions.
Passed to curl via --proxy arg, for example \"proxy.yourorg.com:80\"
@@ -887,7 +896,7 @@ Model parameters can be let-bound around calls to this function."
          ((markerp position) position)
          ((integerp position)
           (set-marker (make-marker) position buffer))))
         (full-prompt
         (full-prompt-draft
          (cond
           ((null prompt) (gptel--create-prompt start-marker))
           ((stringp prompt)
@@ -898,6 +907,17 @@ Model parameters can be let-bound around calls to this function.
              (insert prompt)
              (gptel--create-prompt))))
           ((consp prompt) prompt)))
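         ;; If a *gptel-context* buffer exists, prepend its contents
         ;; (prefixed by `gptel-context-prompt-preamble') to the prompt
         ;; as an extra user message.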
         (context-prompt
          (when (get-buffer "*gptel-context*")
            (list :role "user"
                  :content (format "%s\n\n%s"
                                   gptel-context-prompt-preamble
                                   (with-current-buffer "*gptel-context*"
                                     (save-excursion
                                       (buffer-substring-no-properties (point-min) (point-max))))))))
         (full-prompt (if context-prompt
                          (append (list context-prompt) full-prompt-draft)
                        full-prompt-draft))
         (request-data (gptel--request-data gptel-backend full-prompt))
         (info (list :data request-data
                     :buffer buffer
@@ -1106,6 +1126,7 @@ the response is inserted into the current buffer after point.
(encode-coding-string
(gptel--json-encode (plist-get info :data))
'utf-8)))
;; why do these checks not occur inside of `gptel--log'?
(when gptel-log-level ;logging
(when (eq gptel-log-level 'debug)
(gptel--log (gptel--json-encode
@@ -1360,6 +1381,28 @@ context for the ediff session.
(goto-char (+ beg offset))
(pulse-momentary-highlight-region beg (+ beg (length alt-response)))))
;;;###autoload
(defun gptel-add-context ()
  "Add the selected region (or the whole buffer) to *gptel-context*."
  (interactive)
  (let* ((beg (if (use-region-p) (region-beginning) (point-min)))
         (end (if (use-region-p) (region-end) (point-max)))
         (context (buffer-substring-no-properties beg end))
         (src-name (buffer-name))
         (beg-line (line-number-at-pos beg))
         (end-line (line-number-at-pos end))
         (src-major-mode (symbol-name major-mode))
         (loc (format "%s:%d-%d" src-name beg-line end-line)))
    (with-current-buffer (get-buffer-create "*gptel-context*")
      (org-mode)
      (goto-char (point-max))
      (unless (bolp) (insert "\n"))
      (insert (format "#+NAME: %s\n" loc))
      (insert (format "#+BEGIN_SRC %s\n" src-major-mode))
      (insert (format "%s\n" context))
      (insert "#+END_SRC\n\n"))
    (message "Context has been added to *gptel-context*.")))

(defun gptel--next-variant (&optional arg)
  "Switch to next gptel-response at this point, if it exists."
  (interactive "p")