gptel: Update description and bump version

gptel.el (header): Update description and bump version.
Karthik Chikmagalur 2024-05-01 13:11:44 -07:00
parent 97ab6cbd1e
commit cdb07d0d2b


@@ -3,7 +3,7 @@
;; Copyright (C) 2023 Karthik Chikmagalur
;; Author: Karthik Chikmagalur
-;; Version: 0.8.5
+;; Version: 0.8.6
;; Package-Requires: ((emacs "27.1") (transient "0.4.0") (compat "29.1.4.1"))
;; Keywords: convenience
;; URL: https://github.com/karthink/gptel
@@ -32,7 +32,7 @@
;; gptel supports
;;
;; - The services ChatGPT, Azure, Gemini, Anthropic AI, Anyscale, Together.ai,
-;; Perplexity, and Kagi (FastGPT & Summarizer)
+;; Perplexity, Anyscale, OpenRouter, Groq and Kagi (FastGPT & Summarizer)
;; - Local models via Ollama, Llama.cpp, Llamafiles or GPT4All
;;
;; Additionally, any LLM service (local or remote) that provides an
@@ -54,10 +54,14 @@
;; key or to a function of no arguments that returns the key. (It tries to
;; use `auth-source' by default)
;;
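As a minimal sketch of the key setup described above (the literal key string is a placeholder; `gptel-api-key-from-auth-source` is the function gptel uses for its auth-source lookup):

```emacs-lisp
;; Either set the key directly (placeholder value shown) ...
(setq gptel-api-key "sk-...")
;; ... or set it to a function of no arguments, e.g. gptel's
;; auth-source lookup, which reads the key from ~/.authinfo:
(setq gptel-api-key #'gptel-api-key-from-auth-source)
```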
+;; ChatGPT is configured out of the box. For the other sources:
+;;
;; - For Azure: define a gptel-backend with `gptel-make-azure', which see.
;; - For Gemini: define a gptel-backend with `gptel-make-gemini', which see.
;; - For Anthropic (Claude): define a gptel-backend with `gptel-make-anthropic',
;; which see
+;; - For Together.ai, Anyscale, Perplexity, Groq and OpenRouter: define a
+;; gptel-backend with `gptel-make-openai', which see.
;; - For Kagi: define a gptel-backend with `gptel-make-kagi', which see.
;;
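For instance, the backend definitions listed above might look like the following sketch (key values and model names are illustrative placeholders; consult the README for exact arguments):

```emacs-lisp
;; Anthropic (Claude):
(gptel-make-anthropic "Claude" :stream t :key "your-api-key")

;; An OpenAI-compatible service such as Groq:
(gptel-make-openai "Groq"
  :host "api.groq.com"
  :endpoint "/openai/v1/chat/completions"
  :stream t
  :key "your-api-key"
  :models '("mixtral-8x7b-32768"))   ; example model name
```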
;; For local models using Ollama, Llama.cpp or GPT4All:
@@ -65,6 +69,7 @@
;; - The model has to be running on an accessible address (or localhost)
;; - Define a gptel-backend with `gptel-make-ollama' or `gptel-make-gpt4all',
;; which see.
+;; - Llama.cpp or Llamafiles: Define a gptel-backend with `gptel-make-openai',
;;
;; Consult the package README for examples and more help with configuring
;; backends.
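As a sketch of the local-model steps above, an Ollama backend might be defined like this (host and model name are examples, not prescriptions):

```emacs-lisp
(gptel-make-ollama "Ollama"
  :host "localhost:11434"        ; Ollama's default local address
  :stream t
  :models '("mistral:latest"))   ; models you have pulled locally
```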
@@ -81,14 +86,14 @@
;; - Call `gptel-send' to send the text up to the cursor. Select a region to
;; send only the region.
;;
-;; - You can select previous prompts and responses to
-;; continue the conversation.
+;; - You can select previous prompts and responses to continue the conversation.
;;
;; - Call `gptel-send' with a prefix argument to access a menu where you can set
;; your backend, model and other parameters, or to redirect the
;; prompt/response.
;;
;; To use this in a dedicated buffer:
;;
;; - M-x gptel: Start a chat session
;; - C-u M-x gptel: Start another session or multiple independent chat sessions
;;
@@ -98,11 +103,28 @@
;; model, or choose to redirect the input or output elsewhere (such as to the
;; kill ring).
;;
-;; - You can save this buffer to a file. When opening this file, turning on
-;; `gptel-mode' will allow resuming the conversation.
+;; - You can save this buffer to a file. When opening this file, turn on
+;; `gptel-mode' before editing it to restore the conversation state and
+;; continue chatting.
;;
+;; gptel in Org mode:
+;;
+;; gptel offers a few extra conveniences in Org mode.
+;; - You can limit the conversation context to an Org heading with
+;; `gptel-org-set-topic'.
+;;
+;; - You can have branching conversations in Org mode, where each hierarchical
+;; outline path through the document is a separate conversation branch.
+;; See the variable `gptel-org-branching-context'.
+;;
+;; - You can declare the gptel model, backend, temperature, system message and
+;; other parameters as Org properties with the command
+;; `gptel-org-set-properties'. gptel queries under the corresponding heading
+;; will always use these settings, allowing you to create mostly reproducible
+;; LLM chat notebooks.
+;;
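For example, the Org properties mentioned above end up in a property drawer under the heading, roughly like this (property names follow gptel's Org support; values are illustrative):

```org
* Emacs questions
:PROPERTIES:
:GPTEL_MODEL: gpt-4
:GPTEL_BACKEND: ChatGPT
:GPTEL_TEMPERATURE: 1.0
:END:
```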
;; Finally, gptel offers a general purpose API for writing LLM ineractions
-;; that suit how you work, see `gptel-request'.
+;; that suit your workflow, see `gptel-request'.
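A minimal `gptel-request' sketch (the callback receives the response string and an info plist, per gptel's documented API; the prompt text is just an example):

```emacs-lisp
(gptel-request
 "Summarize the region in one sentence."
 :callback (lambda (response info)
             (if response
                 (message "gptel: %s" response)
               (message "gptel request failed: %s"
                        (plist-get info :status)))))
```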
;;; Code:
(declare-function markdown-mode "markdown-mode")