Commit graph

46 commits

Author SHA1 Message Date
Karthik Chikmagalur
8a25058eed gptel-openai: default :header key to simplify config
* gptel.el (gptel--openai): Don't specify header.

* gptel-openai.el (gptel-make-openai): Use a key-aware lambda for
the header argument.  This should make it easier to define new
OpenAI-style API backends (see #177, #184)

* README.org: Update with instructions for together.ai and
Anyscale, both of which provide OpenAI-style APIs.  Clean up the
config blocks for the other backends.
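With the key-aware default header, an OpenAI-compatible backend definition can be quite short.  A minimal sketch (the host and model name below are illustrative assumptions, not values from this commit):

```elisp
;; Sketch: register an OpenAI-compatible backend.  Host and model
;; names are illustrative assumptions.
(gptel-make-openai "TogetherAI"
  :host "api.together.xyz"
  :key "your-api-key"            ;consumed by the default :header function
  :stream t
  :models '("mistralai/Mixtral-8x7B-Instruct-v0.1"))
```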
2024-01-19 14:45:36 -08:00
Karthik Chikmagalur
b34e217bbf README: Mention gptel-request
* README.org:
- Mention gptel-request near the top.
- Reformat FAQ
- Add solution to #75 and #182 (Doom Emacs keybinding conflict) to
  the FAQ
2024-01-16 20:26:03 -08:00
Karthik Chikmagalur
1752f1d589 gptel-kagi: Add support for the Kagi summarizer
* gptel-kagi.el (gptel--request-data, gptel--parse-buffer,
gptel-make-kagi): Add support for the Kagi summarizer.  If there
is a url at point (or at the end of the provided prompt), it is
used as the summarizer input.  Otherwise the behavior is
unchanged.

* README (Kagi): Mention summarizer support.

* gptel.el: Mention summarizer support.
2024-01-15 17:29:42 -08:00
Karthik Chikmagalur
c6a07043af gptel-kagi: Add support for Kagi FastGPT
* gptel.el: Bump version and update package description.

* gptel-kagi.el (gptel--parse-response, gptel--request-data,
gptel--parse-buffer, gptel-make-kagi): Add new file and support
for the Kagi FastGPT LLM API.  Streaming and setting model
parameters (temperature, max tokens) are not supported by the API.
A Kagi backend can be added with `gptel-make-kagi`.

* README.org: Update with instructions for Kagi.
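As the message notes, a Kagi backend can be added with `gptel-make-kagi`.  A minimal sketch, with the key inlined for brevity:

```elisp
;; Sketch: register a Kagi FastGPT backend.  Streaming and model
;; parameters are not supported by this API.
(gptel-make-kagi "Kagi" :key "your-kagi-api-key")
```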
2024-01-12 23:17:21 -08:00
Karthik Chikmagalur
612aea3456 gptel: Make gptel-post-response-* easier to use
* gptel.el (gptel-end-of-response, gptel-post-response-hook,
gptel-post-response-functions, gptel--insert-response,
gptel-response-filter-functions):
Rename gptel-post-response-hook -> gptel-post-response-functions
The new abnormal hook now calls its functions with the start and
end positions of the response, to make it easier to act on the
response.

* gptel-curl.el (gptel-curl--stream-cleanup): Corresponding changes.

* README.org: Mention breaking change.
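A consumer of the new abnormal hook might look like this sketch, assuming only what the message states: the hook functions receive the start and end positions of the response:

```elisp
;; Sketch: briefly highlight the response region after each request.
(require 'pulse)

(defun my/gptel-flash-response (beg end)
  "Pulse-highlight the LLM response between BEG and END."
  (pulse-momentary-highlight-region beg end))

(add-hook 'gptel-post-response-functions #'my/gptel-flash-response)
```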
2024-01-12 23:04:40 -08:00
Karthik Chikmagalur
e67ed41e31 README: Specify: no key needed for llama backend
* README.org: Specify that no key is needed when defining a
Llama.cpp backend.  Fix #170.
2024-01-05 20:55:52 -08:00
Bruno Bigras
0fce1d86d1 README: fix typo (#168) 2024-01-03 18:58:29 -08:00
Karthik Chikmagalur
3ac5963080 README: Add instructions for Llamafile
* README.org (* Llama.cpp): As it turns out, text-generation
Llamafile models (currently Mistral Instruct and Llava) offer an
OpenAI-compatible API, so we can use them easily from gptel.  Add
instructions for Llamafiles to the Llama section of the README.
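Because a Llamafile serves an OpenAI-compatible API, it can be registered with the existing OpenAI constructor.  A sketch, assuming the default local port and an illustrative model name:

```elisp
;; Sketch: use a local Llamafile server through the OpenAI-style API.
;; Port and model name are assumptions for illustration.
(gptel-make-openai "Llamafile"
  :host "localhost:8080"
  :protocol "http"
  :key nil                       ;local server, no API key required
  :stream t
  :models '("mistral-instruct"))
```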
2023-12-31 14:37:26 -08:00
Karthik Chikmagalur
85bd47cb4c README: Add support for llama.cpp
* README.org: The llama.cpp server supports OpenAI's API, so we
can reuse it.  Closes #121.
2023-12-28 22:27:53 -08:00
Karthik Chikmagalur
32dd463bd6 README: Mention YouTube demo
* gptel.el: Change blurb

* README.org: Mention YouTube demo
2023-12-27 08:35:06 -08:00
Karthik Chikmagalur
8973498378 gptel: Add minimal status indicator via mode-line-process
* gptel.el (gptel-update-destination, gptel-use-header-line,
gptel--update-status, gptel-mode): Improve status messaging when not
using the header-line.  When the user option
`gptel-use-header-line` (renamed from `gptel-update-destination`)
is set to nil, we use `mode-line-process` to report on in-progress
requests, and show the active LLM (model) otherwise.  Error
messages are sent to the echo area.  Close #9.

* README.org: Change `gptel-update-destination` to
`gptel-use-header-line` and tweak description.
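Opting into the new mode-line indicator is a one-line setting; a sketch:

```elisp
;; Sketch: show request status via mode-line-process instead of the
;; header line.
(setq gptel-use-header-line nil)
```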
2023-12-21 18:00:17 -08:00
Mark Stuart
4775ade6e0 gptel: add custom gptel-update-destination
README: Mention `gptel-update-destination` in README.

gptel.el (gptel-update-destination, gptel--update-status,
gptel-send, gptel--insert-response): New option
`gptel-update-destination` to control how gptel's status messages
are shown.  `gptel--update-status` replaces
`gptel--update-header-line`.  Replace calls to this function
elsewhere in gptel.el.

gptel-curl.el (gptel-abort, gptel-curl--stream-cleanup,
gptel-curl--stream-insert-response): Use `gptel--update-status` in
place of `gptel--update-header-line`.

gptel-transient.el (gptel--suffix-send): Use
`gptel--update-status` in place of `gptel--update-header-line`.
2023-12-21 17:48:36 -08:00
Karthik Chikmagalur
a202911009 gptel: Add post-stream hook, scroll commands
* gptel.el (gptel-auto-scroll, gptel-end-of-response,
gptel-post-response-hook, gptel-post-stream-hook): Add
`gptel-post-stream-hook` that runs after each text insertion when
streaming responses.  This can be used to, for instance,
auto-scroll the window as the response continues below the
viewport.  The utility function `gptel-auto-scroll` does this.
Provide a utility command `gptel-end-of-response`, which moves the
cursor to the end of the response when point is inside or before it.

* gptel-curl.el (gptel-curl--stream-insert-response): Run
`gptel-post-stream-hook` where required.

* README: Add FAQ, simplify structure, mention the new hooks and
scrolling/navigation options.
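The hooks described above can be wired up as in this sketch (both utility functions are named in the commit):

```elisp
;; Sketch: auto-scroll the window while streaming, and jump to the end
;; of the response when the request finishes.
(add-hook 'gptel-post-stream-hook #'gptel-auto-scroll)
(add-hook 'gptel-post-response-hook #'gptel-end-of-response)
```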
2023-12-21 16:09:57 -08:00
Karthik Chikmagalur
3dd00a7457 gptel-gemini: Add streaming responses, simplify configuration
* gptel-gemini.el (gptel-make-gemini, gptel-curl--parse-stream,
gptel--request-data, gptel--parse-buffer): Enable streaming for
the Gemini backend, as well as the temperature and max tokens
parameters when making requests.  Simplify the user configuration
required.

* README.org: Fix formatting errors.  Update the configuration
instructions for Gemini.

This closes #149.
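With streaming enabled and the configuration simplified, registering a Gemini backend can be sketched as:

```elisp
;; Sketch: register a Gemini backend with streaming enabled.
(gptel-make-gemini "Gemini" :key "your-gemini-api-key" :stream t)
```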
2023-12-20 15:17:14 -08:00
mrdylanyin
84cd7bf5a4 gptel-gemini: Add Gemini support
gptel-gemini.el (gptel--parse-response, gptel--request-data,
gptel--parse-buffer, gptel-make-gemini): Add new file and support
for the Google Gemini LLM API.  Streaming and setting model
parameters (temperature, max tokens) are not yet supported.

README: Add instructions for Gemini.
2023-12-20 13:55:43 -08:00
Karthik Chikmagalur
e105a52541 gptel: Update docstrings for prompt/response prefixes
README: Mention `gptel-response-prefix-alist`

gptel.el (gptel-prompt-prefix-alist, gptel-response-prefix-alist):
Improve docstring.
2023-12-15 09:42:37 -08:00
Fangyuan
15404f639d README: Update instructions for Azure (#147) 2023-12-14 19:53:57 -08:00
Karthik Chikmagalur
50a2498259 README: Tweak instructions for local LLMs, mention #120 2023-11-07 20:36:37 -08:00
Karthik Chikmagalur
63027083cd README: Update additional customization section 2023-10-29 14:24:39 -07:00
Karthik Chikmagalur
6af89254b7 README: Document breaking changes (mainly gptel-host deprecation) 2023-10-29 09:51:17 -07:00
Karthik Chikmagalur
1434bbac7b gptel-ollama, gptel-openai: Add example of backend creation
README: Fix error with Ollama backend instructions
2023-10-29 00:31:56 -07:00
Karthik Chikmagalur
6419e8f021 gptel: Add multi-llm support
README.org: Update README with new information and a multi-llm demo.

gptel.el (gptel-host, gptel--known-backends, gptel--api-key,
gptel--create-prompt, gptel--request-data, gptel--parse-buffer, gptel-request,
gptel--parse-response, gptel--openai, gptel--debug, gptel--restore-state,
gptel, gptel-backend):

Integrate multiple LLMs through the introduction of gptel-backends. Each backend
is composed of two pieces:

1. An instance of a cl-struct, containing connection, authentication and model
information.  See the cl-struct `gptel-backend` for details.  A separate
cl-struct type is defined for each supported backend (OpenAI, Azure, GPT4All and
Ollama) that inherits from the generic gptel-backend type.

2. cl-generic implementations of specific tasks, like gathering up and
formatting context (previous user queries and LLM responses), parsing responses
or response streams, etc.  The four tasks currently specialized this way are
carried out by `gptel--parse-buffer` and `gptel--request-data` (for constructing
the query) and `gptel--parse-response` and `gptel-curl--parse-stream` (for
parsing the response).  See their implementations for details.  Some effort has
been made to limit the number of times dispatching is done when reading
streaming responses.

When a backend is created, it is registered in the collection
`gptel--known-backends` and can be accessed by name later, such as from the
transient menu.

Only one of these backends is active at any time in a buffer, stored in the
buffer-local variable `gptel-backend`.  Most messaging, authentication,
etc. accounts for the active backend, although there might be some leftovers.

When using `gptel-request` or `gptel-send`, the active backend can be changed or
let-bound.

- Obsolete `gptel-host`
- Fix the rear-sticky property when restoring sessions from files.
- Document some variables (not user options), like `gptel--debug`

gptel-openai.el (gptel-backend, gptel-make-openai, gptel-make-azure,
gptel-make-gpt4all): This file (currently always loaded) sets up the generic
backend struct and includes constructors for creating OpenAI, GPT4All and Azure
backends.  They all use the same API so a single set of defgeneric
implementations suffices for all of them.

gptel-ollama.el (gptel-make-ollama): This file includes the cl-struct,
constructor and requisite defgeneric implementations for Ollama support.

gptel-transient.el (gptel-menu, gptel-provider-variable, gptel--infix-provider,
gptel-suffix-send):

- Provide access to all available LLM backends and models from `gptel-menu`.
- Adjust keybindings in gptel-menu: setting the model and query parameters is
  now bound to two char keybinds, while redirecting input and output is bound to
  single keys.
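Per the description above, the active backend can be let-bound around a request.  A sketch, assuming an "Ollama" backend was registered earlier and that `gptel--known-backends` is a name-keyed alist; the model name is illustrative:

```elisp
;; Sketch: run one request against a specific registered backend.
;; The backend name and model are assumptions for illustration.
(let ((gptel-backend (alist-get "Ollama" gptel--known-backends
                                nil nil #'equal))
      (gptel-model "mistral:latest"))
  (gptel-request "Summarize this buffer in one sentence."
    :callback (lambda (response _info)
                (when (stringp response)
                  (message "%s" response)))))
```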
2023-10-28 23:57:47 -07:00
Karthik Chikmagalur
6e4d95a70a README: Add drawers to installation instructions 2023-08-12 11:27:10 -07:00
Karthik Chikmagalur
b2a01b8d65 README: Explain saving/restoring sessions better 2023-08-09 17:58:13 -07:00
Karthik Chikmagalur
0f161a466b gptel: saving and restoring state for Markdown/Text
* gptel.el (gptel--save-state, gptel--restore-state,
gptel-temperature, gptel-model, gptel-max-tokens,
gptel-directives, gptel--always, gptel--button-buttonize,
gptel--system-message, gptel--bounds): Write gptel parameters as
file-local variables when saving chats in Markdown or text files.
The local variable gptel--bounds stores the locations of the
responses from the LLM.  This is not a great solution, but the best
I can think of without adding more syntax to the document.

Chats can be restored by turning on `gptel-mode'.  One of the
problems with this approach is that if the buffer is modified
before `gptel-mode' is turned on, the state data is out of date.
Another problem is that this metadata block as printed in the
buffer can become quite long.  A better approach is needed.

Define helper functions `gptel--always' and
`gptel--button-buttonize' to work around Emacs 27.1 support.

* README.org: Mention saving and restoring chats where
appropriate.
2023-07-28 16:05:22 -07:00
Karthik Chikmagalur
a7207a3835 README: Add TOC 2023-06-12 17:27:52 -07:00
Karthik Chikmagalur
30700cc88a README: Mention extensions, gptel-proxy 2023-06-09 14:09:28 -07:00
Tianshu Wang
e6a1468bd2 gptel: Make API host configurable (#67)
* Make API host configurable

* Update README.org
2023-05-31 20:24:13 -07:00
Karthik Chikmagalur
42132d3662 README: tweak description of package 2023-05-31 18:35:08 -07:00
Karthik Chikmagalur
37c381c2e5 README: Update with acknowledgments and more
* README.org: update with acknowledgments (#4)
2023-05-19 23:29:12 -07:00
Troy Rosenberg
075609544a README: Update instructions for setting key (#46)
* README.org: Update the instructions for getting =gptel-api-key= to
include using the .authinfo file after support was added in 6f951ed.
2023-04-23 19:55:34 -07:00
Karthik Chikmagalur
6202474a6e README: Update with changes to gptel-menu
* README.org (Usage): Add images for new options
2023-04-09 13:02:56 -07:00
Karthik Chikmagalur
1b47235e25 README: Add section on gptel-request
* README.org (** Using it your way): New section describing
gptel-request.
2023-04-08 20:17:06 -07:00
Karthik Chikmagalur
6c47c0a483 README: Add videos with streaming
* README.org (In a dedicated chat buffer): Move the any-buffer
interaction description up.
2023-04-06 17:32:35 -07:00
AlessandroW
1f03655e2d Add Doom Emacs installation instructions (#28)
README: Add Doom Emacs installation instructions
2023-04-01 14:25:47 -07:00
Karthik Chikmagalur
1c07a94e18 README: Update manual install instructions 2023-03-28 12:24:08 -07:00
Rida Ayed
1ab8a57183 add installation instructions 2023-03-28 12:19:56 -07:00
Karthik Chikmagalur
048eaf9b64 README: Update description of chat parameters 2023-03-23 14:39:17 -07:00
Karthik Chikmagalur
4f3ca23454 gptel: Update commentary and README 2023-03-19 19:58:19 -07:00
Karthik Chikmagalur
f0eba0cf4f README: Update README for MELPA 2023-03-19 17:50:51 -07:00
Karthik Chikmagalur
051501c892 README: Change installation instructions (no aio) 2023-03-18 00:04:16 -07:00
Karthik Chikmagalur
c8f87f5554 Update README with transient menu details 2023-03-14 02:09:53 -07:00
Karthik Chikmagalur
8fca5bc762 gptel: Add org-mode support and update README
gptel.el (gptel-response-filter-functions, gptel-send,
gptel--create-prompt, gptel--transform-response, gptel--convert-org,
gptel--convert-markdown->org): Add support for org-mode by transforming
the response manually.  (Note: Asking ChatGPT to format its results in
org-mode markup produces inconsistent results.)

Additionally, the abnormal hook `gptel-response-filter-functions' is
added for arbitrary transformations of the response.  Its implementation
seems needlessly complex, and in the future we should change it to
use `run-hook-wrapped' with a local accumulator.
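A filter on this abnormal hook can be sketched as follows, assuming each function receives the response string (possibly with further arguments) and returns the transformed string:

```elisp
;; Sketch: trim trailing whitespace from every response.  The exact
;; calling convention is assumed; extra arguments are ignored.
(require 'subr-x)                       ;for string-trim-right

(defun my/gptel-trim-response (response &rest _)
  (string-trim-right response))

(add-hook 'gptel-response-filter-functions #'my/gptel-trim-response)
```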
2023-03-10 05:13:29 -08:00
Karthik Chikmagalur
3c10147a72 gptel: Tweak README, minor linting 2023-03-08 01:22:14 -08:00
Karthik Chikmagalur
deeb606409 Update license.
Also update README.
2023-03-05 18:13:32 -08:00
Karthik Chikmagalur
99aa8dcc5f Add gptel.el and a README. 2023-03-05 18:13:32 -08:00