Commit graph

161 commits

Author SHA1 Message Date
Karthik Chikmagalur
c6a07043af gptel-kagi: Add support for Kagi FastGPT
* gptel.el: Bump version and update package description.

* gptel-kagi.el (gptel--parse-response, gptel--request-data,
gptel--parse-buffer, gptel-make-kagi): Add new file and support
for the Kagi FastGPT LLM API.  Streaming and setting model
parameters (temperature, max tokens) are not supported by the API.
A Kagi backend can be added with `gptel-make-kagi`.

* README.org: Update with instructions for Kagi.
2024-01-12 23:17:21 -08:00
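An illustrative sketch of registering the new backend; the keyword argument shown is taken from the commit message and README, and the key value is a placeholder:

    (gptel-make-kagi "Kagi"              ;any name for the backend
      :key "YOUR_KAGI_API_KEY")          ;placeholder key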
Karthik Chikmagalur
612aea3456 gptel: Make gptel-post-response-* easier to use
* gptel.el (gptel-end-of-response, gptel-post-response-hook,
gptel-post-response-functions, gptel--insert-response,
gptel-response-filter-functions):
Rename gptel-post-response-hook -> gptel-post-response-functions
The new abnormal hook now calls its functions with the start and
end positions of the response, to make it easier to act on the
response.

* gptel-curl.el (gptel-curl--stream-cleanup): Corresponding changes.

* README.org: Mention breaking change.
2024-01-12 23:04:40 -08:00
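For example, a minimal sketch of the new calling convention, where the hook function receives the response's start and end positions (pulse.el is used here only for illustration):

    (require 'pulse)
    (add-hook 'gptel-post-response-functions
              (lambda (beg end)
                ;; Briefly highlight the freshly inserted response
                (pulse-momentary-highlight-region beg end)))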
Karthik Chikmagalur
8ec233d79c gptel: Name gptel buffer according to backend
* gptel.el (gptel-default-session, gptel): Name the gptel buffer
according to the default backend.  Delete the variable
`gptel-default-session`.  Fix #174.

* gptel-openai.el (gptel-make-openai): Don't specify a key by
default. Fix #170.
2024-01-12 22:27:20 -08:00
Karthik Chikmagalur
febeada960 gptel: Make gptel-backend customizable
* gptel.el (gptel-backend): Turn `gptel-backend` into a defcustom
so it can be used with setopt.  Fix #167.

* gptel-openai.el (gptel-make-openai): Improve docstring.
2024-01-05 20:55:52 -08:00
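A minimal sketch of what this enables; the constructor keywords and values below are assumptions, not defaults:

    (setopt gptel-backend
            (gptel-make-openai "ChatGPT"          ;an assumed name
              :stream t
              :key "YOUR_OPENAI_API_KEY"          ;placeholder
              :models '("gpt-3.5-turbo" "gpt-4")))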
Karthik Chikmagalur
d5b10c3d6d gptel: gptel-model can be an arbitrary string
* gptel.el (gptel-model): Allow gptel-model to be an arbitrary
string in the customize interface so it can be set with setopt
etc. (See #152.)
2024-01-02 14:47:19 -08:00
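For instance (the model name here is only an example):

    (setopt gptel-model "gpt-4-1106-preview")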
Karthik Chikmagalur
1e31f550de gptel: Declare compat as explicit dependency
* gptel.el (gptel--always, gptel--button-buttonize): Currently
gptel depends on the Compat library transitively via transient.el.
Declare it as an explicit dependency so we can get rid of special
case definitions and simplify.  This also enables us to use Emacs
28 and 29 conveniences freely in the code.
2023-12-29 13:06:25 -08:00
Karthik Chikmagalur
60cb406567 gptel: Improve documentation of gptel-send
* gptel.el (gptel-send): Update the docstring to describe the
behavior accurately.
2023-12-27 12:34:47 -08:00
Karthik Chikmagalur
32dd463bd6 README: Mention YouTube demo
* gptel.el: Change blurb

* README.org: Mention YouTube demo
2023-12-27 08:35:06 -08:00
Karthik Chikmagalur
9126bed43f gptel: Set window when doing auto-scrolling
* gptel.el (gptel-auto-scroll):  After calling `gptel-send`, the
window focus could have changed as the response is received.  Set
the window correctly when running `gptel-auto-scroll` to ensure
the correct buffer is scrolled.
2023-12-27 08:35:06 -08:00
Karthik Chikmagalur
8d3e08faa8 gptel: Don't use called-interactively-p
* gptel.el (gptel): Switch from the fragile
`called-interactively-p` to using the interactive form to check if
`gptel` is called interactively.
2023-12-22 16:49:04 -08:00
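The general pattern being adopted, shown on a hypothetical command rather than `gptel` itself: pass a flag from the interactive spec instead of querying `called-interactively-p`.

    (defun my/example-command (&optional interactivep)
      "Illustrative command that knows whether it was called interactively."
      (interactive (list t))   ;the flag is set only on interactive calls
      (when interactivep
        (message "Called interactively")))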
Karthik Chikmagalur
2e92c0303c gptel: gptel-backend-url can accept functions
* gptel.el (gptel--url-get-response): If the backend-url is a
function, call it to find the full url to query.

* gptel-gemini.el: Gemini uses different urls for
streaming/oneshot responses.  Set the backend-url to a function to
account for the value of gptel-stream.  This is also safer than
before as the API key is not stored as part of a static url string
in memory. Fix #153.

* gptel-curl.el (gptel-curl--get-args): If the backend-url is a
function, call it to find the full url to query.
2023-12-22 16:25:32 -08:00
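In essence, a sketch of the dispatch; `gptel-backend-url` here is assumed to be the struct accessor named in the commit title:

    (let ((url (gptel-backend-url gptel-backend)))
      ;; A function-valued URL is called to compute the endpoint,
      ;; e.g. to pick the streaming vs. oneshot Gemini URL.
      (if (functionp url) (funcall url) url))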
Karthik Chikmagalur
4d01dddf7d gptel, gptel-curl: Address checkdoc warnings
* gptel.el (gptel--url-parse-response, gptel-max-tokens,
gptel-use-header-line): Address checkdoc warnings.

* gptel-curl.el (gptel-curl--parse-response, gptel-abort):
Address checkdoc warnings.

* gptel-gemini.el (gptel-make-gemini): Address checkdoc warnings.
2023-12-22 16:23:50 -08:00
Karthik Chikmagalur
7271d0e408 gptel: Try to save/restore gptel-backend in files
* gptel.el (gptel--save-state, gptel--restore-state,
gptel--backend-name, gptel--restore-backend): Try to save and
restore the gptel backend when persisting chat sessions in files.
The local variable `gptel--backend-name` holds the backend name in
the file across Emacs sessions.  The function
`gptel--restore-backend` tries to set this backend and messages
the user if this is not possible.
2023-12-22 16:23:50 -08:00
Karthik Chikmagalur
ce75072f9d gptel: Bump version 2023-12-21 18:11:48 -08:00
Karthik Chikmagalur
8973498378 gptel: Add minimal status indicator via mode-line-process
* gptel.el (gptel-update-destination, gptel-use-header-line,
gptel--update-status, gptel-mode): Improve status messaging when not
using the header-line.  When the user option
`gptel-use-header-line` (renamed from `gptel-update-destination`)
is set to nil, we use `mode-line-process` to report on in-progress
requests, and show the active LLM (model) otherwise.  Error
messages are sent to the echo area.  Close #9.

* README.org: Change `gptel-update-destination` to
`gptel-use-header-line` and tweak description.
2023-12-21 18:00:17 -08:00
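To opt into the new minimal indicator:

    ;; Report request status via mode-line-process instead of the header line
    (setopt gptel-use-header-line nil)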
Mark Stuart
4775ade6e0 gptel: add custom gptel-update-destination
README: Mention `gptel-update-destination` in README.

gptel.el (gptel-update-destination, gptel--update-status,
gptel-send, gptel--insert-response): New option
`gptel-update-destination` to control how gptel's status messages
are shown.  `gptel--update-status` replaces
`gptel--update-header-line`.  Replace calls to this function
elsewhere in gptel.el.

gptel-curl.el (gptel-abort, gptel-curl--stream-cleanup,
gptel-curl--stream-insert-response): Use `gptel--update-status` in
place of `gptel--update-header-line`.

gptel-transient.el (gptel--suffix-send): Use
`gptel--update-status` in place of `gptel--update-header-line`.
2023-12-21 17:48:36 -08:00
Karthik Chikmagalur
a202911009 gptel: Add post-stream hook, scroll commands
* gptel.el (gptel-auto-scroll, gptel-end-of-response,
gptel-post-response-hook, gptel-post-stream-hook): Add
`gptel-post-stream-hook` that runs after each text insertion when
streaming responses.  This can be used to, for instance,
auto-scroll the window as the response continues below the
viewport.  The utility function `gptel-auto-scroll` does this.
Provide a utility command `gptel-end-of-response`, which moves the
cursor to the end of the response when point is inside or before it.

* gptel-curl.el (gptel-curl--stream-insert-response): Run
`gptel-post-stream-hook` where required.

* README: Add FAQ, simplify structure, mention the new hooks and
scrolling/navigation options.
2023-12-21 16:09:57 -08:00
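A typical configuration this enables, using the hook names as of this commit (`gptel-post-response-hook` was later renamed):

    (add-hook 'gptel-post-stream-hook #'gptel-auto-scroll)
    (add-hook 'gptel-post-response-hook #'gptel-end-of-response)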
Karthik Chikmagalur
38095eaed5 gptel: Fix prompt collection bug + linting
* gptel.el: Update package description.

* gptel-gemini.el (gptel--request-data, gptel--parse-buffer): Add
model temperature to request correctly.

* gptel-ollama.el (gptel--parse-buffer): Ensure that newlines are
trimmed correctly even when `gptel-prompt-prefix-string` and
`gptel-response-prefix-string` are absent.  Fix formatting and
linter warnings.

* gptel-openai.el (gptel--parse-buffer): Ditto.
2023-12-20 15:40:56 -08:00
Karthik Chikmagalur
e105a52541 gptel: Update docstrings for prompt/response prefixes
README: Mention `gptel-response-prefix-alist`

gptel.el (gptel-prompt-prefix-alist, gptel-response-prefix-alist):
Improve docstring.
2023-12-15 09:42:37 -08:00
daedsidog
644e341244
Add multiline prefixes & AI response prefixes (#142)
gptel: Add customizable prompt/response prefixes

gptel.el (gptel-prompt-prefix-alist, gptel-response-prefix-alist,
gptel-prompt-prefix-string, gptel-response-prefix-string,
gptel--url-get-response): Add customizable response prefixes (per
major-mode) in `gptel-response-prefix-alist`.

Rename `gptel-prompt-string` -> `gptel-prompt-prefix-string`

The function `gptel-response-prefix-string` returns the prefix
string for the response in the current major-mode.

gptel-openai.el, gptel-ollama.el (gptel--parse-buffer): Remove the
prompt and response prefixes when creating prompt strings to send
to the LLM API.

gptel-curl.el (gptel-curl--stream-cleanup,
gptel-curl--stream-insert-response): Insert the response prefix
for the current major-mode before inserting the LLM API response.
2023-12-15 09:30:16 -08:00
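A sketch of customizing the new prefixes; the strings below are hypothetical values, not the package defaults:

    (setq gptel-prompt-prefix-alist
          '((markdown-mode . "### ")
            (org-mode . "*** ")))
    (setq gptel-response-prefix-alist
          '((markdown-mode . "**Response:** ")
            (org-mode . "")))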
Karim Aziiev
d5949ef428
gptel-curl: handle large Curl payloads with a temp file (#137)
gptel-curl.el (gptel-curl--get-args,
gptel-curl-file-size-threshold): Use temporary file for curl data.
Ensure curl uses a temporary file for binary data to prevent
issues with large payloads and special characters:

- Add a new defcustom `gptel-curl-file-size-threshold` to
determine when to use a temporary file for passing data to Curl.

- Use `--data-binary` with a temp file for data larger than the
specified threshold, improving handling of large data payloads in
GPTel queries.

- Reliably clean up temporary files created for Curl requests
exceeding the size threshold.  Add a function to
`gptel-post-response-hook` to delete the file post-Curl execution
and remove itself from the hook, preventing temporary file
accumulation.
2023-12-14 20:22:53 -08:00
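A rough sketch of the decision described above; the variable and hook names come from this commit message, while the helper function itself is hypothetical:

    (defun my/curl-data-args (data)
      "Return curl data arguments for DATA, using a temp file above the threshold."
      (if (<= (string-bytes data) gptel-curl-file-size-threshold)
          (list (format "--data=%s" data))
        (let ((temp-file (make-temp-file "gptel-curl-data" nil ".json" data)))
          ;; Clean up after the request; the hook function removes itself.
          (letrec ((cleanup (lambda ()
                              (delete-file temp-file)
                              (remove-hook 'gptel-post-response-hook cleanup))))
            (add-hook 'gptel-post-response-hook cleanup))
          (list (format "--data-binary=@%s" temp-file)))))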
Moritz
3e361323d5
Update available OpenAI GPT models to match API (#146)
gptel-transient.el (gptel--infix-model):
gptel.el (gptel-model, gptel--openai): Update gpt-4 models.
2023-12-07 18:21:01 -08:00
Karthik Chikmagalur
17a58d38e7 gptel: Fix bug in url-retrieve setup
* gptel.el (gptel--url-get-response): Correctly record the
gptel-backend at the time of the call to url-retrieve.
2023-11-12 18:11:25 -08:00
Karthik Chikmagalur
0109d0d1c0 gptel: API agnostic response error handling
* gptel.el (gptel--url-get-response, gptel--url-parse-response):

- When the query fails, the error message format (in the JSON)
differs between APIs.  Ultimately it may be required to dispatch
error handling via a generic function, but for now: try to make
the error handling API agnostic.

- Mention the backend name in the error message.  Pass the backend
to the (non-streaming response) parsers to be able to do this.

* gptel-curl.el (gptel-curl--stream-cleanup,
gptel-curl--parse-response):  Same changes.
2023-11-08 13:29:39 -08:00
Karthik Chikmagalur
3308449761 gptel: Fix prompt string handling in gptel-request
* gptel.el (gptel-request): When `gptel-request` is supplied a
string, it creates the full prompt plist according to the OpenAI
API regardless of the active backend.  Fix by inserting it into a
temp buffer and using the
cl-generic dispatch to parse the buffer instead.  This is a janky
solution but the best possible one without defining another
generic function just to handle prompt strings differently per API.
2023-11-08 12:45:30 -08:00
Karthik Chikmagalur
57a70c23cb gptel: Skip to end of word before sending
* gptel.el (gptel--at-word-end, gptel-send, gptel-request):
Include the word the cursor is on in the prompt, and don't break
it when inserting the response.  This is primarily useful for
evil-mode users who frequently end up one char before the end of a
word when they switch to normal-mode.

* gptel-transient.el (gptel-send): Same.  Also fix bug with
selecting an existing buffer to send the response to.
2023-11-07 21:19:25 -08:00
Karthik Chikmagalur
cee5893d79 gptel: Appease the byte compiler. 2023-11-07 20:36:37 -08:00
Karthik Chikmagalur
3c01477c37 gptel: api-key shenanigans
gptel.el (gptel--get-api-key, gptel, gptel-mode,
gptel-make-openai, gptel-api-key-from-auth-source): Handle models
that don't require an API key.

gptel-transient.el (gptel--suffix-system-message): Set backend
from buffer-local value when invoking, and handle API key
requirement better.
2023-11-07 20:36:37 -08:00
Karthik Chikmagalur
c97778d5a8 gptel: address byte-compile and checkdoc warnings
* gptel.el, gptel-transient.el, gptel-openai.el, gptel-ollama.el
2023-11-07 20:36:37 -08:00
Karthik Chikmagalur
aa50cbab70 gptel: Bump version 2023-10-29 00:34:39 -07:00
Karthik Chikmagalur
190d1d20e2 gptel: Update header line and package info description 2023-10-29 00:25:44 -07:00
Karthik Chikmagalur
6419e8f021 gptel: Add multi-llm support
README.org: Update README with new information and a multi-llm demo.

gptel.el (gptel-host, gptel--known-backends, gptel--api-key,
gptel--create-prompt, gptel--request-data, gptel--parse-buffer, gptel-request,
gptel--parse-response, gptel--openai, gptel--debug, gptel--restore-state,
gptel, gptel-backend):

Integrate multiple LLMs through the introduction of gptel-backends. Each backend
is composed of two pieces:

1. An instance of a cl-struct, containing connection, authentication and model
information.  See the cl-struct `gptel-backend` for details.  A separate
cl-struct type is defined for each supported backend (OpenAI, Azure, GPT4All and
Ollama) that inherits from the generic gptel-backend type.

2. cl-generic implementations of specific tasks, like gathering up and
formatting context (previous user queries and LLM responses), parsing responses
or response streams, etc.  The four tasks currently specialized this way are
carried out by `gptel--parse-buffer` and `gptel--request-data` (for constructing
the query) and `gptel--parse-response` and `gptel-curl--parse-stream` (for
parsing the response).  See their implementations for details.  Some effort has
been made to limit the number of times dispatching is done when reading
streaming responses.

When a backend is created, it is registered in the collection
`gptel--known-backends` and can be accessed by name later, such as from the
transient menu.

Only one of these backends is active at any time in a buffer, stored in the
buffer-local variable `gptel-backend`. Most messaging,
authentication, etc. takes the active backend into account,
although there might be some leftovers.

When using `gptel-request` or `gptel-send`, the active backend can be changed or
let-bound.

- Obsolete `gptel-host`
- Fix the rear-sticky property when restoring sessions from files.
- Document some variables (not user options), like `gptel--debug`

gptel-openai.el (gptel-backend, gptel-make-openai, gptel-make-azure,
gptel-make-gpt4all): This file (currently always loaded) sets up the generic
backend struct and includes constructors for creating OpenAI, GPT4All and Azure
backends.  They all use the same API so a single set of defgeneric
implementations suffices for all of them.

gptel-ollama.el (gptel-make-ollama): This file includes the cl-struct,
constructor and requisite defgeneric implementations for Ollama support.

gptel-transient.el (gptel-menu, gptel-provider-variable, gptel--infix-provider,
gptel-suffix-send):

- Provide access to all available LLM backends and models from `gptel-menu`.
- Adjust keybindings in gptel-menu: setting the model and query parameters is
  now bound to two char keybinds, while redirecting input and output is bound to
  single keys.
2023-10-28 23:57:47 -07:00
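An illustrative sketch of the resulting workflow; the constructor keywords and model name are assumptions based on this commit message and the README:

    ;; Register a backend; it is also added to `gptel--known-backends'
    ;; and can be selected by name from the transient menu.
    (defvar my/ollama-backend
      (gptel-make-ollama "Ollama"
        :host "localhost:11434"
        :stream t
        :models '("mistral:latest")))

    ;; Activate it in a buffer, or let-bind it around `gptel-request':
    (setq-local gptel-backend my/ollama-backend)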
Karthik Chikmagalur
61c0df5e19 gptel, gptel-curl: Make the gptel text-property non-sticky
gptel.el (gptel--insert-response):
gptel-curl.el (gptel-curl--stream-insert-response): Make the `gptel'
text-property rear-nonsticky so typing after it is recognized as part of the
user prompt.
2023-10-28 19:34:54 -07:00
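The mechanism in question, as a sketch (the bounds and the property value are hypothetical):

    (let ((beg (point-min)) (end (point)))   ;hypothetical response bounds
      ;; Mark a response region and keep the `gptel' property from
      ;; spreading to text typed immediately after it.
      (add-text-properties beg end
                           '(gptel response rear-nonsticky t)))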
Karthik Chikmagalur
62a6020302 gptel, gptel-curl: Allow protocol (https) to be set separately 2023-10-23 10:45:59 -07:00
Karthik Chikmagalur
ed0bfc9ed1 gptel: Offer suggestion when setting gptel-topic
gptel.el (gptel-set-topic): Offer a suggestion when setting a GPTEL_TOPIC
property for an Org heading.

Fix linting in docstring.
2023-10-22 11:50:41 -07:00
Karthik Chikmagalur
648fa228a1 gptel: Fix check for markdown-mode (#109)
* gptel.el (gptel-default-mode): Use `fboundp' instead of `featurep' to check if
markdown-mode is available, since the latter requires `markdown-mode' to be
already loaded.
2023-10-03 14:47:54 -07:00
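The distinction at issue:

    (featurep 'markdown-mode)  ;non-nil only after the library has been loaded
    (fboundp 'markdown-mode)   ;also non-nil when the command is merely autoloaded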
Karthik Chikmagalur
24add64455 gptel: Adjust how gptel--system-message is set
* gptel.el (gptel--system-message, gptel-directives): Try to make
gptel--system-message read from gptel-directives.  This doesn't yet work how
we need it to -- changing gptel-directives does not update
gptel--system-message.
2023-10-03 09:49:35 -07:00
Karthik Chikmagalur
c0ffce0849 gptel: Fix reading bounds in org files (#98)
* gptel.el (gptel--restore-state): When there is no "GPTEL_BOUNDS"
org property, `read' asks for stdin instead.  Fix by only calling
`read' when this property is non-nil.

Thanks to @Elilif for spotting this bug.
2023-08-05 17:41:35 -07:00
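A sketch of the guard; the property lookup shown here is an assumption about the surrounding code:

    (when-let ((bounds (org-entry-get (point-min) "GPTEL_BOUNDS")))
      ;; Only parse when the property exists; (read nil) would prompt for input.
      (read bounds))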
Karthik Chikmagalur
0f161a466b gptel: saving and restoring state for Markdown/Text
* gptel.el (gptel--save-state, gptel--restore-state,
gptel-temperature, gptel-model, gptel-max-tokens,
gptel-directives, gptel--always, gptel--button-buttonize,
gptel--system-message, gptel--bounds): Write gptel parameters as
file-local variables when saving chats in Markdown or text files.
The local variable gptel--bounds stores the locations of the
responses from the LLM. This is not a great solution, but the best
I can think of without adding more syntax to the document.

Chats can be restored by turning on `gptel-mode'.  One of the
problems with this approach is that if the buffer is modified
before `gptel-mode' is turned on, the state data is out of date.
Another problem is that this metadata block as printed in the
buffer can become quite long.  A better approach is needed.

Define helper functions `gptel--always' and
`gptel--button-buttonize' to work around Emacs 27.1 support.

* README.org: Mention saving and restoring chats where
appropriate.
2023-07-28 16:05:22 -07:00
Karthik Chikmagalur
e0a7898645 gptel: Add pre-response-hook
* gptel.el (gptel--insert-response, gptel-pre-response-hook): New
user option `gptel-pre-response-hook' that runs before the
response is inserted into the buffer.  This can be used to prepare
the buffer in some user-specified way for the response.

* gptel-curl.el (gptel-curl--stream-filter): Run
`gptel-pre-response-hook' before inserting streaming responses.
2023-07-25 16:03:22 -07:00
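For example, a hedged sketch assuming the hook runs with no arguments in the destination buffer:

    (add-hook 'gptel-pre-response-hook
              (lambda ()
                ;; Jump to the end of the buffer before the response arrives
                (goto-char (point-max))))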
Tianshu Wang
a660e13a8b
gptel, gptel-transient: Fix read temperature from minibuffer (#85)
gptel-transient.el (gptel--transient-read-variable): Use a custom transient infix reader.

gptel.el (gptel--request-data): Don't use `gptel--numberize'.
2023-07-20 21:33:00 -07:00
Karthik Chikmagalur
b92fc389d7 gptel: Reduce verbosity of gptel--save-state
* gptel.el (gptel--save-state): Only write `gptel-temperature' to
the file if it is different from the default value of the variable.
2023-07-20 14:14:18 -07:00
Karthik Chikmagalur
cc6c5e7321 gptel: saving and restoring state, and limiting context
* gptel.el (gptel-mode, gptel-set-topic, gptel--create-prompt,
gptel-set-topic, gptel--get-topic-start, gptel--get-bounds,
gptel--save-state, gptel--restore-state): Add support for saving
and restoring gptel state for Org buffers.  Support for Markdown
buffers is not yet implemented.

`gptel--save-state' and `gptel--restore-state' save and restore
state using Org properties.  With `gptel-mode' active, these are
run automatically when saving the buffer or enabling `gptel-mode'
respectively.

The command `gptel-set-topic' can be used to set a topic for the
current heading, which is stored as an Org property.  The topic
name is unused (as of now), but the presence of this property
limits the text context sent to ChatGPT to the heading text up to
the cursor position.

Autoload `gptel-mode' since the user may want to enable this (to
restore sessions) without having loaded gptel.el.
2023-07-19 20:41:56 -07:00
Neil Fulwiler
4356f6fbec
gptel: correct system message with gptel-request
gptel.el (gptel-request): when using `gptel-request', let-bind
`gptel--system-message' around call to `gptel--create-prompt' when
the prompt argument is null.  This allows `gptel-request' to be
used to send the buffer as a prompt with a different system
message from `gptel--system-message' for that buffer.

---------

Co-authored-by: Neil Fulwiler <neil@fulwiler.me>
2023-07-13 15:31:18 -07:00
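The usage this enables, roughly; the keyword names are assumed from `gptel-request`'s interface:

    ;; Send the current buffer as the prompt, with a one-off system message
    (gptel-request nil
      :system "You are a terse assistant."
      :callback (lambda (response _info)
                  (message "Received: %s" response)))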
Karthik Chikmagalur
9c4af204a3 gptel-transient: Add crowdsourced prompts
* gptel.el (gptel-crowdsourced-prompts-file): This file holds
prompts compiled by the community.

* gptel-transient.el (gptel--read-crowdsourced-prompt,
gptel--crowdsourced-prompts, gptel-system-prompt--setup,
gptel--crowdsourced-prompts-url): Fetch crowdsourced system
prompts from https://github.com/f/awesome-chatgpt-prompts and pick
one to use from the transient menu.
2023-07-10 02:36:28 -07:00
Karthik Chikmagalur
bb8b37d8c0 gptel, gptel-curl: Fix byte-compile warnings
gptel.el (gptel--request-data): Also use :json-false to encode nil in the http
request.
2023-06-23 16:44:16 -07:00
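The encoding concern being addressed, in brief:

    (require 'json)
    (json-encode '(:stream :json-false))  ;=> "{\"stream\":false}"
    (json-encode '(:stream nil))          ;=> "{\"stream\":null}", not false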
Filipe Guerreiro
3d98ce8eee
gptel: Add new turbo 0613 models (#77)
gptel.el (gptel-model): Update choices for the OpenAI model.  Add the 16k and 32k token versions of the gpt-3.5 and gpt-4 models, respectively.
2023-06-23 13:22:31 -07:00
Marcus Kammer
e6df1a5e33
gptel: Use :require for auth-source-search (#78)
gptel.el (gptel-api-key-from-auth-source): Pass the :require parameter to auth-source-search; it is needed to read the key from .authinfo.gpg.
2023-06-18 12:10:43 -07:00
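The shape of the underlying call (the host and user values below are placeholders):

    (auth-source-search :host "api.openai.com" :user "apikey"
                        :require '(:secret) :max 1)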
Palace
20af9a8b99 gptel: curl proxy support (#69)
* gptel.el (gptel-proxy): Support a proxy when interacting with the
OpenAI endpoint. In many organizations the OpenAI API can only be
accessed via a proxy, which Curl supports easily.

gptel-curl.el (gptel-curl--get-args): tidy up `gptel-curl--get-args'.
---------

Co-authored-by: PalaceChan <XXX>
2023-06-05 21:23:21 -07:00
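For example (the proxy URL is a placeholder; gptel hands the value to Curl):

    (setq gptel-proxy "socks5://localhost:8080")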
Tianshu Wang
e6a1468bd2
gptel: Make API host configurable (#67)
* Make API host configurable

* Update README.org
2023-05-31 20:24:13 -07:00