Commit graph

267 commits

Karthik Chikmagalur
48047c0600 gptel-transient: Improve system-message edit buffer
* gptel-transient.el (gptel--suffix-system-message):  Use a header
line for better messaging when editing the system prompt.
2023-12-29 15:28:37 -08:00
Karthik Chikmagalur
e5357383ce gptel: Appease byte-compiler and linter
* gptel-transient.el:

* gptel-openai.el:

* gptel-gemini.el:
2023-12-29 13:19:23 -08:00
Karthik Chikmagalur
1e31f550de gptel: Declare compat as explicit dependency
* gptel.el (gptel--always, gptel--button-buttonize): Currently
gptel depends on the Compat library transitively via transient.el.
Declare it as an explicit dependency so we can get rid of special
case definitions and simplify.  This also enables us to use Emacs
28 and 29 conveniences freely in the code.
2023-12-29 13:06:25 -08:00
Karthik Chikmagalur
85bd47cb4c README: Add support for llama.cpp
* README.org: The llama.cpp server supports OpenAI's API, so we
can reuse it.  Closes #121.
2023-12-28 22:27:53 -08:00
Karthik Chikmagalur
f571323174 gptel-gemini: Simulate system-message for gemini
* gptel-gemini.el (gptel--parse-buffer): The Gemini API does not
provide an explicit system message parameter.  In the interest of
providing a uniform interface, simulate this in gptel by
prepending the first user message with `gptel--system-message`.
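
A minimal sketch of the approach, assuming the prompts are collected
as a list of message strings (hypothetical helper; the real logic
lives inside `gptel--parse-buffer`):

(defun my/gemini-prepend-system-message (prompts system-message)
  "Prepend SYSTEM-MESSAGE to the first user message in PROMPTS."
  ;; PROMPTS is assumed to be a list of user/assistant message strings.
  (when (and system-message prompts)
    (setcar prompts (concat system-message "\n\n" (car prompts))))
  prompts)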
2023-12-27 22:35:41 -08:00
Karthik Chikmagalur
60cb406567 gptel: Improve documentation of gptel-send
* gptel.el (gptel-send): Update the docstring to describe the
behavior accurately.
2023-12-27 12:34:47 -08:00
Karthik Chikmagalur
0690c8b6a9 gptel-transient: Exit transient when writing directive
* gptel-transient.el (gptel--suffix-system-message): Explicitly
set the :transient slot of the system-message editor commands to
`transient--do-exit` (#157).
2023-12-27 08:35:06 -08:00
Karthik Chikmagalur
32dd463bd6 README: Mention YouTube demo
* gptel.el: Change blurb

* README.org: Mention YouTube demo
2023-12-27 08:35:06 -08:00
Karthik Chikmagalur
9126bed43f gptel: Set window when doing auto-scrolling
* gptel.el (gptel-auto-scroll):  After calling `gptel-send`, the
window focus could have changed as the response is received.  Set
the window correctly when running `gptel-auto-scroll` to ensure
the correct buffer is scrolled.
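
A hedged sketch of the idea: find the window showing the chat buffer
and scroll that one, instead of whichever window happens to be
selected (helper name hypothetical):

(defun my/gptel-scroll-to-end (buffer)
  "Scroll the window displaying BUFFER to its end, if it is visible."
  (let ((win (get-buffer-window buffer 'visible)))
    (when win
      (with-selected-window win
        (goto-char (point-max))))))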
2023-12-27 08:35:06 -08:00
Karthik Chikmagalur
c3ca4fd0a0 gptel-transient: Set suffix-state explicitly for directives
* gptel-transient.el (gptel-system-prompt--setup): In Transient
v0.5 and up, some suffixes defined dynamically using
`gptel-system-prompt--setup' are being treated as infix commands,
see #140.  Set the `:transient' key of these suffixes to
`transient--do-return' explicitly to avoid this problem.  TODO:
This fix works, but it's not clear why it is needed; this
needs some investigation.
2023-12-25 14:03:43 -08:00
Karthik Chikmagalur
8d3e08faa8 gptel: Don't use called-interactively-p
* gptel.el (gptel): Switch from the fragile
`called-interactively-p` to using the interactive form to check if
`gptel` is called interactively.
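
The pattern being switched to, in sketch form (illustrative, not
gptel's exact code): the interactive spec passes a flag that stays
nil in non-interactive calls.

(defun my/command (&optional interactivep)
  "Do something, behaving differently when called interactively."
  (interactive (list t))
  (when interactivep
    (message "Called interactively")))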
2023-12-22 16:49:04 -08:00
Karthik Chikmagalur
2e92c0303c gptel: gptel-backend-url can accept functions
* gptel.el (gptel--url-get-response): If the backend-url is a
function, call it to find the full url to query.

* gptel-gemini.el: Gemini uses different URLs for
streaming/oneshot responses.  Set the backend-url to a function to
account for the value of gptel-stream.  This is also safer than
before as the API key is not stored as part of a static url string
in memory. Fix #153.

* gptel-curl.el (gptel-curl--get-args): If the backend-url is a
function, call it to find the full url to query.
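
A minimal sketch of the dispatch, assuming a hypothetical helper
(gptel performs this check inline in the functions above):

(defun my/gptel-resolve-url (backend-url)
  "Return the URL to query: BACKEND-URL, or its result if it is a function."
  (if (functionp backend-url)
      (funcall backend-url)
    backend-url))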
2023-12-22 16:25:32 -08:00
Karthik Chikmagalur
4d01dddf7d gptel, gptel-curl: Address checkdoc warnings
* gptel.el (gptel--url-parse-response, gptel-max-tokens,
gptel-use-header-line): Address checkdoc warnings.

* gptel-curl.el (gptel-curl--parse-response, gptel-abort):
Address checkdoc warnings.

* gptel-gemini.el (gptel-make-gemini): Address checkdoc warnings.
2023-12-22 16:23:50 -08:00
Karthik Chikmagalur
7271d0e408 gptel: Try to save/restore gptel-backend in files
* gptel.el (gptel--save-state, gptel--restore-state,
gptel--backend-name, gptel--restore-backend): Try to save and
restore the gptel backend when persisting chat sessions in files.
The local variable `gptel--backend-name` holds the backend name in
the file across Emacs sessions.  The function
`gptel--restore-backend` tries to set this backend and messages
the user if this is not possible.
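
A rough sketch of the restore step, assuming `gptel--known-backends`
is an alist of name/backend pairs (helper name hypothetical):

(defun my/gptel-restore-backend (name)
  "Make the backend registered under NAME buffer-local, or warn the user."
  (let ((backend (alist-get name gptel--known-backends nil nil #'equal)))
    (if backend
        (setq-local gptel-backend backend)
      (message "Could not activate gptel backend \"%s\"" name))))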
2023-12-22 16:23:50 -08:00
Karthik Chikmagalur
c9d362a3e9 gptel-transient: Set model when redirecting to new buffer
* gptel-transient.el (gptel--suffix-send): When creating a new
session to redirect the response to, ensure that gptel-model is
set correctly in that buffer.
2023-12-21 21:25:05 -08:00
Karthik Chikmagalur
ce75072f9d gptel: Bump version 2023-12-21 18:11:48 -08:00
Karthik Chikmagalur
8973498378 gptel: Add minimal status indicator via mode-line-process
* gptel.el (gptel-update-destination, gptel-use-header-line,
gptel--update-status, gptel-mode): Improve status messaging when not
using the header-line.  When the user option
`gptel-use-header-line` (renamed from `gptel-update-destination`)
is set to nil, we use `mode-line-process` to report on in-progress
requests, and show the active LLM (model) otherwise.  Error
messages are sent to the echo area.  Close #9.

* README.org: Change `gptel-update-destination` to
`gptel-use-header-line` and tweak description.
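
A hedged sketch of the mode-line path (simplified; the real
`gptel--update-status` also handles the header line and echo-area
errors):

(defun my/gptel-mode-line-status (msg)
  "Show MSG as the in-progress indicator via `mode-line-process'."
  (setq mode-line-process (and msg (concat " " msg)))
  (force-mode-line-update))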
2023-12-21 18:00:17 -08:00
Mark Stuart
4775ade6e0 gptel: add custom gptel-update-destination
README: Mention `gptel-update-destination` in README.

gptel.el (gptel-update-destination, gptel--update-status,
gptel-send, gptel--insert-response): New option
`gptel-update-destination` to control how gptel's status messages
are shown.  `gptel--update-status` replaces
`gptel--update-header-line`.  Replace calls to this function
elsewhere in gptel.el.

gptel-curl.el (gptel-abort, gptel-curl--stream-cleanup,
gptel-curl--stream-insert-response): Use `gptel--update-status` in
place of `gptel--update-header-line`.

gptel-transient.el (gptel--suffix-send): Use
`gptel--update-status` in place of `gptel--update-header-line`.
2023-12-21 17:48:36 -08:00
Karthik Chikmagalur
1a554785e8 gptel-curl: Remove redundant calls to insert-response
* gptel-curl.el (gptel-curl--sentinel, gptel-curl--stream-filter):
Remove redundant calls to `gptel-curl--stream-insert-response`
when the response being inserted is nil or a blank string.  This
should be a modest boost to streaming performance.
2023-12-21 16:09:57 -08:00
Karthik Chikmagalur
a202911009 gptel: Add post-stream hook, scroll commands
* gptel.el (gptel-auto-scroll, gptel-end-of-response,
gptel-post-response-hook, gptel-post-stream-hook): Add
`gptel-post-stream-hook` that runs after each text insertion when
streaming responses.  This can be used to, for instance,
auto-scroll the window as the response continues below the
viewport.  The utility function `gptel-auto-scroll` does this.
Provide a utility command `gptel-end-of-response`, which moves the
cursor to the end of the response when it is in or before it.

* gptel-curl.el (gptel-curl--stream-insert-response): Run
`gptel-post-stream-hook` where required.

* README: Add FAQ, simplify structure, mention the new hooks and
scrolling/navigation options.
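
Possible user configuration built on these hooks and commands
(enabling them is left to the user):

(add-hook 'gptel-post-stream-hook #'gptel-auto-scroll)
(add-hook 'gptel-post-response-hook #'gptel-end-of-response)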
2023-12-21 16:09:57 -08:00
Karthik Chikmagalur
ddd69cbbcf gptel-curl: Replace Curl timeout with speed-time
* gptel-curl.el (gptel-curl--common-args): Following the
discussion in #143, use "-y300 -Y1" as Curl arguments instead of
specifying the timeout.  Now the connection stays open unless less
than one byte is transferred over a 300-second window.
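
For reference, a sketch of the new flags in isolation (not gptel's
full Curl argument list):

(defvar my/gptel-curl-speed-args '("-y300" "-Y1")
  "Long form: --speed-time 300 --speed-limit 1.
Abort the request only if the transfer rate stays below 1 byte/s
for 300 seconds.")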
2023-12-20 15:41:12 -08:00
Karthik Chikmagalur
38095eaed5 gptel: Fix prompt collection bug + linting
* gptel.el: Update package description.

* gptel-gemini.el (gptel--request-data, gptel--parse-buffer): Add
model temperature to request correctly.

* gptel-ollama.el (gptel--parse-buffer): Ensure that newlines are
trimmed correctly even when `gptel-prompt-prefix-string` and
`gptel-response-prefix-string` are absent.  Fix formatting and
linter warnings.

* gptel-openai.el (gptel--parse-buffer): Ditto.
2023-12-20 15:40:56 -08:00
Karthik Chikmagalur
3dd00a7457 gptel-gemini: Add streaming responses, simplify configuration
* gptel-gemini.el (gptel-make-gemini, gptel-curl--parse-stream,
gptel--request-data, gptel--parse-buffer): Enable streaming for
the Gemini backend, as well as the temperature and max tokens
parameters when making requests.  Simplify the user configuration
required.

* README.org: Fix formatting errors.  Update the configuration
instructions for Gemini.

This closes #149.
2023-12-20 15:17:14 -08:00
mrdylanyin
84cd7bf5a4 gptel-gemini: Add Gemini support
gptel-gemini.el (gptel--parse-response, gptel--request-data,
gptel--parse-buffer, gptel-make-gemini): Add new file and support
for the Google Gemini LLM API.  Streaming and setting model
parameters (temperature, max tokens) are not yet supported.

README: Add instructions for Gemini.
2023-12-20 13:55:43 -08:00
Karthik Chikmagalur
0ea3c7fb15 gptel-transient: Improve suffix message editor
* gptel-transient.el (gptel--suffix-system-message):  Improve the
editing prompt for custom suffixes.  Unset the "C-c C-c" and "C-c
C-k" keys from text-mode.  FIXME: This is fragile, instead add the
keymap with these keys as a sticky text-property over the text.
2023-12-16 16:00:09 -08:00
Karthik Chikmagalur
e105a52541 gptel: Update docstrings for prompt/response prefixes
README: Mention `gptel-response-prefix-alist`

gptel.el (gptel-prompt-prefix-alist, gptel-response-prefix-alist):
Improve docstring.
2023-12-15 09:42:37 -08:00
daedsidog
644e341244
Add multiline prefixes & AI response prefixes (#142)
gptel: Add customizable prompt/response prefixes

gptel.el (gptel-prompt-prefix-alist, gptel-response-prefix-alist,
gptel-prompt-prefix-string, gptel-response-prefix-string,
gptel--url-get-response): Add customizable response prefixes (per
major-mode) in `gptel-response-prefix-alist`.

Rename `gptel-prompt-string` -> `gptel-prompt-prefix-string`

The function `gptel-response-prefix-string` returns the prefix
string for the response in the current major-mode.

gptel-openai.el, gptel-ollama.el (gptel--parse-buffer): Remove the
prompt and response prefixes when creating prompt strings to send
to the LLM API.

gptel-curl.el (gptel-curl--stream-cleanup,
gptel-curl--stream-insert-response): Insert the response prefix
for the current major-mode before inserting the LLM API response.
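
Possible customization of the new options (values illustrative; both
alists map major modes to prefix strings):

(setq gptel-prompt-prefix-alist '((org-mode . "*** ")
                                  (markdown-mode . "### "))
      gptel-response-prefix-alist '((org-mode . "@assistant:\n")
                                    (markdown-mode . "**Response**:\n")))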
2023-12-15 09:30:16 -08:00
Karim Aziiev
d5949ef428
gptel-curl: handle large Curl payloads with a temp file (#137)
gptel-curl.el (gptel-curl--get-args,
gptel-curl-file-size-threshold): Use temporary file for curl data.
Ensure curl uses a temporary file for binary data to prevent
issues with large payloads and special characters:

- Add a new defcustom `gptel-curl-file-size-threshold` to
determine when to use a temporary file for passing data to Curl.

- Use `--data-binary` with a temp file for data larger than the
specified threshold, improving handling of large data payloads in
GPTel queries.

- Reliably clean up temporary files created for Curl requests
exceeding the size threshold.  Add a function to
`gptel-post-response-hook` to delete the file post-Curl execution
and remove itself from the hook, preventing temporary file
accumulation.
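
A minimal sketch of the threshold check (hypothetical helper; the
real logic is in `gptel-curl--get-args`, which also arranges the
cleanup described above):

(defun my/curl-data-args (data)
  "Return Curl data arguments for DATA, using a temp file for large payloads."
  (if (> (string-bytes data) gptel-curl-file-size-threshold)
      (let ((file (make-temp-file "gptel-curl-data" nil ".json" data)))
        (list (format "--data-binary=@%s" file)))
    (list (format "-d%s" data))))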
2023-12-14 20:22:53 -08:00
Fangyuan
15404f639d
README: Update instructions for Azure (#147) 2023-12-14 19:53:57 -08:00
Karthik Chikmagalur
5c3b26aeec gptel-curl: Tweak Curl arguments for windows
gptel-curl.el (gptel-curl--common-args, gptel-curl--get-args):
Don't use compression with Curl on Windows, since it seems to
be generally not supported. Fix #90.
2023-12-09 19:34:29 -08:00
Moritz
3e361323d5
Update available OpenAI GPT models to match API (#146)
gptel-transient.el (gptel--infix-model):
gptel.el (gptel-model, gptel--openai): Update gpt-4 models.
2023-12-07 18:21:01 -08:00
Karthik Chikmagalur
de6d8089cd gptel-transient: Fix system-message setting function
gptel-transient.el (gptel--suffix-system-message): Removing the
`(setf (buffer-local-value ...))` construct (as instructed by
the byte compiler) introduced a bug where custom system messages
were set from the wrong buffer.  Handle this correctly to fix #138
and possibly #140.
2023-11-20 11:25:12 -08:00
Karthik Chikmagalur
17a58d38e7 gptel: Fix bug in url-retrieve setup
* gptel.el (gptel--url-get-response): Correctly record the
gptel-backend at the time of the call to url-retrieve.
2023-11-12 18:11:25 -08:00
Karthik Chikmagalur
0109d0d1c0 gptel: API agnostic response error handling
* gptel.el (gptel--url-get-response, gptel--url-parse-response):

- When the query fails, the error message format (in the JSON)
differs between APIs.  Ultimately it may be required to dispatch
error handling via a generic function, but for now: try to make
the error handling API agnostic.

- Mention the backend name in the error message.  Pass the backend
to the (non-streaming response) parsers to be able to do this.

* gptel-curl.el (gptel-curl--stream-cleanup,
gptel-curl--parse-response):  Same changes.
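
A hedged sketch of what API-agnostic error extraction can look like
(field names illustrative; the parsed JSON shape differs per
backend):

(defun my/gptel-error-message (response backend-name)
  "Return a readable error string from RESPONSE, a parsed JSON plist."
  (let ((err (plist-get response :error)))
    (format "%s error: %s" backend-name
            (cond ((stringp err) err)
                  ((plist-get err :message))
                  (t "unknown error")))))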
2023-11-08 13:29:39 -08:00
Karthik Chikmagalur
3308449761 gptel: Fix prompt string handling in gptel-request
* gptel.el (gptel-request): When `gptel-request` is supplied a
string, it creates the full prompt plist according to the OpenAI
API regardless of the active backend.  Fix by inserting the string
into a temp buffer and using the cl-generic dispatch to parse the
buffer instead.  This is a janky
solution but the best possible one without defining another
generic function just to handle prompt strings differently per API.
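
A rough sketch of the temp-buffer trick (the helper and the
arguments passed to `gptel--parse-buffer` are illustrative):

(defun my/string-to-prompts (prompt-string backend)
  "Convert PROMPT-STRING into BACKEND's prompt format via a temp buffer."
  (with-temp-buffer
    (insert prompt-string)
    ;; Reuse the cl-generic parser that already handles chat buffers:
    (gptel--parse-buffer backend 1)))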
2023-11-08 12:45:30 -08:00
Karthik Chikmagalur
66d2bafad6 gptel-ollama: Fix buffer parsing
* gptel-ollama.el (gptel--parse-buffer): The prompt construction
for Ollama fails when starting from (point-min).  Fix by checking
if a valid text-property match object is found in the parsing.
2023-11-07 22:37:59 -08:00
Karthik Chikmagalur
57a70c23cb gptel: Skip to end of word before sending
* gptel.el (gptel--at-word-end, gptel-send, gptel-request):
Include the word the cursor is on in the prompt, and don't break
it when inserting the response.  This is primarily useful for
evil-mode users who frequently end up one char before the end of a
word when they switch to normal-mode.

* gptel-transient.el (gptel-send): Same.  Also fix bug with
selecting an existing buffer to send the response to.
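
A minimal sketch of the word-end adjustment (hypothetical; gptel's
`gptel--at-word-end` differs in detail):

(defun my/end-of-word-position (pos)
  "Return the position at the end of the word at POS, or POS if not in a word."
  (save-excursion
    (goto-char pos)
    (skip-syntax-forward "w")
    (point)))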
2023-11-07 21:19:25 -08:00
Karthik Chikmagalur
cee5893d79 gptel: Appease the byte compiler. 2023-11-07 20:36:37 -08:00
Karthik Chikmagalur
3c01477c37 gptel: api-key shenanigans
gptel.el (gptel--get-api-key, gptel, gptel-mode,
gptel-make-openai, gptel-api-key-from-auth-source): Handle models
that don't require an API key.

gptel-transient.el (gptel--suffix-system-message): Set backend
from buffer-local value when invoking, and handle API key
requirement better.
2023-11-07 20:36:37 -08:00
Nick Anderson
ec0e461b35 gptel-curl: Increased curl timeout (#127)
gptel-curl.el (gptel-curl--get-args): Increase curl timeout.

Often local LLMs will offload a query to CPU if there is not enough VRAM or in
the case of an unsupported GPU. When a query is offloaded to the CPU, responses
can be significantly slower. If curl times out early, the user will not get the
response from the LLM back in Emacs.

This change increases the timeout for curl from 60s to 300s to make gptel usable
in slower environments.

Closes #125
2023-11-07 20:36:37 -08:00
Karthik Chikmagalur
c97778d5a8 gptel: address byte-compile and checkdoc warnings
* gptel.el, gptel-transient.el, gptel-openai.el, gptel-ollama.el
2023-11-07 20:36:37 -08:00
Karthik Chikmagalur
50a2498259 README: Tweak instructions for local LLMs, mention #120 2023-11-07 20:36:37 -08:00
Karthik Chikmagalur
63027083cd README: Update additional customization section 2023-10-29 14:24:39 -07:00
Karthik Chikmagalur
6af89254b7 README: Document breaking changes (mainly gptel-host deprecation) 2023-10-29 09:51:17 -07:00
Karthik Chikmagalur
aa50cbab70 gptel: Bump version 2023-10-29 00:34:39 -07:00
Karthik Chikmagalur
1434bbac7b gptel-ollama, gptel-openai: Add example of backend creation
README: Fix error with Ollama backend instructions
2023-10-29 00:31:56 -07:00
Karthik Chikmagalur
190d1d20e2 gptel: Update header line and package info description 2023-10-29 00:25:44 -07:00
Karthik Chikmagalur
6419e8f021 gptel: Add multi-llm support
README.org: Update README with new information and a multi-llm demo.

gptel.el (gptel-host, gptel--known-backends, gptel--api-key,
gptel--create-prompt, gptel--request-data, gptel--parse-buffer, gptel-request,
gptel--parse-response, gptel--openai, gptel--debug, gptel--restore-state,
gptel, gptel-backend):

Integrate multiple LLMs through the introduction of gptel-backends. Each backend
is composed of two pieces:

1. An instance of a cl-struct, containing connection, authentication and model
information.  See the cl-struct `gptel-backend` for details.  A separate
cl-struct type is defined for each supported backend (OpenAI, Azure, GPT4All and
Ollama) that inherits from the generic gptel-backend type.

2. cl-generic implementations of specific tasks, like gathering up and
formatting context (previous user queries and LLM responses), parsing responses
or response streams, etc.  The four tasks currently specialized this way are
carried out by `gptel--parse-buffer` and `gptel--request-data` (for constructing
the query) and `gptel--parse-response` and `gptel-curl--parse-stream` (for
parsing the response).  See their implementations for details.  Some effort has
been made to limit the number of times dispatching is done when reading
streaming responses.

When a backend is created, it is registered in the collection
`gptel--known-backends` and can be accessed by name later, such as from the
transient menu.

Only one of these backends is active at any time in a buffer, stored in the
buffer-local variable `gptel-backend`. Most messaging, authentication, etc.,
accounts for the active backend, although there might be some leftovers.

When using `gptel-request` or `gptel-send`, the active backend can be changed or
let-bound.

- Obsolete `gptel-host`
- Fix the rear-sticky property when restoring sessions from files.
- Document some variables (not user options), like `gptel--debug`

gptel-openai.el (gptel-backend, gptel-make-openai, gptel-make-azure,
gptel-make-gpt4all): This file (currently always loaded) sets up the generic
backend struct and includes constructors for creating OpenAI, GPT4All and Azure
backends.  They all use the same API so a single set of defgeneric
implementations suffices for all of them.

gptel-ollama.el (gptel-make-ollama): This file includes the cl-struct,
constructor and requisite defgeneric implementations for Ollama support.

gptel-transient.el (gptel-menu, gptel-provider-variable, gptel--infix-provider,
gptel-suffix-send):

- Provide access to all available LLM backends and models from `gptel-menu`.
- Adjust keybindings in gptel-menu: setting the model and query parameters is
  now bound to two char keybinds, while redirecting input and output is bound to
  single keys.
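
Possible backend setup using the constructors described above
(keyword arguments are illustrative; see each constructor's
docstring for the exact signature):

;; Register an Ollama backend and make it the default for new buffers:
(setq-default gptel-backend
              (gptel-make-ollama "Ollama"
                :host "localhost:11434"
                :stream t
                :models '("mistral:latest")))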
2023-10-28 23:57:47 -07:00
Karthik Chikmagalur
61c0df5e19 gptel, gptel-curl: Make the gptel text-property non-sticky
gptel.el (gptel--insert-response):
gptel-curl.el (gptel-curl--stream-insert-response): Make the `gptel'
text-property rear-nonsticky so typing after it is recognized as part of the
user prompt.
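
A hedged sketch of the property change (helper hypothetical; the
value stored under the `gptel' property is illustrative):

(defun my/insert-llm-response (response)
  "Insert RESPONSE marked as LLM output with a rear-nonsticky property."
  (insert (propertize response
                      'gptel 'response
                      'rear-nonsticky '(gptel))))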
2023-10-28 19:34:54 -07:00
Karthik Chikmagalur
644fc1de2f gptel-transient: Handle empty input when setting temperature 2023-10-24 16:26:07 -07:00