* gptel-curl.el (gptel-curl--sentinel, gptel-curl--stream-filter):
Remove redundant calls to `gptel-curl--stream-insert-response`
when the response being inserted is nil or a blank string. This
should be a modest boost to streaming performance.
* gptel.el (gptel-auto-scroll, gptel-end-of-response,
gptel-post-response-hook, gptel-post-stream-hook): Add
`gptel-post-stream-hook` that runs after each text insertion when
streaming responses. This can be used to, for instance,
auto-scroll the window as the response continues below the
viewport. The utility function `gptel-auto-scroll` does this.
Provide a utility command `gptel-end-of-response` that moves the
cursor to the end of the response when the cursor is inside or
before the response.
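For example, the following configuration (a sketch) keeps the
window scrolled during streaming and moves point to the end of the
response when it finishes:

  ;; Sketch: wire up the new hooks.
  (add-hook 'gptel-post-stream-hook #'gptel-auto-scroll)
  (add-hook 'gptel-post-response-hook #'gptel-end-of-response)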
* gptel-curl.el (gptel-curl--stream-insert-response): Run
`gptel-post-stream-hook` where required.
* README: Add FAQ, simplify structure, mention the new hooks and
scrolling/navigation options.
* gptel-curl.el (gptel-curl--common-args): Following the
discussion in #143, use "-y300 -Y1" as Curl arguments instead of
specifying a fixed timeout. The connection now stays open unless
the transfer rate drops below 1 byte per second for 300
consecutive seconds.
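For reference, a minimal sketch of what these flags do (the
variable name here is illustrative, not part of gptel):

  ;; -Y1 is curl's --speed-limit (bytes/sec) and -y300 its
  ;; --speed-time (seconds): abort only when the transfer averages
  ;; below 1 byte/sec for 300 consecutive seconds.
  (defvar my-gptel-curl-keepalive-args '("-y300" "-Y1")
    "Low-speed abort flags used in place of a fixed timeout.")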
* gptel.el: Update package description.
* gptel-gemini.el (gptel--request-data, gptel--parse-buffer): Add
the model temperature to the request correctly.
* gptel-ollama.el (gptel--parse-buffer): Ensure that newlines are
trimmed correctly even when `gptel-prompt-prefix-string` and
`gptel-response-prefix-string` are absent. Fix formatting and
linter warnings.
* gptel-openai.el (gptel--parse-buffer): Ditto.
* gptel-gemini.el (gptel-make-gemini, gptel-curl--parse-stream,
gptel--request-data, gptel--parse-buffer): Enable streaming for
the Gemini backend, as well as the temperature and max tokens
parameters when making requests. Simplify the user configuration
required.
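The simplified setup now looks something like this (a sketch; the
key and model name are placeholders):

  (setq-default gptel-model "gemini-pro"
                gptel-backend (gptel-make-gemini "Gemini"
                                :key "YOUR_GEMINI_API_KEY"
                                :stream t))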
* README.org: Fix formatting errors. Update the configuration
instructions for Gemini.
This closes #149.
gptel-gemini.el (gptel--parse-response, gptel--request-data,
gptel--parse-buffer, gptel-make-gemini): Add new file and support
for the Google Gemini LLM API. Streaming and setting model
parameters (temperature, max tokens) are not yet supported.
README: Add instructions for Gemini.
* gptel-transient.el (gptel--suffix-system-message): Improve the
editing prompt for custom suffixes. Unset the "C-c C-c" and "C-c
C-k" keys from text-mode. FIXME: This is fragile, instead add the
keymap with these keys as a sticky text-property over the text.
gptel: Add customizable prompt/response prefixes
gptel.el (gptel-prompt-prefix-alist, gptel-response-prefix-alist,
gptel-prompt-prefix-string, gptel-response-prefix-string,
gptel--url-get-response): Add customizable response prefixes (per
major-mode) in `gptel-response-prefix-alist`.
Rename `gptel-prompt-string` -> `gptel-prompt-prefix-string`
The function `gptel-response-prefix-string` returns the prefix
string for the response in the current major-mode.
gptel-openai.el, gptel-ollama.el (gptel--parse-buffer): Remove the
prompt and response prefixes when creating prompt strings to send
to the LLM API.
gptel-curl.el (gptel-curl--stream-cleanup,
gptel-curl--stream-insert-response): Insert the response prefix
for the current major-mode before inserting the LLM API response.
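For example, to customize both prefixes in Org buffers (a sketch;
the prefix strings are placeholders):

  (setf (alist-get 'org-mode gptel-prompt-prefix-alist) "@user\n")
  (setf (alist-get 'org-mode gptel-response-prefix-alist) "@assistant\n")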
gptel-curl.el (gptel-curl--get-args,
gptel-curl-file-size-threshold): Use temporary file for curl data.
Ensure curl uses a temporary file for binary data to prevent
issues with large payloads and special characters:
- Add a new defcustom `gptel-curl-file-size-threshold` to
determine when to use a temporary file for passing data to Curl
(see the example below).
- Use `--data-binary` with a temp file for data larger than the
specified threshold, improving handling of large data payloads in
GPTel queries.
- Reliably clean up temporary files created for Curl requests
exceeding the size threshold. Add a function to
`gptel-post-response-hook` to delete the file post-Curl execution
and remove itself from the hook, preventing temporary file
accumulation.
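For example (the threshold value here is illustrative, not the
default):

  ;; Send payloads larger than ~50kB to Curl via a temporary file.
  (setq gptel-curl-file-size-threshold 50000)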
gptel-curl.el (gptel-curl--common-args, gptel-curl--get-args):
Don't use compression with Curl on Windows, since it seems to
be generally not supported. Fix #90.
gptel-transient.el (gptel--suffix-system-message): Removing the
`(setf (buffer-local-value ...))` construct (as instructed by the
byte compiler) introduced a bug where custom system messages were
set from the wrong buffer. Handle this correctly to fix #138 and
possibly #140.
* gptel.el (gptel--url-get-response, gptel--url-parse-response):
- When the query fails, the error message format (in the JSON)
differs between APIs. Ultimately it may be required to dispatch
error handling via a generic function, but for now, try to make
the error handling API-agnostic.
- Mention the backend name in the error message. Pass the backend
to the (non-streaming response) parsers to be able to do this.
* gptel-curl.el (gptel-curl--stream-cleanup,
gptel-curl--parse-response): Same changes.
* gptel.el (gptel-request): When `gptel-request` is supplied a
string, it creates the full prompt plist according to the OpenAI
API. Fix by inserting it into a temp buffer and using the
cl-generic dispatch to parse the buffer instead. This is a janky
solution but the best possible one without defining another
generic function just to handle prompt strings differently per API.
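A minimal sketch of the approach, with `prompt-string' standing in
for the supplied string and the second argument to
`gptel--parse-buffer' (the number of entries to parse) assumed
here:

  (with-temp-buffer
    (insert prompt-string)
    (gptel--parse-buffer gptel-backend 1))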
* gptel-ollama.el (gptel--parse-buffer): The prompt construction
for Ollama fails when starting from (point-min). Fix by checking
if a valid text-property match object is found in the parsing.
* gptel.el (gptel--at-word-end, gptel-send, gptel-request):
Include the word the cursor is on in the prompt, and don't break
it when inserting the response. This is primarily useful for
evil-mode users who frequently end up one char before the end of a
word when they switch to normal-mode.
* gptel-transient.el (gptel-send): Same. Also fix bug with
selecting an existing buffer to send the response to.
gptel.el (gptel--get-api-key, gptel, gptel-mode,
gptel-make-openai, gptel-api-key-from-auth-source): Handle models
that don't require an API key.
gptel-transient.el (gptel--suffix-system-message): Set backend
from buffer-local value when invoking, and handle API key
requirement better.
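For example, a local backend can now be defined without a key (a
sketch; protocol, host and model names are placeholders):

  (gptel-make-openai "llama-local"
    :protocol "http"
    :host "localhost:8000"
    :models '("llama-2-7b")           ;no :key required
    :stream t)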
gptel-curl.el (gptel-curl--get-args): Increase the Curl timeout.
Local LLMs will often offload a query to the CPU if there is not enough VRAM or
if the GPU is unsupported. When a query is offloaded to the CPU, responses can
be significantly slower. If Curl times out early, the user will not get the
response from the LLM back in Emacs.
This change increases the timeout for curl from 60s to 300s to make gptel usable
in slower environments.
Closes #125
README.org: Update README with new information and a multi-llm demo.
gptel.el (gptel-host, gptel--known-backends, gptel--api-key,
gptel--create-prompt, gptel--request-data, gptel--parse-buffer, gptel-request,
gptel--parse-response, gptel--openai, gptel--debug, gptel--restore-state,
gptel, gptel-backend):
Integrate multiple LLMs through the introduction of gptel-backends. Each backend
is composed of two pieces:
1. An instance of a cl-struct, containing connection, authentication and model
information. See the cl-struct `gptel-backend` for details. A separate
cl-struct type is defined for each supported backend (OpenAI, Azure, GPT4All and
Ollama) that inherits from the generic gptel-backend type.
2. cl-generic implementations of specific tasks, like gathering up and
formatting context (previous user queries and LLM responses), parsing responses
or response streams, etc. The four tasks currently specialized this way are
carried out by `gptel--parse-buffer` and `gptel--request-data` (for constructing
the query) and `gptel--parse-response` and `gptel-curl--parse-stream` (for
parsing the response). See their implementations for details. Some effort has
been made to limit the number of times dispatching is done when reading
streaming responses.
When a backend is created, it is registered in the collection
`gptel--known-backends` and can be accessed by name later, such as from the
transient menu.
Only one of these backends is active at any time in a buffer, stored in the
buffer-local variable `gptel-backend`. Most messaging, authentication etc.
account for the active backend, although there might be some leftovers.
When using `gptel-request` or `gptel-send`, the active backend can be changed or
let-bound.
- Obsolete `gptel-host`
- Fix the rear-sticky property when restoring sessions from files.
- Document some variables (not user options), like `gptel--debug`
gptel-openai.el (gptel-backend, gptel-make-openai, gptel-make-azure,
gptel-make-gpt4all): This file (currently always loaded) sets up the generic
backend struct and includes constructors for creating OpenAI, GPT4All and Azure
backends. They all use the same API, so a single set of defgeneric
implementations suffices for all of them.
gptel-ollama.el (gptel-make-ollama): This file includes the cl-struct,
constructor and requisite defgeneric implementations for Ollama support.
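Putting it together, defining and activating a backend looks
something like this (a sketch; host and model names are
placeholders):

  (setq-default gptel-backend
                (gptel-make-ollama "Ollama"
                  :host "localhost:11434"
                  :models '("mistral:latest")
                  :stream t))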
gptel-transient.el (gptel-menu, gptel-provider-variable, gptel--infix-provider,
gptel-suffix-send):
- Provide access to all available LLM backends and models from `gptel-menu`.
- Adjust keybindings in gptel-menu: setting the model and query parameters is
now bound to two char keybinds, while redirecting input and output is bound to
single keys.
gptel.el (gptel--insert-response):
gptel-curl.el (gptel-curl--stream-insert-response): Make the `gptel'
text-property rear-nonsticky so typing after it is recognized as part of the
user prompt.
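Roughly (a sketch; the region variables are illustrative and the
exact property plist in the source may differ):

  (add-text-properties response-beg response-end
                       '(gptel response rear-nonsticky (gptel)))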
* gptel.el (gptel-default-mode): Use `fboundp' instead of `featurep' to check if
markdown-mode is available, since the latter requires `markdown-mode' to be
already loaded.
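That is, the default is now computed along these lines (sketch):

  (if (fboundp 'markdown-mode) 'markdown-mode 'text-mode)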
* gptel.el (gptel--system-message, gptel-directives): Try to make
gptel--system-message read from gptel-directives. This doesn't yet work how
we need it to -- changing gptel-directives does not update
gptel--system-message.
* gptel.el (gptel--restore-state): When there is no "GPTEL_BOUNDS"
org property, `read' asks for stdin instead. Fix by only calling
`read' when this property is non-nil.
Thanks to @Elilif for spotting this bug.
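That is, something like the following (a sketch; the property
lookup is simplified):

  ;; (read nil) would prompt for input from the minibuffer, so only
  ;; call `read' when the property is actually present:
  (let ((bounds (org-entry-get (point-min) "GPTEL_BOUNDS")))
    (when bounds (read bounds)))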
* gptel.el (gptel--save-state, gptel--restore-state,
gptel-temperature, gptel-model, gptel-max-tokens,
gptel-directives, gptel--always, gptel--button-buttonize,
gptel--system-message, gptel--bounds): Write gptel parameters as
file-local variables when saving chats in Markdown or text files.
The local variable gptel--bounds stores the locations of the
responses from the LLM. This is not a great solution, but the best
I can think to do without adding more syntax to the document.
Chats can be restored by turning on `gptel-mode'. One of the
problems with this approach is that if the buffer is modified
before `gptel-mode' is turned on, the state data is out of date.
Another problem is that this metadata block as printed in the
buffer can become quite long. A better approach is needed.
Define helper functions `gptel--always' and
`gptel--button-buttonize' to work around Emacs 27.1 support.
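Plausible shapes for these helpers (a sketch, not the verbatim
definitions):

  (defun gptel--always (&rest _)
    "Always return t, like Emacs 28's `always'."
    t)

  (defun gptel--button-buttonize (string callback)
    "Make STRING into a button that calls CALLBACK when activated."
    (propertize string
                'face 'button
                'button t
                'category t
                'keymap button-map
                'action callback))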
* README.org: Mention saving and restoring chats where
appropriate.
* gptel.el (gptel--insert-response, gptel-pre-response-hook): New
user option `gptel-pre-response-hook' that runs before the
response is inserted into the buffer. This can be used to prepare
the buffer in some user-specified way for the response.
* gptel-curl.el (gptel-curl--stream-filter): Run
`gptel-pre-response-hook' before inserting streaming responses.
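For example (the hook function here is illustrative):

  ;; Jump to the end of the buffer before the response is inserted.
  (add-hook 'gptel-pre-response-hook
            (lambda () (goto-char (point-max))))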
* gptel-curl.el (gptel-curl-get-response): Don't convert response
into org-mode unless the buffer from which the request originated
is in org-mode. This makes `gptel-default-mode' less binding, and
only used when creating a new chat session with `gptel'. Also,
gptel should now do the right thing depending on whether the
current buffer is in text, Markdown or Org modes.
* gptel.el (gptel-mode, gptel-set-topic, gptel--create-prompt,
gptel-set-topic, gptel--get-topic-start, gptel--get-bounds,
gptel--save-state, gptel--restore-state): Add support for saving
and restoring gptel state for Org buffers. Support for Markdown
buffers is not yet implemented.
`gptel--save-state' and `gptel--restore-state' save and restore
state using Org properties. With `gptel-mode' active, these are
run automatically when saving the buffer or enabling `gptel-mode'
respectively.
The command `gptel-set-topic' can be used to set a topic for the
current heading, which is stored as an Org property. The topic
name is unused (as of now), but the presence of this property
limits the text context sent to ChatGPT to the heading text up to
the cursor position.
Autoload `gptel-mode' since the user may want to enable this (to
restore sessions) without having loaded gptel.el.
gptel.el (gptel-request): When using `gptel-request', let-bind
`gptel--system-message' around call to `gptel--create-prompt' when
the prompt argument is null. This allows `gptel-request' to be
used to send the buffer as a prompt with a different system
message from `gptel--system-message' for that buffer.
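For example (a sketch):

  ;; Send the buffer up to point with a one-off system message.
  (gptel-request nil
    :system "You are a proofreader. List any errors in the text.")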
---------
Co-authored-by: Neil Fulwiler <neil@fulwiler.me>
* gptel.el (gptel-crowdsourced-prompts-file): This file holds
prompts compiled by the community.
* gptel-transient.el (gptel--read-crowdsourced-prompt,
gptel--crowdsourced-prompts, gptel-system-prompt--setup,
gptel--crowdsourced-prompts-url): Fetch crowdsourced system
prompts from https://github.com/f/awesome-chatgpt-prompts and pick
one to use from the transient menu.