* gptel.el (gptel--sanitize-model, gptel): Check for mismatches
between the default values of gptel-backend and gptel-model when
starting a new gptel chat. Previously the default value of
gptel-backend was compared against the buffer-local value of
gptel-model.
* gptel.el (gptel--insert-response, gptel--restore-state): Don't
set the rear-nonsticky property on the gptel response text.
Instead, the gptel text-property is globally declared to be
nonsticky via `text-property-default-nonsticky`.
This change is needed because markdown-mode makes rear-nonsticky a
font-lock-managed property, so we can't set it manually. Setting
this property explicitly also makes all text properties in the
range nonsticky, which can have unintended side effects.
* gptel-curl.el (gptel-curl--stream-insert-response): Ditto.
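For reference, the global declaration is roughly the following (a
sketch of the approach; the exact form in gptel.el may differ):

    ;; Declare the `gptel' text property rear-nonsticky everywhere,
    ;; instead of setting rear-nonsticky on each response region.
    (add-to-list 'text-property-default-nonsticky '(gptel . t))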
* gptel.el (gptel--convert-markdown->org,
gptel--stream-convert-markdown->org): Handle the case of Markdown
bullet lists that begin with a "*", which the Gemini LLMs often
produce. Address #238.
* gptel.el (gptel--url-parse-response): Handle HTTP 100 followed by
200. Note: this fix is brittle; it will break if 100 is followed
by an error code.
* gptel-curl.el (gptel-curl--stream-filter,
gptel-curl--parse-stream): Ditto. Address #194.
* gptel-curl.el (gptel-curl--stream-cleanup): The LLM response can
be empty with HTTP status 200, for example when the API responds
with an error description instead. Handle this case gracefully.
When `gptel-mode` is enabled, also inform the user that the
response was empty. Fix #234.
* gptel.el (gptel-post-response-functions): Documentation. Explain
the arguments passed to functions in this hook when the response
fails or is empty.
* gptel.el: Mention Anthropic in the package description.
* gptel-anthropic.el (gptel-anthropic, gptel-make-anthropic,
gptel--parse-response, gptel--request-data, gptel--parse-buffer,
gptel-curl--parse-stream): Add support for Anthropic AI's Claude 3
models.
* README.org: Add instructions for using Anthropic AI's Claude 3
models.
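A minimal configuration sketch; the backend name and key below are
placeholders, see the README for the full recommended setup:

    (gptel-make-anthropic "Claude"
      :stream t
      :key "your-anthropic-api-key")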
* gptel.el (gptel-display-buffer-action, gptel): Add
`gptel-display-buffer-action` to customize the display of the
gptel buffer.
* README: Mention new option.
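For example, to display the gptel buffer in a side window (the
values shown are purely illustrative):

    (setq gptel-display-buffer-action
          '((display-buffer-in-side-window)
            (side . right)
            (window-width . 0.4)))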
* gptel.el (gptel--attach-response-history, gptel--ediff,
gptel--next-variant, gptel--previous-variant,
gptel--mark-response):
Add `gptel--attach-response-history` -- this can be used to add
text properties to the next gptel response in the buffer. This
is (currently) useful for tracking changes when the response
overwrites existing text.
The next three commands -- `gptel--ediff`,
`gptel--previous-variant`, `gptel--next-variant` -- provide
facilities for manipulating a gptel response at point when there
is history. `gptel--mark-response` marks the response at point.
These are considered internal functions for now and can be
accessed from the transient menu, where they work together with
`gptel--regenerate`.
The input arguments to these commands are expected to change to
support copilot-style functionality in the near future.
* gptel-transient.el (gptel-menu, gptel--suffix-send,
gptel--regenerate):
Change the transient menu layout to be more compact (with a newly
added column). When overwriting the prompt with a response, save
the prompt to the gptel response's history. Add
`gptel--regenerate` to regenerate a response. This is accessible
from the transient menu when the point is inside response text.
* gptel.el (gptel--debug, gptel-log-level, gptel--log-buffer-name,
gptel--url-get-response, gptel--parse-response): Optionally log
all request and response data to `gptel--log-buffer-name`, with
the log level governed by `gptel-log-level`. Obsolete
`gptel--debug`.
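To enable logging, set the log level, e.g. (symbol values per the
option's documentation):

    ;; Log requests and responses to the gptel log buffer;
    ;; use 'debug instead of 'info for more detail.
    (setq gptel-log-level 'info)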
* gptel-curl.el (gptel-curl--log-response, gptel-curl--get-args,
gptel-curl--get-response, gptel-curl--stream-cleanup,
gptel-curl--sentinel): Add support for logging.
* gptel.el (gptel--convert-markdown->org,
gptel--stream-convert-markdown->org): Don't touch underscores in
the source markdown. This will turn some emphasis blocks into
underlines in Org, but we can live with that. Fix #40.
* gptel.el (gptel--restore-state, gptel--save-state): Try to
support writing/reading multi-line directives in Org and other
modes correctly when saving the buffer. The support is
preliminary and works as follows:
- org-mode: Replace newlines with "\n" before writing the relevant
org property.
- other modes: escape newlines with print-escape-newlines when
updating local vars.
Neither of these is a good fix, but they bring support for multi-line
directives up from completely broken to works-via-hack. They are
subject to change in the future and might break some chat files. :(
* gptel.el (gptel-request): Update docstring to clarify what
BUFFER and POSITION do. Addresses #191.
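A rough usage sketch, assuming the documented default behavior of
inserting the response at POSITION in BUFFER when no callback is
supplied (the prompt and buffer name are made up):

    ;; With no :callback, the response is inserted at POSITION in BUFFER
    (gptel-request "Write a short haiku about Emacs."
      :buffer (get-buffer-create "*gptel-scratch*")
      :position 1)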
* gptel-transient.el (gptel-menu, gptel--suffix-send): Replace
"new session" and "existing session" redirection options with
"gptel session" and "any buffer", allowing for more flexibility
when redirecting. "gptel session" can be an existing or new
session. Fix bug where the prompt was generated from the contents
of the destination buffer instead of the current buffer when
redirecting to a gptel session. Add comments demarcating blocks
in `gptel--suffix-send`.
* gptel.el (gptel--openai): Don't specify header.
* gptel-openai.el (gptel-make-openai): Use a key-aware lambda for
the header argument. This should make it easier to define new
OpenAI-style API backends (see #177, #184).
* README.org: Update with instructions for together.ai and
Anyscale, both of which provide OpenAI-style APIs. Clean up the
config blocks for the other backends.
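A sketch of registering such a backend; the host, key and model
name below are placeholders, the README carries the exact
together.ai and Anyscale configs:

    (gptel-make-openai "TogetherAI"
      :host "api.together.xyz"
      :key "your-api-key"
      :stream t
      :models '("mistralai/Mixtral-8x7B-Instruct-v0.1"))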
* gptel-transient.el (gptel-menu): Sanitize model if it's not in
the backend.
* gptel.el (gptel--sanitize-model): New helper.
* gptel.el (gptel-send): Also sanitize model in non-prefixed gptel-send.
* gptel-openai.el (cl-lib): Require it.
* gptel.el (compat): Leniently require compat so gptel.el can be
compiled standalone. This will expose other compiler errors that
are easily visible with M-x flymake.
* gptel.el (gptel--stream-convert-markdown->org): (Bug #183) Set
variables to nil explicitly in the bindings section of letrec.
Implicitly nil letrec bindings appear to cause an issue with Emacs
27.2.
* gptel.el (gptel--insert-response): Turn on visual-line-mode in
the response buffer that is created when the gptel buffer is
read-only.
* gptel-curl.el (gptel-curl--stream-insert-response): Ditto.
* gptel-kagi.el (gptel--request-data, gptel--parse-buffer,
gptel-make-kagi): Add support for the Kagi summarizer. If there
is a URL at point (or at the end of the provided prompt), it is
used as the summarizer input. Otherwise the behavior is
unchanged.
* README (Kagi): Mention summarizer support.
* gptel.el: Mention summarizer support.
* gptel.el: Bump version and update package description.
* gptel-kagi.el (gptel--parse-response, gptel--request-data,
gptel--parse-buffer, gptel-make-kagi): Add new file and support
for the Kagi FastGPT LLM API. Streaming and setting model
parameters (temperature, max tokens) are not supported by the API.
A Kagi backend can be added with `gptel-make-kagi`.
* README.org: Update with instructions for Kagi.
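Registering the backend is analogous to the other providers (the
key is a placeholder):

    (gptel-make-kagi "Kagi" :key "your-kagi-api-key")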
* gptel.el (gptel-end-of-response, gptel-post-response-hook,
gptel-post-response-functions, gptel--insert-response,
gptel-response-filter-functions):
Rename gptel-post-response-hook -> gptel-post-response-functions
The new abnormal hook now calls its functions with the start and
end positions of the response, to make it easier to act on the
response.
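A sketch of a hook function using the new calling convention; the
`pulse' highlight is just an arbitrary example action:

    (require 'pulse)
    (add-hook 'gptel-post-response-functions
              (lambda (beg end)
                ;; BEG and END are the bounds of the inserted response
                (pulse-momentary-highlight-region beg end)))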
* gptel-curl.el (gptel-curl--stream-cleanup): Corresponding changes.
* README.org: Mention breaking change.
* gptel.el (gptel-default-session, gptel): Name the gptel buffer
according to the default backend. Delete the variable
`gptel-default-session`. Fix #174.
* gptel-openai.el (gptel-make-openai): Don't specify a key by
default. Fix #170.
* gptel.el (gptel-backend): Turn `gptel-backend` into a defcustom
so it can be used with setopt. Fix #167.
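For instance, something like this now works with setopt (the
backend shown is only illustrative):

    (setopt gptel-backend
            (gptel-make-ollama "Ollama"
              :host "localhost:11434"
              :stream t
              :models '("mistral:latest")))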
* gptel-openai.el (gptel-make-openai): Improve docstring.
* gptel.el (gptel--always, gptel--button-buttonize): Currently
gptel depends on the Compat library transitively via transient.el.
Declare it as an explicit dependency so we can get rid of special
case definitions and simplify. This also enables us to use Emacs
28 and 29 conveniences freely in the code.
* gptel.el (gptel-auto-scroll): After calling `gptel-send`, the
window focus could have changed as the response is received. Set
the window correctly when running `gptel-auto-scroll` to ensure
the correct buffer is scrolled.
* gptel.el (gptel--url-get-response): If the backend-url is a
function, call it to find the full url to query.
* gptel-gemini.el: Gemini uses different URLs for
streaming/oneshot responses. Set the backend-url to a function to
account for the value of gptel-stream. This is also safer than
before as the API key is not stored as part of a static URL string
in memory. Fix #153.
* gptel-curl.el (gptel-curl--get-args): If the backend-url is a
function, call it to find the full url to query.
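The dispatch is roughly as in this simplified sketch (the helper
name is hypothetical, not part of gptel):

    (defun my/gptel-resolve-url (backend)
      "Return the full query URL for BACKEND (hypothetical helper).
    If the backend's url is a function, call it; otherwise use it as-is."
      (let ((url (gptel-backend-url backend)))
        (if (functionp url) (funcall url) url)))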
* gptel.el (gptel--save-state, gptel--restore-state,
gptel--backend-name, gptel--restore-backend): Try to save and
restore the gptel backend when persisting chat sessions in files.
The local variable `gptel--backend-name` holds the backend name in
the file across Emacs sessions. The function
`gptel--restore-backend` tries to set this backend and messages
the user if this is not possible.
* gptel.el (gptel-update-destination, gptel-use-header-line,
gptel--update-status, gptel-mode): Improve status messaging when not
using the header-line. When the user option
`gptel-use-header-line` (renamed from `gptel-update-destination`)
is set to nil, we use `mode-line-process` to report on in-progress
requests, and show the active LLM (model) otherwise. Error
messages are sent to the echo area. Close #9.
* README.org: Change `gptel-update-destination` to
`gptel-use-header-line` and tweak description.
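To opt into the new behavior (this mirrors the renamed option
described above):

    ;; Report request status via `mode-line-process' instead of the header line
    (setq gptel-use-header-line nil)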
* README: Mention `gptel-update-destination`.
* gptel.el (gptel-update-destination, gptel--update-status,
gptel-send, gptel--insert-response): New option
`gptel-update-destination` to control how gptel's status messages
are shown. `gptel--update-status` replaces
`gptel--update-header-line`. Replace calls to this function
elsewhere in gptel.el.
* gptel-curl.el (gptel-abort, gptel-curl--stream-cleanup,
gptel-curl--stream-insert-response): Use `gptel--update-status` in
place of `gptel--update-header-line`.
* gptel-transient.el (gptel--suffix-send): Use
`gptel--update-status` in place of `gptel--update-header-line`.
* gptel.el (gptel-auto-scroll, gptel-end-of-response,
gptel-post-response-hook, gptel-post-stream-hook): Add
`gptel-post-stream-hook` that runs after each text insertion when
streaming responses. This can be used to, for instance,
auto-scroll the window as the response continues below the
viewport. The utility function `gptel-auto-scroll` does this.
Provide a utility command `gptel-end-of-response`, which moves the
cursor to the end of the response when point is inside or before it.
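Typical setup, using the hook names as they exist at this point in
the changelog:

    ;; Scroll the window as the streaming response grows past the viewport
    (add-hook 'gptel-post-stream-hook #'gptel-auto-scroll)
    ;; Move the cursor to the end of the response once it has arrived
    (add-hook 'gptel-post-response-hook #'gptel-end-of-response)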
* gptel-curl.el (gptel-curl--stream-insert-response): Run
`gptel-post-stream-hook` where required.
* README: Add FAQ, simplify structure, mention the new hooks and
scrolling/navigation options.
* gptel.el: Update package description.
* gptel-gemini.el (gptel--request-data, gptel--parse-buffer): Add
model temperature to request correctly.
* gptel-ollama.el (gptel--parse-buffer): Ensure that newlines are
trimmed correctly even when `gptel-prompt-prefix-string` and
`gptel-response-prefix-string` are absent. Fix formatting and
linter warnings.
* gptel-openai.el (gptel--parse-buffer): Ditto.
gptel: Add customizable prompt/response prefixes
* gptel.el (gptel-prompt-prefix-alist, gptel-response-prefix-alist,
gptel-prompt-prefix-string, gptel-response-prefix-string,
gptel--url-get-response): Add customizable response prefixes (per
major-mode) in `gptel-response-prefix-alist`.
Rename `gptel-prompt-string` -> `gptel-prompt-prefix-string`
The function `gptel-response-prefix-string` returns the prefix
string for the response in the current major-mode.
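For example, the per-mode prefixes can be customized like so (the
strings are illustrative):

    (setf (alist-get 'org-mode gptel-prompt-prefix-alist) "*** ")
    (setf (alist-get 'markdown-mode gptel-response-prefix-alist) "**Response**:\n")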
* gptel-openai.el, gptel-ollama.el (gptel--parse-buffer): Remove the
prompt and response prefixes when creating prompt strings to send
to the LLM API.
* gptel-curl.el (gptel-curl--stream-cleanup,
gptel-curl--stream-insert-response): Insert the response prefix
for the current major-mode before inserting the LLM API response.
* gptel-curl.el (gptel-curl--get-args,
gptel-curl-file-size-threshold): Use temporary file for curl data.
Ensure curl uses a temporary file for binary data to prevent
issues with large payloads and special characters:
- Add a new defcustom `gptel-curl-file-size-threshold` to
determine when to use a temporary file for passing data to Curl.
- Use `--data-binary` with a temp file for data larger than the
specified threshold, improving handling of large data payloads in
gptel queries.
- Reliably clean up temporary files created for Curl requests
exceeding the size threshold. Add a function to
`gptel-post-response-hook` to delete the file post-Curl execution
and remove itself from the hook, preventing temporary file
accumulation.
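The threshold is a plain user option, e.g. (the byte count shown is
just an example value):

    ;; Pass request data to curl via a temporary file above this size (bytes)
    (setq gptel-curl-file-size-threshold 130000)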
* gptel.el (gptel--url-get-response, gptel--url-parse-response):
- When the query fails, the error message format (in the JSON)
differs between APIs. Ultimately it may be required to dispatch
error handling via a generic function, but for now: try to make
the error handling API agnostic.
- Mention the backend name in the error message. Pass the backend
to the (non-streaming response) parsers to be able to do this.
* gptel-curl.el (gptel-curl--stream-cleanup,
gptel-curl--parse-response): Same changes.