* gptel.el (gptel--restore-state, gptel--save-state): Try to
support writing/reading multi-line directives in Org and other
modes correctly when saving the buffer. The support is
preliminary and works as follows:
- org-mode: Replace newlines with "\n" before writing the relevant
org property.
- other modes: escape newlines with print-escape-newlines when
updating local vars.
Neither of these is a good fix, but together they bring support
for multi-line directives up from completely broken to
works-via-hack. Both are subject to change in the future and might
break some chat files. :(
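For illustration, a minimal sketch of the two escaping approaches
(the helper names here are hypothetical, not part of gptel):
  ;; org-mode: replace literal newlines with "\n" before writing
  ;; the property
  (defun my/directive->org-property (directive)
    (replace-regexp-in-string "\n" "\\\\n" directive))
  ;; other modes: let `print-escape-newlines' do the escaping when
  ;; the directive is written out as a local variable
  (defun my/directive->local-var (directive)
    (let ((print-escape-newlines t))
      (prin1-to-string directive)))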
* gptel.el (gptel-request): Update docstring to clarify what
BUFFER and POSITION do. Addresses #191.
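A hedged usage sketch, assuming the BUFFER and POSITION keyword
arguments described in that docstring (the buffer name is made up):
  (gptel-request "Write a haiku about Emacs"
    :buffer (get-buffer-create "*haiku*")
    :position 1)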
* gptel-transient.el (gptel-menu, gptel--suffix-send): Replace
"new session" and "existing session" redirection options with
"gptel session" and "any buffer", allowing for more flexibility
when redirecting. "gptel session" can be an existing or new
session. Fix bug where the prompt was generated from the contents
of the destination buffer instead of the current buffer when
redirecting to a gptel session. Add comments demarcating blocks
in `gptel--suffix-send`.
* gptel.el (gptel--openai): Don't specify header.
* gptel-openai.el (gptel-make-openai): Use a key-aware lambda for
the header argument. This should make it easier to define new
OpenAI-style API backends (see #177, #184)
* README.org: Update with instructions for together.ai and
Anyscale, both of which provide OpenAI-style APIs. Clean up the
config blocks for the other backends.
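For example, an OpenAI-compatible backend can now be declared along
these lines (the host, key and model name are illustrative
placeholders):
  (gptel-make-openai "TogetherAI"
    :host "api.together.xyz"
    :key "your-api-key"
    :stream t
    :models '("mistralai/Mixtral-8x7B-Instruct-v0.1"))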
* gptel-transient.el (gptel-menu): Sanitize model if it's not in
the backend.
* gptel.el (gptel--sanitize-model): New helper.
* gptel.el (gptel-send): Also sanitize model in non-prefixed gptel-send.
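A rough sketch of what the sanitizing step amounts to (the actual
helper in gptel.el may differ):
  (defun my/sanitize-model (backend model)
    "Return MODEL if BACKEND supports it, else the backend's first model."
    (if (member model (gptel-backend-models backend))
        model
      (car (gptel-backend-models backend))))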
* gptel-openai.el (cl-lib): Require it.
* gptel.el (compat): Leniently require compat so gptel.el can be
compiled standalone. This will expose other compiler errors that
are easily visible with M-x flymake.
* gptel.el (gptel--stream-convert-markdown->org): (Bug #183) Set
variables to nil explicitly in the bindings section of letrec.
Implicit nil letrec bindings appear to cause an issue with Emacs
27.2.
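A minimal illustration of the change (variable names are made up):
  ;; before: (letrec ((state) ...) ...), i.e. an implicit nil binding
  ;; after: bind to nil explicitly
  (letrec ((state nil)
           (collect (lambda (tok) (push tok state) state)))
    (funcall collect 'word))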
* gptel.el (gptel--insert-response): Turn on visual-line-mode in
the response buffer that is created when the gptel buffer is
read-only.
* gptel-curl.el (gptel-curl--stream-insert-response): Ditto.
* gptel-kagi.el (gptel--request-data, gptel--parse-buffer,
gptel-make-kagi): Add support for the Kagi summarizer. If there
is a URL at point (or at the end of the provided prompt), it is
used as the summarizer input. Otherwise the behavior is
unchanged.
* README (Kagi): Mention summarizer support.
* gptel.el: Mention summarizer support.
* gptel.el: Bump version and update package description.
* gptel-kagi.el (gptel--parse-response, gptel--request-data,
gptel--parse-buffer, gptel-make-kagi): Add new file and support
for the Kagi FastGPT LLM API. Streaming and setting model
parameters (temperature, max tokens) are not supported by the API.
A Kagi backend can be added with `gptel-make-kagi`.
* README.org: Update with instructions for Kagi.
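A minimal setup sketch (the key is a placeholder):
  (gptel-make-kagi "Kagi" :key "your-kagi-api-key")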
* gptel.el (gptel-end-of-response, gptel-post-response-hook,
gptel-post-response-functions, gptel--insert-response,
gptel-response-filter-functions):
Rename gptel-post-response-hook -> gptel-post-response-functions.
The new abnormal hook now calls its functions with the start and
end positions of the response, to make it easier to act on the
response.
* gptel-curl.el (gptel-curl--stream-cleanup): Corresponding changes.
* README.org: Mention breaking change.
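For example, a function added to the new hook receives the response
bounds; pulse.el is used here purely for illustration:
  (require 'pulse)
  (add-hook 'gptel-post-response-functions
            (lambda (beg end)
              (pulse-momentary-highlight-region beg end)))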
* gptel-transient.el (gptel-system-prompt--setup): (Tentative
change) Show descriptions of directives next to the keys when
picking a directive from the system-prompt menu.
* gptel-transient.el (gptel-menu, gptel--suffix-send): Add a
transient menu option to select the prompt from the kill-ring.
By default the latest kill is selected. Sending with a prefix-arg
allows for choosing the kill ring element.
TODO: This latter behavior needs to be made discoverable somehow.
* gptel.el (gptel-default-session, gptel): Name the gptel buffer
according to the default backend. Delete the variable
`gptel-default-session`. Fix #174.
* gptel-openai.el (gptel-make-openai): Don't specify a key by
default. Fix #170.
* gptel.el (gptel-backend): Turn `gptel-backend` into a defcustom
so it can be used with setopt. Fix #167.
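For example (the key and model here are placeholders):
  (setopt gptel-backend
          (gptel-make-openai "MyBackend"
            :key "your-api-key"
            :models '("gpt-3.5-turbo")))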
* gptel-openai.el (gptel-make-openai): Improve docstring.
* README.org (Llama.cpp): As it turns out, text-generation
Llamafile models (currently Mistral Instruct and Llava) offer an
OpenAI-compatible API, so we can use them easily from gptel. Add
instructions for Llamafiles to the Llama section of the README.
* gptel.el (gptel--always, gptel--button-buttonize): Currently
gptel depends on the Compat library transitively via transient.el.
Declare it as an explicit dependency so we can get rid of special
case definitions and simplify. This also enables us to use Emacs
28 and 29 conveniences freely in the code.
* gptel-gemini.el (gptel--parse-buffer): The Gemini API does not
provide an explicit system message parameter. In the interest of
providing a uniform interface, simulate this in gptel by
prepending the first user message with `gptel--system-message`.
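Schematically, the first user turn is sent as something like the
following, where `first-user-prompt' is a stand-in for the first
user message in the buffer:
  (concat gptel--system-message "\n\n" first-user-prompt)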
* gptel-transient.el (gptel--suffix-system-message): Explicitly
set the :transient slot of the system-message editor commands to
`transient--do-exit` (#157).
* gptel.el (gptel-auto-scroll): After calling `gptel-send`, the
window focus may have changed by the time the response is
received. Set the window correctly when running
`gptel-auto-scroll` to ensure the correct buffer is scrolled.
* gptel-transient.el (gptel-system-prompt--setup): In Transient
v0.5 and up, some suffixes defined dynamically using
`gptel-system-prompt--setup' are being treated as infix commands,
see #140. Set the `:transient' key of these suffixes to
`transient--do-return' explicitly to avoid this problem. TODO:
This fix works, but it is not clear why it is needed; this
requires some investigation.
* gptel.el (gptel--url-get-response): If the backend-url is a
function, call it to find the full url to query.
* gptel-gemini.el: Gemini uses different URLs for
streaming/oneshot responses. Set the backend-url to a function to
account for the value of gptel-stream. This is also safer than
before, as the API key is not stored as part of a static URL
string in memory. Fix #153.
* gptel-curl.el (gptel-curl--get-args): If the backend-url is a
function, call it to find the full url to query.
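Schematically, the backend-url is now a closure along these lines
(the endpoint paths are illustrative; the real closure also appends
the API key at request time instead of storing it in the string):
  (lambda ()
    (if gptel-stream
        "https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:streamGenerateContent"
      "https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:generateContent"))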
* gptel.el (gptel--save-state, gptel--restore-state,
gptel--backend-name, gptel--restore-backend): Try to save and
restore the gptel backend when persisting chat sessions in files.
The local variable `gptel--backend-name` holds the backend name in
the file across Emacs sessions. The function
`gptel--restore-backend` tries to set this backend and messages
the user if this is not possible.
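In a non-Org chat buffer the persisted state then appears as a
file-local variable, roughly like the following (the value and
comment syntax are illustrative):
  ;; Local Variables:
  ;; gptel--backend-name: "ChatGPT"
  ;; End: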
* gptel-transient.el (gptel--suffix-send): When creating a new
session to redirect the response to, ensure that gptel-model is
set correctly in that buffer.
* gptel.el (gptel-update-destination, gptel-use-header-line,
gptel--update-status, gptel-mode): Improve status messaging when not
using the header-line. When the user option
`gptel-use-header-line` (renamed from `gptel-update-destination`)
is set to nil, we use `mode-line-process` to report on in-progress
requests, and show the active LLM (model) otherwise. Error
messages are sent to the echo area. Close #9.
* README.org: Change `gptel-update-destination` to
`gptel-use-header-line` and tweak description.
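For example, to get the mode-line behavior described above:
  (setq gptel-use-header-line nil)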
* README: Mention `gptel-update-destination`.
* gptel.el (gptel-update-destination, gptel--update-status,
gptel-send, gptel--insert-response): New option
`gptel-update-destination` to control how gptel's status messages
are shown. `gptel--update-status` replaces
`gptel--update-header-line`. Replace calls to this function
elsewhere in gptel.el.
* gptel-curl.el (gptel-abort, gptel-curl--stream-cleanup,
gptel-curl--stream-insert-response): Use `gptel--update-status` in
place of `gptel--update-header-line`.
* gptel-transient.el (gptel--suffix-send): Use
`gptel--update-status` in place of `gptel--update-header-line`.
* gptel-curl.el (gptel-curl--sentinel, gptel-curl--stream-filter):
Remove redundant calls to `gptel-curl--stream-insert-response`
when the response being inserted is nil or a blank string. This
should be a modest boost to streaming performance.
* gptel.el (gptel-auto-scroll, gptel-end-of-response,
gptel-post-response-hook, gptel-post-stream-hook): Add
`gptel-post-stream-hook` that runs after each text insertion when
streaming responses. This can be used to, for instance,
auto-scroll the window as the response continues below the
viewport. The utility function `gptel-auto-scroll` does this.
Provide a utility command `gptel-end-of-response`, which moves the
cursor to the end of the response when point is inside or before it.
* gptel-curl.el (gptel-curl--stream-insert-response): Run
`gptel-post-stream-hook` where required.
* README: Add FAQ, simplify structure, mention the new hooks and
scrolling/navigation options.
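For example, to keep the window scrolled as the response streams in:
  (add-hook 'gptel-post-stream-hook 'gptel-auto-scroll)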
* gptel-curl.el (gptel-curl--common-args): Following the
discussion in #143, use "-y300 -Y1" as Curl arguments instead of
specifying the timeout. Now the connection stays open unless less
than 1 byte of information is exchanged over 300 seconds.
* gptel.el: Update package description.
* gptel-gemini.el (gptel--request-data, gptel--parse-buffer): Add
model temperature to request correctly.
* gptel-ollama.el (gptel--parse-buffer): Ensure that newlines are
trimmed correctly even when `gptel-prompt-prefix-string` and
`gptel-response-prefix-string` are absent. Fix formatting and
linter warnings.
* gptel-openai.el (gptel--parse-buffer): Ditto.
* gptel-gemini.el (gptel-make-gemini, gptel-curl--parse-stream,
gptel--request-data, gptel--parse-buffer): Enable streaming for
the Gemini backend, as well as the temperature and max tokens
parameters when making requests. Simplify the user configuration
required.
* README.org: Fix formatting errors. Update the configuration
instructions for Gemini.
This closes #149.
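A minimal Gemini setup sketch after these changes (the key is a
placeholder):
  (gptel-make-gemini "Gemini" :key "your-api-key" :stream t)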