Commit graph

11 commits

Karthik Chikmagalur
d0c685e501 gptel: checkdoc linting and indentation rules
* gptel.el (gptel-use-header-line, gptel--parse-buffer):
Docstrings.

* gptel-transient.el (gptel--transient-read-variable): Docstring.

* gptel-openai.el (gptel-make-openai, gptel-make-azure): Add
indent declaration.

* gptel-ollama.el (gptel-make-ollama): Add indent declaration.

* gptel-kagi.el (gptel-make-kagi): Add indent declaration.

* gptel-gemini.el (gptel-make-gemini): Add indent declaration.
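
For illustration, a minimal sketch of an indent declaration (hypothetical
`my-make-backend`, not the actual gptel constructors): the
`(declare (indent 1))` form makes Emacs indent the keyword arguments one
step past the backend name.

    (require 'cl-lib)

    (cl-defun my-make-backend (name &key host models stream)
      "Toy constructor illustrating an indent declaration for NAME."
      (declare (indent 1))   ; indent arguments after NAME by one step
      (list :name name :host host :models models :stream stream))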
2024-01-19 14:19:22 -08:00
João Távora
1fcb4606a2 Fix compilation warning in gptel-openai.el
* gptel-openai.el (cl-lib): Require it.

* gptel.el (compat): Leniently require compat so gptel.el can be
compiled standalone.  This will expose other compiler errors that
are easily visible with M-x flymake.
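
A sketch of the idiom described above (placement in gptel.el assumed):
require cl-lib unconditionally, and require compat with a non-nil NOERROR
argument so standalone compilation doesn't fail when compat is absent.

    (require 'cl-lib)
    (require 'compat nil t)   ; lenient: no error if compat is unavailable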
2024-01-18 17:32:08 -08:00
Karthik Chikmagalur
8ec233d79c gptel: Name gptel buffer according to backend
* gptel.el (gptel-default-session, gptel): Name the gptel buffer
according to the default backend.  Delete the variable
`gptel-default-session`.  Fix #174.

* gptel-openai.el (gptel-make-openai): Don't specify a key by
default. Fix #170.
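
A usage sketch of the second change (placeholder provider, host and model
names; see the constructor's docstring for the full keyword list): a
backend can now be registered without specifying :key at all.

    (gptel-make-openai "MyProvider"      ; hypothetical provider name
      :host "api.example.com"            ; placeholder host
      :models '("some-model"))           ; placeholder model list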
2024-01-12 22:27:20 -08:00
Karthik Chikmagalur
febeada960 gptel: Make gptel-backend customizable
* gptel.el (gptel-backend): Turn `gptel-backend` into a defcustom
so it can be used with setopt.  Fix #167.

* gptel-openai.el (gptel-make-openai): Improve docstring.
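
A usage sketch enabled by the defcustom change (the Ollama constructor
arguments are illustrative):

    ;; `gptel-backend' is now a user option, so `setopt' (Emacs 29+) works:
    (setopt gptel-backend
            (gptel-make-ollama "Ollama"
              :host "localhost:11434"
              :models '("mistral:latest")))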
2024-01-05 20:55:52 -08:00
Karthik Chikmagalur
e5357383ce gptel: Appease byte-compiler and linter
* gptel-transient.el:

* gptel-openai.el:

* gptel-gemini.el:
2023-12-29 13:19:23 -08:00
Karthik Chikmagalur
38095eaed5 gptel: Fix prompt collection bug + linting
* gptel.el: Update package description.

* gptel-gemini.el (gptel--request-data, gptel--parse-buffer): Add
model temperature to request correctly.

* gptel-ollama.el (gptel--parse-buffer): Ensure that newlines are
trimmed correctly even when `gptel-prompt-prefix-string` and
`gptel-response-prefix-string` are absent.  Fix formatting and
linter warnings.

* gptel-openai.el (gptel--parse-buffer): Ditto.
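
A minimal sketch of the trimming concern, with placeholder prefix strings
(the real code derives them from the buffer's major mode): the whitespace
class in the trim regexps ensures newlines are stripped even when the
prefixes are empty.

    (require 'subr-x)   ; for `string-trim'

    (let ((prefix "")             ; e.g. the prompt prefix, possibly absent
          (response-prefix ""))   ; e.g. the response prefix, possibly absent
      (string-trim "\n\n### Hello there\n"
                   (format "[\t\r\n ]*\\(?:%s\\)?" (regexp-quote prefix))
                   (format "[\t\r\n ]*\\(?:%s\\)?" (regexp-quote response-prefix))))
    ;; => "### Hello there"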
2023-12-20 15:40:56 -08:00
daedsidog
644e341244
Add multiline prefixes & AI response prefixes (#142)
gptel: Add customizable prompt/response prefixes

gptel.el (gptel-prompt-prefix-alist, gptel-response-prefix-alist,
gptel-prompt-prefix-string, gptel-response-prefix-string,
gptel--url-get-response): Add customizable response prefixes (per
major-mode) in `gptel-response-prefix-alist`.

Rename `gptel-prompt-string` -> `gptel-prompt-prefix-string`

The function `gptel-response-prefix-string` returns the prefix
string for the response in the current major-mode.

gptel-openai.el, gptel-ollama.el (gptel--parse-buffer): Remove the
prompt and response prefixes when creating prompt strings to send
to the LLM API.

gptel-curl.el (gptel-curl--stream-cleanup,
gptel-curl--stream-insert-response): Insert the response prefix
for the current major-mode before inserting the LLM API response.
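
A usage sketch for the new options (the alist values are illustrative,
not necessarily the package defaults):

    (setq gptel-prompt-prefix-alist
          '((markdown-mode . "### ")
            (org-mode      . "*** ")
            (text-mode     . "### "))
          gptel-response-prefix-alist
          '((markdown-mode . "")
            (org-mode      . "")))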
2023-12-15 09:30:16 -08:00
Karthik Chikmagalur
3c01477c37 gptel: api-key shenanigans
gptel.el (gptel--get-api-key, gptel, gptel-mode,
gptel-make-openai, gptel-api-key-from-auth-source): Handle models
that don't require an API key.

gptel-transient.el (gptel--suffix-system-message): Set backend
from buffer-local value when invoking, and handle API key
requirement better.
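
For context, a sketch of the key lookup configuration using the
auth-source function named above (believed to be the option's default; a
literal key string also works, and a backend that needs no key can simply
leave its :key unset):

    (setq gptel-api-key #'gptel-api-key-from-auth-source)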
2023-11-07 20:36:37 -08:00
Karthik Chikmagalur
c97778d5a8 gptel: address byte-compile and checkdoc warnings
* gptel.el, gptel-transient.el, gptel-openai.el, gptel-ollama.el
2023-11-07 20:36:37 -08:00
Karthik Chikmagalur
1434bbac7b gptel-ollama, gptel-openai: Add example of backend creation
README: Fix error with Ollama backend instructions
2023-10-29 00:31:56 -07:00
Karthik Chikmagalur
6419e8f021 gptel: Add multi-llm support
README.org: Update README with new information and a multi-llm demo.

gptel.el (gptel-host, gptel--known-backends, gptel--api-key,
gptel--create-prompt, gptel--request-data, gptel--parse-buffer, gptel-request,
gptel--parse-response, gptel--openai, gptel--debug, gptel--restore-state,
gptel, gptel-backend):

Integrate multiple LLMs through the introduction of gptel-backends. Each backend
is composed of two pieces:

1. An instance of a cl-struct, containing connection, authentication and model
information.  See the cl-struct `gptel-backend` for details.  A separate
cl-struct type is defined for each supported backend (OpenAI, Azure, GPT4All and
Ollama) that inherits from the generic gptel-backend type.

2. cl-generic implementations of specific tasks, like gathering up and
formatting context (previous user queries and LLM responses), parsing responses
or response streams, etc.  The four tasks currently specialized this way are
carried out by `gptel--parse-buffer` and `gptel--request-data` (for constructing
the query) and `gptel--parse-response` and `gptel-curl--parse-stream` (for
parsing the response).  See their implementations for details.  Some effort has
been made to limit the number of times dispatching is done when reading
streaming responses.
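
A minimal sketch of this dispatch pattern with hypothetical names
(`my-backend`, `my-ollama`, `my-parse-response`), not the actual gptel
definitions: a generic backend struct, a provider-specific child struct,
and a method specialized on the child type.

    (require 'cl-lib)

    (cl-defstruct my-backend name host models)         ; generic backend type
    (cl-defstruct (my-ollama (:include my-backend)))   ; provider-specific subtype

    (cl-defgeneric my-parse-response (backend response)
      "Extract the reply text from RESPONSE for BACKEND.")

    (cl-defmethod my-parse-response ((_backend my-ollama) response)
      ;; Dispatches on the struct type, analogous to specializing
      ;; `gptel--parse-response' on a backend.
      (plist-get response :response))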

When a backend is created, it is registered in the collection
`gptel--known-backends` and can be accessed by name later, such as from the
transient menu.

Only one of these backends is active at any time in a buffer, stored in the
buffer-local variable `gptel-backend`. Most of the messaging, authentication
and related logic accounts for the active backend, although there might be
some leftovers.

When using `gptel-request` or `gptel-send`, the active backend can be changed or
let-bound.
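
A usage sketch of the let-binding, assuming a backend named "Ollama" has
already been registered (so it can be looked up in `gptel--known-backends`)
and that `gptel-request` takes a :callback as described in its docstring:

    (let ((gptel-backend (alist-get "Ollama" gptel--known-backends
                                    nil nil #'equal)))
      (gptel-request "Why is the sky blue?"
        :callback (lambda (response _info)
                    (message "%s" response))))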

- Obsolete `gptel-host`
- Fix the rear-sticky property when restoring sessions from files.
- Document some variables (not user options), like `gptel--debug`

gptel-openai.el (gptel-backend, gptel-make-openai, gptel-make-azure,
gptel-make-gpt4all): This file (currently always loaded) sets up the generic
backend struct and includes constructors for creating OpenAI, GPT4All and Azure
backends.  They all use the same API so a single set of defgeneric
implementations suffices for all of them.
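
For illustration, a hedged sketch of registering an Azure backend with one
of the constructors named above (the keyword arguments, host and endpoint
format are assumptions, not verified against this commit):

    (gptel-make-azure "Azure-1"
      :host "YOUR_RESOURCE_NAME.openai.azure.com"
      :endpoint "/openai/deployments/YOUR_DEPLOYMENT/chat/completions?api-version=2023-05-15"
      :key #'gptel-api-key
      :stream t
      :models '("gpt-3.5-turbo"))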

gptel-ollama.el (gptel-make-ollama): This file includes the cl-struct,
constructor and requisite defgeneric implementations for Ollama support.

gptel-transient.el (gptel-menu, gptel-provider-variable, gptel--infix-provider,
gptel-suffix-send):

- Provide access to all available LLM backends and models from `gptel-menu`.
- Adjust keybindings in gptel-menu: setting the model and query parameters is
  now bound to two-character key sequences, while redirecting input and output
  is bound to single keys.
2023-10-28 23:57:47 -07:00