Commit graph

12 commits

Karthik Chikmagalur
f58ad9435c gptel: Use libjansson support if available
Using the libjansson JSON parser gives us a modest boost in speed.
It's not as significant a speedup as it is for LSP clients since
our JSON payloads are smaller and less frequent -- but we might as
well use it.

* gptel.el (gptel--json-read, gptel--json-encode,
gptel--url-get-response, gptel--parse-response): Define macros to
use the libjansson-supported `json-parse-buffer` and
`json-serialize`.  Replace use of `json-encode` and `json-read`
appropriately.

* gptel-openai.el (gptel-curl--parse-stream): Use
`gptel--json-read` instead of `json-read`.

* gptel-ollama.el (gptel-curl--parse-stream): Use
`gptel--json-read` instead of `json-read`.

* gptel-gemini.el (gptel-curl--parse-stream): Use
`gptel--json-read` instead of `json-read`.

* gptel-curl.el (gptel-curl--get-args, gptel-curl--get-response,
gptel-curl--log-response, gptel-curl--stream-cleanup,
gptel-curl--parse-response): Use `gptel--json-read` and
`gptel--json-encode` in place of the json.el versions.

* gptel-anthropic.el (gptel-curl--parse-stream): Use
`gptel--json-read` instead of `json-read`.

* test/gptel-org-test.el: Use `gptel--json-read`.
2024-03-14 20:28:21 -07:00
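
A rough sketch of the kind of compatibility macro described in the
commit above (the `my/` prefix and the plist object type are
illustrative, not the exact gptel code):

    ;; Prefer the native, libjansson-backed `json-parse-buffer' when the
    ;; running Emacs provides it; fall back to json.el otherwise.  The
    ;; choice is made once, at macro-expansion time, so call sites pay no
    ;; runtime cost for the check.
    (defmacro my/json-read ()
      (if (fboundp 'json-parse-buffer)
          `(json-parse-buffer :object-type 'plist
                              :null-object nil
                              :false-object :json-false)
        `(progn
           (require 'json)
           (defvar json-object-type)
           (let ((json-object-type 'plist))
             (json-read)))))
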
Cash Prokop-Weaver
3935a6dcf8 ♻️: Untangle Gemini model and endpoint #212 (#213)
gptel-gemini.el (gptel-make-gemini): Decouple the Gemini model
from the API endpoint.  This is to support additional model
options in the future.
2024-03-10 21:02:42 -07:00
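
A hedged usage sketch of what the decoupling enables (key and model
names are placeholders, and the exact keyword arguments may differ
between gptel versions):

    ;; The backend now names only the endpoint; the model is supplied
    ;; separately, so more model options can be added later.
    (gptel-make-gemini "Gemini"
      :key "YOUR_GEMINI_API_KEY"
      :stream t
      :models '("gemini-pro"))
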
r0man
43f625ecb9 gptel-openai: curl-args slot in gptel-backend (#221)
gptel-openai.el (gptel-backend, gptel-make-openai,
gptel-make-azure): Add a curl-args slot to the backend struct for
additional Curl arguments.

Usage example: This can be used to set the `--cert` and `--key`
options in a custom backend that uses mutual TLS to communicate
with an OpenAI proxy/gateway.

gptel-curl.el (gptel-curl--get-args): Add backend-specific
curl-args when creating HTTP requests.

gptel-gemini.el (gptel-make-gemini): Add a curl-args slot to the
constructor.
gptel-kagi.el (gptel-make-kagi): Ditto.
gptel-ollama.el (gptel-make-ollama): Ditto.
2024-02-20 15:21:46 -08:00
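
A hypothetical backend definition for the mutual-TLS scenario mentioned
above (host, certificate paths, and model names are placeholders):

    ;; Extra Curl flags in :curl-args are passed through when gptel
    ;; builds the HTTP request for this backend.
    (gptel-make-openai "openai-proxy"
      :host "openai-gateway.example.com"
      :key "YOUR_API_KEY"
      :models '("gpt-4")
      :curl-args '("--cert" "/path/to/client.pem"
                   "--key"  "/path/to/client-key.pem"))
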
Karthik Chikmagalur
ef8b9093d2 gptel-gemini: Use permissive API safety settings
* gptel-gemini.el (gptel-make-gemini, gptel--request-data): The
Gemini API misclassifies harmless questions (like "What's 2+2",
see #208) as harmful.  Use the most permissive safety settings the
API offers.

Also respect the value of `:stream` used when defining Gemini
backends.
2024-02-07 19:03:45 -08:00
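
The change amounts to sending the most permissive safety thresholds
with every request.  The plist fragment below is a sketch based on the
public Gemini API documentation; the exact category list gptel sends
may differ:

    ;; Merged into the request payload; "BLOCK_NONE" asks the API not to
    ;; filter responses in these categories.
    '(:safetySettings
      [(:category "HARM_CATEGORY_HARASSMENT"        :threshold "BLOCK_NONE")
       (:category "HARM_CATEGORY_SEXUALLY_EXPLICIT" :threshold "BLOCK_NONE")
       (:category "HARM_CATEGORY_DANGEROUS_CONTENT" :threshold "BLOCK_NONE")
       (:category "HARM_CATEGORY_HATE_SPEECH"       :threshold "BLOCK_NONE")])
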
Karthik Chikmagalur
d0c685e501 gptel: checkdoc linting and indentation rules
* gptel.el (gptel-use-header-line, gptel--parse-buffer):
Docstrings.

* gptel-transient.el (gptel--transient-read-variable): Docstring.

* gptel-openai.el (gptel-make-openai, gptel-make-azure): Add
indent declaration.

* gptel-ollama.el (gptel-make-ollama): Add indent declaration.

* gptel-kagi.el (gptel-make-kagi): Add indent declaration.

* gptel-gemini.el (gptel-make-gemini): Add indent declaration.
2024-01-19 14:19:22 -08:00
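
For readers unfamiliar with indent declarations, a toy example of the
idea (the function is made up; the exact indent spec used by the gptel
constructors may differ):

    ;; `(declare (indent 1))' treats everything after the first argument
    ;; as a body, so keyword arguments line up under the backend name.
    (defun my/make-backend (name &rest args)
      "Toy constructor illustrating an indent declaration."
      (declare (indent 1))
      (cons name args))

    ;; With the declaration in place, calls indent like this:
    ;; (my/make-backend "Gemini"
    ;;   :key "..."
    ;;   :stream t)
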
Karthik Chikmagalur
e5357383ce gptel: Appease byte-compiler and linter
* gptel-transient.el:

* gptel-openai.el:

* gptel-gemini.el:
2023-12-29 13:19:23 -08:00
Karthik Chikmagalur
f571323174 gptel-gemini: Simulate system-message for gemini
* gptel-gemini.el (gptel--parse-buffer): The Gemini API does not
provide an explicit system message parameter.  In the interest of
providing a uniform interface, simulate this in gptel by
prepending the first user message with `gptel--system-message`.
2023-12-27 22:35:41 -08:00
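
A conceptual sketch of the simulation described above (the function
name and separator are illustrative; the real logic lives in gptel's
Gemini prompt parser):

    ;; The Gemini API (at the time) had no system-message field, so the
    ;; system message is folded into the first user turn's text.
    (defun my/fold-in-system-message (user-text system-message)
      "Return USER-TEXT with SYSTEM-MESSAGE prepended, when one is set."
      (if (and (stringp system-message) (> (length system-message) 0))
          (concat system-message "\n\n" user-text)
        user-text))
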
Karthik Chikmagalur
2e92c0303c gptel: gptel-backend-url can accept functions
* gptel.el (gptel--url-get-response): If the backend-url is a
function, call it to find the full url to query.

* gptel-gemini.el: Gemini uses different URLs for streaming and
one-shot responses.  Set the backend-url to a function to
account for the value of gptel-stream.  This is also safer than
before, as the API key is not stored as part of a static URL
string in memory.  Fix #153.

* gptel-curl.el (gptel-curl--get-args): If the backend-url is a
function, call it to find the full url to query.
2023-12-22 16:25:32 -08:00
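
A sketch of both halves of the change (the helper names and endpoint
paths are assumptions; `gptel-stream' is the real user option
consulted):

    ;; Callers resolve the URL at request time, so it may be a function.
    (defun my/resolve-backend-url (url)
      "Return URL, calling it first if it is a function."
      (if (functionp url) (funcall url) url))

    ;; A Gemini-style URL function: pick the streaming or one-shot
    ;; endpoint based on `gptel-stream', and fetch the key only when the
    ;; request is built (`my/get-api-key' is a placeholder).
    (lambda ()
      (concat "https://generativelanguage.googleapis.com/"
              "v1beta/models/gemini-pro:"
              (if gptel-stream "streamGenerateContent" "generateContent")
              "?key=" (my/get-api-key)))
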
Karthik Chikmagalur
4d01dddf7d gptel, gptel-curl: Address checkdoc warnings
* gptel.el (gptel--url-parse-response, gptel-max-tokens,
gptel-use-header-line): Address checkdoc warnings.

* gptel-curl.el (gptel-curl--parse-response, gptel-abort):
Address checkdoc warnings.

* gptel-gemini.el (gptel-make-gemini): Address checkdoc warnings.
2023-12-22 16:23:50 -08:00
Karthik Chikmagalur
38095eaed5 gptel: Fix prompt collection bug + linting
* gptel.el: Update package description.

* gptel-gemini.el (gptel--request-data, gptel--parse-buffer): Add
the model temperature to the request correctly.

* gptel-ollama.el (gptel--parse-buffer): Ensure that newlines are
trimmed correctly even when `gptel-prompt-prefix-string` and
`gptel-response-prefix-string` are absent.  Fix formatting and
linter warnings.

* gptel-openai.el (gptel--parse-buffer): Ditto.
2023-12-20 15:40:56 -08:00
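
An illustrative sketch of the trimming edge case fixed here (not
gptel's actual parser; it only shows handling an absent or empty
prefix string):

    (require 'subr-x)  ; for `string-trim'

    ;; Strip an optional prefix plus surrounding whitespace from captured
    ;; prompt text, behaving sensibly when PREFIX is nil or "".
    (defun my/trim-prefix (text prefix)
      "Return TEXT without a leading PREFIX and surrounding whitespace."
      (let ((text (string-trim text)))
        (if (and prefix
                 (> (length prefix) 0)
                 (string-prefix-p prefix text))
            (string-trim (substring text (length prefix)))
          text)))
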
Karthik Chikmagalur
3dd00a7457 gptel-gemini: Add streaming responses, simplify configuration
* gptel-gemini.el (gptel-make-gemini, gptel-curl--parse-stream,
gptel--request-data, gptel--parse-buffer): Enable streaming for
the Gemini backend, as well as the temperature and max tokens
parameters when making requests.  Simplify the user configuration
required.

* README.org: Fix formatting errors.  Update the configuration
instructions for Gemini.

This closes #149.
2023-12-20 15:17:14 -08:00
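
Configuration along the lines of the updated README (the key is a
placeholder; defaults and model names may have changed in later gptel
versions):

    ;; Register a streaming-capable Gemini backend and make it the
    ;; default for new requests.
    (setq gptel-backend (gptel-make-gemini "Gemini"
                          :key "YOUR_GEMINI_API_KEY"
                          :stream t)
          gptel-model "gemini-pro")
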
mrdylanyin
84cd7bf5a4 gptel-gemini: Add Gemini support
gptel-gemini.el (gptel--parse-response, gptel--request-data,
gptel--parse-buffer, gptel-make-gemini): Add new file and support
for the Google Gemini LLM API.  Streaming and setting model
parameters (temperature, max tokens) are not yet supported.

README: Add instructions for Gemini.
2023-12-20 13:55:43 -08:00