Using the libjansson JSON parser gives us a modest boost in speed.
It's not as significant a speedup as it is for LSP clients, since
our JSON payloads are smaller and less frequent -- but we might as
well use it.
* gptel.el (gptel--json-read, gptel--json-encode,
gptel--url-get-response, gptel--parse-response): Define macros that
use the libjansson-supported `json-parse-buffer` and
`json-serialize` when available. Replace uses of `json-encode` and
`json-read` accordingly; a sketch of the macros follows the file
list below.
* gptel-openai.el (gptel-curl--parse-stream): Use
`gptel--json-read` instead of `json-read`.
* gptel-ollama.el (gptel-curl--parse-stream): Use
`gptel--json-read` instead of `json-read`.
* gptel-gemini.el (gptel-curl--parse-stream): Use
`gptel--json-read` instead of `json-read`.
* gptel-curl.el (gptel-curl--get-args, gptel-curl--get-response,
gptel-curl--log-response, gptel-curl--stream-cleanup,
gptel-curl--parse-response): Use `gptel--json-read` and
`gptel--json-encode` in place of the json.el versions.
* gptel-anthropic.el (gptel-curl--parse-stream): Use
`gptel--json-read` instead of `json-read`.
* test/gptel-org-test.el: Use `gptel--json-read`.
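
These macros expand to the native functions where available and
fall back to json.el otherwise. A sketch of the read macro (the
plist object-type matches the convention used throughout gptel;
`gptel--json-encode` wraps `json-serialize` the same way -- see
gptel.el for the actual definitions):

    (defmacro gptel--json-read ()
      "Read the JSON object at point as a plist."
      (if (fboundp 'json-parse-buffer)
          ;; libjansson-backed native parser (Emacs 27+)
          `(json-parse-buffer
            :object-type 'plist
            :null-object nil
            :false-object :json-false)
        ;; Fall back to the elisp parser in json.el
        (require 'json)
        (defvar json-object-type)
        `(let ((json-object-type 'plist))
           (json-read))))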
* gptel-openai.el (gptel-backend, gptel-make-openai,
gptel-make-azure): Add a curl-args slot to the backend struct for
additional Curl arguments.
Usage example: this can be used to set the `--cert` and `--key`
options in a custom backend that uses mutual TLS to communicate
with an OpenAI proxy/gateway, as sketched after this file list.
* gptel-curl.el (gptel-curl--get-args): Add backend-specific
curl-args when creating HTTP requests.
* gptel-gemini.el (gptel-make-gemini): Add a curl-args slot to the
constructor.
* gptel-kagi.el (gptel-make-kagi): Ditto.
* gptel-ollama.el (gptel-make-ollama): Ditto.
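
For instance, a backend behind an mTLS gateway might be defined
like this (the name, host, models and file paths are placeholders):

    (gptel-make-openai "my-gateway"
      :host "llm.example.com"
      :key "YOUR-API-KEY"
      :models '("gpt-3.5-turbo" "gpt-4")
      :curl-args '("--cert" "/path/to/client-cert.pem"
                   "--key"  "/path/to/client-key.pem"))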
* gptel-gemini.el (gptel-make-gemini, gptel--request-data): The
Gemini API misclassifies harmless questions (like "What's 2+2",
see #208) as harmful. Use the most permissive safety settings the
API offers, sketched below.
Also respect the value of `:stream` used when defining Gemini
backends.
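
The permissive settings are sent as a safetySettings array in the
request payload, along these lines (category names as listed in the
Gemini API documentation):

    (:safetySettings
     [(:category "HARM_CATEGORY_HARASSMENT"        :threshold "BLOCK_NONE")
      (:category "HARM_CATEGORY_SEXUALLY_EXPLICIT" :threshold "BLOCK_NONE")
      (:category "HARM_CATEGORY_DANGEROUS_CONTENT" :threshold "BLOCK_NONE")
      (:category "HARM_CATEGORY_HATE_SPEECH"       :threshold "BLOCK_NONE")])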
* gptel-gemini.el (gptel--parse-buffer): The Gemini API does not
provide an explicit system message parameter. In the interest of
providing a uniform interface, simulate one in gptel by prepending
`gptel--system-message` to the first user message, as sketched
below.
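
A sketch of the splice, assuming each parsed prompt is a plist of
the form (:role "user" :parts [(:text "...")]); the helper name is
illustrative, the actual change lives inside `gptel--parse-buffer`:

    (defun my/gemini-prepend-system (prompts system)
      "Prepend SYSTEM to the first user message in PROMPTS."
      (when (and system prompts)
        (let ((part (aref (plist-get (car prompts) :parts) 0)))
          ;; Mutate the first part's :text in place
          (plist-put part :text
                     (concat system "\n\n" (plist-get part :text)))))
      prompts)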
* gptel.el (gptel--url-get-response): If the backend-url is a
function, call it to find the full url to query.
* gptel-gemini.el: Gemini uses different urls for
streaming/oneshot responses. Set the backend-url to a function to
account for the value of `gptel-stream`, as sketched below. This is
also safer than before, as the API key is no longer stored as part
of a static url string in memory. Fix #153.
* gptel-curl.el (gptel-curl--get-args): If the backend-url is a
function, call it to find the full url to query.
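
The dispatch at both call sites is roughly:

    ;; Sketch: resolve the url, calling it if it is a function.
    (let ((url (gptel-backend-url gptel-backend)))
      (if (functionp url) (funcall url) url))

and the Gemini backend's url slot becomes a closure along these
lines (endpoint names per the Gemini API; `gptel--get-api-key`
stands in for however the backend resolves its key at request
time):

    (lambda ()
      (concat "https://generativelanguage.googleapis.com"
              "/v1beta/models/" gptel-model
              (if gptel-stream ":streamGenerateContent" ":generateContent")
              "?key=" (gptel--get-api-key)))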
* gptel.el: Update package description.
* gptel-gemini.el (gptel--request-data, gptel--parse-buffer): Add
the model temperature to the request correctly, as sketched below.
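
A sketch of where the temperature lands (`request-plist` is an
illustrative name for the payload under construction):

    (when gptel-temperature
      (plist-put request-plist
                 :generationConfig
                 (list :temperature gptel-temperature)))

which serializes to {"generationConfig": {"temperature": ...}}, as
the Gemini API expects.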
* gptel-ollama.el (gptel--parse-buffer): Ensure that newlines are
trimmed correctly even when `gptel-prompt-prefix-string` and
`gptel-response-prefix-string` are absent. Fix formatting and
linter warnings.
* gptel-openai.el (gptel--parse-buffer): Ditto.
* gptel-gemini.el (gptel-make-gemini, gptel-curl--parse-stream,
gptel--request-data, gptel--parse-buffer): Enable streaming for
the Gemini backend, along with support for the temperature and
max-tokens parameters when making requests. Simplify the required
user configuration.
* README.org: Fix formatting errors. Update the configuration
instructions for Gemini.
This closes #149.
* gptel-gemini.el (gptel--parse-response, gptel--request-data,
gptel--parse-buffer, gptel-make-gemini): Add new file and support
for the Google Gemini LLM API. Streaming and setting model
parameters (temperature, max tokens) are not yet supported.
* README.org: Add instructions for Gemini.
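
A minimal configuration matching the README instructions might
look like this (the key is a placeholder):

    (setq gptel-model "gemini-pro"
          gptel-backend (gptel-make-gemini "Gemini"
                          :key "YOUR-GEMINI-API-KEY"))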