Commit graph

3 commits

Karthik Chikmagalur
38095eaed5 gptel: Fix prompt collection bug + linting
* gptel.el: Update package description.

* gptel-gemini.el (gptel--request-data, gptel--parse-buffer): Add the
model temperature to the request correctly.

* gptel-ollama.el (gptel--parse-buffer): Ensure that newlines are
trimmed correctly even when `gptel-prompt-prefix-string` and
`gptel-response-prefix-string` are absent.  Fix formatting and
linter warnings.

* gptel-openai.el (gptel--parse-buffer): Ditto.
2023-12-20 15:40:56 -08:00
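
The temperature change above concerns gptel's `gptel-temperature` user option, whose value the Gemini backend should now place in the request payload correctly. A minimal usage sketch (the numeric values are illustrative, not taken from the commit):

    ;; Minimal sketch: set the sampling temperature that gptel sends along
    ;; with requests.  The values shown are illustrative only.
    (setq gptel-temperature 0.7)        ; global default
    (setq-local gptel-temperature 0.2)  ; or per chat buffer
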
Karthik Chikmagalur
3dd00a7457 gptel-gemini: Add streaming responses, simplify configuration
* gptel-gemini.el (gptel-make-gemini, gptel-curl--parse-stream,
gptel--request-data, gptel--parse-buffer): Enable streaming for
the Gemini backend, and add support for the temperature and
max-tokens parameters when making requests.  Simplify the
required user configuration.

* README.org: Fix formatting errors.  Update the configuration
instructions for Gemini.

This closes #149.
2023-12-20 15:17:14 -08:00
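
The simplified Gemini setup described in the commit above amounts to registering the backend once with `gptel-make-gemini`. A sketch of what a user's init file might contain, assuming only an API key and a streaming flag are needed (the key and model name are placeholders):

    ;; Sketch of the simplified configuration (placeholder key and model
    ;; name): register a Gemini backend with streaming enabled and,
    ;; optionally, make it the default backend for gptel buffers.
    (setq gptel-model   "gemini-pro"
          gptel-backend (gptel-make-gemini "Gemini"
                          :key "YOUR_GEMINI_API_KEY"
                          :stream t))
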
mrdylanyin
84cd7bf5a4 gptel-gemini: Add Gemini support
* gptel-gemini.el (gptel--parse-response, gptel--request-data,
gptel--parse-buffer, gptel-make-gemini): Add new file and support
for the Google Gemini LLM API.  Streaming and setting model
parameters (temperature, max tokens) are not yet supported.

* README.org: Add instructions for Gemini.
2023-12-20 13:55:43 -08:00
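
For context, the Google Gemini generateContent endpoint that this new backend talks to expects a JSON body roughly like the plist below once serialized. This is a sketch of the payload shape, not code taken from the commit (the prompt text is illustrative):

    ;; Rough shape of a non-streaming Gemini request body, written as the
    ;; kind of plist Emacs can serialize to JSON (prompt text illustrative):
    '(:contents [(:role "user"
                  :parts [(:text "Hello, Gemini")])])
    ;; Model parameters such as temperature travel in a separate
    ;; "generationConfig" object, which the newer commits above add.
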