* gptel.el (gptel--url-get-response): If the backend-url is a
function, call it to find the full url to query.
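A minimal sketch of the dispatch (assuming the url slot is read with
the gptel-backend-url accessor):

  ;; If the stored url is a function, call it to obtain the url string;
  ;; otherwise use it as-is.
  (let ((url (gptel-backend-url gptel-backend)))
    (if (functionp url) (funcall url) url))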
* gptel-gemini.el: Gemini uses different urls for
streaming/oneshot responses. Set the backend-url to a function to
account for the value of gptel-stream. This is also safer than
before as the API key is not stored as part of a static url string
in memory. Fix #153.
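A sketch of such a url function (the model name is a placeholder, and
gptel--get-api-key is assumed to resolve the key at request time):

  ;; The endpoint follows gptel-stream, and the key is looked up only
  ;; when the request is built, not stored in a static string.
  (lambda ()
    (concat "https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:"
            (if gptel-stream "streamGenerateContent" "generateContent")
            "?key=" (gptel--get-api-key)))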
* gptel-curl.el (gptel-curl--get-args): If the backend-url is a
function, call it to find the full url to query.
* gptel.el: Update package description.
* gptel-gemini.el (gptel--request-data, gptel--parse-buffer): Add
model temperature to request correctly.
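A sketch of the shape of the added field (prompts-plist is a stand-in
for the request object being built):

  ;; Attach the temperature under Gemini's generationConfig key.
  (when gptel-temperature
    (setq prompts-plist
          (plist-put prompts-plist :generationConfig
                     (list :temperature gptel-temperature))))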
* gptel-ollama.el (gptel--parse-buffer): Ensure that newlines are
trimmed correctly even when `gptel-prompt-prefix-string` and
`gptel-response-prefix-string` are absent. Fix formatting and
linter warnings.
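A sketch of the intended trimming (content stands for the buffer text
being collected):

  ;; Trim surrounding whitespace plus the (possibly empty) prefix
  ;; strings; the \(?:...\)? groups make them optional.
  (string-trim content
               (format "[\t\r\n ]*\\(?:%s\\)?"
                       (regexp-quote (gptel-prompt-prefix-string)))
               (format "\\(?:%s\\)?[\t\r\n ]*"
                       (regexp-quote (gptel-response-prefix-string))))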
* gptel-openai.el (gptel--parse-buffer): Ditto.
* gptel-gemini.el (gptel-make-gemini, gptel-curl--parse-stream,
gptel--request-data, gptel--parse-buffer): Enable streaming for
the Gemini backend, as well as the temperature and max tokens
parameters when making requests. Simplify the user configuration
required.
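For instance, a working setup now reduces to something like this (the
key shown is a placeholder):

  ;; Register a Gemini backend with streaming enabled and select it.
  (setq gptel-model "gemini-pro"
        gptel-backend (gptel-make-gemini "Gemini"
                        :key "YOUR_GEMINI_API_KEY"
                        :stream t))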
* README.org: Fix formatting errors. Update the configuration
instructions for Gemini.
This closes #149.
* gptel-gemini.el (gptel--parse-response, gptel--request-data,
gptel--parse-buffer, gptel-make-gemini): Add new file and support
for the Google Gemini LLM API. Streaming and setting model
parameters (temperature, max tokens) are not yet supported.
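A sketch of the response parsing (assuming the JSON reply is read into
plists, with arrays as vectors):

  ;; Dig the generated text out of Gemini's
  ;; candidates -> content -> parts -> text structure.
  (let* ((candidate (aref (plist-get response :candidates) 0))
         (parts (plist-get (plist-get candidate :content) :parts)))
    (plist-get (aref parts 0) :text))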
* README.org: Add instructions for Gemini.