From 32dd463bd643d3527d14abe09f54cca6846ad006 Mon Sep 17 00:00:00 2001
From: Karthik Chikmagalur
Date: Mon, 25 Dec 2023 18:20:29 -0800
Subject: [PATCH] README: Mention YouTube demo

* gptel.el: Change blurb

* README.org: Mention YouTube demo
---
 README.org | 24 +++++++++++++-----------
 gptel.el   |  2 +-
 2 files changed, 14 insertions(+), 12 deletions(-)

diff --git a/README.org b/README.org
index 450db90..b6458db 100644
--- a/README.org
+++ b/README.org
@@ -14,7 +14,7 @@ GPTel is a simple Large Language Model chat client for Emacs, with support for m
 | PrivateGPT | Planned | - |
 | Llama.cpp  | Planned | - |
 
-*General usage*:
+*General usage*: ([[https://www.youtube.com/watch?v=bsRnh_brggM][YouTube Demo]])
 
 https://user-images.githubusercontent.com/8607532/230516812-86510a09-a2fb-4cbd-b53f-cc2522d05a13.mp4
 
@@ -223,6 +223,8 @@ You can pick this backend from the transient menu when using gptel (see Usage),
 
 ** Usage
 
+(This is also a [[https://www.youtube.com/watch?v=bsRnh_brggM][video demo]] showing various uses of gptel.)
+
 |-------------------+-------------------------------------------------------------------------|
 | *Command*         | Description                                                             |
 |-------------------+-------------------------------------------------------------------------|
@@ -327,16 +329,16 @@ Other Emacs clients for LLMs prescribe the format of the interaction (a comint s
 | =gptel-api-key=           | Variable/function that returns the API key for the active backend.  |
 |---------------------------+---------------------------------------------------------------------|
 
-|---------------------------+---------------------------------------------------------------------|
-| *LLM options*             | /(Note: not supported uniformly across LLMs)/                       |
-|---------------------------+---------------------------------------------------------------------|
-| =gptel-backend=           | Default LLM Backend.                                                |
-| =gptel-model=             | Default model to use (depends on the backend).                      |
-| =gptel-stream=            | Enable streaming responses (overrides backend-specific preference). |
-| =gptel-directives=        | Alist of system directives, can switch on the fly.                  |
-| =gptel-max-tokens=        | Maximum token count (in query + response).                          |
-| =gptel-temperature=       | Randomness in response text, 0 to 2.                                |
-|---------------------------+---------------------------------------------------------------------|
+|-------------------+---------------------------------------------------------|
+| *LLM options*     | /(Note: not supported uniformly across LLMs)/           |
+|-------------------+---------------------------------------------------------|
+| =gptel-backend=   | Default LLM Backend.                                    |
+| =gptel-model=     | Default model to use, depends on the backend.           |
+| =gptel-stream=    | Enable streaming responses, if the backend supports it. |
+| =gptel-directives= | Alist of system directives, can switch on the fly.     |
+| =gptel-max-tokens= | Maximum token count (in query + response).             |
+| =gptel-temperature= | Randomness in response text, 0 to 2.                  |
+|-------------------+---------------------------------------------------------|
 
 |-----------------------------+----------------------------------------|
 | *Chat UI options*           |                                        |
diff --git a/gptel.el b/gptel.el
index 79e805e..9252288 100644
--- a/gptel.el
+++ b/gptel.el
@@ -1,4 +1,4 @@
-;;; gptel.el --- A simple multi-LLM client -*- lexical-binding: t; -*-
+;;; gptel.el --- Interact with ChatGPT or other LLMs -*- lexical-binding: t; -*-
 
 ;; Copyright (C) 2023 Karthik Chikmagalur
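
(Editor's note, not part of the patch: the LLM options listed in the README table are ordinary Emacs user options. A minimal configuration sketch in Emacs Lisp; the model name and values below are illustrative assumptions, not gptel's defaults:)

```emacs-lisp
;; Illustrative settings for the options documented in the table above.
;; The model string is an example -- valid values depend on gptel-backend.
(setq gptel-model "gpt-3.5-turbo" ; default model, depends on the backend
      gptel-stream t              ; stream responses, if the backend supports it
      gptel-max-tokens 500        ; cap on token count in query + response
      gptel-temperature 0.7)      ; randomness in response text, 0 to 2
```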