From f5228daeeb4eccb49ceb1183e1a9c4420fd473ab Mon Sep 17 00:00:00 2001
From: Achiya Elyasaf <10044875+achiyae@users.noreply.github.com>
Date: Thu, 7 Dec 2023 22:09:59 +0200
Subject: [PATCH] Update README.md

---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index c7baa8e..445b7e5 100644
--- a/README.md
+++ b/README.md
@@ -103,8 +103,8 @@ Entry 7 & Entry 8 & Entry 9\\
 \end{table}
 ```

-## Connecting vLLM Models
-It is possible to use also locally deployed LLM models, as long as they support OpenAI Chat Completion API. vLLM models support this API (see [here](https://docs.vllm.ai/en/latest/getting_started/quickstart.html#using-openai-chat-api-with-vllm)).
+## Using non-GPT Models
+Other LLM deployments and models are supported as long as they can be accessed via the OpenAI Chat Completion API. Some examples: [vLLM models](https://docs.vllm.ai/en/latest/getting_started/quickstart.html#using-openai-chat-api-with-vllm), [LLAMA models](https://github.com/c0sogi/llama-api#usage-chat-completion), and [easyLLM](https://philschmid.github.io/easyllm/examples/chat-completion-api/). Use the plugin's JSON editor to change the URL and the model. See [Issue #8](https://github.com/bThink-BGU/LeafLLM/issues/8) for further details.
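The updated section relies only on the target server speaking the OpenAI Chat Completion API. Below is a minimal sketch of exercising such an endpoint directly from client code, assuming a local vLLM server at `http://localhost:8000/v1` and an illustrative model name; both values are assumptions, and LeafLLM itself only needs the URL and model set in its JSON editor:

```python
# Minimal sketch: calling a locally deployed, OpenAI-compatible model
# (e.g. a vLLM server). The base URL, API key, and model name are
# illustrative assumptions; substitute the values of your own deployment.
from openai import OpenAI

# vLLM's OpenAI-compatible server listens on http://localhost:8000/v1 by
# default and, unless started with an API key, accepts any placeholder.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="meta-llama/Llama-2-7b-chat-hf",  # assumed model served by vLLM
    messages=[
        {"role": "user", "content": "Summarize this LaTeX table in one sentence."},
    ],
)
print(response.choices[0].message.content)
```

Because the request and response shapes are identical to OpenAI's, swapping providers amounts to changing the base URL and model name, which is exactly what the plugin's JSON editor exposes.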