[ChatGPT] Enhance binding (#17320)
Signed-off-by: Artur-Fedjukevits <[email protected]>
Artur-Fedjukevits authored Nov 9, 2024
1 parent 7cab153 commit 5929ef8
Showing 18 changed files with 1,404 additions and 89 deletions.
71 changes: 51 additions & 20 deletions bundles/org.openhab.binding.chatgpt/README.md
@@ -1,24 +1,39 @@
# ChatGPT Binding

The openHAB ChatGPT Binding allows openHAB to communicate with the ChatGPT language model provided by OpenAI and to manage the openHAB system via [Function calling](https://platform.openai.com/docs/guides/function-calling).

ChatGPT is a powerful natural language processing (NLP) tool that can be used to understand and respond to a wide range of text-based commands and questions.
With this binding, users can:

- Control openHAB Devices: Manage lights, climate systems, media players, and more with natural language commands.
- Multi-language Support: Issue commands in almost any language, enhancing accessibility.
- Engage in Conversations: Have casual conversations, ask questions, and receive informative responses.
- Extended Capabilities: Utilize all other functionalities of ChatGPT, from composing creative content to answering complex questions.

This integration significantly enhances user experience, providing seamless control over smart home environments and access to the full range of ChatGPT’s capabilities.

## Supported Things

The binding supports a single thing type `account`, which corresponds to the OpenAI account that is to be used for the integration.

## Thing Configuration

The `account` thing requires the API key that allows accessing the account.
API keys can be created and managed under <https://platform.openai.com/account/api-keys>.

| Name | Type | Description | Default | Required | Advanced |
|------------------|---------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------|----------|----------|
| apiKey | text | The API key to be used for the requests | N/A | yes | no |
| temperature | decimal | A value between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. | 0.5 | no | no |
| topP | decimal | A value between 0 and 1. An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both. | 1.0 | no | yes |
| apiUrl | text | The server API where to reach the AI service | https://api.openai.com/v1/chat/completions | no | yes |
| modelUrl | text | The model url where to retrieve the available models from | https://api.openai.com/v1/models | no | yes |
| model | text | The model to be used for the HLI service | gpt-4o-mini | no | yes |
| systemMessage    | text    | A description of your openHAB system that helps the AI control your smart home. | N/A | if HLI | yes |
| maxTokens | decimal | The maximum number of tokens to generate in the completion. | 500 | no | yes |
| keepContext      | decimal | How long the HLI service retains context between requests (in minutes). | 2 | no | yes |
| contextThreshold | decimal | The maximum total number of tokens included in the context. | 10000 | no | yes |
| useSemanticModel | boolean | Use the semantic model to determine the location of an item. | true | no | yes |

The advanced parameters `apiUrl` and `modelUrl` can be used if another ChatGPT-compatible service is used, e.g. a local installation of [LocalAI](https://github.com/go-skynet/LocalAI).
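
As a sketch, a LocalAI instance on its default port could be configured like this (host, port, and the dummy key value are assumptions — LocalAI installations often do not validate the API key, but the parameter is still required):

```java
Thing chatgpt:account:local [
    apiKey="not-needed-for-localai",
    apiUrl="http://localhost:8080/v1/chat/completions",
    modelUrl="http://localhost:8080/v1/models"
]
```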

@@ -33,32 +48,41 @@ It is possible to extend the thing with further channels of type `chat`, so that

Each channel of type `chat` takes the following configuration parameters:

| Name | Type | Description | Default | Required | Advanced |
|---------------|---------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------|----------|----------|
| model | text | The model to be used for the responses. | gpt-4o | yes | no |
| systemMessage | text | The system message helps set the behavior of the assistant. | N/A | yes | no |
| temperature | decimal | A value between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. | 0.5 | no | yes |
| topP | decimal | A value between 0 and 1. An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both. | 1.0 | no | yes |
| maxTokens | decimal | The maximum number of tokens to generate in the completion. | 1000 | no | yes |


## Items Configuration

Items to be used by the HLI service must be tagged with the [ "ChatGPT" ] tag.
If no semantic model is set up, set the parameter `useSemanticModel` to `false`.
In this case, item names must follow the naming convention `<Location>_***`, for example `Kitchen_Light`. The item labels are expected to briefly describe the item in more detail.
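
For instance, with `useSemanticModel` set to `false`, a tagged items file might look like this (the item and label names are illustrative, not prescribed by the binding):

```java
// Hypothetical items following the <Location>_* naming convention
Dimmer Kitchen_Dimmer      "Kitchen main light"     [ "ChatGPT" ]
Switch Livingroom_Light    "Living room floor lamp" [ "ChatGPT" ]
Number Bedroom_Temperature "Bedroom temperature"    [ "ChatGPT" ]
```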

## Full Example

### Thing Configuration

```java
Thing chatgpt:account:1 [
    apiKey=""
] {
Channels:
Type chat : chat "Weather Advice" [
model="gpt-4o-mini",
temperature="1.5",
systemMessage="Answer briefly, in 2-3 sentences max. Behave like Eddie Murphy and give an advice for the day based on the following weather data:"
]
Type chat : morningMessage "Morning Message" [
model="gpt-4o-mini",
temperature="0.5",
systemMessage="You are Marvin, a very depressed robot. You wish a good morning and tell the current time."
]
}

```

### Item Configuration
Expand All @@ -69,8 +93,14 @@ String Morning_Message { channel="chatgpt:account:1:morningMessage" }

Number Temperature_Forecast_Low
Number Temperature_Forecast_High
Dimmer Kitchen_Dimmer "Kitchen main light" [ "ChatGPT" ]
```

### UI Configuration of the HLI Service

To enable the HLI service, go to Settings -> Voice and choose "ChatGPT Human Language Interpreter".
A text-to-speech service must be configured.
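
Alternatively, the default interpreter can be set in the runtime configuration file. A sketch for `services/runtime.cfg` (the exact interpreter id `chatgpt` is an assumption — check the value offered in the UI):

```ini
# services/runtime.cfg
org.openhab.voice:defaultHLI=chatgpt
```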

### Example Rules

```java
// (rule examples collapsed in this diff view; hunk @@ -106,3 +136,4 @@)
```

The state updates can be used for a text-to-speech output and they will give your announcements at home a personal touch.
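
As a sketch of such a rule pair (the cron schedule and the `Weather_Announcement` item name are assumptions; `say()` requires the configured text-to-speech service): sending a command to an item linked to a `chat` channel triggers a ChatGPT request, and the response arrives as a state update:

```java
rule "Request weather advice"
when
    Time cron "0 0 7 * * ?" // every morning at 07:00 (assumed schedule)
then
    // The command text is sent to ChatGPT together with the channel's
    // system message; the reply is posted back as the item's state.
    Weather_Announcement.sendCommand(
        "Forecast low: " + Temperature_Forecast_Low.state +
        " °C, high: " + Temperature_Forecast_High.state + " °C")
end

rule "Speak the advice"
when
    Item Weather_Announcement changed
then
    say(Weather_Announcement.state.toString)
end
```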

@@ -22,11 +22,13 @@
@NonNullByDefault
public class ChatGPTChannelConfiguration {

public String model = "gpt-4o-mini";

public Double temperature = 0.5;

public Double topP = 1.0;

public String systemMessage = "";

public int maxTokens = 500;
}
@@ -25,4 +25,12 @@ public class ChatGPTConfiguration {
public String apiKey = "";
public String apiUrl = "https://api.openai.com/v1/chat/completions";
public String modelUrl = "https://api.openai.com/v1/models";
public boolean useSemanticModel = true;
public String model = "gpt-4o-mini";
public Double temperature = 1.0;
public Integer maxTokens = 1000;
public Double topP = 1.0;
public String systemMessage = "";
public Integer keepContext = 2;
public Integer contextThreshold = 10000;
}