1.2.3 (#101)
* 1.2.3

* wip

* Update ActorTestAppServer.kt

* wip

* 1.2.3

* wip

* wip

* wip

* Update uiHandlers.js
acharneski authored Sep 15, 2024
1 parent 3409209 commit 68fc375
Showing 44 changed files with 1,687 additions and 521 deletions.
6 changes: 3 additions & 3 deletions README.md
@@ -76,18 +76,18 @@ Maven:
<dependency>
<groupId>com.simiacryptus</groupId>
<artifactId>skyenet-webui</artifactId>
- <version>1.1.2</version>
+ <version>1.1.4</version>
</dependency>
```

Gradle:

```groovy
- implementation group: 'com.simiacryptus', name: 'skyenet', version: '1.1.2'
+ implementation group: 'com.simiacryptus', name: 'skyenet', version: '1.1.4'
```

```kotlin
implementation("com.simiacryptus:skyenet:1.1.2")
implementation("com.simiacryptus:skyenet:1.1.4")
```
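
Note that the Maven snippet above references the `skyenet-webui` artifact while the Gradle examples use `skyenet`; if the web UI module is what you need from Gradle, the equivalent Kotlin DSL declaration would presumably be:

```kotlin
// Assumed Gradle equivalent of the Maven coordinates shown above; only the artifact name differs.
implementation("com.simiacryptus:skyenet-webui:1.1.4")
```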

### 🌟 To Use
2 changes: 1 addition & 1 deletion core/build.gradle.kts
@@ -33,7 +33,7 @@ val hsqldb_version = "2.7.2"

dependencies {

- implementation(group = "com.simiacryptus", name = "jo-penai", version = "1.1.2")
+ implementation(group = "com.simiacryptus", name = "jo-penai", version = "1.1.4")
implementation(group = "org.hsqldb", name = "hsqldb", version = hsqldb_version)

implementation("org.apache.commons:commons-text:1.11.0")
12 changes: 6 additions & 6 deletions core/src/main/dev_documentation.md
@@ -444,7 +444,7 @@ sequenceDiagram
```kotlin
val imageActor = ImageActor(
prompt = "Transform the user request into an image generation prompt that the user will like",
- textModel = ChatModels.GPT3_5_Turbo,
+ textModel = OpenAIModels.GPT3_5_Turbo,
imageModel = ImageModels.DallE2,
temperature = 0.3,
width = 1024,
@@ -757,8 +757,8 @@ Creates a new instance of `ParsedActor<T>` with the specified OpenAI model.
val actor = ParsedActor(
resultClass = MyClass::class.java,
prompt = "Please parse the following message:",
- model = ChatModels.GPT_3_5_Turbo,
- parsingModel = ChatModels.GPT_3_5_Turbo
+ model = OpenAIModels.GPT_3_5_Turbo,
+ parsingModel = OpenAIModels.GPT_3_5_Turbo
)

val api = OpenAIClient("your_api_key")
@@ -1388,7 +1388,7 @@ Intercepts calls to the `render` method of the `TextToSpeechActor`. It allows fo

```kotlin
// Create an instance of TextToSpeechActor
- val originalActor = TextToSpeechActor("actorName", audioModel, "alloy", 1.0, ChatModels.GPT35Turbo)
+ val originalActor = TextToSpeechActor("actorName", audioModel, "alloy", 1.0, OpenAIModels.GPT35Turbo)

// Define a function wrapper to intercept calls
val interceptor = FunctionWrapper()
@@ -1452,7 +1452,7 @@ classDiagram

#### Methods

- - `override fun actorFactory(prompt: String): CodingActor`: Creates an instance of `CodingActor` with the specified `interpreterClass`, `details` (prompt), and the model set to `ChatModels.GPT35Turbo`.
+ - `override fun actorFactory(prompt: String): CodingActor`: Creates an instance of `CodingActor` with the specified `interpreterClass`, `details` (prompt), and the model set to `OpenAIModels.GPT35Turbo`.
- `override fun getPrompt(actor: BaseActor<CodeRequest, CodeResult>): String`: Retrieves the prompt details from the given `CodingActor` instance.
- `override fun resultMapper(result: CodeResult): String`: Maps the `CodeResult` to its `code` property, effectively extracting the generated code snippet.
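
The `actorFactory` override described above might look roughly like the following sketch. It is inferred from the bullet list, not copied from the repository: the `interpreterClass` field and the exact parameter names are assumptions.

```kotlin
// Sketch only: mirrors the description above and the CodingActor constructor
// changed elsewhere in this commit; the real test base may differ in detail.
override fun actorFactory(prompt: String): CodingActor = CodingActor(
    interpreterClass = interpreterClass,
    details = prompt,
    model = OpenAIModels.GPT35Turbo,
)
```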

@@ -2761,7 +2761,7 @@ This test method validates the functionality of the `incrementUsage` method with

1. **Setup**: A test user is created with predefined attributes. A session ID is generated using `StorageInterface.newGlobalID()`. A predefined usage object is created to simulate the consumption of resources.

- 2. **Action**: The `incrementUsage` method of the `UsageInterface` implementation is called with the session ID, test user, a model (in this case, `ChatModels.GPT35Turbo`), and the predefined usage object.
+ 2. **Action**: The `incrementUsage` method of the `UsageInterface` implementation is called with the session ID, test user, a model (in this case, `OpenAIModels.GPT35Turbo`), and the predefined usage object.

3. **Verification**:
- The method `getSessionUsageSummary` is called with the session ID to retrieve the usage summary for the session. The test verifies that the returned usage summary matches the predefined usage object.
@@ -7,6 +7,7 @@ import com.simiacryptus.jopenai.OpenAIClient
import com.simiacryptus.jopenai.describe.AbbrevWhitelistYamlDescriber
import com.simiacryptus.jopenai.describe.TypeDescriber
import com.simiacryptus.jopenai.models.ChatModels
+ import com.simiacryptus.jopenai.models.OpenAIModels
import com.simiacryptus.jopenai.models.OpenAITextModel
import com.simiacryptus.jopenai.util.ClientUtil.toContentList
import com.simiacryptus.skyenet.core.OutputInterceptor
@@ -25,7 +26,7 @@ open class CodingActor(
name: String? = interpreterClass.simpleName,
val details: String? = null,
model: OpenAITextModel,
- val fallbackModel: ChatModels = ChatModels.GPT4o,
+ val fallbackModel: ChatModels = OpenAIModels.GPT4o,
temperature: Double = 0.1,
val runtimeSymbols: Map<String, Any> = mapOf()
) : BaseActor<CodingActor.CodeRequest, CodingActor.CodeResult>(
@@ -7,6 +7,7 @@ import com.simiacryptus.jopenai.OpenAIClient
import com.simiacryptus.jopenai.describe.AbbrevWhitelistYamlDescriber
import com.simiacryptus.jopenai.describe.TypeDescriber
import com.simiacryptus.jopenai.models.ChatModels
+ import com.simiacryptus.jopenai.models.OpenAIModels
import com.simiacryptus.jopenai.models.OpenAITextModel
import com.simiacryptus.jopenai.util.ClientUtil.toContentList
import com.simiacryptus.jopenai.util.JsonUtil
@@ -17,9 +18,9 @@ open class ParsedActor<T : Any>(
val exampleInstance: T? = resultClass?.getConstructor()?.newInstance(),
prompt: String = "",
name: String? = resultClass?.simpleName,
- model: OpenAITextModel = ChatModels.GPT4o,
+ model: OpenAITextModel = OpenAIModels.GPT4o,
temperature: Double = 0.3,
- val parsingModel: OpenAITextModel = ChatModels.GPT35Turbo,
+ val parsingModel: OpenAITextModel = OpenAIModels.GPT4oMini,
val deserializerRetries: Int = 2,
open val describer: TypeDescriber = object : AbbrevWhitelistYamlDescriber(
"com.simiacryptus", "com.github.simiacryptus"
@@ -85,9 +86,9 @@ open class ParsedActor<T : Any>(
ApiModel.ChatMessage(role = ApiModel.Role.user, content = "The user message to parse:\n\n$input".toContentList()),
),
temperature = temperature,
- model = model.modelName,
+ model = parsingModel.modelName,
),
- model = model,
+ model = parsingModel,
).choices.first().message?.content
var contentUnwrapped = content?.trim() ?: throw RuntimeException("No response")

@@ -144,4 +145,4 @@ open class ParsedActor<T : Any>(
companion object {
private val log = org.slf4j.LoggerFactory.getLogger(ParsedActor::class.java)
}
- }
+ }
@@ -16,7 +16,7 @@ open class DataStorage(
) : StorageInterface {

init {
log.info("Data directory: ${dataDir.absolutePath}", RuntimeException())
log.debug("Data directory: ${dataDir.absolutePath}", RuntimeException())
}

override fun getMessages(
@@ -2,6 +2,7 @@ package com.simiacryptus.skyenet.core.platform.test

import com.simiacryptus.jopenai.ApiModel
import com.simiacryptus.jopenai.models.ChatModels
+ import com.simiacryptus.jopenai.models.OpenAIModels
import com.simiacryptus.skyenet.core.platform.StorageInterface
import com.simiacryptus.skyenet.core.platform.UsageInterface
import com.simiacryptus.skyenet.core.platform.User
@@ -24,7 +25,7 @@ abstract class UsageTest(private val impl: UsageInterface) {

@Test
fun `incrementUsage should increment usage for session`() {
- val model = ChatModels.GPT35Turbo
+ val model = OpenAIModels.GPT4oMini
val session = StorageInterface.newGlobalID()
val usage = ApiModel.Usage(
prompt_tokens = 10,
@@ -40,7 +41,7 @@

@Test
fun `getUserUsageSummary should return correct usage summary`() {
- val model = ChatModels.GPT35Turbo
+ val model = OpenAIModels.GPT4oMini
val session = StorageInterface.newGlobalID()
val usage = ApiModel.Usage(
prompt_tokens = 15,
@@ -54,7 +55,7 @@

@Test
fun `clear should reset all usage data`() {
- val model = ChatModels.GPT35Turbo
+ val model = OpenAIModels.GPT4oMini
val session = StorageInterface.newGlobalID()
val usage = ApiModel.Usage(
prompt_tokens = 20,
@@ -71,8 +72,8 @@

@Test
fun `incrementUsage should handle multiple models correctly`() {
- val model1 = ChatModels.GPT35Turbo
- val model2 = ChatModels.GPT4Turbo
+ val model1 = OpenAIModels.GPT4oMini
+ val model2 = OpenAIModels.GPT4Turbo
val session = StorageInterface.newGlobalID()
val usage1 = ApiModel.Usage(
prompt_tokens = 10,
@@ -96,7 +97,7 @@

@Test
fun `incrementUsage should accumulate usage for the same model`() {
- val model = ChatModels.GPT35Turbo
+ val model = OpenAIModels.GPT4oMini
val session = StorageInterface.newGlobalID()
val usage1 = ApiModel.Usage(
prompt_tokens = 10,
@@ -6,6 +6,7 @@ import com.simiacryptus.jopenai.audio.AudioRecorder
import com.simiacryptus.jopenai.audio.LookbackLoudnessWindowBuffer
import com.simiacryptus.jopenai.audio.TranscriptionProcessor
import com.simiacryptus.jopenai.models.ChatModels
+ import com.simiacryptus.jopenai.models.OpenAIModels
import com.simiacryptus.jopenai.proxy.ChatProxy
import org.slf4j.LoggerFactory
import java.util.*
@@ -38,7 +39,7 @@ open class Ears(
open val commandRecognizer = ChatProxy(
clazz = CommandRecognizer::class.java,
api = api,
- model = ChatModels.GPT35Turbo,
+ model = OpenAIModels.GPT4oMini,
).create()

open fun timeout(ms: Long): () -> Boolean {
22 changes: 11 additions & 11 deletions docs/core_user_documentation.md
@@ -255,7 +255,7 @@ The `BaseActor` class abstracts the process of sending prompts to an OpenAI mode

- `prompt`: The initial prompt or question to be sent to the model.
- `name`: An optional name for the actor. Useful for identification purposes in more complex scenarios.
- - `model`: The OpenAI model to be used. Defaults to `ChatModels.GPT35Turbo`.
+ - `model`: The OpenAI model to be used. Defaults to `OpenAIModels.GPT35Turbo`.
- `temperature`: Controls the randomness of the model's responses. Lower values make responses more deterministic.


@@ -349,8 +349,8 @@ To create an instance of `CodingActor`, you need to provide the following parame
- `describer`: An instance of `TypeDescriber` used to describe the types of the provided symbols.
- `name`: An optional name for the actor.
- `details`: Optional additional details to be included in the prompt sent to the OpenAI API.
- - `model`: The OpenAI model to be used for code generation (default is `ChatModels.GPT35Turbo`).
- - `fallbackModel`: A fallback OpenAI model to be used in case of failure with the primary model (default is `ChatModels.GPT4o`).
+ - `model`: The OpenAI model to be used for code generation (default is `OpenAIModels.GPT35Turbo`).
+ - `fallbackModel`: A fallback OpenAI model to be used in case of failure with the primary model (default is `OpenAIModels.GPT4o`).
- `temperature`: The temperature parameter for the OpenAI API requests (default is `0.1`).
- `runtimeSymbols`: Additional symbols to be added at runtime.

@@ -362,7 +362,7 @@ val codingActor = CodingActor(
symbols = mapOf("exampleSymbol" to Any()),
name = "MyCodingActor",
details = "This is a detailed description of what the actor does.",
- model = ChatModels.GPT35Turbo
+ model = OpenAIModels.GPT35Turbo
)
```
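
Per the parameter list above, `fallbackModel` (default `OpenAIModels.GPT4o`) is tried when the primary model fails. Spelling those defaults out explicitly would presumably look like the sketch below, which omits `interpreterClass` just as the example above does:

```kotlin
// Sketch based on the documented parameters; defaults are written out for clarity.
val resilientCodingActor = CodingActor(
    symbols = mapOf("exampleSymbol" to Any()),
    name = "MyCodingActor",
    details = "This is a detailed description of what the actor does.",
    model = OpenAIModels.GPT35Turbo,
    fallbackModel = OpenAIModels.GPT4o, // used only if the primary model fails
    temperature = 0.1
)
```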

@@ -435,7 +435,7 @@ To create an instance of `ImageActor`, you can use the following constructor:
val imageActor = ImageActor(
prompt = "Transform the user request into an image generation prompt that the user will like",
name = null,
- textModel = ChatModels.GPT35Turbo,
+ textModel = OpenAIModels.GPT35Turbo,
imageModel = ImageModels.DallE2,
temperature = 0.3,
width = 1024,
@@ -474,7 +474,7 @@ You can customize the `ImageActor` by changing its model settings:
- To change the chat model:

```kotlin
- val newChatModel: ChatModels = ChatModels.GPT4
+ val newChatModel: ChatModels = OpenAIModels.GPT4
val updatedActor = imageActor.withModel(newChatModel)
```
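
The same `withModel` pattern presumably applies to the image side as well; the sketch below assumes an overload that accepts an `ImageModels` value and reuses `ImageModels.DallE2`, the only image model named in this commit:

```kotlin
// Sketch only: assumes a withModel overload for ImageModels, mirroring the chat-model call above.
val dalle2Actor = imageActor.withModel(ImageModels.DallE2)
```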

@@ -649,9 +649,9 @@ The `ParsedActor` class is a specialized actor designed to parse responses from
- `parserClass`: The class of the parser function used to convert chat model responses into the desired data type.
- `prompt`: The initial prompt to send to the chat model.
- `name`: An optional name for the actor. Defaults to the simple name of the parser class if not provided.
- - `model`: The chat model to use for generating responses. Defaults to `ChatModels.GPT35Turbo`.
+ - `model`: The chat model to use for generating responses. Defaults to `OpenAIModels.GPT35Turbo`.
- `temperature`: The temperature setting for the chat model, affecting the randomness of responses. Defaults to `0.3`.
- - `parsingModel`: The chat model to use specifically for parsing responses. Defaults to `ChatModels.GPT35Turbo`.
+ - `parsingModel`: The chat model to use specifically for parsing responses. Defaults to `OpenAIModels.GPT35Turbo`.
- `deserializerRetries`: The number of retries for deserialization in case of parsing errors. Defaults to `2`.
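
Putting the parameters above together, a construction that separates the response model from the parsing model might look like the sketch below; `WeatherReport` is a placeholder result class, not part of the library:

```kotlin
// Sketch only: parameter names follow the ParsedActor constructor changed in this commit.
data class WeatherReport(var summary: String? = null, var temperatureC: Double? = null)

val weatherActor = ParsedActor(
    resultClass = WeatherReport::class.java,
    prompt = "Please parse the following message:",
    model = OpenAIModels.GPT4o,            // generates the free-text answer
    parsingModel = OpenAIModels.GPT4oMini, // only structures the answer into WeatherReport
    deserializerRetries = 2
)
```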


@@ -1349,7 +1349,7 @@ The `SimpleActor` class is part of the `com.simiacryptus.skyenet.core.actors` pa

- `prompt`: A `String` representing the initial prompt or context to be sent to the model.
- `name`: An optional `String` parameter that specifies the name of the actor. It defaults to `null` if not provided.
- - `model`: Specifies the model to be used for generating responses. It defaults to `ChatModels.GPT35Turbo`.
+ - `model`: Specifies the model to be used for generating responses. It defaults to `OpenAIModels.GPT35Turbo`.
- `temperature`: A `Double` value that controls the randomness of the model's responses. Lower values make the model more deterministic. It defaults to `0.3`.


@@ -1405,7 +1405,7 @@ Creates a new instance of `SimpleActor` with the specified model while retaining
val simpleActor = SimpleActor(
prompt = "Hello, how can I assist you today?",
name = "Assistant",
- model = ChatModels.GPT35Turbo,
+ model = OpenAIModels.GPT35Turbo,
temperature = 0.3
)

@@ -1530,7 +1530,7 @@ The `ParsedActorTestBase` class is designed to streamline the process of testing
override fun actorFactory(prompt: String): ParsedActor
```

- - **Description**: Creates an instance of `ParsedActor` with the specified prompt and parser class. The parsing model is set to `ChatModels.GPT35Turbo` by default.
+ - **Description**: Creates an instance of `ParsedActor` with the specified prompt and parser class. The parsing model is set to `OpenAIModels.GPT35Turbo` by default.
- **Parameters**:
- `prompt`: A string representing the prompt to be used by the actor.
- **Returns**: An instance of `ParsedActor` configured with the provided prompt and parser.
2 changes: 1 addition & 1 deletion gradle.properties
@@ -1,5 +1,5 @@
# Gradle Releases -> https://github.com/gradle/gradle/releases
libraryGroup = com.simiacryptus.skyenet
- libraryVersion = 1.2.2
+ libraryVersion = 1.2.3
gradleVersion = 7.6.1
kotlin.daemon.jvmargs=-Xmx2g
2 changes: 1 addition & 1 deletion webui/build.gradle.kts
@@ -36,7 +36,7 @@ val jackson_version = "2.17.2"

dependencies {

- implementation(group = "com.simiacryptus", name = "jo-penai", version = "1.1.2") {
+ implementation(group = "com.simiacryptus", name = "jo-penai", version = "1.1.4") {
exclude(group = "org.slf4j")
}

6 changes: 3 additions & 3 deletions webui/src/compiled_documentation.md
@@ -219,7 +219,7 @@ val codingAgent = CodingAgent(
symbols = mapOf("exampleSymbol" to Any()),
temperature = 0.1,
details = "Optional details",
- model = ChatModels.GPT35Turbo
+ model = OpenAIModels.GPT35Turbo
)

// Start the code generation process with a user message
@@ -279,7 +279,7 @@ interpreter: KClass<T>,
symbols: Map<String, Any>,
temperature: Double = 0.1,
details: String?,
- model: ChatModels = ChatModels.GPT35Turbo,
+ model: ChatModels = OpenAIModels.GPT35Turbo,
actorMap: Map<ActorTypes, CodingActor>
)
```
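
Notice that the `model` parameter keeps its `ChatModels` type while its default now comes from `OpenAIModels`, which suggests the `OpenAIModels` entries satisfy the `ChatModels` type; callers can presumably pass either spelling. A minimal illustration of that assumption:

```kotlin
// Sketch: relies on the assumption above that OpenAIModels entries are usable as ChatModels values.
val defaultAgentModel: ChatModels = OpenAIModels.GPT35Turbo
```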
@@ -1041,7 +1041,7 @@ sending and receiving messages, processing user inputs, and generating responses
### Constructor Parameters

- `session`: The current user session.
- - `model`: The OpenAI text model to use for generating responses. Default is `ChatModels.GPT35Turbo`.
+ - `model`: The OpenAI text model to use for generating responses. Default is `OpenAIModels.GPT35Turbo`.
- `userInterfacePrompt`: A prompt displayed to the user at the start of the chat session.
- `initialAssistantPrompt`: The initial message from the assistant. Default is an empty string.
- `systemPrompt`: A system-level prompt that influences the assistant's responses.