Update Claude 3.5 Sonnet model version to 20241022
- Update Claude 3.5 Sonnet model version from 20240620 to 20241022 across:
  - AnthropicApi model definitions
  - Integration tests
  - Sample events JSON
  - Documentation pages
- Upgrade Ollama container to 0.3.14 in tests
- Add llama3.2:1b model to Ollama tests
- Convert Ollama functionCallTest to parameterized test
tzolov committed Oct 22, 2024
1 parent 7b06fcf commit a1980ec
Showing 7 changed files with 13 additions and 11 deletions.
@@ -150,7 +150,7 @@ public AnthropicApi(String baseUrl, String anthropicApiKey, String anthropicVersion
public enum ChatModel implements ChatModelDescription {

// @formatter:off
- CLAUDE_3_5_SONNET("claude-3-5-sonnet-20240620"),
+ CLAUDE_3_5_SONNET("claude-3-5-sonnet-20241022"),

CLAUDE_3_OPUS("claude-3-opus-20240229"),
CLAUDE_3_SONNET("claude-3-sonnet-20240229"),
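A minimal, self-contained sketch (not the actual Spring AI class) of the pattern this hunk touches: the `ChatModel` enum binds the symbolic constant `CLAUDE_3_5_SONNET` to a dated model id, so callers that reference the constant pick up the new `20241022` snapshot without any code change on their side.

```java
// Hypothetical stand-in for the AnthropicApi.ChatModel enum shown in the diff.
public class ModelIdDemo {

    public enum ChatModel {
        CLAUDE_3_5_SONNET("claude-3-5-sonnet-20241022"),
        CLAUDE_3_OPUS("claude-3-opus-20240229");

        private final String value;

        ChatModel(String value) {
            this.value = value;
        }

        public String getValue() {
            return value;
        }
    }

    public static void main(String[] args) {
        // Callers use the constant, not the dated string literal.
        System.out.println(ChatModel.CLAUDE_3_5_SONNET.getValue());
    }
}
```

This is why the version bump is a one-line change per module: every call site resolves the id through the enum.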
@@ -78,7 +78,7 @@ class AnthropicChatModelIT {

@ParameterizedTest(name = "{0} : {displayName} ")
@ValueSource(strings = { "claude-3-opus-20240229", "claude-3-sonnet-20240229", "claude-3-haiku-20240307",
- "claude-3-5-sonnet-20240620" })
+ "claude-3-5-sonnet-20241022" })
void roleTest(String modelName) {
UserMessage userMessage = new UserMessage(
"Tell me about 3 famous pirates from the Golden Age of Piracy and what they did.");
@@ -260,7 +260,7 @@ void streamFunctionCallTest() {

@ParameterizedTest(name = "{0} : {displayName} ")
@ValueSource(strings = { "claude-3-opus-20240229", "claude-3-sonnet-20240229", "claude-3-haiku-20240307",
- "claude-3-5-sonnet-20240620" })
+ "claude-3-5-sonnet-20241022" })
void multiModalityEmbeddedImage(String modelName) throws IOException {

// @formatter:off
@@ -280,7 +280,7 @@ void multiModalityEmbeddedImage(String modelName) throws IOException {
@Disabled("Currently Anthropic API does not support external image URLs")
@ParameterizedTest(name = "{0} : {displayName} ")
@ValueSource(strings = { "claude-3-opus-20240229", "claude-3-sonnet-20240229", "claude-3-haiku-20240307",
- "claude-3-5-sonnet-20240620" })
+ "claude-3-5-sonnet-20241022" })
void multiModalityImageUrl(String modelName) throws IOException {

// TODO: add url method that wraps the checked exception.
@@ -6,7 +6,7 @@
"type": "message",
"role": "assistant",
"content": [],
- "model": "claude-3-5-sonnet-20240620",
+ "model": "claude-3-5-sonnet-20241022",
"stop_reason": null,
"stop_sequence": null,
"usage": {
@@ -77,7 +77,7 @@ class OllamaWithOpenAiChatModelIT {
private static final String DEFAULT_OLLAMA_MODEL = "mistral";

@Container
- static OllamaContainer ollamaContainer = new OllamaContainer("ollama/ollama:0.3.9");
+ static OllamaContainer ollamaContainer = new OllamaContainer("ollama/ollama:0.3.14");

static String baseUrl = "http://localhost:11434";

@@ -86,6 +86,7 @@ public static void beforeAll() throws IOException, InterruptedException {
logger.info("Start pulling the '" + DEFAULT_OLLAMA_MODEL + "' model ... this may take several minutes ...");
ollamaContainer.execInContainer("ollama", "pull", DEFAULT_OLLAMA_MODEL);
ollamaContainer.execInContainer("ollama", "pull", "llava");
+ ollamaContainer.execInContainer("ollama", "pull", "llama3.2:1b");
logger.info(DEFAULT_OLLAMA_MODEL + " pulling completed!");

baseUrl = "http://" + ollamaContainer.getHost() + ":" + ollamaContainer.getMappedPort(11434);
@@ -260,8 +261,9 @@ void beanStreamOutputConverterRecords() {
assertThat(actorsFilms.movies()).hasSize(5);
}

- @Test
- void functionCallTest() {
+ @ParameterizedTest(name = "{0} : {displayName} ")
+ @ValueSource(strings = { "llama3.2:1b" })
+ void functionCallTest(String modelName) {

UserMessage userMessage = new UserMessage(
"What's the weather like in San Francisco, Tokyo, and Paris? Return the temperature in Celsius.");
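A sketch of what converting `functionCallTest` to a parameterized test buys: JUnit 5 runs the same test body once per model name supplied by `@ValueSource`, so more Ollama models can be covered by extending the string array. The plain-Java stand-in below (no JUnit dependency, names are illustrative) shows the equivalent expansion.

```java
import java.util.ArrayList;
import java.util.List;

public class ParameterizedSketch {

    // Stand-in for the real test body, which would exercise the pulled
    // Ollama model; here it only records which model name was used.
    static String functionCallTest(String modelName) {
        return "functionCallTest ran with model " + modelName;
    }

    public static void main(String[] args) {
        List<String> results = new ArrayList<>();
        // Equivalent of @ValueSource(strings = { "llama3.2:1b" }): adding a
        // model name here adds a test run without touching the test body.
        for (String modelName : List.of("llama3.2:1b")) {
            results.add(functionCallTest(modelName));
        }
        results.forEach(System.out::println);
    }
}
```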
@@ -102,7 +102,7 @@ The prefix `spring.ai.anthropic.chat` is the property prefix that lets you confi
| Property | Description | Default

| spring.ai.anthropic.chat.enabled | Enable Anthropic chat model. | true
- | spring.ai.anthropic.chat.options.model | This is the Anthropic Chat model to use. Supports: `claude-3-5-sonnet-20240620`, `claude-3-opus-20240229`, `claude-3-sonnet-20240229`, `claude-3-haiku-20240307` and the legacy `claude-2.1`, `claude-2.0` and `claude-instant-1.2` models. | `claude-3-opus-20240229`
+ | spring.ai.anthropic.chat.options.model | This is the Anthropic Chat model to use. Supports: `claude-3-5-sonnet-20241022`, `claude-3-opus-20240229`, `claude-3-sonnet-20240229`, `claude-3-haiku-20240307` and the legacy `claude-2.1`, `claude-2.0` and `claude-instant-1.2` models. | `claude-3-opus-20240229`
| spring.ai.anthropic.chat.options.temperature | The sampling temperature to use that controls the apparent creativity of generated completions. Higher values will make output more random while lower values will make results more focused and deterministic. It is not recommended to modify temperature and top_p for the same completions request as the interaction of these two settings is difficult to predict. | 0.8
| spring.ai.anthropic.chat.options.max-tokens | The maximum number of tokens to generate in the chat completion. The total length of input tokens and generated tokens is limited by the model's context length. | 500
| spring.ai.anthropic.chat.options.stop-sequence | Custom text sequences that will cause the model to stop generating. Our models will normally stop when they have naturally completed their turn, which will result in a response stop_reason of "end_turn". If you want the model to stop generating when it encounters custom strings of text, you can use the stop_sequences parameter. If the model encounters one of the custom sequences, the response stop_reason value will be "stop_sequence" and the response stop_sequence value will contain the matched stop sequence. | -
@@ -195,7 +195,7 @@ Add a `application.properties` file, under the `src/main/resources` directory, t
[source,application.properties]
----
spring.ai.anthropic.api-key=YOUR_API_KEY
- spring.ai.anthropic.chat.options.model=claude-3-5-sonnet-20240620
+ spring.ai.anthropic.chat.options.model=claude-3-5-sonnet-20241022
spring.ai.anthropic.chat.options.temperature=0.7
spring.ai.anthropic.chat.options.max-tokens=450
----
@@ -4,7 +4,7 @@ TIP: Starting of Jul 1st, 2024, streaming function calling and Tool use is suppo

You can register custom Java functions with the `AnthropicChatModel` and have the Anthropic models intelligently choose to output a JSON object containing arguments to call one or many of the registered functions.
This allows you to connect the LLM capabilities with external tools and APIs.
- The `claude-3-5-sonnet-20240620`, `claude-3-opus`, `claude-3-sonnet` and `claude-3-haiku` link:https://docs.anthropic.com/claude/docs/tool-use#tool-use-best-practices-and-limitations[models are trained to detect when a function should be called] and to respond with JSON that adheres to the function signature.
+ The `claude-3-5-sonnet-20241022`, `claude-3-opus`, `claude-3-sonnet` and `claude-3-haiku` link:https://docs.anthropic.com/claude/docs/tool-use#tool-use-best-practices-and-limitations[models are trained to detect when a function should be called] and to respond with JSON that adheres to the function signature.

The Anthropic API does not call the function directly; instead, the model generates JSON that you can use to call the function in your code and return the result back to the model to complete the conversation.
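A hedged sketch of that round trip: the model never executes Java; it emits a structured function-call request, the application invokes the matching registered function, and the result is sent back to the model. All names below are illustrative stand-ins, not the Spring AI or Anthropic API.

```java
import java.util.Map;

public class ToolUseSketch {

    // Stand-in for a registered custom function.
    static String currentWeather(String location) {
        return "15 Celsius in " + location;
    }

    public static void main(String[] args) {
        // 1. The model responds with JSON-like data naming the function
        //    to call and its arguments (simulated here as a Map).
        Map<String, String> modelRequest = Map.of(
                "function", "currentWeather",
                "location", "San Francisco");

        // 2. The application dispatches to the matching Java function.
        String result = currentWeather(modelRequest.get("location"));

        // 3. The result is returned to the model to complete the conversation.
        System.out.println("tool result returned to model: " + result);
    }
}
```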

