Commit
Update version to v0.0.107
GitHub Actions committed Dec 20, 2024
1 parent 5d90d31 commit f28a130
Showing 15 changed files with 477 additions and 8 deletions.
2 changes: 1 addition & 1 deletion docs/capabilities/batch.md
@@ -10,7 +10,7 @@ import TabItem from '@theme/TabItem';

A batch is composed of a list of API requests. The structure of an individual request includes:

-- A unique `custom_id` for identifying each request and referening results after completion
+- A unique `custom_id` for identifying each request and referencing results after completion
- A `body` object with message information

Here's an example of how to structure a batch request:
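The full example is collapsed in this diff view, but based on the structure described above (a unique `custom_id` plus a `body` with message information), a batch entry can be sketched as follows. This is a minimal illustration, not the documented example: the `max_tokens` field and the message contents are assumptions, and real batch files are typically JSON Lines, one request object per line.

```python
import json

# Hypothetical batch entries; field names follow the structure described
# above: a unique custom_id plus a body with message information.
# max_tokens and the message text are illustrative assumptions.
requests = [
    {
        "custom_id": "0",
        "body": {
            "max_tokens": 100,
            "messages": [{"role": "user", "content": "What is 2+2?"}],
        },
    },
    {
        "custom_id": "1",
        "body": {
            "max_tokens": 100,
            "messages": [{"role": "user", "content": "Summarize this text."}],
        },
    },
]

# Serialize as JSON Lines: one request object per line.
jsonl = "\n".join(json.dumps(r) for r in requests)
print(jsonl)
```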
3 changes: 3 additions & 0 deletions docs/getting-started/changelog.mdx
@@ -21,6 +21,9 @@ November 6, 2024
- `frequency_penalty`: penalizes the repetition of words based on their frequency in the generated text
- `n`: number of completions to return for each request; input tokens are only billed once.

+November 6, 2024
+- We downscaled the temperature parameter of `pixtral-12b`, `ministral-3b-2410`, and `ministral-8b-2410` by a multiplier of 0.43 to improve consistency, quality, and unify model behavior.
+
October 9, 2024
- We released Ministral 3B (`ministral-3b-2410`) and Ministral 8B (`ministral-8b-2410`).

3 changes: 3 additions & 0 deletions docs/getting-started/glossary.mdx
@@ -84,3 +84,6 @@ allowing the model to understand and generate language more effectively.
Mistral AI Embeddings API offers cutting-edge, state-of-the-art embeddings for text,
which can be used for many NLP tasks. Check out our [Embeddings](/capabilities/embeddings) guide
to learn more.
+
+## Temperature
+Temperature is a fundamental sampling parameter in LLMs that controls the randomness and diversity of the generated outputs. Lower temperature values result in more deterministic and accurate responses, while higher values introduce more creativity and randomness. The parameter scales the logits before the softmax function normalizes them into a probability distribution: higher temperatures flatten the distribution, making less likely tokens more probable, while lower temperatures sharpen it, favoring the most likely tokens. Adjusting the temperature allows for tailoring the model's behavior to suit different applications, such as requiring high accuracy for tasks like mathematics or classification, or enhancing creativity for tasks like brainstorming or writing novels. Balancing creativity and coherence is crucial, as increasing the temperature can also introduce inaccuracies. Some models, such as `pixtral-12b`, `ministral-3b-2410`, `ministral-8b-2410`, and `open-mistral-nemo`, apply a factor of 0.43 to the temperature when used via our services, to align better with how it impacts other models and unify model behavior.
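The temperature-scaled softmax described in the glossary entry above can be sketched as follows. This is a minimal illustration of the general mechanism, not service code; the function name and example logits are made up for the demo.

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Convert raw logits into a probability distribution.

    Lower temperature sharpens the distribution (favors the top token);
    higher temperature flattens it (spreads mass to less likely tokens).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Illustrative logits for a three-token vocabulary.
logits = [2.0, 1.0, 0.1]

# With the 0.43 multiplier from the changelog entry, a user-supplied
# temperature of 1.0 behaves like an effective temperature of 0.43.
p_user = softmax_with_temperature(logits, 1.0)
p_effective = softmax_with_temperature(logits, 1.0 * 0.43)

# The downscaled temperature concentrates more probability on the top token.
assert p_effective[0] > p_user[0]
```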
2 changes: 1 addition & 1 deletion docs/guides/contribute/_category_.json
@@ -1,6 +1,6 @@
{
"label": "How to contribute",
-"position": 1.9,
+"position": 1.10,
"link": {
"type": "doc",
"id": "contribute_overview"
1 change: 0 additions & 1 deletion docs/guides/contribute/overview.md
@@ -4,7 +4,6 @@ title: Contribute
slug: overview
---

-
# How to contribute

Thank you for your interest in contributing to Mistral AI. We welcome everyone who wishes to contribute and we appreciate your time and effort!
2 changes: 1 addition & 1 deletion docs/guides/evaluation.md
@@ -1,7 +1,7 @@
---
id: evaluation
title: Evaluation
-sidebar_position: 1.6
+sidebar_position: 1.7
---

<a target="_blank" href="https://colab.research.google.com/github/mistralai/cookbook/blob/main/mistral/evaluation/evaluation.ipynb">
2 changes: 1 addition & 1 deletion docs/guides/finetuning.mdx
@@ -1,7 +1,7 @@
---
id: finetuning
title: Fine-tuning
-sidebar_position: 1.5
+sidebar_position: 1.6
---
:::warning[ ]
There's a monthly storage fee of $2 for each model. For more detailed pricing information, please visit our [pricing page](https://mistral.ai/technology/#pricing).
2 changes: 1 addition & 1 deletion docs/guides/observability.md
@@ -2,7 +2,7 @@
id: observability
title: Observability
slug: observability
-sidebar_position: 1.7
+sidebar_position: 1.8
---

## Why observability?
2 changes: 1 addition & 1 deletion docs/guides/other-resources.mdx
@@ -2,7 +2,7 @@
id: other_resources
title: Other resources
slug: resources
-sidebar_position: 1.8
+sidebar_position: 1.9
---

Visit the [Mistral AI Cookbook](https://github.com/mistralai/cookbook) for additional inspiration,