[Serverless] Adds Trained model autoscaling page #139

Merged
merged 21 commits into main from add-serverless-model-autoscaling-doc on Nov 12, 2024

Commits
c0564eb
Adds Trained model autoscaling page
kosabogi Oct 24, 2024
d8dde76
Update serverless/pages/ml-nlp-auto-scale.mdx
kosabogi Oct 25, 2024
f6b650d
Update serverless/pages/ml-nlp-auto-scale.mdx
kosabogi Oct 25, 2024
4683ddc
Update serverless/pages/ml-nlp-auto-scale.mdx
kosabogi Oct 25, 2024
f91c731
Update serverless/pages/ml-nlp-auto-scale.mdx
kosabogi Oct 25, 2024
f17b2a4
Update serverless/pages/ml-nlp-auto-scale.mdx
kosabogi Oct 25, 2024
26fb9e0
Update serverless/pages/ml-nlp-auto-scale.mdx
kosabogi Oct 25, 2024
c8b8b3b
Update serverless/pages/ml-nlp-auto-scale.mdx
kosabogi Oct 25, 2024
55117e6
Update serverless/pages/ml-nlp-auto-scale.mdx
kosabogi Oct 25, 2024
ac3a201
Update serverless/pages/ml-nlp-auto-scale.mdx
kosabogi Oct 25, 2024
18487de
Update serverless/pages/ml-nlp-auto-scale.mdx
kosabogi Oct 25, 2024
e430ce7
Update serverless/pages/ml-nlp-auto-scale.mdx
kosabogi Oct 25, 2024
27b445c
Update serverless/pages/ml-nlp-auto-scale.mdx
kosabogi Oct 25, 2024
8f7a9d2
Changes paragraph placement
kosabogi Oct 25, 2024
1beb41c
Update serverless/pages/ml-nlp-auto-scale.mdx
kosabogi Oct 30, 2024
031c91f
Update serverless/pages/ml-nlp-auto-scale.mdx
kosabogi Oct 31, 2024
9fdee40
Updates document based on feedback
kosabogi Nov 6, 2024
ad99fd3
Merge remote-tracking branch 'upstream/main' into add-serverless-mode…
kosabogi Nov 6, 2024
1b3caa8
Merge branch 'main' into add-serverless-model-autoscaling-doc
colleenmcginnis Nov 6, 2024
37ee519
mdx to asciidoc
colleenmcginnis Nov 6, 2024
aaf7ae9
Updates table
kosabogi Nov 8, 2024
Binary file added serverless/images/ml-nlp-deployment.png
3 changes: 3 additions & 0 deletions serverless/nav/serverless-general.docnav.json
@@ -63,6 +63,9 @@
},
{
"slug": "/serverless/regions"
},
{
"slug": "/serverless/general/ml-nlp-auto-scale"
}
]
}
132 changes: 132 additions & 0 deletions serverless/pages/ml-nlp-auto-scale.mdx
@@ -0,0 +1,132 @@
---
slug: /serverless/general/ml-nlp-auto-scale
title: Trained model autoscaling
tags: ['serverless']
---

You can enable autoscaling for each of your trained model deployments.
Autoscaling allows Elasticsearch to automatically adjust the resources the model deployment can use based on the workload demand.

There are two ways to enable autoscaling:

- through APIs by enabling adaptive allocations
- in Kibana by enabling adaptive resources

<DocCallOut color="primary" title="Note">
To fully leverage model autoscaling, it is highly recommended to enable [Elasticsearch deployment autoscaling](https://www.elastic.co/guide/en/cloud/current/ec-autoscaling.html).
</DocCallOut>

Trained model autoscaling is available for Search, Observability, and Security projects on serverless deployments. However, these projects handle processing power differently, which impacts their costs and resource limits.

Security and Observability projects are only charged for data collection (ingest) and storage (retention). They are not charged for processing power (vCPU usage), which is used for more complex operations, like running advanced search models. For example, in Search projects, models such as ELSER require significant processing power to provide more accurate search results.

Because vCPU processing is costly, Search projects are given access to more processing resources, while Security and Observability projects have lower limits on their processing power. This difference is reflected in the UI configuration: Search projects have higher resource limits compared to Security and Observability projects to accommodate their more complex operations.

On serverless, adaptive allocations are automatically enabled for all project types.
However, the "Adaptive resources" control is not displayed in Kibana for Observability and Security projects.

## Enabling autoscaling through APIs - adaptive allocations

Model allocations are independent units of work for NLP tasks.
If you set the number of threads and allocations for a model manually, they remain constant even when not all of the available resources are fully used or when the load on the model requires more resources.
Instead of setting the number of allocations manually, you can enable adaptive allocations to set the number of allocations based on the load on the process.
This can help you to manage performance and cost more easily.
(Refer to the [pricing calculator](https://cloud.elastic.co/pricing) to learn more about the possible costs.)

When adaptive allocations are enabled, the number of allocations of the model is set automatically based on the current load.
When the load is high, a new model allocation is automatically created.
When the load is low, a model allocation is automatically removed.
You can explicitly set the minimum and maximum number of allocations; autoscaling occurs within these limits.

<DocCallOut color="primary" title="Note">
If you manually set the minimum number of Machine Learning (ML) resources (allocations) to 1, you will be charged every hour, even if the system isn’t fully using those resources.
</DocCallOut>

You can enable adaptive allocations by using:

- the create inference endpoint API for [ELSER](https://www.elastic.co/guide/en/elasticsearch/reference/master/infer-service-elser.html) and for [E5 and models uploaded through Eland](https://www.elastic.co/guide/en/elasticsearch/reference/master/infer-service-elasticsearch.html) that are used as inference services (see the example request after this list).
- the [start trained model deployment](https://www.elastic.co/guide/en/elasticsearch/reference/master/start-trained-model-deployment.html) or [update trained model deployment](https://www.elastic.co/guide/en/elasticsearch/reference/master/update-trained-model-deployment.html) APIs for trained models that are deployed on machine learning nodes.
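
For instance, a request along these lines creates an ELSER inference endpoint with adaptive allocations enabled; the endpoint name `my-elser-endpoint` and the allocation limits are illustrative values, not defaults:

```console
PUT _inference/sparse_embedding/my-elser-endpoint
{
  "service": "elser",
  "service_settings": {
    "adaptive_allocations": {
      "enabled": true,
      "min_number_of_allocations": 1,
      "max_number_of_allocations": 4
    },
    "num_threads": 1
  }
}
```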

If the new allocations fit on the current machine learning nodes, they are immediately started.
If more resource capacity is needed to create new model allocations, and machine learning autoscaling is enabled, your machine learning nodes are scaled up to provide enough resources for the new allocations.
The number of model allocations can be scaled down to 0.
They cannot be scaled up to more than 32 allocations unless you explicitly set a higher maximum number of allocations.
Adaptive allocations must be set up independently for each deployment and [inference endpoint](https://www.elastic.co/guide/en/elasticsearch/reference/master/put-inference-api.html).
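
As a sketch, adaptive allocations can be adjusted on an existing deployment with the update trained model deployment API; the deployment ID `my_model` and the limits below are placeholder values:

```console
POST _ml/trained_models/my_model/deployment/_update
{
  "adaptive_allocations": {
    "enabled": true,
    "min_number_of_allocations": 0,
    "max_number_of_allocations": 32
  }
}
```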

When you create inference endpoints on serverless deployments using Kibana, adaptive allocations are automatically turned on, and there is no option to disable them.

### Optimizing for typical use cases

You can optimize your model deployment for typical use cases, such as search and ingest.
When you optimize for ingest, the throughput will be higher, which increases the number of inference requests that can be performed in parallel.
When you optimize for search, the latency will be lower during search processes.

- If you want to optimize for ingest, set the number of threads to `1` (`"threads_per_allocation": 1`).
- If you want to optimize for search, set the number of threads to a value greater than `1`.
Increasing the number of threads makes search processes more performant (see the example after this list).
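
As an illustration, the thread count is set when you start the deployment; the model ID, deployment IDs, and thread count below are example values:

```console
# Optimized for search: several threads per allocation
POST _ml/trained_models/.elser_model_2/deployment/_start?threads_per_allocation=4&deployment_id=elser-search

# Optimized for ingest: one thread per allocation
POST _ml/trained_models/.elser_model_2/deployment/_start?threads_per_allocation=1&deployment_id=elser-ingest
```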

## Enabling autoscaling in Kibana - adaptive resources

You can enable adaptive resources for your models when starting or updating the model deployment.
Adaptive resources make it possible for Elasticsearch to scale up or down the available resources based on the load on the process.
This can help you to manage performance and cost more easily.
When adaptive resources are enabled, the number of vCPUs that the model deployment uses is set automatically based on the current load.
When the load is high, the number of vCPUs that the process can use is automatically increased.
When the load is low, the number of vCPUs that the process can use is automatically decreased.

You can choose from three levels of resource usage for your trained model deployment; autoscaling will occur within the selected level's range.

Refer to the tables in the Model deployment resource matrix section below to find the settings for the level you selected.

<DocImage size="xxl" url="../images/ml-nlp-deployment.png" alt="ML model deployment with adaptive resources enabled." />

## Model deployment resource matrix

The used resources for trained model deployments depend on three factors:

- your cluster environment (Serverless, Cloud, or on-premises)
- the use case you optimize the model deployment for (ingest or search)
- whether model autoscaling is enabled with adaptive allocations/resources to have dynamic resources, or disabled for static resources

The following tables show you the number of allocations, threads, and vCPUs available on serverless when adaptive resources are enabled or disabled.

### Deployments on serverless optimized for ingest

For ingest-optimized deployments, we maximize the number of model allocations.

#### Adaptive resources enabled

| Level | Allocations | Threads | vCPUs |
|--------|------------------------------------------------------|---------|------------------------------------------------------|
| Low | 0 to 2 dynamically | 1 | 0 to 2 dynamically |
| Medium | 1 to 32 dynamically | 1 | 1 to 32 dynamically |
| High | 1 to 512 for Search <br /> 1 to 128 for Security and Observability | 1 | 1 to 512 for Search <br /> 1 to 128 for Security and Observability |

#### Adaptive resources disabled (Search only)

| Level | Allocations | Threads | vCPUs |
|--------|------------------------------------------------------|---------|------------------------------------------------------|
| Low | Exactly 2 | 1 | 2 |
| Medium | Exactly 32 | 1 | 32 |
| High | 512 for Search <br /> No static allocations for Security and Observability | 1 | 512 for Search <br /> No static allocations for Security and Observability |

### Deployments on serverless optimized for search

For search-optimized deployments, we maximize the number of threads.
The maximum number of threads that can be claimed depends on the hardware of your architecture.

#### Adaptive resources enabled

| Level | Allocations | Threads | vCPUs |
|--------|------------------------------------------------------|---------|------------------------------------------------------|
| Low | 0 to 1 dynamically | Always 2 | 0 to 2 dynamically |
| Medium | 1 to 2 (if threads=16), dynamically | Maximum (for example, 16) | 1 to 32 dynamically |
| High | 1 to 32 (if threads=16), dynamically | Maximum (for example, 16) | 1 to 512 for Search <br /> 1 to 128 for Security and Observability |

#### Adaptive resources disabled

| Level | Allocations | Threads | vCPUs |
|--------|---------------------------------------------------------|------------------------|------------------------------------------------------|
| Low | 1 statically | Always 2 | 2 |
| Medium | 2 statically (if threads=16) | Maximum (for example, 16) | 32 |
| High | 32 statically (if threads=16) for Search <br /> No static allocations for Security and Observability | Maximum (for example, 16) | 512 for Search <br /> No static allocations for Security and Observability |