diff --git a/Dockerfile b/Dockerfile
index 8d1c32635..740a96dcd 100644
--- a/Dockerfile
+++ b/Dockerfile
@@ -12,11 +12,11 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-ARG BASE_IMAGE=nvcr.io/nvidia/tritonserver:24.01-py3
-ARG TRITONSDK_BASE_IMAGE=nvcr.io/nvidia/tritonserver:24.01-py3-sdk
+ARG BASE_IMAGE=nvcr.io/nvidia/tritonserver:24.02-py3
+ARG TRITONSDK_BASE_IMAGE=nvcr.io/nvidia/tritonserver:24.02-py3-sdk
 
-ARG MODEL_ANALYZER_VERSION=1.37.0dev
-ARG MODEL_ANALYZER_CONTAINER_VERSION=24.02dev
+ARG MODEL_ANALYZER_VERSION=1.37.0
+ARG MODEL_ANALYZER_CONTAINER_VERSION=24.02
 
 FROM ${TRITONSDK_BASE_IMAGE} as sdk
 
diff --git a/README.md b/README.md
index b2764c4e3..f7dfddaa2 100644
--- a/README.md
+++ b/README.md
@@ -19,101 +19,4 @@ limitations under the License.
 # Triton Model Analyzer
 
 > [!Warning]
-> ##### LATEST RELEASE
-> You are currently on the `main` branch which tracks under-development progress towards the next release.
-> The latest release of the Triton Model Analyzer is 1.36.0 and is available on branch
-> [r24.01](https://github.com/triton-inference-server/model_analyzer/tree/r24.01).
-
-
-Triton Model Analyzer is a CLI tool which can help you find a more optimal configuration, on a given piece of hardware, for single, multiple, ensemble, or BLS models running on a [Triton Inference Server](https://github.com/triton-inference-server/server/). Model Analyzer will also generate reports to help you better understand the trade-offs of the different configurations along with their compute and memory requirements.
-<br><br>
-
-# Features
-
-### Search Modes
-
-- [Quick Search](docs/config_search.md#quick-search-mode) will **sparsely** search the [Max Batch Size](https://github.com/triton-inference-server/server/blob/main/docs/user_guide/model_configuration.md#maximum-batch-size),
-  [Dynamic Batching](https://github.com/triton-inference-server/server/blob/main/docs/user_guide/model_configuration.md#dynamic-batcher), and
-  [Instance Group](https://github.com/triton-inference-server/server/blob/main/docs/user_guide/model_configuration.md#instance-groups) spaces by utilizing a heuristic hill-climbing algorithm to help you quickly find a more optimal configuration
-
-- [Automatic Brute Search](docs/config_search.md#automatic-brute-search) will **exhaustively** search the
-  [Max Batch Size](https://github.com/triton-inference-server/server/blob/main/docs/user_guide/model_configuration.md#maximum-batch-size),
-  [Dynamic Batching](https://github.com/triton-inference-server/server/blob/main/docs/user_guide/model_configuration.md#dynamic-batcher), and
-  [Instance Group](https://github.com/triton-inference-server/server/blob/main/docs/user_guide/model_configuration.md#instance-groups)
-  parameters of your model configuration
-
-- [Manual Brute Search](docs/config_search.md#manual-brute-search) allows you to create manual sweeps for every parameter that can be specified in the model configuration
-
-### Model Types
-
-- [Ensemble Model Search](docs/config_search.md#ensemble-model-search): Model Analyzer can help you find the optimal
-  settings when profiling an ensemble model, utilizing the [Quick Search](docs/config_search.md#quick-search-mode) algorithm
-
-- [BLS Model Search](docs/config_search.md#bls-model-search): Model Analyzer can help you find the optimal
-  settings when profiling a BLS model, utilizing the [Quick Search](docs/config_search.md#quick-search-mode) algorithm
-
-- [Multi-Model Search](docs/config_search.md#multi-model-search-mode): **EARLY ACCESS** - Model Analyzer can help you
-  find the optimal settings when profiling multiple concurrent models, utilizing the [Quick Search](docs/config_search.md#quick-search-mode) algorithm
-
-### Other Features
-
-- [Detailed and summary reports](docs/report.md): Model Analyzer is able to generate
-  summarized and detailed reports that can help you better understand the trade-offs
-  between different model configurations that can be used for your model.
-
-- [QoS Constraints](docs/config.md#constraint): Constraints can help you
-  filter out the Model Analyzer results based on your QoS requirements. For
-  example, you can specify a latency budget to filter out model configurations
-  that do not satisfy the specified latency threshold.
-<br><br>
-
-# Examples and Tutorials
-
-### **Single Model**
-
-See the [Single Model Quick Start](docs/quick_start.md) for a guide on how to use Model Analyzer to profile, analyze and report on a simple PyTorch model.
-
-### **Multi Model**
-
-See the [Multi-model Quick Start](docs/mm_quick_start.md) for a guide on how to use Model Analyzer to profile, analyze and report on two models running concurrently on the same GPU.
-
-### **Ensemble Model**
-
-See the [Ensemble Model Quick Start](docs/ensemble_quick_start.md) for a guide on how to use Model Analyzer to profile, analyze and report on a simple Ensemble model.
-
-### **BLS Model**
-
-See the [BLS Model Quick Start](docs/bls_quick_start.md) for a guide on how to use Model Analyzer to profile, analyze and report on a simple BLS model.
-<br><br>
-
-# Documentation
-
-- [Installation](docs/install.md)
-- [Model Analyzer CLI](docs/cli.md)
-- [Launch Modes](docs/launch_modes.md)
-- [Configuring Model Analyzer](docs/config.md)
-- [Model Analyzer Metrics](docs/metrics.md)
-- [Model Config Search](docs/config_search.md)
-- [Checkpointing](docs/checkpoints.md)
-- [Model Analyzer Reports](docs/report.md)
-- [Deployment with Kubernetes](docs/kubernetes_deploy.md)
-<br><br>
-
-# Reporting problems, asking questions
-
-We appreciate any feedback, questions or bug reporting regarding this
-project. When help with code is needed, follow the process outlined in
-the Stack Overflow (https://stackoverflow.com/help/mcve)
-document. Ensure posted examples are:
-
-- minimal – use as little code as possible that still produces the
-  same problem
-
-- complete – provide all parts needed to reproduce the problem. Check
-  if you can strip external dependency and still show the problem. The
-  less time we spend on reproducing problems the more time we have to
-  fix it
-
-- verifiable – test the code you're about to provide to make sure it
-  reproduces the problem. Remove all other problems that are not
-  related to your request/question.
+> ##### THIS BRANCH IS UNDER ACTIVE DEVELOPMENT AND IS NOT READY FOR USE.
\ No newline at end of file
diff --git a/VERSION b/VERSION
index c26c1f9f0..bf50e910e 100644
--- a/VERSION
+++ b/VERSION
@@ -1 +1 @@
-1.37.0dev
+1.37.0
diff --git a/docs/bls_quick_start.md b/docs/bls_quick_start.md
index 432b6876f..22eb406e0 100644
--- a/docs/bls_quick_start.md
+++ b/docs/bls_quick_start.md
@@ -49,7 +49,7 @@ git pull origin main
 **1. Pull the SDK container:**
 
 ```
-docker pull nvcr.io/nvidia/tritonserver:24.01-py3-sdk
+docker pull nvcr.io/nvidia/tritonserver:24.02-py3-sdk
 ```
 
 **2. Run the SDK container**
@@ -59,7 +59,7 @@ docker run -it --gpus 1 \
     --shm-size 2G \
     -v /var/run/docker.sock:/var/run/docker.sock \
     -v $(pwd)/examples/quick-start:$(pwd)/examples/quick-start \
-    --net=host nvcr.io/nvidia/tritonserver:24.01-py3-sdk
+    --net=host nvcr.io/nvidia/tritonserver:24.02-py3-sdk
 ```
 
 **Important:** The example above uses a single GPU. If you are running on multiple GPUs, you may need to increase the shared memory size accordingly
diff --git a/docs/config.md b/docs/config.md
index bc091e86b..b08e933d8 100644
--- a/docs/config.md
+++ b/docs/config.md
@@ -153,7 +153,7 @@ cpu_only_composing_models:
 [ reload_model_disable: <bool> | default: false]
 
 # Triton Docker image tag used when launching using Docker mode
-[ triton_docker_image: <string> | default: nvcr.io/nvidia/tritonserver:24.01-py3 ]
+[ triton_docker_image: <string> | default: nvcr.io/nvidia/tritonserver:24.02-py3 ]
 
 # Triton Server HTTP endpoint url used by Model Analyzer client"
 [ triton_http_endpoint: <string> | default: localhost:8000 ]
diff --git a/docs/ensemble_quick_start.md b/docs/ensemble_quick_start.md
index 4d2ff8501..23abb2bb4 100644
--- a/docs/ensemble_quick_start.md
+++ b/docs/ensemble_quick_start.md
@@ -55,7 +55,7 @@ mkdir examples/quick/ensemble_add_sub/1
 **1. Pull the SDK container:**
 
 ```
-docker pull nvcr.io/nvidia/tritonserver:24.01-py3-sdk
+docker pull nvcr.io/nvidia/tritonserver:24.02-py3-sdk
 ```
 
 **2. Run the SDK container**
@@ -65,7 +65,7 @@ docker run -it --gpus 1 \
     --shm-size 1G \
     -v /var/run/docker.sock:/var/run/docker.sock \
     -v $(pwd)/examples/quick-start:$(pwd)/examples/quick-start \
-    --net=host nvcr.io/nvidia/tritonserver:24.01-py3-sdk
+    --net=host nvcr.io/nvidia/tritonserver:24.02-py3-sdk
 ```
 
 **Important:** The example above uses a single GPU. If you are running on multiple GPUs, you may need to increase the shared memory size accordingly
diff --git a/docs/kubernetes_deploy.md b/docs/kubernetes_deploy.md
index db0b6230d..ef63b4d15 100644
--- a/docs/kubernetes_deploy.md
+++ b/docs/kubernetes_deploy.md
@@ -79,7 +79,7 @@ images:
 
   triton:
     image: nvcr.io/nvidia/tritonserver
-    tag: 24.01-py3
+    tag: 24.02-py3
 ```
 
 The model analyzer executable uses the config file defined in `helm-chart/templates/config-map.yaml`. This config can be modified to supply arguments to model analyzer. Only the content under the `config.yaml` section of the file should be modified.
diff --git a/docs/mm_quick_start.md b/docs/mm_quick_start.md
index 058054e7c..6395f25a7 100644
--- a/docs/mm_quick_start.md
+++ b/docs/mm_quick_start.md
@@ -49,7 +49,7 @@ git pull origin main
 **1. Pull the SDK container:**
 
 ```
-docker pull nvcr.io/nvidia/tritonserver:24.01-py3-sdk
+docker pull nvcr.io/nvidia/tritonserver:24.02-py3-sdk
 ```
 
 **2. Run the SDK container**
@@ -58,7 +58,7 @@ docker pull nvcr.io/nvidia/tritonserver:24.01-py3-sdk
 docker run -it --gpus all \
     -v /var/run/docker.sock:/var/run/docker.sock \
     -v $(pwd)/examples/quick-start:$(pwd)/examples/quick-start \
-    --net=host nvcr.io/nvidia/tritonserver:24.01-py3-sdk
+    --net=host nvcr.io/nvidia/tritonserver:24.02-py3-sdk
 ```
 
 ## `Step 3:` Profile both models concurrently
diff --git a/docs/quick_start.md b/docs/quick_start.md
index 55d4340af..fc0cd87da 100644
--- a/docs/quick_start.md
+++ b/docs/quick_start.md
@@ -49,7 +49,7 @@ git pull origin main
 **1. Pull the SDK container:**
 
 ```
-docker pull nvcr.io/nvidia/tritonserver:24.01-py3-sdk
+docker pull nvcr.io/nvidia/tritonserver:24.02-py3-sdk
 ```
 
 **2. Run the SDK container**
@@ -58,7 +58,7 @@ docker pull nvcr.io/nvidia/tritonserver:24.01-py3-sdk
 docker run -it --gpus all \
     -v /var/run/docker.sock:/var/run/docker.sock \
     -v $(pwd)/examples/quick-start:$(pwd)/examples/quick-start \
-    --net=host nvcr.io/nvidia/tritonserver:24.01-py3-sdk
+    --net=host nvcr.io/nvidia/tritonserver:24.02-py3-sdk
 ```
 
 ## `Step 3:` Profile the `add_sub` model
diff --git a/helm-chart/values.yaml b/helm-chart/values.yaml
index 5efae3c9c..c15e98e44 100644
--- a/helm-chart/values.yaml
+++ b/helm-chart/values.yaml
@@ -41,4 +41,4 @@ images:
 
   triton:
     image: nvcr.io/nvidia/tritonserver
-    tag: 24.01-py3
+    tag: 24.02-py3
diff --git a/model_analyzer/config/input/config_defaults.py b/model_analyzer/config/input/config_defaults.py
index 21befaa71..499b71b26 100755
--- a/model_analyzer/config/input/config_defaults.py
+++ b/model_analyzer/config/input/config_defaults.py
@@ -56,7 +56,7 @@
 DEFAULT_RUN_CONFIG_PROFILE_MODELS_CONCURRENTLY_ENABLE = False
 DEFAULT_REQUEST_RATE_SEARCH_ENABLE = False
 DEFAULT_TRITON_LAUNCH_MODE = "local"
-DEFAULT_TRITON_DOCKER_IMAGE = "nvcr.io/nvidia/tritonserver:24.01-py3"
+DEFAULT_TRITON_DOCKER_IMAGE = "nvcr.io/nvidia/tritonserver:24.02-py3"
 DEFAULT_TRITON_HTTP_ENDPOINT = "localhost:8000"
 DEFAULT_TRITON_GRPC_ENDPOINT = "localhost:8001"
 DEFAULT_TRITON_METRICS_URL = "http://localhost:8002/metrics"