Update main post-23.10 release #780

Merged: 2 commits, Oct 27, 2023
4 changes: 2 additions & 2 deletions Dockerfile
@@ -12,8 +12,8 @@
# See the License for the specific language governing permissions and
# limitations under the License.

-ARG BASE_IMAGE=nvcr.io/nvidia/tritonserver:23.09-py3
-ARG TRITONSDK_BASE_IMAGE=nvcr.io/nvidia/tritonserver:23.09-py3-sdk
+ARG BASE_IMAGE=nvcr.io/nvidia/tritonserver:23.10-py3
+ARG TRITONSDK_BASE_IMAGE=nvcr.io/nvidia/tritonserver:23.10-py3-sdk

ARG MODEL_ANALYZER_VERSION=1.34.0dev
ARG MODEL_ANALYZER_CONTAINER_VERSION=23.11dev
2 changes: 1 addition & 1 deletion README.md
@@ -23,7 +23,7 @@ You are currently on the `main` branch which tracks
under-development progress towards the next release. <br>The latest
release of the Triton Model Analyzer is 1.32.0 and is available on
branch
-[r23.09](https://github.com/triton-inference-server/model_analyzer/tree/r23.09).
+[r23.10](https://github.com/triton-inference-server/model_analyzer/tree/r23.10).

Triton Model Analyzer is a CLI tool which can help you find a more optimal configuration, on a given piece of hardware, for single, multiple, ensemble, or BLS models running on a [Triton Inference Server](https://github.com/triton-inference-server/server/). Model Analyzer will also generate reports to help you better understand the trade-offs of the different configurations along with their compute and memory requirements.
<br><br>
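
As an illustration of the CLI described above, a minimal profiling run might look like the sketch below; the repository path, output path, and model name (`add_sub`) are placeholders for the example, not part of this change.

```
# Illustrative only: profile one model from a local model repository.
model-analyzer profile \
    --model-repository /path/to/model-repository \
    --profile-models add_sub \
    --output-model-repository-path /path/to/output-repository
```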
4 changes: 2 additions & 2 deletions docs/bls_quick_start.md
@@ -49,7 +49,7 @@ git pull origin main
**1. Pull the SDK container:**

```
-docker pull nvcr.io/nvidia/tritonserver:23.09-py3-sdk
+docker pull nvcr.io/nvidia/tritonserver:23.10-py3-sdk
```

**2. Run the SDK container**
@@ -59,7 +59,7 @@ docker run -it --gpus 1 \
--shm-size 2G \
-v /var/run/docker.sock:/var/run/docker.sock \
-v $(pwd)/examples/quick-start:$(pwd)/examples/quick-start \
-  --net=host nvcr.io/nvidia/tritonserver:23.09-py3-sdk
+  --net=host nvcr.io/nvidia/tritonserver:23.10-py3-sdk
```

**Important:** The example above uses a single GPU. If you are running on multiple GPUs, you may need to increase the shared memory size accordingly.<br><br>
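
For example, under the assumption of two GPUs and a doubled shared memory segment (values chosen purely for illustration), the run above might become:

```
docker run -it --gpus 2 \
    --shm-size 4G \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v $(pwd)/examples/quick-start:$(pwd)/examples/quick-start \
    --net=host nvcr.io/nvidia/tritonserver:23.10-py3-sdk
```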
2 changes: 1 addition & 1 deletion docs/config.md
@@ -153,7 +153,7 @@ cpu_only_composing_models: <comma-delimited-string-list>
[ reload_model_disable: <bool> | default: false]

# Triton Docker image tag used when launching using Docker mode
-[ triton_docker_image: <string> | default: nvcr.io/nvidia/tritonserver:23.09-py3 ]
+[ triton_docker_image: <string> | default: nvcr.io/nvidia/tritonserver:23.10-py3 ]

# Triton Server HTTP endpoint URL used by Model Analyzer client
[ triton_http_endpoint: <string> | default: localhost:8000 ]
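
As a sketch of how these options fit together, a minimal Model Analyzer config file that overrides the Docker image might look as follows; the model repository path and model name are assumptions made for the example.

```
# Illustrative config.yaml (paths and model name are placeholders)
model_repository: /path/to/model-repository
profile_models:
  - add_sub
triton_launch_mode: docker
triton_docker_image: nvcr.io/nvidia/tritonserver:23.10-py3
triton_http_endpoint: localhost:8000
```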
4 changes: 2 additions & 2 deletions docs/ensemble_quick_start.md
@@ -55,7 +55,7 @@ mkdir examples/quick/ensemble_add_sub/1
**1. Pull the SDK container:**

```
-docker pull nvcr.io/nvidia/tritonserver:23.09-py3-sdk
+docker pull nvcr.io/nvidia/tritonserver:23.10-py3-sdk
```

**2. Run the SDK container**
@@ -65,7 +65,7 @@ docker run -it --gpus 1 \
--shm-size 1G \
-v /var/run/docker.sock:/var/run/docker.sock \
-v $(pwd)/examples/quick-start:$(pwd)/examples/quick-start \
-  --net=host nvcr.io/nvidia/tritonserver:23.09-py3-sdk
+  --net=host nvcr.io/nvidia/tritonserver:23.10-py3-sdk
```

**Important:** The example above uses a single GPU. If you are running on multiple GPUs, you may need to increase the shared memory size accordingly.<br><br>
2 changes: 1 addition & 1 deletion docs/kubernetes_deploy.md
@@ -79,7 +79,7 @@ images:

triton:
image: nvcr.io/nvidia/tritonserver
-tag: 23.09-py3
+tag: 23.10-py3
```

The model analyzer executable uses the config file defined in `helm-chart/templates/config-map.yaml`. This config can be modified to supply arguments to model analyzer. Only the content under the `config.yaml` section of the file should be modified.
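
As a rough sketch of the structure described above (the ConfigMap name and the option values are illustrative assumptions, not taken from this change), only the block nested under the `config.yaml` key would be edited:

```
apiVersion: v1
kind: ConfigMap
metadata:
  name: model-analyzer-config
data:
  config.yaml: |
    # Edit only this embedded section to pass arguments to Model Analyzer.
    model_repository: /models
    profile_models:
      - add_sub
```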
4 changes: 2 additions & 2 deletions docs/mm_quick_start.md
@@ -49,7 +49,7 @@ git pull origin main
**1. Pull the SDK container:**

```
-docker pull nvcr.io/nvidia/tritonserver:23.09-py3-sdk
+docker pull nvcr.io/nvidia/tritonserver:23.10-py3-sdk
```

**2. Run the SDK container**
@@ -58,7 +58,7 @@ docker pull nvcr.io/nvidia/tritonserver:23.09-py3-sdk
docker run -it --gpus all \
-v /var/run/docker.sock:/var/run/docker.sock \
-v $(pwd)/examples/quick-start:$(pwd)/examples/quick-start \
-  --net=host nvcr.io/nvidia/tritonserver:23.09-py3-sdk
+  --net=host nvcr.io/nvidia/tritonserver:23.10-py3-sdk
```

## `Step 3:` Profile both models concurrently
4 changes: 2 additions & 2 deletions docs/quick_start.md
@@ -49,7 +49,7 @@ git pull origin main
**1. Pull the SDK container:**

```
-docker pull nvcr.io/nvidia/tritonserver:23.09-py3-sdk
+docker pull nvcr.io/nvidia/tritonserver:23.10-py3-sdk
```

**2. Run the SDK container**
@@ -58,7 +58,7 @@ docker pull nvcr.io/nvidia/tritonserver:23.09-py3-sdk
docker run -it --gpus all \
-v /var/run/docker.sock:/var/run/docker.sock \
-v $(pwd)/examples/quick-start:$(pwd)/examples/quick-start \
-  --net=host nvcr.io/nvidia/tritonserver:23.09-py3-sdk
+  --net=host nvcr.io/nvidia/tritonserver:23.10-py3-sdk
```

## `Step 3:` Profile the `add_sub` model
2 changes: 1 addition & 1 deletion helm-chart/values.yaml
@@ -41,4 +41,4 @@ images:

triton:
image: nvcr.io/nvidia/tritonserver
-tag: 23.09-py3
+tag: 23.10-py3
2 changes: 1 addition & 1 deletion model_analyzer/config/input/config_defaults.py
@@ -56,7 +56,7 @@
DEFAULT_RUN_CONFIG_PROFILE_MODELS_CONCURRENTLY_ENABLE = False
DEFAULT_REQUEST_RATE_SEARCH_ENABLE = False
DEFAULT_TRITON_LAUNCH_MODE = "local"
-DEFAULT_TRITON_DOCKER_IMAGE = "nvcr.io/nvidia/tritonserver:23.09-py3"
+DEFAULT_TRITON_DOCKER_IMAGE = "nvcr.io/nvidia/tritonserver:23.10-py3"
DEFAULT_TRITON_HTTP_ENDPOINT = "localhost:8000"
DEFAULT_TRITON_GRPC_ENDPOINT = "localhost:8001"
DEFAULT_TRITON_METRICS_URL = "http://localhost:8002/metrics"