Commit

Merge branch 'main' into andy-testable-tutorial
AndyDai-nv authored Jul 12, 2024
2 parents 8f7f37d + f180360 commit c62232d
Showing 20 changed files with 502 additions and 126 deletions.
7 changes: 7 additions & 0 deletions src/c++/perf_analyzer/docs/cli.md
@@ -157,6 +157,13 @@ will also be reported in the results.
Default is `-1` indicating that the average latency is used to determine
stability.

#### `--request-count=<n>`

Specifies a total number of requests to use for measurement.

Default is `0`, which means that there is no request count and the measurement
will proceed using windows until stabilization is detected.
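
For example, to measure over exactly 100 requests instead of relying on stabilization windows (a minimal sketch; `my_model` is a placeholder model name):

```bash
perf_analyzer -m my_model --request-count=100
```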

#### `-r <n>`
#### `--max-trials=<n>`

13 changes: 7 additions & 6 deletions src/c++/perf_analyzer/genai-perf/README.md
@@ -162,7 +162,7 @@ docker run -it --net=host --rm --gpus=all nvcr.io/nvidia/tritonserver:${RELEASE}
2. Run GenAI-Perf:

```bash
-genai-perf \
+genai-perf profile \
-m gpt2 \
--service-kind triton \
--backend tensorrtllm \
@@ -209,7 +209,7 @@ current profile run. This is disabled by default but users can easily enable it
by passing the `--generate-plots` option when running the benchmark:

```bash
-genai-perf \
+genai-perf profile \
-m gpt2 \
--service-kind triton \
--backend tensorrtllm \
@@ -301,8 +301,8 @@ options:

When the dataset is coming from a file, you can specify the following
options:
-* `--input-file <path>`: The input file containing the single prompt to
-use for benchmarking.
+* `--input-file <path>`: The input file containing the prompts to
+use for benchmarking as JSON objects.
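
For illustration, such a file can hold one JSON object per line, matching the `{"text": ...}` shape used in the embeddings guide below (a sketch; the file name and prompts are placeholders):

```bash
echo '{"text": "What was the first car ever driven?"}
{"text": "Who served as the 5th President of the United States of America?"}' > inputs.jsonl
```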

For any dataset, you can specify the following options:
* `--output-tokens-mean <int>`: The mean number of tokens in each output. Ensure
@@ -373,7 +373,7 @@ model config to not echo the input tokens in the output. (default: tensorrtllm)

Set a custom endpoint that differs from the OpenAI defaults. (default: `None`)

-##### `--endpoint-type {chat,completions,embeddings}`
+##### `--endpoint-type {chat,completions,embeddings,rankings}`

The endpoint-type to send requests to on the server. This is only used with the
`openai` service-kind. (default: `None`)
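
For example, to target a reranking model through the `rankings` endpoint type (a sketch; the model name and URL are illustrative, and the rankings guide below shows a complete invocation):

```bash
genai-perf profile \
  -m BAAI/bge-reranker-large \
  --service-kind openai \
  --endpoint-type rankings \
  -u localhost:8080
```
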
@@ -400,7 +400,8 @@ URL of the endpoint to target for benchmarking. (default: `None`)
The batch size of the requests GenAI-Perf should send.
This is currently only supported with the
[embeddings endpoint type](docs/embeddings.md).
-(default: `1`)
+(default: `1`) and
+[rankings endpoint type](docs/rankings.md).
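
For instance, to send embeddings requests in batches of eight (a sketch modeled on the embeddings guide; the model, input file, and endpoint flags are illustrative):

```bash
genai-perf profile \
  -m intfloat/e5-mistral-7b-instruct \
  --service-kind openai \
  --endpoint-type embeddings \
  --input-file embeddings.jsonl \
  --batch-size 8
```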

##### `--extra-inputs <str>`

14 changes: 7 additions & 7 deletions src/c++/perf_analyzer/genai-perf/docs/embeddings.md
@@ -26,12 +26,12 @@ OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-->

-# Profiling Embeddings Models with GenAI-Perf
+# Profile Embeddings Models with GenAI-Perf

GenAI-Perf allows you to profile embedding models running on an
[OpenAI Embeddings API](https://platform.openai.com/docs/api-reference/embeddings)-compatible server.

-## Creating a Sample Embeddings Input File
+## Create a Sample Embeddings Input File

To create a sample embeddings input file, use the following command:

@@ -50,17 +50,17 @@ This will generate a file named embeddings.jsonl with the following content:
{"text": "In what state did they film Shrek 2?"}
```
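
One way to produce a file like this (a sketch mirroring the rankings guide in this repository; the prompts are illustrative):

```bash
echo '{"text": "What was the first car ever driven?"}
{"text": "In what state did they film Shrek 2?"}' > embeddings.jsonl
```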

-## Starting an OpenAI Embeddings-Compatible Server
+## Start an OpenAI Embeddings-Compatible Server
To start an OpenAI embeddings-compatible server, run the following command:
```bash
docker run -it --net=host --rm --gpus=all vllm/vllm-openai:latest --model intfloat/e5-mistral-7b-instruct --dtype float16 --max-model-len 1024
```

-## Running GenAI-Perf
+## Run GenAI-Perf
To profile embeddings models using GenAI-Perf, use the following command:

```bash
-genai-perf \
+genai-perf profile \
-m intfloat/e5-mistral-7b-instruct \
--service-kind openai \
--endpoint-type embeddings \
Expand All @@ -73,7 +73,7 @@ additional arguments with the `--extra-inputs` [flag](../README.md#input-options
For example, you could use this command:

```bash
-genai-perf \
+genai-perf profile \
-m intfloat/e5-mistral-7b-instruct \
--service-kind openai \
--endpoint-type embeddings \
@@ -90,4 +90,4 @@ Example output:
│ Request latency (ms) │ 42.21 │ 28.18 │ 318.61 │ 56.50 │ 49.21 │ 43.07 │
└──────────────────────┴───────┴───────┴────────┴───────┴───────┴───────┘
Request throughput (per sec): 23.63
```
8 changes: 4 additions & 4 deletions src/c++/perf_analyzer/genai-perf/docs/lora.md
@@ -26,22 +26,22 @@ OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-->

-# Profiling Multiple LoRA Adapters
+# Profile Multiple LoRA Adapters
GenAI-Perf allows you to profile multiple LoRA adapters on top of a base model.

-## Selecting LoRA Adapters
+## Select LoRA Adapters
To do this, list multiple adapters after the model name option `-m`:

```bash
genai-perf -m lora_adapter1 lora_adapter2 lora_adapter3
```

-## Choosing a Strategy for Selecting Models
+## Choose a Strategy for Selecting Models
When profiling with multiple models, you can specify how the models should be
assigned to prompts using the `--model-selection-strategy` option:

```bash
-genai-perf \
+genai-perf profile \
-m lora_adapter1 lora_adapter2 lora_adapter3 \
--model-selection-strategy round_robin
```
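
With `round_robin`, prompts cycle through the listed adapters in order. A `random` strategy, which picks an adapter for each prompt at random, can be used instead (a sketch; the adapter names are placeholders):

```bash
genai-perf profile \
  -m lora_adapter1 lora_adapter2 lora_adapter3 \
  --model-selection-strategy random
```
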
100 changes: 100 additions & 0 deletions src/c++/perf_analyzer/genai-perf/docs/rankings.md
@@ -0,0 +1,100 @@
<!--
Copyright (c) 2024, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
* Neither the name of NVIDIA CORPORATION nor the names of its
contributors may be used to endorse or promote products derived
from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS ``AS IS'' AND ANY
EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-->

# Profile Ranking Models with GenAI-Perf


GenAI-Perf allows you to profile ranking models compatible with Hugging Face's
[Text Embeddings Inference's re-ranker API](https://huggingface.co/docs/text-embeddings-inference/en/quick_tour#re-rankers).

## Create a Sample Rankings Input Directory

To create a sample rankings input directory, follow these steps:

Create a directory called rankings_jsonl:
```bash
mkdir rankings_jsonl
```

Inside this directory, create a JSONL file named queries.jsonl with queries data:

```bash
echo '{"text": "What was the first car ever driven?"}
{"text": "Who served as the 5th President of the United States of America?"}
{"text": "Is the Sydney Opera House located in Australia?"}
{"text": "In what state did they film Shrek 2?"}' > rankings_jsonl/queries.jsonl
```

Create another JSONL file named passages.jsonl with passages data:

```bash
echo '{"text": "Eric Anderson (born January 18, 1968) is an American sociologist and sexologist."}
{"text": "Kevin Loader is a British film and television producer."}
{"text": "Francisco Antonio Zea Juan Francisco Antonio Hilari was a Colombian journalist, botanist, diplomat, politician, and statesman who served as the 1st Vice President of Colombia."}
{"text": "Daddys Home 2 Principal photography on the film began in Massachusetts in March 2017 and it was released in the United States by Paramount Pictures on November 10, 2017. Although the film received unfavorable reviews, it has grossed over $180 million worldwide on a $69 million budget."}' > rankings_jsonl/passages.jsonl
```

## Start a Hugging Face Re-Ranker-Compatible Server
To start a Hugging Face re-ranker-compatible server, run the following commands:

```bash
model=BAAI/bge-reranker-large
revision=refs/pr/4
volume=$PWD/data

docker run --gpus all -p 8080:80 -v $volume:/data --pull always ghcr.io/huggingface/text-embeddings-inference:1.3 --model-id $model --revision $revision
```
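
Once the server is up, you can smoke-test the rerank endpoint directly (a hypothetical check; the query and passages are illustrative):

```bash
curl 127.0.0.1:8080/rerank \
  -X POST \
  -H 'Content-Type: application/json' \
  -d '{"query": "What was the first car ever driven?", "texts": ["Eric Anderson is an American sociologist.", "Kevin Loader is a British film producer."]}'
```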

## Run GenAI-Perf
To profile ranking models using GenAI-Perf, use the following command:

```bash
genai-perf profile \
-m BAAI/bge-reranker-large \
--service-kind openai \
--endpoint-type rankings \
--endpoint rerank \
--input-file rankings_jsonl/ \
-u localhost:8080 \
--extra-inputs rankings:tei \
--batch-size 2
```

This command specifies the use of Hugging Face's ranking API with `--endpoint rerank` and `--extra-inputs rankings:tei`.

Example output:

```
Rankings Metrics
┏━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━┳━━━━━━┳━━━━━━━┳━━━━━━━┳━━━━━━┳━━━━━━┓
┃ Statistic ┃ avg ┃ min ┃ max ┃ p99 ┃ p90 ┃ p75 ┃
┡━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━╇━━━━━━╇━━━━━━━╇━━━━━━━╇━━━━━━╇━━━━━━┩
│ Request latency (ms) │ 5.48 │ 2.50 │ 23.91 │ 10.27 │ 8.34 │ 6.07 │
└──────────────────────┴──────┴──────┴───────┴───────┴──────┴──────┘
Request throughput (per sec): 180.11
```
13 changes: 9 additions & 4 deletions src/c++/perf_analyzer/genai-perf/docs/tutorial.md
@@ -71,7 +71,8 @@ export RELEASE="yy.mm" # e.g. export RELEASE="24.06"
docker run -it --net=host --gpus=all nvcr.io/nvidia/tritonserver:${RELEASE}-py3-sdk

# Run GenAI-Perf in the container:
-genai-perf \
+```bash
+genai-perf profile \
-m gpt2 \
--service-kind triton \
--backend tensorrtllm \
@@ -144,7 +145,8 @@ export RELEASE="yy.mm" # e.g. export RELEASE="24.06"
docker run -it --net=host --gpus=1 nvcr.io/nvidia/tritonserver:${RELEASE}-py3-sdk

# Run GenAI-Perf in the container:
-genai-perf \
+```bash
+genai-perf profile \
-m gpt2 \
--service-kind triton \
--backend vllm \
@@ -205,7 +207,8 @@ export RELEASE="yy.mm" # e.g. export RELEASE="24.06"
docker run -it --net=host --gpus=all nvcr.io/nvidia/tritonserver:${RELEASE}-py3-sdk

# Run GenAI-Perf in the container:
-genai-perf \
+```bash
+genai-perf profile \
-m gpt2 \
--service-kind openai \
--endpoint v1/chat/completions \
@@ -265,8 +268,10 @@ export RELEASE="yy.mm" # e.g. export RELEASE="24.06"

docker run -it --net=host --gpus=all nvcr.io/nvidia/tritonserver:${RELEASE}-py3-sdk


# Run GenAI-Perf in the container:
-genai-perf \
+```bash
+genai-perf profile \
-m gpt2 \
--service-kind openai \
--endpoint v1/completions \