Standardize capitalization, headings
dyastremsky committed Oct 18, 2023
1 parent aa9ec65 commit e0161f4
Showing 1 changed file with 3 additions and 3 deletions.
6 changes: 3 additions & 3 deletions README.md
@@ -54,11 +54,11 @@ main Triton [issues page](https://github.com/triton-inference-server/server/issues)

There are several ways to install and deploy the vLLM backend.

-### Option 1. Pre-built Docker Container.
+### Option 1. Use the Pre-Built Docker Container.

Pull the container with vLLM backend from [NGC](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/tritonserver) registry. This container has everything you need to run your vLLM model.
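As an illustration only (this command is not part of the commit), pulling the vLLM-enabled container from NGC might look like the sketch below; the `23.10-vllm-python-py3` tag is an assumption based on NGC naming conventions, so check the registry for the tag that matches your release.

```bash
# Hypothetical pull of the Triton container that ships with the vLLM backend.
# The tag below is an assumption; verify the exact tag on the NGC registry page.
docker pull nvcr.io/nvidia/tritonserver:23.10-vllm-python-py3
```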

-### Option 2. Build a custom container from source
+### Option 2. Build a Custom Container From Source
You can follow steps described in the
[Building With Docker](https://github.com/triton-inference-server/server/blob/main/docs/customization_guide/build.md#building-with-docker)
guide and use the
@@ -87,7 +87,7 @@ A sample command to build a Triton Server container with all options enabled is
--backend=vllm:r23.10
```
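The full build command is truncated in this hunk; only its last flag and the closing fence are visible. Purely as a hedged sketch (every flag other than `--backend=vllm:r23.10` is an assumption about a typical build, not the README's exact command), an invocation of the server repo's `build.py` could look like:

```bash
# Illustrative only: build a Triton Server container that includes the vLLM backend.
# All flags except --backend=vllm:r23.10 are assumptions about a typical build;
# see the Building With Docker guide for the options you actually need.
./build.py -v --enable-logging --enable-stats --enable-gpu \
    --endpoint=http --endpoint=grpc \
    --backend=python:r23.10 \
    --backend=vllm:r23.10
```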

-### Option 3. Add the vLLM Backend to the default Triton Container
+### Option 3. Add the vLLM Backend to the Default Triton Container

You can install the vLLM backend directly into the NGC Triton container.
In this case, please install vLLM first. You can do so by running
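The hunk ends before the install command appears. As an assumption about what follows (vLLM is distributed on PyPI, but any version pin in the README is not visible in this diff), installing it inside the container would be along the lines of:

```bash
# Assumed step: install vLLM from PyPI inside the NGC Triton container.
# The README may pin a specific version; that detail is cut off in this diff.
pip install vllm
```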
