diff --git a/examples/llama/README.md b/examples/llama/README.md
index 4b44b6496..b450b0b7c 100644
--- a/examples/llama/README.md
+++ b/examples/llama/README.md
@@ -34,8 +34,8 @@ pip install onnxruntime
 Run the model builder script to export, optimize, and quantize the model. More details can be found [here](../../src/python/py/models/README.md)
 
 ```bash
 cd examples/llama
-python -m onnxruntime_genai.models.builder.py -m meta-llama/Llama-2-7b-chat-hf -e cpu -p int4 -o ./model
+python -m onnxruntime_genai.models.builder -m meta-llama/Llama-2-7b-chat-hf -e cpu -p int4 -o ./model
 ```
 
 ## Run Llama