diff --git a/src/routes/blogs/accelerating-phi-3/+page.svx b/src/routes/blogs/accelerating-phi-3/+page.svx
index bb62ffd0aa310..b2518677f5183 100644
--- a/src/routes/blogs/accelerating-phi-3/+page.svx
+++ b/src/routes/blogs/accelerating-phi-3/+page.svx
@@ -49,7 +49,6 @@ Whether it's Windows, Linux, Android, or Mac, there's a path to infer models eff
 We are pleased to announce our new Generate() API, which makes it easier to run the Phi-3 models across a range of devices, platforms, and EP backends by wrapping several aspects of generative AI inferencing. The Generate() API makes it easy to drag and drop LLMs straight into your app. To run the early version of these models with ONNX, follow the steps [here](http://aka.ms/generate-tutorial).
-This API makes it easy to drag and drop LLMs straight into your app. To run the early version of these models with ONNX, follow the steps [here](http://aka.ms/generate-tutorial).
 Example: