From 605a8ff6f5780e847f6d0e0f5b8c61f030f64a2d Mon Sep 17 00:00:00 2001
From: Sophie Schoenmeyer <107952697+sophies927@users.noreply.github.com>
Date: Tue, 23 Apr 2024 10:45:33 -0700
Subject: [PATCH] Update src/routes/blogs/accelerating-phi-3/+page.svx

---
 src/routes/blogs/accelerating-phi-3/+page.svx | 1 -
 1 file changed, 1 deletion(-)

diff --git a/src/routes/blogs/accelerating-phi-3/+page.svx b/src/routes/blogs/accelerating-phi-3/+page.svx
index bb62ffd0aa310..b2518677f5183 100644
--- a/src/routes/blogs/accelerating-phi-3/+page.svx
+++ b/src/routes/blogs/accelerating-phi-3/+page.svx
@@ -49,7 +49,6 @@ Whether it's Windows, Linux, Android, or Mac, there's a path to infer models eff
 
 We are pleased to announce our new Generate() API, which makes it easier to run the Phi-3 models across a range of devices, platforms, and EP backends by wrapping several aspects of generative AI inferencing. The Generate() API makes it easy to drag and drop LLMs straight into your app. To run the early version of these models with ONNX, follow the steps [here](http://aka.ms/generate-tutorial).
-This API makes it easy to drag and drop LLMs straight into your app. To run the early version of these models with ONNX, follow the steps [here](http://aka.ms/generate-tutorial).
 
 Example:
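For context on the paragraph this patch touches, here is a minimal sketch of how the Generate() API can be driven from Python, assuming the onnxruntime-genai package is installed and that "./phi-3-mini-4k-instruct-onnx" is a hypothetical local folder holding the Phi-3 ONNX model; the authoritative steps are in the linked tutorial (http://aka.ms/generate-tutorial).

```python
# Minimal sketch (assumptions: onnxruntime-genai is installed and
# "./phi-3-mini-4k-instruct-onnx" is a hypothetical local model folder).
import onnxruntime_genai as og

model = og.Model("./phi-3-mini-4k-instruct-onnx")   # load the ONNX model and its genai config
tokenizer = og.Tokenizer(model)
tokenizer_stream = tokenizer.create_stream()        # decodes generated tokens incrementally

# Phi-3 chat template for a single user turn
prompt = "<|user|>\nWhat is ONNX Runtime?<|end|>\n<|assistant|>\n"
input_tokens = tokenizer.encode(prompt)

params = og.GeneratorParams(model)
params.set_search_options(max_length=256)
params.input_ids = input_tokens

generator = og.Generator(model, params)
while not generator.is_done():
    generator.compute_logits()
    generator.generate_next_token()
    # print each new token as it is produced
    print(tokenizer_stream.decode(generator.get_next_tokens()[0]), end="", flush=True)
```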