From c72d507186f1a69ac55f886a28cbb4fd59106509 Mon Sep 17 00:00:00 2001
From: Prasanth Pulavarthi
Date: Thu, 12 Oct 2023 17:24:37 -0700
Subject: [PATCH] Update pytorch-on-the-edge.html

---
 blogs/pytorch-on-the-edge.html | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/blogs/pytorch-on-the-edge.html b/blogs/pytorch-on-the-edge.html
index f40fe267803ef..0b4b3f3b8c18d 100644
--- a/blogs/pytorch-on-the-edge.html
+++ b/blogs/pytorch-on-the-edge.html
@@ -97,12 +97,12 @@

Considerations for PyTorch models on the edge

There are several factors to keep in mind when thinking about running a PyTorch model on the edge:

Tools for PyTorch models on the edge

@@ -113,13 +113,13 @@

Tools for PyTorch models on the edge

The popular Hugging Face library also has APIs that build on top of this torch.onnx functionality to export models to the ONNX format. Over 130,000 models are supported, making it very likely that the model you care about is one of them.
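For concreteness, here is a minimal sketch of that export path using the Hugging Face Optimum library (illustrative rather than taken from the post; the checkpoint name and output folder are placeholders):

from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer

# Illustrative checkpoint; any exportable Hugging Face PyTorch model works the same way.
model_id = "distilbert-base-uncased-finetuned-sst-2-english"

# export=True converts the PyTorch weights to ONNX under the hood (built on torch.onnx).
ort_model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Writes model.onnx plus the config and tokenizer files next to it.
ort_model.save_pretrained("distilbert-sst2-onnx")
tokenizer.save_pretrained("distilbert-sst2-onnx")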

-

In this article, we’ll show you several examples involving state-of-the-art PyTorch models (like Whisper and Stable Diffusion) on popular devices (like Windows laptops, mobile phones, and web browsers) via various languages (from C# to Javascript to Swift).

+

In this article, we’ll show you several examples involving state-of-the-art PyTorch models (like Whisper and Stable Diffusion) on popular devices (like Windows laptops, mobile phones, and web browsers) via various languages (from C# to JavaScript to Swift).

-

PyTorch models on the edge

+

Examples of PyTorch models on the edge

Stable Diffusion on Windows

-

The Stable Diffusion pipeline consists of five PyTorch models that build an image from a description. The diffusion process iterates on random pixels until the output image matches the description.

+

The Stable Diffusion pipeline consists of five PyTorch models that build an image from a text description. The diffusion process iterates on random pixels until the output image matches the description.

To run on the edge, four of the models can be exported to ONNX format from Hugging Face.
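As a sketch of what that export looks like with Optimum (assumed usage, not code from the post; the model id and output folder are placeholders), the four exportable components (text encoder, UNet, VAE encoder and decoder) can be converted in one call:

from optimum.onnxruntime import ORTStableDiffusionPipeline

# export=True converts the PyTorch components of the pipeline to ONNX.
pipeline = ORTStableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", export=True
)

# Saves the exported ONNX models (text encoder, UNet, VAE) to the folder below.
pipeline.save_pretrained("stable-diffusion-onnx")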

@@ -131,7 +131,7 @@

Stable Diffusion on Windows

You don’t have to export the fifth model, ClipTokenizer, as it is available in ONNX Runtime Extensions, a library for pre- and post-processing PyTorch models.
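To show how that fits together at inference time, here is a rough sketch (assumed usage, not code from the post; the model file name is a placeholder) of registering the extensions' custom operators so a graph containing the tokenizer can be loaded:

import onnxruntime as ort
from onnxruntime_extensions import get_library_path

# Register the ONNX Runtime Extensions custom ops (which include the CLIP tokenizer)
# before creating the session; "cliptokenizer.onnx" is a placeholder file name.
session_options = ort.SessionOptions()
session_options.register_custom_ops_library(get_library_path())
session = ort.InferenceSession("cliptokenizer.onnx", sess_options=session_options)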

-

To run this pipeline of models as a .NET application, we built the pipeline code in C#. This code can be run on CPU, GPU, or NPU, if they are available on your machine, using ONNX Runtime’s device-specific hardware accelerators. This is configured with the ExecutionProviderTarget below.

+

To run this pipeline of models as a .NET application, we build the pipeline code in C#. This code can run on CPU, GPU, or NPU, if available on your machine, using ONNX Runtime’s device-specific hardware accelerators. This is configured with the ExecutionProviderTarget below.


 static void Main(string[] args)
@@ -159,7 +159,7 @@ 

Stable Diffusion on Windows

}
-

This is the output of the model pipelines, running with 50 inference iterations

+

This is the output of the model pipeline, running with 50 inference iterations:

Two golden retriever puppies playing in the grass
@@ -167,7 +167,7 @@

Stable Diffusion on Windows

Text generation in the browser

-

Running a PyTorch model locally in the browser is not only possible but super simple with the transformers.js library. Transformers.js uses ONNX Runtime Web as a backend. Many models are already converted to ONNX and served by the tranformers.js CDN, making inference in the browser a matter of writing a few lines of HTML.

+

Running a PyTorch model locally in the browser is not only possible but super simple with the transformers.js library. Transformers.js uses ONNX Runtime Web as its backend. Many models are already converted to ONNX and served by the transformers.js CDN, making inference in the browser a matter of writing a few lines of HTML:


 <html>
@@ -354,4 +354,4 @@ 

Where to next?

-
\ No newline at end of file
+