From 512061697a47a597bdcedb6fe9ea8b45411be5a4 Mon Sep 17 00:00:00 2001
From: Nat Kershaw
Date: Thu, 12 Oct 2023 17:02:46 -0700
Subject: [PATCH] Format code

---
 blogs/pytorch-on-the-edge.html | 24 ++++++++++++------------
 pytorch.html                   |  1 +
 2 files changed, 13 insertions(+), 12 deletions(-)

diff --git a/blogs/pytorch-on-the-edge.html b/blogs/pytorch-on-the-edge.html
index 5316deafaa3bc..f40fe267803ef 100644
--- a/blogs/pytorch-on-the-edge.html
+++ b/blogs/pytorch-on-the-edge.html
@@ -133,7 +133,7 @@ Stable Diffusion on Windows

To run this pipeline of models as a .NET application, we built the pipeline code in C#. This code can be run on CPU, GPU, or NPU, if they are available on your machine, using ONNX Runtime’s device-specific hardware accelerators. This is configured with the ExecutionProviderTarget below.

-
+                                    

 static void Main(string[] args)
 {
     var prompt = "Two golden retriever puppies playing in the grass.";
@@ -157,7 +157,7 @@ Stable Diffusion on Windows

            Console.WriteLine("Unable to create image, please try again.");
        }
    }
-
+

This is the output of the model pipeline, run with 50 inference iterations:
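The `ExecutionProviderTarget` idea above maps onto ONNX Runtime's execution providers in every language binding: you list providers in order of preference and the runtime falls back to the next one that is available. A minimal Python sketch of that selection logic (the provider names and the preference order are illustrative assumptions, not taken from the C# code above):

```python
def pick_providers(available):
    """Return the preferred execution providers, in order, from those available.

    Prefers GPU (CUDA) or NPU/GPU via DirectML when present, falling back to
    CPU. With onnxruntime installed, `available` would come from
    onnxruntime.get_available_providers(), and the result would be passed as
    onnxruntime.InferenceSession(model_path, providers=pick_providers(...)).
    """
    preferred = ["CUDAExecutionProvider", "DmlExecutionProvider", "CPUExecutionProvider"]
    return [p for p in preferred if p in available]
```

For example, on a machine that reports `["CPUExecutionProvider", "CUDAExecutionProvider"]`, this selects CUDA first with CPU as the fallback.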

@@ -169,7 +169,7 @@ Text generation in the browser

Running a PyTorch model locally in the browser is not only possible but super simple with the transformers.js library. Transformers.js uses ONNX Runtime Web as a backend. Many models are already converted to ONNX and served by the transformers.js CDN, making inference in the browser a matter of writing a few lines of HTML.

-
+                                    

 <html>
     <body>
         <h1>Enter starting text …</h1>
@@ -210,7 +210,7 @@ Text generation in the browser

    </body>
</html>
-
+

You can also embed the call to the transformers pipeline using vanilla JS, or in a web application with React or Next.js, or write a browser extension.

@@ -226,7 +226,7 @@ Speech recognition with Whisper on mobile

As an example, the relevant snippet of an Android mobile app that performs speech transcription on short samples of audio is shown below.

-
+                                    

 init {
     val env = OrtEnvironment.getEnvironment()
     val sessionOptions = OrtSession.SessionOptions()
@@ -263,7 +263,7 @@ Speech recognition with Whisper on mobile

        return Result(recognizedText, elapsedTimeInMs)
    }
-
+

You can record a short audio clip to transcribe.
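Before a recorded clip can be fed to the Whisper model, the raw audio has to be converted into the floating-point samples the model expects. A hedged sketch of the usual 16-bit PCM to float conversion, in Python (the normalization constant is the conventional one for 16-bit audio, not taken from the app code; in practice you would build a float32 numpy array from this for the ONNX Runtime input tensor):

```python
import struct

def pcm16_to_float(pcm_bytes):
    """Convert 16-bit little-endian PCM samples to floats in [-1.0, 1.0)."""
    n = len(pcm_bytes) // 2
    samples = struct.unpack("<%dh" % n, pcm_bytes)
    return [s / 32768.0 for s in samples]
```

For example, the two bytes `b"\x00\x40"` (the sample value 16384) convert to `0.5`.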

@@ -277,7 +277,7 @@ Train a model to recognize your voice on mobile

The PyTorch model is obtained from Hugging Face, and extra layers are added to perform the speaker classification.

-
+                                    

 from transformers import Wav2Vec2ForSequenceClassification, AutoConfig
 import torch
 
@@ -286,11 +286,11 @@ Train a model to recognize your voice on mobile

model.classifier = torch.nn.Linear(256, 2)
-
+

The model and other components necessary for training (a loss function to measure the quality of the model, and an optimizer that controls how the weights are adjusted during training) are exported with the ONNX Runtime Training Python API.

-
+                                    

 artifacts.generate_artifacts(
     onnx_model,
     requires_grad=requires_grad,
@@ -299,11 +299,11 @@ Train a model to recognize your voice on mobile

    optimizer=artifacts.OptimType.AdamW,
    artifact_directory="MyVoice/artifacts",
)
-
+
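`generate_artifacts` above takes the names of the trainable parameters via `requires_grad=`, with the remaining backbone parameters kept frozen. A sketch of how those two name lists are typically derived, assuming (for illustration only) that just the newly added `classifier` layer is trained:

```python
def split_parameters(param_names, trainable_prefix="classifier"):
    """Split parameter names into (trainable, frozen) lists.

    Only parameters under the assumed `classifier` module are trained here;
    everything else stays frozen. The two lists correspond to the
    requires_grad= and frozen_params= arguments of generate_artifacts.
    """
    requires_grad = [n for n in param_names if n.startswith(trainable_prefix)]
    frozen_params = [n for n in param_names if not n.startswith(trainable_prefix)]
    return requires_grad, frozen_params
```

With a real model, `param_names` would come from `[n for n, _ in model.named_parameters()]`.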

This set of artifacts is now ready to be loaded by the mobile application, shown here as iOS Swift code. Within the application, a number of the speaker's audio samples are provided, and the model is trained on them.

-
+                                    

 func trainStep(inputData: [Data], labels: [Int64]) throws  {
 
     let inputs = [try getORTValue(dataList: inputData), try getORTValue(labels: labels)]
@@ -313,7 +313,7 @@ Train a model to recognize your voice on mobile

    try trainingSession.lazyResetGrad()
}
-
+

Once the model is trained, you can run it to verify that a voice sample is yours!
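That verification step is a two-class inference: the classifier head added earlier (`torch.nn.Linear(256, 2)`) emits two logits, and a softmax over them gives the probability that the sample comes from the enrolled speaker. A small sketch of the decision, assuming class index 1 means "enrolled speaker" and a 0.5 threshold (both are assumptions for illustration, not from the post):

```python
import math

def is_enrolled_speaker(logits, threshold=0.5):
    """Softmax over the two-class logits; index 1 = enrolled speaker (assumed)."""
    exps = [math.exp(x) for x in logits]
    return exps[1] / sum(exps) >= threshold
```

So strongly positive logits for class 1, e.g. `[0.0, 4.0]`, accept the sample, while `[4.0, 0.0]` rejects it.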

diff --git a/pytorch.html b/pytorch.html
index 10ace637e9f0e..41e385ea4ff89 100644
--- a/pytorch.html
+++ b/pytorch.html
@@ -229,6 +229,7 @@ Get innovations into production faster

+