Format code

natke committed Oct 13, 2023
1 parent 2501261 commit 5120616

Showing 2 changed files with 13 additions and 12 deletions.
24 changes: 12 additions & 12 deletions blogs/pytorch-on-the-edge.html
@@ -133,7 +133,7 @@ <h3 class="r-heading">Stable Diffusion on Windows</h3>

<p>To run this pipeline of models as a .NET application, we built the pipeline code in C#. The code can run on CPU, GPU, or NPU, if available on your machine, using ONNX Runtime’s device-specific hardware accelerators. This is configured with the ExecutionProviderTarget below.</p>

-<code><pre>
+<pre><code class="language-csharp">
static void Main(string[] args)
{
var prompt = "Two golden retriever puppies playing in the grass.";
@@ -157,7 +157,7 @@ <h3 class="r-heading">Stable Diffusion on Windows</h3>
Console.WriteLine("Unable to create image, please try again.");
}
}
-</pre></code>
+</code></pre>

<p>This is the output of the model pipelines, running with 50 inference iterations:</p>

@@ -169,7 +169,7 @@ <h3 class="r-heading">Text generation in the browser</h3>

<p>Running a PyTorch model locally in the browser is not only possible but super simple with the <a href="https://huggingface.co/docs/transformers.js/index">transformers.js</a> library. Transformers.js uses ONNX Runtime Web as a backend. Many models are already converted to ONNX and served by the transformers.js CDN, making inference in the browser a matter of writing a few lines of HTML.</p>

-<code><pre>
+<pre><code class="language-html">
&lt;html&gt;
&lt;body&gt;
&lt;h1&gt;Enter starting text …&lt;/h1&gt;
@@ -210,7 +210,7 @@ <h3 class="r-heading">Text generation in the browser</h3>
&lt;/body&gt;
&lt;/html&gt;

-</pre></code>
+</code></pre>

<p>You can also embed the call to the transformers pipeline using vanilla JS, in a web application with React or Next.js, or in a browser extension.</p>
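As a rough sketch of the vanilla JS option (the package name, model, and generation options below are illustrative assumptions, not part of the original post), the same pipeline call outside of HTML could look like:

```javascript
// Sketch only: transformers.js text generation from plain JavaScript (ES module).
// 'Xenova/distilgpt2' is an assumed example; any ONNX text-generation model
// served by the transformers.js hub should work the same way.
import { pipeline } from '@xenova/transformers';

// Downloads and caches the model on first use, then runs entirely locally.
const generator = await pipeline('text-generation', 'Xenova/distilgpt2');

const output = await generator('It was a dark and stormy night', {
  max_new_tokens: 30,
});

console.log(output[0].generated_text);
```

In a browser, the same code can be loaded from a CDN inside a `<script type="module">` tag instead of the npm package; in React or Next.js it would sit in a component or API route.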

@@ -226,7 +226,7 @@ <h3 class="r-heading">Speech recognition with Whisper on mobile</h3>

<p>As an example, the relevant snippet of an <a href="https://github.com/microsoft/onnxruntime-inference-examples/tree/main/mobile/examples/speech_recognition">Android mobile app</a> that performs speech transcription on short samples of audio is shown below.</p>

-<code><pre>
+<pre><code class="language-kotlin">
init {
val env = OrtEnvironment.getEnvironment()
val sessionOptions = OrtSession.SessionOptions()
@@ -263,7 +263,7 @@ <h3 class="r-heading">Speech recognition with Whisper on mobile</h3>
return Result(recognizedText, elapsedTimeInMs)
}

-</pre></code>
+</code></pre>

<p>You can record a short audio clip to transcribe.</p>

@@ -277,7 +277,7 @@ <h3 class="r-heading">Train a model to recognize your voice on mobile</h3>

<p>The PyTorch model is obtained from HuggingFace, and extra layers are added to perform the speaker classification.</p>

-<code><pre>
+<pre><code class="language-python">
from transformers import Wav2Vec2ForSequenceClassification, AutoConfig
import torch

@@ -286,11 +286,11 @@ <h3 class="r-heading">Train a model to recognize your voice on mobile</h3>

model.classifier = torch.nn.Linear(256, 2)

-</pre></code>
+</code></pre>

<p>The model and other components necessary for training (a loss function to measure the quality of the model, and an optimizer to adjust the weights during training) are exported with the ONNX Runtime training Python API.</p>

-<code><pre>
+<pre><code class="language-python">
artifacts.generate_artifacts(
onnx_model,
requires_grad=requires_grad,
@@ -299,11 +299,11 @@ <h3 class="r-heading">Train a model to recognize your voice on mobile</h3>
optimizer=artifacts.OptimType.AdamW,
artifact_directory="MyVoice/artifacts",
)
-</pre></code>
+</code></pre>

<p>This set of artifacts is now ready to be loaded by the mobile application, shown here as iOS Swift code. Within the application, a number of samples of the speaker’s audio are provided, and the model is trained on them.</p>

-<code><pre>
+<pre><code class="language-swift">
func trainStep(inputData: [Data], labels: [Int64]) throws {

let inputs = [try getORTValue(dataList: inputData), try getORTValue(labels: labels)]
Expand All @@ -313,7 +313,7 @@ <h3 class="r-heading">Train a model to recognize your voice on mobile</h3>

try trainingSession.lazyResetGrad()
}
-</pre></code>
+</code></pre>

<p>Once the model is trained, you can run it to verify that a voice sample is you!</p>

1 change: 1 addition & 0 deletions pytorch.html
@@ -229,6 +229,7 @@ <h2>Get innovations into production faster</h2>
<script src="https://cdnjs.cloudflare.com/ajax/libs/bootstrap/4.6.2/js/bootstrap.min.js"
integrity="sha512-7rusk8kGPFynZWu26OKbTeI+QPoYchtxsmPeBqkHIEXJxeun4yJ4ISYe7C6sz9wdxeE1Gk3VxsIWgCZTc+vX3g=="
crossorigin="anonymous" referrerpolicy="no-referrer"></script>
<script src="js/custom.js"></script>
</body>

</html>
