From f94a059c42eaac6409d523ab46abd7e440322c83 Mon Sep 17 00:00:00 2001
From: MaanavD
Date: Thu, 5 Sep 2024 15:26:27 -0700
Subject: [PATCH 1/4] Fixed many (not all) accessibility issues. Likely will
 need to change HLJS theme for the rest.

---
 .../on-device-training/android-app.md        | 42 +++++++++----------
 docs/tutorials/on-device-training/ios-app.md | 10 ++---
 .../blogs/pytorch-on-the-edge/+page.svelte   | 32 +++++++-------
 src/routes/components/footer.svelte          |  6 +--
 src/routes/events/+page.svelte               |  4 +-
 src/routes/events/event-post.svelte          |  4 +-
 src/routes/getting-started/+page.svelte      |  6 +--
 src/routes/huggingface/+page.svelte          | 32 +++++++-------
 .../testimonials/testimonial-card.svelte     |  4 +-
 src/routes/training/+page.svelte             | 18 ++++----
 src/routes/windows/+page.svelte              |  6 +--
 tailwind.config.js                           |  1 +
 12 files changed, 82 insertions(+), 83 deletions(-)

diff --git a/docs/tutorials/on-device-training/android-app.md b/docs/tutorials/on-device-training/android-app.md
index b9b0ae49c7bec..5f9fe3bcf3db8 100644
--- a/docs/tutorials/on-device-training/android-app.md
+++ b/docs/tutorials/on-device-training/android-app.md
@@ -12,7 +12,7 @@ In this tutorial, we will explore how to build an Android application that incor

 Here is what the application will look like at the end of this tutorial:

-[screenshot with no alt text]
+[screenshot, alt: "an image classification app with Tom Cruise in the middle."]

 ## Introduction

@@ -26,24 +26,22 @@ In this tutorial, we will use data to learn to:

 ## Contents

-- [Introduction](#introduction)
-- [Prerequisites](#prerequisites)
-- [Offline Phase - Building the training artifacts](#offline-phase---building-the-training-artifacts)
-  - [Export the model to ONNX](#op1)
-  - [Define the trainable and non trainable parameters](#op2)
-  - [Generate the training artifacts](#op3)
-- [Training Phase - Android application development](#training-phase---android-application-development)
-  - [Setting up the project in Android Studio](#tp1)
-  - [Adding the ONNX Runtime dependency](#tp2)
-  - [Packaging the Prebuilt Training Artifacts and Dataset](#tp3)
-  - [Interfacing with ONNX Runtime - C++ Code](#tp4)
-  - [Image Preprocessing](#tp5)
-  - [Application Frontend](#tp6)
-- [Training Phase - Running the application on a device](#training-phase---running-the-application-on-a-device)
-  - [Running the application on a device](#tp7)
-  - [Training with a pre-loaded dataset - Animals](#tp8)
-  - [Training with a custom dataset - Celebrities](#tp9)
-- [Conclusion](#conclusion)
+- [On-Device Training: Building an Android Application](#on-device-training-building-an-android-application)
+  - [Introduction](#introduction)
+  - [Contents](#contents)
+  - [Prerequisites](#prerequisites)
+  - [Offline Phase - Building the training artifacts](#offline-phase---building-the-training-artifacts)
+- [The original model is trained on imagenet which has 1000 classes.](#the-original-model-is-trained-on-imagenet-which-has-1000-classes)
+- [For our image classification scenario, we need to classify among 4 categories.](#for-our-image-classification-scenario-we-need-to-classify-among-4-categories)
+- [So we need to change the last layer of the model to have 4 outputs.](#so-we-need-to-change-the-last-layer-of-the-model-to-have-4-outputs)
+- [Export the model to ONNX.](#export-the-model-to-onnx)
+- [Load the onnx model.](#load-the-onnx-model)
+- [Define the parameters that require their gradients to be computed](#define-the-parameters-that-require-their-gradients-to-be-computed)
+- [(trainable parameters) and those that do not (frozen/non trainable parameters).](#trainable-parameters-and-those-that-do-not-frozennon-trainable-parameters)
+- [Generate the training artifacts.](#generate-the-training-artifacts)
+  - [Training Phase - Android application development](#training-phase---android-application-development)
+  - [Training Phase - Running the application on a device](#training-phase---running-the-application-on-a-device)
+  - [Conclusion](#conclusion)

 ## Prerequisites

@@ -791,7 +789,7 @@ To follow this tutorial, you should have a basic understanding of Android app de

    b. Launching the application on the device should look like this:

-[screenshot with no alt text]
+[screenshot, alt: "Barebones ORT Personalize app"]

 2. Training with a pre-loaded dataset - Animals

@@ -805,7 +803,7 @@ To follow this tutorial, you should have a basic understanding of Android app de

    e. Use any animal image from your library for inferencing now.

-[screenshot with no alt text]
+[screenshot, alt: "ORT Personalize app with an image of a cow"]

 As can be seen from the image above, the model correctly predicted `Cow`.

@@ -825,7 +823,7 @@ To follow this tutorial, you should have a basic understanding of Android app de

    g. That's it! Hopefully the application classified the image correctly.

-[screenshot with no alt text]
+[screenshot, alt: "an image classification app with Tom Cruise in the middle."]

 ## Conclusion

diff --git a/docs/tutorials/on-device-training/ios-app.md b/docs/tutorials/on-device-training/ios-app.md
index fff1347923ef0..0acda051fd78a 100644
--- a/docs/tutorials/on-device-training/ios-app.md
+++ b/docs/tutorials/on-device-training/ios-app.md
@@ -947,27 +947,27 @@ Now, we are ready to run the application. You can run the application on the sim

    a. Now, when you run the application, you should see the following screen:

-[screenshot with no alt text]
+[screenshot, alt: "My Voice application with Train and Infer buttons"]

    b. Next, click on the `Train` button to navigate to the `TrainView`. The `TrainView` will prompt you to record your voice. You will need to record your voice `kNumRecordings` times.

-[screenshot with no alt text]
+[screenshot, alt: "My Voice application with words to record"]

    c. Once all the recordings are complete, the application will train the model on the given data. You will see the progress bar indicating the progress of the training.

-[screenshot with no alt text]
+[screenshot, alt: "Loading bar while the app is training"]

    d. Once the training is complete, you will see the following screen:

-[screenshot with no alt text]
+[screenshot, alt: "The app informs you training finished successfully!"]

    e. Now, click on the `Infer` button to navigate to the `InferView`. The `InferView` will prompt you to record your voice. Once the recording is complete, it will perform inference with the trained model and display the result of the inference.

-[screenshot with no alt text]
+[screenshot, alt: "My Voice application allows you to record and infer whether it's you or not."]

 That's it! Hopefully, it identified your voice correctly.
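For context on the android-app.md contents above: the tutorial's offline phase exports the model to ONNX, picks the trainable parameters, and generates the training artifacts that the app consumes. Below is a minimal sketch of those steps with the onnxruntime-training Python API, assuming the tutorial's MobileNetV2 base model; the "classifier" name heuristic and file names are illustrative assumptions, not code from this patch.

```python
# Minimal sketch of the offline phase listed in the contents above.
# Assumptions: onnx and onnxruntime-training are installed; file names and
# the "classifier" prefix heuristic are illustrative.
import onnx
from onnxruntime.training import artifacts

# Load the ONNX model previously exported from PyTorch.
onnx_model = onnx.load("mobilenetv2.onnx")

# Train only the classifier layer; freeze everything else.
all_params = [p.name for p in onnx_model.graph.initializer]
requires_grad = [name for name in all_params if name.startswith("classifier")]
frozen_params = [name for name in all_params if name not in requires_grad]

# Emit the training graph, eval graph, optimizer graph, and checkpoint
# state that the mobile app loads at runtime.
artifacts.generate_artifacts(
    onnx_model,
    requires_grad=requires_grad,
    frozen_params=frozen_params,
    loss=artifacts.LossType.CrossEntropyLoss,
    optimizer=artifacts.OptimType.AdamW,
    artifact_directory="training_artifacts",
)
```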
diff --git a/src/routes/blogs/pytorch-on-the-edge/+page.svelte b/src/routes/blogs/pytorch-on-the-edge/+page.svelte
index 83ab6d2d49db6..d0a9d765cd5f1 100644
--- a/src/routes/blogs/pytorch-on-the-edge/+page.svelte
+++ b/src/routes/blogs/pytorch-on-the-edge/+page.svelte
@@ -179,9 +179,9 @@ fun run(audioTensor: OnnxTensor): Result {
Run PyTorch models on the edge

 By: Natalie Kershaw and Prasanth Pulavarthi

@@ -217,12 +217,12 @@ fun run(audioTensor: OnnxTensor): Result {
 anywhere that is outside of the cloud, ranging from large, well-resourced personal
 computers to small footprint devices such as mobile phones. This has been a challenging
 task to accomplish in the past, but new advances in model optimization and software like
 ONNX Runtime
 make it more feasible - even for new generative AI and large language models like Stable
 Diffusion, Whisper, and Llama2.

 Considerations for PyTorch models on the edge

 There are several factors to keep in mind when thinking about running a PyTorch model on the

@@ -292,7 +292,7 @@ fun run(audioTensor: OnnxTensor): Result {

 Tools for PyTorch models on the edge

 We mentioned ONNX Runtime several times above. ONNX Runtime is a compact, standards-based
@@ -305,7 +305,7 @@ fun run(audioTensor: OnnxTensor): Result {
 format that doesn't require the PyTorch framework and its gigabytes of dependencies.
 PyTorch has thought about this and includes an API that enables exactly this -
 torch.onnx. ONNX is an open standard that defines the operators that make up models. The
 PyTorch ONNX APIs take the Pythonic PyTorch code and turn it into a functional graph
 that captures the operators that are needed to run the model without Python. As with everything
@@ -318,7 +318,7 @@ fun run(audioTensor: OnnxTensor): Result {
 The popular Hugging Face library also has APIs that build on top of this torch.onnx
 functionality to export models to the ONNX format. Over 130,000 models are
 supported, making it very likely that the model you care about is one of them.
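The torch.onnx flow described above is compact in practice. A minimal sketch, assuming torchvision's MobileNetV2 as the example model; the input shape and file names are illustrative, not from this patch.

```python
# Minimal sketch of a torch.onnx export. Assumptions: torchvision's
# MobileNetV2 stands in for "your model"; shapes and names are illustrative.
import torch
import torchvision

model = torchvision.models.mobilenet_v2(weights="IMAGENET1K_V2")
model.eval()

# Tracing captures the functional graph from an example input.
dummy_input = torch.randn(1, 3, 224, 224)

torch.onnx.export(
    model,
    dummy_input,
    "mobilenet_v2.onnx",
    input_names=["input"],
    output_names=["output"],
    dynamic_axes={"input": {0: "batch"}},  # keep the batch size dynamic
)
```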

@@ -328,7 +328,7 @@ fun run(audioTensor: OnnxTensor): Result {
 and web browsers) via various languages (from C# to JavaScript to Swift).

 Examples of PyTorch models on the edge

Stable Diffusion on Windows

@@ -345,7 +345,7 @@ fun run(audioTensor: OnnxTensor): Result {

 You don't have to export the fifth model, ClipTokenizer, as it is available in
 ONNX Runtime extensions, a library for pre- and post-processing PyTorch models.
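For reference, using ONNX Runtime extensions means registering its custom-op library with a session before loading a model that depends on those ops. A hedged Python sketch (the blog's own pipeline is C#; the model file name here is illustrative):

```python
# Hedged sketch: register the ONNX Runtime extensions custom-op library so a
# session can load models (e.g., a tokenizer) that use those custom ops.
# The model file name is illustrative.
import onnxruntime as ort
from onnxruntime_extensions import get_library_path

opts = ort.SessionOptions()
opts.register_custom_ops_library(get_library_path())

session = ort.InferenceSession(
    "cliptokenizer.onnx",
    sess_options=opts,
    providers=["CPUExecutionProvider"],
)
```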

@@ -353,7 +353,7 @@ fun run(audioTensor: OnnxTensor): Result {
 To run this pipeline of models as a .NET application, we build the pipeline code in C#. This
 code can be run on CPU, GPU, or NPU, if they are available on your machine, using ONNX
 Runtime's device-specific hardware accelerators. This is configured with the
 ExecutionProviderTarget below.
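The same device-selection idea exists in every ONNX Runtime language binding. A hedged Python sketch of provider selection (the C# ExecutionProviderTarget snippet itself is not part of this hunk; provider and file names are illustrative):

```python
# Hedged sketch of execution-provider selection, the Python analogue of the
# C# ExecutionProviderTarget mentioned above. Names are illustrative.
import onnxruntime as ort

available = ort.get_available_providers()
# Prefer a device-specific provider when present; always fall back to CPU.
providers = [
    p for p in ("CUDAExecutionProvider", "CPUExecutionProvider") if p in available
]

session = ort.InferenceSession("model.onnx", providers=providers)
```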

@@ -366,7 +366,7 @@ fun run(audioTensor: OnnxTensor): Result {

 You can build the application and run it on Windows with the detailed steps shown in this
 tutorial.

@@ -374,7 +374,7 @@ fun run(audioTensor: OnnxTensor): Result {
 Running a PyTorch model locally in the browser is not only possible but super simple with
 the transformers.js library. Transformers.js uses ONNX Runtime Web as its backend. Many
 models are already converted to ONNX and served by the transformers.js CDN, making
 inference in the browser a matter of writing
@@ -407,7 +407,7 @@ fun run(audioTensor: OnnxTensor): Result {
 All components of the Whisper Tiny model (audio decoder, encoder, decoder, and text
 sequence generation) can be composed and exported to a single ONNX model using the
 Olive framework. To run this model as part of a mobile application, you can use ONNX
 Runtime Mobile, which supports Android, iOS, react-native, and MAUI/Xamarin.
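Olive workflows are configuration-driven. A rough sketch, assuming the olive-ai package and that a JSON config path is accepted as in Olive's examples; whisper_config.json is hypothetical:

```python
# Rough sketch: drive an Olive workflow from Python. Assumes the olive-ai
# package; the whisper_config.json file describing the export passes is
# hypothetical, not part of this patch.
from olive.workflows import run as olive_run

olive_run("whisper_config.json")
```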

Running a PyTorch model locally in the browser is not only possible but super simple with - the transformers.js library. Transformers.js uses ONNX Runtime Web as its backend. Many models are already converted to ONNX and served by the tranformers.js CDN, making inference in the browser a matter of writing @@ -407,7 +407,7 @@ fun run(audioTensor: OnnxTensor): Result { All components of the Whisper Tiny model (audio decoder, encoder, decoder, and text sequence generation) can be composed and exported to a single ONNX model using the Olive frameworkOlive framework. To run this model as part of a mobile application, you can use ONNX Runtime Mobile, which supports Android, iOS, react-native, and MAUI/Xamarin.

@@ -420,7 +420,7 @@ fun run(audioTensor: OnnxTensor): Result {

The relevant snippet of a example Android mobile appAndroid mobile app that performs speech transcription on short samples of audio is shown below:

@@ -476,11 +476,11 @@ fun run(audioTensor: OnnxTensor): Result {

You can read the full Speaker Verification tutorialSpeaker Verification tutorial, and build and run the application from sourcebuild and run the application from source.

diff --git a/src/routes/components/footer.svelte b/src/routes/components/footer.svelte index b030524976742..e6b855d0ca129 100644 --- a/src/routes/components/footer.svelte +++ b/src/routes/components/footer.svelte @@ -9,7 +9,7 @@