diff --git a/docs/tutorials/on-device-training/android-app.md b/docs/tutorials/on-device-training/android-app.md index b9b0ae49c7bec..ab528a5a1c1ad 100644 --- a/docs/tutorials/on-device-training/android-app.md +++ b/docs/tutorials/on-device-training/android-app.md @@ -7,15 +7,15 @@ nav_order: 1 --- # On-Device Training: Building an Android Application - +{: .no_toc } In this tutorial, we will explore how to build an Android application that incorporates ONNX Runtime's On-Device Training solution. On-device training refers to the process of training a machine learning model directly on an edge device without relying on cloud services or external servers. Here is what the application will look like at the end of this tutorial: - + ## Introduction - +{: .no_toc } We will guide you through the steps to create an Android app that can train a simple image classification model using on-device training techniques. This tutorial showcases the `transfer learning` technique where knowledge gained from training a model on one task is leveraged to improve the performance of a model on a different but related task. Instead of starting the learning process from scratch, transfer learning allows us to transfer the knowledge or features learned by a pre-trained model to a new task. For this tutorial, we will leverage the `MobileNetV2` model which has been trained on large-scale image datasets such as ImageNet (which has 1,000 classes). We will use this model for classifying custom data into one of four classes. The initial layers of MobileNetV2 serve as a feature extractor, capturing generic visual features applicable to various tasks, and only the final classifier layer will be trained for the task at hand. @@ -24,26 +24,10 @@ In this tutorial, we will use data to learn to: - Classify animals into one of four categories using a pre-packed animals dataset. - Classify celebrities into one of four categories using a custom celebrities dataset. -## Contents - -- [Introduction](#introduction) -- [Prerequisites](#prerequisites) -- [Offline Phase - Building the training artifacts](#offline-phase---building-the-training-artifacts) - - [Export the model to ONNX](#op1) - - [Define the trainable and non trainable parameters](#op2) - - [Generate the training artifacts](#op3) -- [Training Phase - Android application development](#training-phase---android-application-development) - - [Setting up the project in Android Studio](#tp1) - - [Adding the ONNX Runtime dependency](#tp2) - - [Packaging the Prebuilt Training Artifacts and Dataset](#tp3) - - [Interfacing with ONNX Runtime - C++ Code](#tp4) - - [Image Preprocessing](#tp5) - - [Application Frontend](#tp6) -- [Training Phase - Running the application on a device](#training-phase---running-the-application-on-a-device) - - [Running the application on a device](#tp7) - - [Training with a pre-loaded dataset - Animals](#tp8) - - [Training with a custom dataset - Celebrities](#tp9) -- [Conclusion](#conclusion) + +## Table of Contents +* TOC placeholder +{:toc} ## Prerequisites @@ -791,7 +775,7 @@ To follow this tutorial, you should have a basic understanding of Android app de b. Launching the application on the device should look like this: - + 2. Training with a pre-loaded dataset - Animals @@ -805,7 +789,7 @@ To follow this tutorial, you should have a basic understanding of Android app de e. Use any animal image from your library for inferencing now. - + As can be seen from the image above, the model correctly predicted `Cow`. 
@@ -825,7 +809,7 @@ To follow this tutorial, you should have a basic understanding of Android app de g. That's it!. Hopefully the application classified the image correctly. - + ## Conclusion diff --git a/docs/tutorials/on-device-training/ios-app.md b/docs/tutorials/on-device-training/ios-app.md index fff1347923ef0..e61bab68596ff 100644 --- a/docs/tutorials/on-device-training/ios-app.md +++ b/docs/tutorials/on-device-training/ios-app.md @@ -7,7 +7,7 @@ nav_order: 2 --- # Building an iOS Application - +{: .no_toc } In this tutorial, we will explore how to build an iOS application that incorporates ONNX Runtime's On-Device Training solution. On-device training refers to the process of training a machine learning model directly on an edge device without relying on cloud services or external servers. In this tutorial, we will build a simple speaker identification app that learns to identify a speaker's voice. We will take a look at how to train a model on-device, export the trained model, and use the trained model to perform inference. @@ -18,6 +18,7 @@ Here is what the application will look like: ## Introduction +{: .no_toc } We will guide you through the process of building an iOS application that can train a simple audio classification model using on-device training techniques. The tutorial showcases the `transfer learning` technique where knowledge gained from training a model on one task is leveraged to improve the performance of a model on a different but related task. Instead of starting the learning process from scratch, transfer learning allows us to transfer the knowledge or features learned by a pre-trained model to a new task. In this tutorial, we will leverage the [`wav2vec`](https://huggingface.co/superb/wav2vec2-base-superb-sid) model which has been trained on large-scale celebrity speech data such as `VoxCeleb1`. We will use the pre-trained model to extract features from the audio data and train a binary classifier to identify the speaker. The initial layers of the model serve as a feature extractor, capturing the important features of the audio data. Only the last layer of the model is trained to perform the classification task. @@ -29,23 +30,9 @@ In the tutorial, we will: - Use the exported model to perform inference -## Contents -- [Building an iOS Application](#building-an-ios-application) - - [Introduction](#introduction) - - [Contents](#contents) - - [Prerequisites](#prerequisites) - - [Generating the training artifacts](#generating-the-training-artifacts) - - [Building the iOS application](#building-the-ios-application) - - [Xcode Setup](#xcode-setup) - - [Application Overview](#application-overview) - - [Training the model](#training-the-model) - - [Inference with the trained model](#inference-with-the-trained-model) - - [Recording Audio](#recording-audio) - - [Train View](#train-view) - - [Infer View](#infer-view) - - [ContentView](#contentview) - - [Running the iOS application](#running-the-ios-application) - - [Conclusion](#conclusion) +## Table of Contents +* TOC placeholder +{:toc} ## Prerequisites @@ -947,27 +934,27 @@ Now, we are ready to run the application. You can run the application on the sim a. Now, when you run the application, you should see the following screen: - + b. Next, click on the `Train` button to navigate to the `TrainView`. The `TrainView` will prompt you to record your voice. You will need to record your voice `kNumRecordings` times. - + c. Once all the recordings are complete, the application will train the model on the given data. 
You will see the progress bar indicating the progress of the training. - + d. Once the training is complete, you will see the following screen: - + e. Now, click on the `Infer` button to navigate to the `InferView`. The `InferView` will prompt you to record your voice. Once the recording is complete, it will perform inference with the trained model and display the result of the inference. - + That's it! Hopefully, it identified your voice correctly. diff --git a/src/routes/blogs/pytorch-on-the-edge/+page.svelte b/src/routes/blogs/pytorch-on-the-edge/+page.svelte index 83ab6d2d49db6..d0a9d765cd5f1 100644 --- a/src/routes/blogs/pytorch-on-the-edge/+page.svelte +++ b/src/routes/blogs/pytorch-on-the-edge/+page.svelte @@ -179,9 +179,9 @@ fun run(audioTensor: OnnxTensor): Result {
By: Natalie Kershaw and Prasanth Pulavarthi
@@ -217,12 +217,12 @@ fun run(audioTensor: OnnxTensor): Result { anywhere that is outside of the cloud, ranging from large, well-resourced personal computers to small footprint devices such as mobile phones. This has been a challenging task to accomplish in the past, but new advances in model optimization and software like ONNX Runtime make it more feasible - even for new generative AI and large language models like Stable Diffusion, Whisper, and Llama2. There are several factors to keep in mind when thinking about running a PyTorch model on the
We mentioned ONNX Runtime several times above. ONNX Runtime is a compact, standards-based @@ -305,7 +305,7 @@ fun run(audioTensor: OnnxTensor): Result { format that doesn't require the PyTorch framework and its gigabytes of dependencies. PyTorch has thought about this and includes an API that enables exactly this - torch.onnx. ONNX is an open standard that defines the operators that make up models. The PyTorch ONNX APIs take the Pythonic PyTorch code and turn it into a functional graph that captures the operators that are needed to run the model without Python. As with everything @@ -318,7 +318,7 @@ fun run(audioTensor: OnnxTensor): Result { The popular Hugging Face library also has APIs that build on top of this torch.onnx functionality to export models to the ONNX format. Over 130,000 models are supported, making it very likely that the model you care about is one of them.
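As a rough illustration of that export step, here is a minimal sketch using torch.onnx on a torchvision MobileNetV2. The model choice, input shape, and output file name are placeholders for illustration, not taken from the blog:

```python
import torch
import torchvision

# Load a pre-trained model to export; any torch.nn.Module works the same way.
model = torchvision.models.mobilenet_v2(weights="DEFAULT")
model.eval()

# torch.onnx.export traces the model with a representative dummy input and
# writes the captured operator graph to an .onnx file that runs without Python.
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(
    model,
    dummy_input,
    "mobilenetv2.onnx",
    input_names=["input"],
    output_names=["logits"],
    dynamic_axes={"input": {0: "batch"}, "logits": {0: "batch"}},  # allow variable batch size
)
```

The exported file can then be loaded by ONNX Runtime on any supported platform without pulling in the PyTorch dependency.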
@@ -328,7 +328,7 @@ fun run(audioTensor: OnnxTensor): Result { and web browsers) via various languages (from C# to JavaScript to Swift). -You don't have to export the fifth model, ClipTokenizer, as it is available in ONNX Runtime extensionsONNX Runtime extensions, a library for pre and post processing PyTorch models.
@@ -353,7 +353,7 @@ fun run(audioTensor: OnnxTensor): Result { To run this pipeline of models as a .NET application, we build the pipeline code in C#. This code can be run on CPU, GPU, or NPU, if they are available on your machine, using ONNX Runtime's device-specific hardware accelerators. This is configured with theExecutionProviderTarget
ExecutionProviderTarget below.
You can build the application and run it on Windows with the detailed steps shown in this tutorial.
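The blog's sample configures this in C#; for reference, the same idea in ONNX Runtime's Python API is simply a prioritized list of execution providers. This is a sketch, and the model file name is a placeholder:

```python
import onnxruntime as ort

# ONNX Runtime tries the providers in order and falls back to the next one,
# so listing CPUExecutionProvider last gives a safe default on any machine.
providers = ["CUDAExecutionProvider", "CPUExecutionProvider"]
session = ort.InferenceSession("model.onnx", providers=providers)

# Shows which providers were actually enabled for this session.
print(session.get_providers())
```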
@@ -374,7 +374,7 @@ fun run(audioTensor: OnnxTensor): Result { Running a PyTorch model locally in the browser is not only possible but super simple with the transformers.js library. Transformers.js uses ONNX Runtime Web as its backend. Many models are already converted to ONNX and served by the transformers.js CDN, making inference in the browser a matter of writing @@ -407,7 +407,7 @@ fun run(audioTensor: OnnxTensor): Result { All components of the Whisper Tiny model (audio decoder, encoder, decoder, and text sequence generation) can be composed and exported to a single ONNX model using the Olive framework. To run this model as part of a mobile application, you can use ONNX Runtime Mobile, which supports Android, iOS, React Native, and MAUI/Xamarin.
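Olive drives that composition through a workflow configuration, which the blog's linked tutorial covers in detail. As a loosely related sketch of the same end goal - getting Whisper Tiny into an ONNX form that ONNX Runtime can execute - Hugging Face Optimum can also export and run it. Note that this is an alternative path, not the Olive workflow the blog describes, and the model id and audio file name are assumptions:

```python
from optimum.onnxruntime import ORTModelForSpeechSeq2Seq
from transformers import AutoProcessor, pipeline

# Export openai/whisper-tiny to ONNX and run it with ONNX Runtime via Optimum.
model = ORTModelForSpeechSeq2Seq.from_pretrained("openai/whisper-tiny", export=True)
processor = AutoProcessor.from_pretrained("openai/whisper-tiny")

asr = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
)
print(asr("sample.wav"))  # "sample.wav" is a placeholder audio file
```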
@@ -420,7 +420,7 @@ fun run(audioTensor: OnnxTensor): Result {The relevant snippet of a example Android mobile appAndroid mobile app that performs speech transcription on short samples of audio is shown below:
You can read the full Speaker Verification tutorial, and build and run the application from source.
diff --git a/src/routes/components/footer.svelte b/src/routes/components/footer.svelte index b030524976742..e6b855d0ca129 100644 --- a/src/routes/components/footer.svelte +++ b/src/routes/components/footer.svelte @@ -9,7 +9,7 @@