diff --git a/_sass/color_schemes/onnxruntime.scss b/_sass/color_schemes/onnxruntime.scss
index 4e0cc934e1881..3016cc9046cc1 100644
--- a/_sass/color_schemes/onnxruntime.scss
+++ b/_sass/color_schemes/onnxruntime.scss
@@ -2,12 +2,82 @@ $link-color: #226aca;
 $btn-primary-color: #226aca;
 // Code is too light in default theme
 //
-.highlight .n {
-  color: #555 !important;
-}
-.highlight .nn {
-  color: #555 !important;
-}
-.highlight .c1 {
-  color: #188616 !important;
-}
+// .highlight .n {
+//   color: #555 !important;
+// }
+// .highlight .nn {
+//   color: #555 !important;
+// }
+// .highlight .c1 {
+//   color: #188616 !important;
+// }
+
+.highlight .hll { background-color: #ffffcc; }
+.highlight { background: #ffffff; }
+.highlight .c { color: #767676; }
+.highlight .err { background-color: #FFAAAA; color: #a00000; }
+.highlight .k { color: #008800; font-weight: bold; }
+.highlight .o { color: #333333; }
+.highlight .ch { color: #767676; }
+.highlight .cm { color: #767676; }
+.highlight .cp { color: #557799; }
+.highlight .cpf { color: #767676; }
+.highlight .c1 { color: #767676; }
+.highlight .cs { color: #cc0000; font-weight: bold; }
+.highlight .gd { color: #A00000; }
+.highlight .ge { font-style: italic; }
+.highlight .gr { color: #eb0000; }
+.highlight .gh { color: #000080; font-weight: bold; }
+.highlight .gi { color: #008700; }
+.highlight .go { color: #767676; }
+.highlight .gp { font-weight: bold; color: #bc5909; }
+.highlight .gs { font-weight: bold; }
+.highlight .gu { color: #800080; font-weight: bold; }
+.highlight .gt { color: #0044DD; }
+.highlight .kc { color: #008800; font-weight: bold; }
+.highlight .kd { color: #008800; font-weight: bold; }
+.highlight .kn { color: #008800; font-weight: bold; }
+.highlight .kp { color: #003388; font-weight: bold; }
+.highlight .kr { color: #008800; font-weight: bold; }
+.highlight .kt { color: #333399; font-weight: bold; }
+.highlight .m { color: #6600EE; font-weight: bold; }
+.highlight .s { background-color: #fff0f0; }
+.highlight .na { color: #0000CC; }
+.highlight .nb { color: #007020; }
+.highlight .nc { color: #BB0066; font-weight: bold; }
+.highlight .no { color: #003366; font-weight: bold; }
+.highlight .nd { color: #555555; font-weight: bold; }
+.highlight .ni { color: #880000; font-weight: bold; }
+.highlight .ne { font-weight: bold; color: #eb0000; }
+.highlight .nf { color: #0066BB; font-weight: bold; }
+.highlight .nl { font-weight: bold; color: #8f6f00; }
+.highlight .nn { font-weight: bold; color: #0e7eab; }
+.highlight .nt { color: #007700; }
+.highlight .nv { color: #996633; }
+.highlight .ow { color: #000000; font-weight: bold; }
+.highlight .w { color: #767676; }
+.highlight .mb { color: #6600EE; font-weight: bold; }
+.highlight .mf { color: #6600EE; font-weight: bold; }
+.highlight .mh { color: #005588; font-weight: bold; }
+.highlight .mi { color: #0000DD; font-weight: bold; }
+.highlight .mo { color: #4400EE; font-weight: bold; }
+.highlight .sa { background-color: #fff0f0; }
+.highlight .sb { background-color: #fff0f0; }
+.highlight .sc { color: #0044DD; }
+.highlight .dl { background-color: #fff0f0; }
+.highlight .sd { color: #d54220; }
+.highlight .s2 { background-color: #fff0f0; }
+.highlight .se { color: #666666; font-weight: bold; background-color: #fff0f0; }
+.highlight .sh { background-color: #fff0f0; }
+.highlight .si { background-color: #eeeeee; }
+.highlight .sx { background-color: #fff0f0; color: #d82100; }
+.highlight .sr { color: #000000; background-color: #fff0ff; }
+.highlight .s1 { background-color: #fff0f0; }
+.highlight .ss { color: #AA6600; }
+.highlight .bp { color: #007020; }
+.highlight .fm { color: #0066BB; font-weight: bold; }
+.highlight .vc { color: #336699; }
+.highlight .vg { font-weight: bold; color: #b55f00; }
+.highlight .vi { color: #3333BB; }
+.highlight .vm { color: #996633; }
+.highlight .il { color: #0000DD; font-weight: bold; }
\ No newline at end of file
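The selectors above are the short token-class names that Rouge/Pygments-style highlighters emit, which is why the whole palette can live in this one stylesheet. As a sanity check, a small Python sketch (assuming the pygments package, whose output classes Jekyll's Rouge highlighter is compatible with) shows where classes like .kn, .nn, and .c1 come from:

    # Prints HTML in which each token sits in a <span> whose class is a short
    # token name -- the same classes the SCSS rules above colorize:
    # .kn for the `import` keyword, .nn for the module name, .c1 for the comment.
    from pygments import highlight
    from pygments.formatters import HtmlFormatter
    from pygments.lexers import PythonLexer

    print(highlight("import os  # comment", PythonLexer(), HtmlFormatter()))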
diff --git a/docs/tutorials/on-device-training/ios-app.md b/docs/tutorials/on-device-training/ios-app.md
index 76f485a2e2648..fff1347923ef0 100644
--- a/docs/tutorials/on-device-training/ios-app.md
+++ b/docs/tutorials/on-device-training/ios-app.md
@@ -15,7 +15,7 @@ In this tutorial, we will build a simple speaker identification app that learns
 
 Here is what the application will look like:
 
-
+
 
 ## Introduction
 
 We will guide you through the process of building an iOS application that can train a simple audio classification model using on-device training techniques. The tutorial showcases the `transfer learning` technique where knowledge gained from training a model on one task is leveraged to improve the performance of a model on a different but related task. Instead of starting the learning process from scratch, transfer learning allows us to transfer the knowledge or features learned by a pre-trained model to a new task.
 
@@ -30,28 +30,22 @@ In the tutorial, we will:
 
 ## Contents
 
-- [Introduction](#introduction)
-- [Prerequisites](#prerequisites)
-- [Generating the training artifacts](#generating-the-training-artifacts)
-  - [Export the model to ONNX](#export-the-model-to-onnx)
-  - [Define the trainable and non trainable parameters](#define-the-trainable-and-non-trainable-parameters)
-  - [Generate the training artifacts](#generate-the-training-artifacts)
-
-- [Building the iOS application](#building-the-ios-application)
+- [Building an iOS Application](#building-an-ios-application)
+  - [Introduction](#introduction)
+  - [Contents](#contents)
+  - [Prerequisites](#prerequisites)
+  - [Generating the training artifacts](#generating-the-training-artifacts)
+  - [Building the iOS application](#building-the-ios-application)
   - [Xcode Setup](#xcode-setup)
   - [Application Overview](#application-overview)
   - [Training the model](#training-the-model)
-    - [Loading the training artifacts and initializing training session](#loading-the-training-artifacts-and-initializing-training-session)
-    - [Training the model](#training-the-model-1)
-    - [Exporting the trained model](#exporting-the-trained-model)
-    - [Inference with the trained model](#inference-with-the-trained-model)
   - [Recording Audio](#recording-audio)
   - [Train View](#train-view)
   - [Infer View](#infer-view)
   - [ContentView](#contentview)
-- [Running the iOS application](#running-the-ios-application)
-- [Conclusion](#conclusion)
+  - [Running the iOS application](#running-the-ios-application)
+  - [Conclusion](#conclusion)
 
 ## Prerequisites
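The tutorial's "Generating the training artifacts" step referenced in the table of contents above boils down to one call in the onnxruntime-training Python package. A minimal sketch, assuming a hypothetical exported model.onnx whose final classifier layer is the only trainable part (paths and parameter names are placeholders, not the tutorial's exact values):

    import onnx
    from onnxruntime.training import artifacts

    model = onnx.load("model.onnx")  # hypothetical path to the exported backbone

    # Transfer learning: train only the classifier head, freeze everything else.
    requires_grad = ["classifier.weight", "classifier.bias"]  # placeholder names
    frozen = [p.name for p in model.graph.initializer if p.name not in requires_grad]

    # Writes the training/eval/optimizer graphs plus the initial checkpoint
    # that the iOS app later loads through the ORT training bindings.
    artifacts.generate_artifacts(
        model,
        requires_grad=requires_grad,
        frozen_params=frozen,
        loss=artifacts.LossType.CrossEntropyLoss,
        optimizer=artifacts.OptimType.AdamW,
        artifact_directory="artifacts",
    )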
diff --git a/src/routes/blogs/accelerating-llama-2/+page.svelte b/src/routes/blogs/accelerating-llama-2/+page.svelte
index 5854bfcb489e8..aeb3bf6b83fae 100644
--- a/src/routes/blogs/accelerating-llama-2/+page.svelte
+++ b/src/routes/blogs/accelerating-llama-2/+page.svelte
@@ -45,11 +45,11 @@
 By: Kunal Vaishnavi and
-Parinita Rahi
+Parinita Rahi
-14TH NOVEMBER, 2023
+14TH NOVEMBER, 2023 (Updated 22nd November)
@@ -76,7 +76,7 @@
 Llama2 is a state-of-the-art open source LLM from Meta ranging in scale from 7B to 70B parameters (7B, 13B, 70B). Microsoft and Meta
-announced
+announced
 their AI on Azure and Windows collaboration in July 2023. As part of the announcement, Llama2 was added to the Azure AI model catalog, which serves as a hub of foundation models that empower developers and machine learning (ML) professionals to easily discover, evaluate, customize, and
@@ -152,7 +152,7 @@
 More details on these metrics can be found
-here.
+here.
@@ -165,7 +165,7 @@
-ONNX Runtime applied Megatron-LM Tensor Parallelism on the 70B model to split the original model weight onto different GPUs. Megatron
+ONNX Runtime applied Megatron-LM Tensor Parallelism on the 70B model to split the original model weight onto different GPUs. Megatron
@@ -176,7 +176,7 @@
 You can find additional example scripts
-here.
+here.
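To make the tensor-parallelism hunk above concrete, here is a toy numpy sketch of a Megatron-LM-style column-parallel linear layer; it is illustrative only, not ONNX Runtime's implementation:

    import numpy as np

    hidden, ranks = 8, 2
    x = np.random.randn(1, hidden)           # activation, replicated on every rank
    W = np.random.randn(hidden, 4 * hidden)  # full MLP up-projection weight

    shards = np.split(W, ranks, axis=1)         # column-parallel: one shard per GPU
    partials = [x @ shard for shard in shards]  # each rank multiplies its slice only
    y = np.concatenate(partials, axis=1)        # the all-gather step, in spirit

    assert np.allclose(y, x @ W)  # sharded result matches the unsharded layer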
@@ -252,7 +252,7 @@
 calculate the rotary embeddings more efficiently with less memory usage. The rotary embedding compute kernels also support interleaved and non-interleaved formats to support both the
-Microsoft version of LLaMA-2
+Microsoft version of LLaMA-2
 and the Hugging Face version of LLaMA-2 respectively while sharing the same calculations.
@@ -260,11 +260,11 @@
 The optimizations work for the
-Hugging Face versions
+Hugging Face versions
 (models ending with -hf) and the Microsoft versions. You can download the optimized HF versions from
-Microsoft's LLaMA-2 ONNX repository.
+Microsoft's LLaMA-2 ONNX repository.
 Stay tuned for newer Microsoft versions coming soon!
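The interleaved vs. non-interleaved distinction in the rotary embedding hunk above comes down to which element pairs rotate together: the Microsoft format pairs adjacent elements (x0, x1), while the Hugging Face format pairs element i with element i + d/2. A toy numpy sketch of both layouts (not the ONNX Runtime kernels), sharing the same cos/sin tables:

    import numpy as np

    def rope_non_interleaved(x, pos, base=10000.0):
        # Hugging Face layout: pairs are (x[i], x[i + d/2]).
        d = x.shape[-1]
        inv = base ** (-np.arange(0, d, 2) / d)          # per-pair frequencies
        cos, sin = np.cos(pos * inv), np.sin(pos * inv)  # shared with the other layout
        x1, x2 = x[..., : d // 2], x[..., d // 2 :]
        return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos], axis=-1)

    def rope_interleaved(x, pos, base=10000.0):
        # Microsoft layout: pairs are adjacent elements (x[2i], x[2i+1]).
        d = x.shape[-1]
        inv = base ** (-np.arange(0, d, 2) / d)
        cos, sin = np.cos(pos * inv), np.sin(pos * inv)
        x1, x2 = x[..., 0::2], x[..., 1::2]
        out = np.empty_like(x)
        out[..., 0::2] = x1 * cos - x2 * sin
        out[..., 1::2] = x1 * sin + x2 * cos
        return out

    q = np.random.randn(8)
    print(rope_non_interleaved(q, pos=3), rope_interleaved(q, pos=3))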
@@ -281,7 +281,7 @@
 Here is an example of
-Llama2 optimization with Olive,
+Llama2 optimization with Olive,
 which harnesses ONNX Runtime optimizations highlighted in this blog. Distinct optimization flows cater to various requirements. For instance, you have the flexibility to choose different data types for quantization in CPU and GPU inference, based on your accuracy
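For readers who have not used Olive: a run is driven by a declarative config listing the passes to apply. The sketch below uses an assumed, simplified schema (pass names and config keys vary by Olive version; the linked example is the authoritative recipe):

    # Assumes the olive-ai package; the config layout here is illustrative.
    from olive.workflows import run as olive_run

    config = {
        "input_model": {
            "type": "PyTorchModel",
            "config": {"hf_config": {"model_name": "meta-llama/Llama-2-7b-hf"}},
        },
        "passes": {
            "conversion": {"type": "OnnxConversion"},
            # Applies the ONNX Runtime transformer fusions discussed in this blog.
            "optimization": {"type": "OrtTransformersOptimization"},
        },
    }

    olive_run(config)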
@@ -294,7 +294,7 @@
 Here is a
-sample notebook
+sample notebook
 that shows you an end-to-end example of how you can use the above ONNX Runtime optimizations in your application.
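In the same spirit as the sample notebook (though not its exact code), one way to consume an ONNX LLaMA-2 from Python is through Hugging Face Optimum's ONNX Runtime integration; this assumes the optimum and transformers packages and access to the gated Llama 2 weights:

    from optimum.onnxruntime import ORTModelForCausalLM
    from transformers import AutoTokenizer

    model_id = "meta-llama/Llama-2-7b-hf"  # gated checkpoint: requires approval
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    # export=True converts the PyTorch checkpoint to ONNX and runs it on ORT.
    model = ORTModelForCausalLM.from_pretrained(model_id, export=True)

    inputs = tokenizer("ONNX Runtime accelerates Llama2 by", return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=32)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))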
diff --git a/src/routes/training/+page.svelte b/src/routes/training/+page.svelte
index 44fd288350c49..a51093a9cb397 100644
--- a/src/routes/training/+page.svelte
+++ b/src/routes/training/+page.svelte
@@ -221,8 +221,8 @@ Personalization tasks where the model needs to be trained on the user's data
+Examples:
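The personalization scenario in this last hunk is what the iOS tutorial above implements: a short loop over the generated training artifacts. A hedged Python sketch (the iOS app uses the equivalent Swift/Objective-C bindings; paths, shapes, and data here are illustrative placeholders):

    import numpy as np
    from onnxruntime.training.api import CheckpointState, Module, Optimizer

    # Load the artifacts produced by generate_artifacts (see the sketch further up).
    state = CheckpointState.load_checkpoint("artifacts/checkpoint")
    module = Module("artifacts/training_model.onnx", state, "artifacts/eval_model.onnx")
    optimizer = Optimizer("artifacts/optimizer_model.onnx", module)

    module.train()
    features = np.random.rand(4, 80).astype(np.float32)  # stand-in audio features
    labels = np.array([0, 1, 0, 1], dtype=np.int64)      # stand-in speaker labels
    loss = module(features, labels)  # one forward + backward pass
    optimizer.step()                 # AdamW parameter update
    module.lazy_reset_grad()         # clear gradients before the next step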