
Commit

fixed links.
MaanavD committed Dec 10, 2024
1 parent 0ddcb8d commit f9cbd18
Showing 4 changed files with 9 additions and 9 deletions.
4 changes: 2 additions & 2 deletions docs/build/eps.md
@@ -271,7 +271,7 @@ See more information on the OpenVINO™ Execution Provider [here](../execution-p
*2024.3 is the currently recommended OpenVINO™ version. [OpenVINO™ 2023.3](https://docs.openvino.ai/2023.3/home.html) is the minimum required OpenVINO™ version.*

2. Configure the target hardware by following the device-specific instructions below:
- * To configure Intel<sup>®</sup> Processor Graphics(GPU) please follow these instructions: [Windows](https://docs.openvino.ai/latest/openvino_docs_install_guides_configurations_for_intel_gpu.html#gpu-guide-windows), [Linux](https://docs.openvino.ai/latest/openvino_docs_install_guides_configurations_for_intel_gpu.html#linux)
+ * To configure Intel<sup>®</sup> Processor Graphics(GPU) please follow these instructions: [Windows](https://docs.openvino.ai/2024/get-started/configurations/configurations-intel-gpu.html#windows), [Linux](https://docs.openvino.ai/2024/get-started/configurations/configurations-intel-gpu.html#linux)


3. Initialize the OpenVINO™ environment by running the setupvars script as shown below. This is a required step:
@@ -306,7 +306,7 @@ See more information on the OpenVINO™ Execution Provider [here](../execution-p
* `--use_openvino` builds the OpenVINO™ Execution Provider in ONNX Runtime.
* `<hardware_option>`: Specifies the default hardware target for building OpenVINO™ Execution Provider. This can be overridden dynamically at runtime with another option (refer to [OpenVINO™-ExecutionProvider](../execution-providers/OpenVINO-ExecutionProvider.md#summary-of-options) for more details on dynamic device selection). Below are the options for different Intel target devices.
- Refer to [Intel GPU device naming convention](https://docs.openvino.ai/latest/openvino_docs_OV_UG_supported_plugins_GPU.html#device-naming-convention) for specifying the correct hardware target in cases where both integrated and discrete GPU's co-exist.
+ Refer to [Intel GPU device naming convention](https://docs.openvino.ai/2024/openvino-workflow/running-inference/inference-devices-and-modes/gpu-device.html#device-naming-convention) for specifying the correct hardware target in cases where both integrated and discrete GPU's co-exist.

| Hardware Option | Target Device |
| --------------- | ------------------------|
10 changes: 5 additions & 5 deletions docs/execution-providers/OpenVINO-ExecutionProvider.md
@@ -90,7 +90,7 @@ To use csharp api for openvino execution provider create a custom nuget package.

### OpenCL queue throttling for GPU devices

- Enables [OpenCL queue throttling](https://docs.openvino.ai/latest/groupov_runtime_ocl_gpu_prop_cpp_api.html?highlight=throttling) for GPU devices. Reduces CPU utilization when using GPUs with OpenVINO EP.
+ Enables [OpenCL queue throttling](https://docs.openvino.ai/2024/api/c_cpp_api/group__ov__runtime__ocl__gpu__prop__cpp__api.html) for GPU devices. Reduces CPU utilization when using GPUs with OpenVINO EP.
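For reference, a minimal Python sketch of how this setting can be passed as an OpenVINO EP provider option. The `enable_opencl_throttling` key is assumed to match the provider-options table in this document, and the model path is hypothetical.

```python
import onnxruntime as ort

# Create a session on the OpenVINO EP with queue throttling enabled for a GPU device.
session = ort.InferenceSession(
    "model.onnx",  # hypothetical model path
    providers=[
        ("OpenVINOExecutionProvider", {
            "device_type": "GPU",
            "enable_opencl_throttling": "true",  # assumed option key; lowers CPU load while the GPU queue drains
        })
    ],
)
```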

### Model caching

@@ -118,7 +118,7 @@ Int8 models are supported on CPU, GPU and NPU.

OpenVINO™ Execution Provider now supports ONNX models that store weights in external files. It is especially useful for models larger than 2GB because of protobuf limitations.

- See the [OpenVINO™ ONNX Support documentation](https://docs.openvino.ai/latest/classov_1_1Core.html).
+ See the [OpenVINO™ ONNX Support documentation](https://docs.openvino.ai/2024/openvino-workflow/model-preparation/convert-model-onnx.html).

Converting and Saving an ONNX Model to External Data:
Use the ONNX APIs; see the [documentation](https://github.com/onnx/onnx/blob/master/docs/ExternalData.md#converting-and-saving-an-onnx-model-to-external-data).
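A short sketch of that conversion, assuming the standard `onnx` Python package; file names and the size threshold are illustrative.

```python
import onnx

# Load an existing model and re-save it with weights moved to an external data file,
# which avoids the 2GB protobuf limit for large models.
model = onnx.load("large_model.onnx")  # hypothetical path
onnx.save_model(
    model,
    "large_model_external.onnx",
    save_as_external_data=True,    # store initializers outside the protobuf
    all_tensors_to_one_file=True,  # keep all weights in a single companion file
    location="large_model.data",   # written next to the .onnx file
    size_threshold=1024,           # only externalize tensors larger than 1 KB
)
```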
@@ -177,7 +177,7 @@ Use `AUTO:<device 1><device 2>..` as the device name to delegate selection of an
From the application point of view, this is just another device that handles all accelerators in the full system.
For more information on Auto-Device plugin of OpenVINO™, please refer to the
- [Intel OpenVINO™ Auto Device Plugin](https://docs.openvino.ai/latest/openvino_docs_OV_UG_supported_plugins_AUTO.html).
+ [Intel OpenVINO™ Auto Device Plugin](https://docs.openvino.ai/2024/openvino-workflow/running-inference/inference-devices-and-modes/gpu-device.html#automatic-device-selection).
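For illustration (not part of this change), a minimal sketch of delegating device selection to AUTO through the `device_type` provider option; the model path and the exact device list are assumptions.

```python
import onnxruntime as ort

# Let OpenVINO's AUTO plugin pick the target from the listed candidate devices.
session = ort.InferenceSession(
    "model.onnx",  # hypothetical model path
    providers=[("OpenVINOExecutionProvider", {"device_type": "AUTO:GPU,CPU"})],
)
```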
### Heterogeneous Execution for OpenVINO™ Execution Provider
@@ -186,7 +186,7 @@ The heterogeneous execution enables computing for inference on one network on se
* To utilize the accelerator's power and calculate the heaviest parts of the network on the accelerator, while executing unsupported layers on fallback devices like the CPU to utilize all available hardware more efficiently during one inference.
For more information on Heterogeneous plugin of OpenVINO™, please refer to the
- [Intel OpenVINO™ Heterogeneous Plugin](https://docs.openvino.ai/latest/openvino_docs_OV_UG_Hetero_execution.html).
+ [Intel OpenVINO™ Heterogeneous Plugin](https://docs.openvino.ai/2024/openvino-workflow/running-inference/inference-devices-and-modes/hetero-execution.html).
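A similarly hedged sketch for heterogeneous execution, assuming a `HETERO:<device 1>,<device 2>` device string.

```python
import onnxruntime as ort

# HETERO splits a single graph: supported operators run on the GPU,
# unsupported ones fall back to the CPU listed after it.
session = ort.InferenceSession(
    "model.onnx",  # hypothetical model path
    providers=[("OpenVINOExecutionProvider", {"device_type": "HETERO:GPU,CPU"})],
)
```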
### Multi-Device Execution for OpenVINO EP
@@ -196,7 +196,7 @@ Multi-Device plugin automatically assigns inference requests to available comput
* More consistent performance, since the devices can now share the inference burden (so that if one device is becoming too busy, another device can take more of the load)
For more information on Multi-Device plugin of OpenVINO™, please refer to the
- [Intel OpenVINO™ Multi Device Plugin](https://docs.openvino.ai/latest/openvino_docs_OV_UG_Running_on_multiple_devices.html).
+ [Intel OpenVINO™ Multi Device Plugin](https://docs.openvino.ai/2024/openvino-workflow/running-inference/inference-devices-and-modes/gpu-device.html#multi-stream-execution).
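And a sketch for multi-device execution, assuming a `MULTI:<device 1>,<device 2>` device string and a hypothetical input name and shape.

```python
import numpy as np
import onnxruntime as ort

# MULTI shares whole inference requests across the listed devices.
session = ort.InferenceSession(
    "model.onnx",  # hypothetical model path
    providers=[("OpenVINOExecutionProvider", {"device_type": "MULTI:GPU,CPU"})],
)

# Hypothetical input name and shape; concurrent requests help MULTI keep both devices busy.
outputs = session.run(None, {"input": np.zeros((1, 3, 224, 224), dtype=np.float32)})
```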
### Export OpenVINO Compiled Blob
Export the OpenVINO compiled blob as an ONNX model. Using this ONNX model for subsequent inferences avoids model recompilation and could have a positive impact on Session creation time. This feature is currently enabled for fully supported models only. It complies with the ORT session config keys
2 changes: 1 addition & 1 deletion docs/tutorials/on-device-training/ios-app.md
@@ -964,5 +964,5 @@ Congratulations! You have successfully built an iOS application that can train a



- [Back to top](#contents)
+ [Back to top](#table-of-contents)

2 changes: 1 addition & 1 deletion src/routes/blogs/accelerating-phi-3/+page.svx
@@ -51,7 +51,7 @@ Whether it's Windows, Linux, Android, or Mac, there's a path to infer models eff

## Try the ONNX Runtime Generate() API

- We are pleased to announce our new Generate() API, which makes it easier to run the Phi-3 models across a range of devices, platforms, and EP backends by wrapping several aspects of generative AI inferencing. The Generate() API makes it easy to drag and drop LLMs straight into your app. To run the early version of these models with ONNX, follow the steps [here](http://aka.ms/generate-tutorial).
+ We are pleased to announce our new Generate() API, which makes it easier to run the Phi-3 models across a range of devices, platforms, and EP backends by wrapping several aspects of generative AI inferencing. The Generate() API makes it easy to drag and drop LLMs straight into your app. To run the early version of these models with ONNX, follow the steps [here](https://aka.ms/generate-tutorial).


Example:
