From 780fb2f53f63e588de6a1ec47129abe6776dfba5 Mon Sep 17 00:00:00 2001 From: sfatimar Date: Sat, 9 Mar 2024 00:11:23 +0530 Subject: [PATCH 1/3] Update OpenVINO-ExecutionProvider.md (#19794) ### Description Documentation Update for Release 1.17.1 ### Motivation and Context This change updates the documentation for the Release 1.17.1 packages and aligns it with OpenVINO 2023.3. --- docs/execution-providers/OpenVINO-ExecutionProvider.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/docs/execution-providers/OpenVINO-ExecutionProvider.md b/docs/execution-providers/OpenVINO-ExecutionProvider.md index 35f64989e8851..588d6d5d05572 100644 --- a/docs/execution-providers/OpenVINO-ExecutionProvider.md +++ b/docs/execution-providers/OpenVINO-ExecutionProvider.md @@ -20,7 +20,7 @@ Accelerate ONNX models on Intel CPUs, GPUs with Intel OpenVINO™ Execution Prov ## Install Pre-built packages and Docker images are published for OpenVINO™ Execution Provider for ONNX Runtime by Intel for each release. 
-* OpenVINO™ Execution Provider for ONNX Runtime Release page: [Latest v5.1 Release](https://github.com/intel/onnxruntime/releases) +* OpenVINO™ Execution Provider for ONNX Runtime Release page: [Latest v5.2 Release](https://github.com/intel/onnxruntime/releases) * Python wheels Ubuntu/Windows: [onnxruntime-openvino](https://pypi.org/project/onnxruntime-openvino/) * Docker image: [openvino/onnxruntime_ep_ubuntu20](https://hub.docker.com/r/openvino/onnxruntime_ep_ubuntu20) @@ -30,6 +30,8 @@ ONNX Runtime OpenVINO™ Execution Provider is compatible with three lastest rel |ONNX Runtime|OpenVINO™|Notes| |---|---|---| +|1.17.1|2023.3|[Details](https://github.com/intel/onnxruntime/releases/tag/v5.2)| +|1.17.1|2023.2|[Details](https://github.com/intel/onnxruntime/releases/tag/v5.2)| |1.16.0|2023.1|[Details](https://github.com/intel/onnxruntime/releases/tag/v5.1)| |1.15.0|2023.0|[Details](https://github.com/intel/onnxruntime/releases/tag/v5.0.0)| |1.14.0|2022.3|[Details](https://github.com/intel/onnxruntime/releases/tag/v4.3)| From cdf838a194d5ca3582e6652b8e25d1110529172c Mon Sep 17 00:00:00 2001 From: Yulong Wang <7679871+fs-eire@users.noreply.github.com> Date: Tue, 12 Mar 2024 12:13:49 -0700 Subject: [PATCH 2/3] [doc] fix invalid link in OpenVINO EP doc (#19855) ### Description fix invalid link in OpenVINO EP doc --- docs/execution-providers/OpenVINO-ExecutionProvider.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/execution-providers/OpenVINO-ExecutionProvider.md b/docs/execution-providers/OpenVINO-ExecutionProvider.md index 588d6d5d05572..49ea3c7b203fe 100644 --- a/docs/execution-providers/OpenVINO-ExecutionProvider.md +++ b/docs/execution-providers/OpenVINO-ExecutionProvider.md @@ -96,7 +96,7 @@ Enables [OpenCL queue throttling](https://docs.openvino.ai/latest/groupov_runtim ### Model caching -OpenVINO™ supports [model caching](https://docs.openvino.ai/latest/openvino_docs_OV_UG_Model_caching_overview.html). 
+OpenVINO™ supports [model caching](https://docs.openvino.ai/2024/openvino-workflow/running-inference/optimize-inference/optimizing-latency/model-caching-overview.html). From OpenVINO™ 2023.1 version, model caching feature is supported on CPU, GPU along with kernel caching on iGPU, dGPU. From 8254a64271b3f496e0185eacbd44d609327e55b5 Mon Sep 17 00:00:00 2001 From: Sophie Schoenmeyer <107952697+sophies927@users.noreply.github.com> Date: Tue, 12 Mar 2024 19:05:56 -0700 Subject: [PATCH 3/3] Add AMD community blog (#19873) Update ORT website blog community tab w/ new AMD + ORT blog: https://rocm.blogs.amd.com/artificial-intelligence/stable-diffusion-onnx-runtime/README.html --- src/routes/blogs/+page.svelte | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/src/routes/blogs/+page.svelte b/src/routes/blogs/+page.svelte index 2ccf0470ae1cb..7b0def07a4361 100644 --- a/src/routes/blogs/+page.svelte +++ b/src/routes/blogs/+page.svelte @@ -302,6 +302,13 @@ } ]; let blogsCommunity = [ + { + title: 'Efficient image generation with Stable Diffusion models and ONNX Runtime using AMD GPUs', + date: 'February 23, 2024', + link: 'https://rocm.blogs.amd.com/artificial-intelligence/stable-diffusion-onnx-runtime/README.html', + blurb: + 'Use pre-trained Stable Diffusion models to generate images from text (text-to-image), transform existing visuals (image-to-image), and restore damaged pictures (inpainting) on AMD GPUs using ONNX Runtime.' + }, { title: 'AMD expands its AI and ML development tools with ROCm 6.0', date: 'February 15, 2024',
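
For context on patch 2: the model-caching feature whose link it fixes is driven through the OpenVINO Execution Provider's provider options in `onnxruntime-openvino`. A minimal sketch, assuming the `device_type` and `cache_dir` provider options are appropriate here; the cache directory path, device choice, and `make_session` helper are illustrative, not taken from the patches:

```python
# Sketch (not part of the patches above): enabling OpenVINO model caching
# through the ONNX Runtime OpenVINO Execution Provider. The cache directory
# and device choice are illustrative assumptions.
provider_options = [{
    "device_type": "GPU",       # kernel caching applies on iGPU/dGPU
    "cache_dir": "./ov_cache",  # compiled-model blobs are stored here
}]
providers = ["OpenVINOExecutionProvider"]

def make_session(model_path: str):
    # onnxruntime is imported lazily so the configuration above can be
    # inspected without onnxruntime-openvino installed.
    import onnxruntime as ort
    return ort.InferenceSession(
        model_path,
        providers=providers,
        provider_options=provider_options,
    )
```

On a second session creation with the same `cache_dir`, OpenVINO can load the previously compiled model instead of recompiling it, which mainly reduces session start-up latency on GPU devices.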