Commit

Merge remote-tracking branch 'origin/gh-pages' into fs-eire/gh-pages-doc-webgpu
fs-eire committed Mar 14, 2024
2 parents f9f4369 + 8254a64 commit b0fb666
Showing 2 changed files with 11 additions and 2 deletions.
6 changes: 4 additions & 2 deletions docs/execution-providers/OpenVINO-ExecutionProvider.md
@@ -20,7 +20,7 @@ Accelerate ONNX models on Intel CPUs, GPUs with Intel OpenVINO™ Execution Prov
## Install

Pre-built packages and Docker images are published for OpenVINO™ Execution Provider for ONNX Runtime by Intel for each release.
- * OpenVINO™ Execution Provider for ONNX Runtime Release page: [Latest v5.1 Release](https://github.com/intel/onnxruntime/releases)
+ * OpenVINO™ Execution Provider for ONNX Runtime Release page: [Latest v5.2 Release](https://github.com/intel/onnxruntime/releases)
* Python wheels Ubuntu/Windows: [onnxruntime-openvino](https://pypi.org/project/onnxruntime-openvino/)
* Docker image: [openvino/onnxruntime_ep_ubuntu20](https://hub.docker.com/r/openvino/onnxruntime_ep_ubuntu20)

@@ -30,6 +30,8 @@ ONNX Runtime OpenVINO™ Execution Provider is compatible with three latest rel

|ONNX Runtime|OpenVINO™|Notes|
|---|---|---|
+ |1.17.1|2023.3|[Details](https://github.com/intel/onnxruntime/releases/tag/v5.2)|
+ |1.17.1|2023.2|[Details](https://github.com/intel/onnxruntime/releases/tag/v5.2)|
|1.16.0|2023.1|[Details](https://github.com/intel/onnxruntime/releases/tag/v5.1)|
|1.15.0|2023.0|[Details](https://github.com/intel/onnxruntime/releases/tag/v5.0.0)|
|1.14.0|2022.3|[Details](https://github.com/intel/onnxruntime/releases/tag/v4.3)|
@@ -94,7 +96,7 @@ Enables [OpenCL queue throttling](https://docs.openvino.ai/latest/groupov_runtim

### Model caching

- OpenVINO™ supports [model caching](https://docs.openvino.ai/latest/openvino_docs_OV_UG_Model_caching_overview.html).
+ OpenVINO™ supports [model caching](https://docs.openvino.ai/2024/openvino-workflow/running-inference/optimize-inference/optimizing-latency/model-caching-overview.html).

Starting with OpenVINO™ 2023.1, model caching is supported on CPU and GPU, along with kernel caching on iGPU and dGPU.
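As a brief sketch of how the caching described above is wired up from Python: the `device_type` and `cache_dir` provider-option names follow the OpenVINO™ Execution Provider documentation, while the model path and cache directory below are placeholders, not part of this commit.

```python
# Sketch: enabling OpenVINO™ EP model caching via provider options.
# Assumes onnxruntime-openvino is installed and "model.onnx" exists;
# both are placeholders for illustration.
import os

cache_dir = "ov_cache"
os.makedirs(cache_dir, exist_ok=True)

provider_options = {
    "device_type": "GPU",    # caching applies to CPU and GPU from 2023.1
    "cache_dir": cache_dir,  # compiled blobs / kernel caches are written here
}

# Uncomment once onnxruntime-openvino and a model are available:
# import onnxruntime as ort
# session = ort.InferenceSession(
#     "model.onnx",
#     providers=[("OpenVINOExecutionProvider", provider_options)],
# )
```

On the second and subsequent runs with the same `cache_dir`, the EP can reload the compiled model instead of recompiling it, cutting session-creation latency.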

7 changes: 7 additions & 0 deletions src/routes/blogs/+page.svelte
@@ -302,6 +302,13 @@
}
];
let blogsCommunity = [
+ {
+   title: 'Efficient image generation with Stable Diffusion models and ONNX Runtime using AMD GPUs',
+   date: 'February 23, 2024',
+   link: 'https://rocm.blogs.amd.com/artificial-intelligence/stable-diffusion-onnx-runtime/README.html',
+   blurb:
+     'Use pre-trained Stable Diffusion models to generate images from text (text-to-image), transform existing visuals (image-to-image), and restore damaged pictures (inpainting) on AMD GPUs using ONNX Runtime.'
+ },
{
title: 'AMD expands its AI and ML development tools with ROCm 6.0',
date: 'February 15, 2024',
