Loading encrypted compiled_blob
Signed-off-by: sgolebiewski-intel <[email protected]>
sgolebiewski-intel committed Dec 4, 2024
1 parent 7e42cb2 commit 9ea9cd8
Showing 1 changed file with 16 additions and 1 deletion.
docs/articles_en/documentation/openvino-security.rst (16 additions, 1 deletion)
@@ -55,7 +55,8 @@ Hardware-based protection such as Intel Software Guard Extensions (Intel SGX) can
decryption operation secrets and bind them to a device. For more information, see
the `Intel Software Guard Extensions <https://software.intel.com/en-us/sgx>`__.

-Use the ``ov::Core::read_model`` to set model representations and weights respectively.
+Use the `ov::Core::read_model <../api/c_cpp_api/group__ov__dev__exec__model.html#classov_1_1_core_1ae0576a95f841c3a6f5e46e4802716981>`__
+to set model representations and weights respectively.

Currently there is no way to read external weights from memory for ONNX models.
The ``ov::Core::read_model(const std::string& model, const Tensor& weights)`` method
@@ -65,6 +66,20 @@ should be called with ``weights`` passed as an empty ``ov::Tensor``.
:language: cpp
:fragment: part1
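
The C++ fragment included by the directive above is not shown on this page. As a rough
sketch of the pattern described in the surrounding text, the example below (not part of
this commit) decrypts a model held in memory and passes it to ``ov::Core::read_model``
with an empty ``ov::Tensor`` as the weights argument. The file name and the ``decrypt``
helper are hypothetical placeholders for the application's own decryption routine.

.. code-block:: cpp

   #include <openvino/openvino.hpp>

   #include <fstream>
   #include <sstream>
   #include <string>

   // Placeholder: a real application would decrypt the buffer here.
   static std::string decrypt(const std::string& data) {
       return data;
   }

   int main() {
       ov::Core core;

       // Load the encrypted model file into memory (hypothetical file name).
       std::ifstream file("model.onnx.enc", std::ios::binary);
       std::stringstream buffer;
       buffer << file.rdbuf();

       // Decrypt in memory, then read the model. For ONNX models, external
       // weights cannot be read from memory, so an empty ov::Tensor is passed.
       auto model = core.read_model(decrypt(buffer.str()), ov::Tensor());

       auto compiled_model = core.compile_model(model, "CPU");
       return 0;
   }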


+Encrypted models that have already been compiled, in the form of blob files,
+can be loaded using the
+`ov::Core::import_model <../api/c_cpp_api/group__ov__runtime__cpp__api.html#_CPPv4N2ov4Core12import_modelERNSt7istreamERKNSt6stringERK6AnyMap>`__
+method, as shown in the code sample below:
+
+.. code-block:: cpp
+
+   ov::Core core;
+   // Import a model from a blob.
+   std::ifstream compiled_blob(blob, std::ios_base::in | std::ios_base::binary);
+   auto compiled_model = core.import_model(compiled_blob, "CPU");
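
Because ``ov::Core::import_model`` reads from any ``std::istream``, an encrypted blob can
also be decrypted in memory before importing, so the plain-text blob never needs to be
written to disk. The sketch below (not part of this commit) illustrates this; the blob
file name and the ``decrypt_blob`` helper are hypothetical placeholders for the
application's own decryption code.

.. code-block:: cpp

   #include <openvino/openvino.hpp>

   #include <fstream>
   #include <sstream>
   #include <string>

   // Placeholder: a real application would decrypt the blob contents here.
   static std::string decrypt_blob(const std::string& data) {
       return data;
   }

   int main() {
       ov::Core core;

       // Read the encrypted blob file into memory (hypothetical file name).
       std::ifstream file("model_cpu.blob.enc", std::ios::binary);
       std::stringstream encrypted;
       encrypted << file.rdbuf();

       // Decrypt in memory and import the compiled model from a string stream.
       std::stringstream blob(decrypt_blob(encrypted.str()));
       auto compiled_model = core.import_model(blob, "CPU");
       return 0;
   }
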
Additional Resources
####################

