From 185086986104fc2657f67d2b82c4f30ec8abdbed Mon Sep 17 00:00:00 2001
From: matyash12 <93146910+matyash12@users.noreply.github.com>
Date: Tue, 7 Nov 2023 22:04:11 +0100
Subject: [PATCH] Typo (#18327)

fix typo elctron => electron
---
 docs/tutorials/web/index.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/tutorials/web/index.md b/docs/tutorials/web/index.md
index ce3a615f24321..e36391237e449 100644
--- a/docs/tutorials/web/index.md
+++ b/docs/tutorials/web/index.md
@@ -41,7 +41,7 @@ For more detail on the steps below, see the [build a web application](./build-we
 
 * **The model is too large** and requires higher hardware specs. In order to do inference on the client you need to have a model that is small enough to run efficiently on less powerful hardware.
 * You don't want the model to be downloaded onto the device.
 
-  You can also use the onnxruntime-node package in the backend of an elctron app.
+  You can also use the onnxruntime-node package in the backend of an electron app.
 
 * Inference on server using other language APIs. Use the ONNX Runtime packages for C/C++ and other languages.
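
Note: the documentation line fixed by this patch mentions running onnxruntime-node in the backend of an Electron app. For context only, here is a minimal sketch of that setup; the model path 'model.onnx', the input/output names 'input' and 'output', and the [1, N] tensor shape are illustrative assumptions, not part of the patch or the documented API.

// main.ts: Electron main process. ONNX Runtime runs on the Node.js side,
// so the model file never has to be downloaded to the client/renderer.
import { app, ipcMain } from 'electron';
import * as ort from 'onnxruntime-node';

app.whenReady().then(async () => {
  // Load the model once at startup ('model.onnx' is a hypothetical path).
  const session = await ort.InferenceSession.create('model.onnx');

  // The renderer sends raw numbers over IPC; inference stays in the backend.
  ipcMain.handle('run-inference', async (_event, data: number[]) => {
    // 'input', 'output', and the [1, N] shape are assumed; substitute your model's own names.
    const input = new ort.Tensor('float32', Float32Array.from(data), [1, data.length]);
    const results = await session.run({ input });
    return results.output.data; // a Float32Array, safe to return over Electron IPC
  });
});

A renderer process would then call something like ipcRenderer.invoke('run-inference', data) through a preload bridge to get predictions without ever holding the model itself.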