update category term (#2615)
jingxu10 authored Feb 19, 2024
1 parent b08a7f5 commit d09f53e
Showing 19 changed files with 185 additions and 185 deletions.
14 changes: 7 additions & 7 deletions cpu/2.2.0+cpu/tutorials/api_doc.html
@@ -78,7 +78,7 @@
<li class="toctree-l3"><a class="reference internal" href="#ipex.verbose"><code class="docutils literal notranslate"><span class="pre">verbose</span></code></a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="#fast-bert-experimental">Fast Bert (Experimental)</a><ul>
<li class="toctree-l2"><a class="reference internal" href="#fast-bert-prototype">Fast Bert (Prototype)</a><ul>
<li class="toctree-l3"><a class="reference internal" href="#ipex.fast_bert"><code class="docutils literal notranslate"><span class="pre">fast_bert()</span></code></a></li>
</ul>
</li>
@@ -232,7 +232,7 @@ <h2>General<a class="headerlink" href="#general" title="Permalink to this headin
<code class="docutils literal notranslate"><span class="pre">True</span></code>. You might get better performance at the cost of extra memory usage.
The default value is <code class="docutils literal notranslate"><span class="pre">None</span></code>. Explicitly setting this knob overwrites the
configuration set by <code class="docutils literal notranslate"><span class="pre">level</span></code> knob.</p></li>
<li><p><strong>graph_mode</strong> – (bool) [experimental]: It will automatically apply a combination of methods
<li><p><strong>graph_mode</strong> – (bool) [prototype]: It will automatically apply a combination of methods
to generate graph or multiple subgraphs if True. The default value is <code class="docutils literal notranslate"><span class="pre">False</span></code>.</p></li>
<li><p><strong>concat_linear</strong> (<em>bool</em>) – Whether to perform <code class="docutils literal notranslate"><span class="pre">concat_linear</span></code>. It only
works for inference model. The default value is <code class="docutils literal notranslate"><span class="pre">None</span></code>. Explicitly
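For context, a minimal sketch (not part of this commit) of passing the graph_mode and concat_linear knobs described above to ipex.optimize; the toy model is hypothetical, and only the keyword names follow the documented signature:

```python
import torch
import torch.nn as nn
import intel_extension_for_pytorch as ipex

# Toy inference model used purely for illustration.
model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 10)).eval()

# graph_mode=True (prototype) asks ipex.optimize to additionally capture the
# model into one or more graphs; concat_linear=True merges eligible Linear
# layers for inference.
model = ipex.optimize(model, dtype=torch.bfloat16,
                      graph_mode=True, concat_linear=True)
```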
@@ -387,12 +387,12 @@ <h2>General<a class="headerlink" href="#general" title="Permalink to this headin
</dd></dl>

</section>
<section id="fast-bert-experimental">
<h2>Fast Bert (Experimental)<a class="headerlink" href="#fast-bert-experimental" title="Permalink to this heading"></a></h2>
<section id="fast-bert-prototype">
<h2>Fast Bert (Prototype)<a class="headerlink" href="#fast-bert-prototype" title="Permalink to this heading"></a></h2>
<dl class="py function">
<dt class="sig sig-object py" id="ipex.fast_bert">
<span class="sig-prename descclassname"><span class="pre">ipex.</span></span><span class="sig-name descname"><span class="pre">fast_bert</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">model</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">dtype</span></span><span class="o"><span class="pre">=</span></span><span class="default_value"><span class="pre">torch.float32</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">optimizer</span></span><span class="o"><span class="pre">=</span></span><span class="default_value"><span class="pre">None</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">unpad</span></span><span class="o"><span class="pre">=</span></span><span class="default_value"><span class="pre">False</span></span></em><span class="sig-paren">)</span><a class="headerlink" href="#ipex.fast_bert" title="Permalink to this definition"></a></dt>
<dd><p>Use TPP to speedup training/inference. fast_bert API is still a experimental
<dd><p>Use TPP to speedup training/inference. fast_bert API is still a prototype
feature and now only optimized for bert model.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters<span class="colon">:</span></dt>
@@ -556,7 +556,7 @@ <h2>Graph Optimization<a class="headerlink" href="#graph-optimization" title="Pe
</dl>
</dd></dl>

<p>Experimental API, introduction is avaiable at <a class="reference external" href="./features/int8_recipe_tuning_api.html">feature page</a>.</p>
<p>Prototype API, introduction is avaiable at <a class="reference external" href="./features/int8_recipe_tuning_api.html">feature page</a>.</p>
<dl class="py function">
<dt class="sig sig-object py" id="ipex.quantization.autotune">
<span class="sig-prename descclassname"><span class="pre">ipex.quantization.</span></span><span class="sig-name descname"><span class="pre">autotune</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">model</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">calib_dataloader</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">calib_func</span></span><span class="o"><span class="pre">=</span></span><span class="default_value"><span class="pre">None</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">eval_func</span></span><span class="o"><span class="pre">=</span></span><span class="default_value"><span class="pre">None</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">op_type_dict</span></span><span class="o"><span class="pre">=</span></span><span class="default_value"><span class="pre">None</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">smoothquant_args</span></span><span class="o"><span class="pre">=</span></span><span class="default_value"><span class="pre">None</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">sampling_sizes</span></span><span class="o"><span class="pre">=</span></span><span class="default_value"><span class="pre">None</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">accuracy_criterion</span></span><span class="o"><span class="pre">=</span></span><span class="default_value"><span class="pre">None</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">tuning_time</span></span><span class="o"><span class="pre">=</span></span><span class="default_value"><span class="pre">0</span></span></em><span class="sig-paren">)</span><a class="headerlink" href="#ipex.quantization.autotune" title="Permalink to this definition"></a></dt>
@@ -810,4 +810,4 @@ <h2>Graph Optimization<a class="headerlink" href="#graph-optimization" title="Pe
</script>

</body>
</html>
</html>
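For context on the Fast Bert (Prototype) API documented in this file, a minimal inference sketch of fast_bert(), assuming the transformers package is installed; it mirrors the example shipped in examples.html:

```python
import torch
from transformers import BertModel
import intel_extension_for_pytorch as ipex

# Load a BERT model and build dummy input IDs.
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

vocab_size = model.config.vocab_size
batch_size = 1
seq_length = 512
data = torch.randint(vocab_size, size=[batch_size, seq_length])

# Apply the TPP-based prototype optimization.
model = ipex.fast_bert(model, dtype=torch.bfloat16)

with torch.no_grad():
    model(data)
```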
32 changes: 16 additions & 16 deletions cpu/2.2.0+cpu/tutorials/examples.html
@@ -89,7 +89,7 @@
<li class="toctree-l6"><a class="reference internal" href="#id3">BERT</a></li>
</ul>
</li>
<li class="toctree-l5"><a class="reference internal" href="#torchdynamo-mode-experimental-new-feature-from-2-0-0">TorchDynamo Mode (Experimental, <em>NEW feature from 2.0.0</em>)</a><ul>
<li class="toctree-l5"><a class="reference internal" href="#torchdynamo-mode-beta-new-feature-from-2-0-0">TorchDynamo Mode (Beta, <em>NEW feature from 2.0.0</em>)</a><ul>
<li class="toctree-l6"><a class="reference internal" href="#id4">Resnet50</a></li>
<li class="toctree-l6"><a class="reference internal" href="#id5">BERT</a></li>
</ul>
@@ -107,14 +107,14 @@
<li class="toctree-l6"><a class="reference internal" href="#id12">BERT</a></li>
</ul>
</li>
<li class="toctree-l5"><a class="reference internal" href="#id13">TorchDynamo Mode (Experimental, <em>NEW feature from 2.0.0</em>)</a><ul>
<li class="toctree-l5"><a class="reference internal" href="#id13">TorchDynamo Mode (Beta, <em>NEW feature from 2.0.0</em>)</a><ul>
<li class="toctree-l6"><a class="reference internal" href="#id14">Resnet50</a></li>
<li class="toctree-l6"><a class="reference internal" href="#id15">BERT</a></li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l4"><a class="reference internal" href="#fast-bert-experimental">Fast Bert (<em>Experimental</em>)</a></li>
<li class="toctree-l4"><a class="reference internal" href="#fast-bert-beta">Fast Bert (<em>Beta</em>)</a></li>
<li class="toctree-l4"><a class="reference internal" href="#int8">INT8</a><ul>
<li class="toctree-l5"><a class="reference internal" href="#calibration">Calibration</a><ul>
<li class="toctree-l6"><a class="reference internal" href="#static-quantization">Static Quantization</a></li>
@@ -217,7 +217,7 @@ <h4>Single-instance Training<a class="headerlink" href="#single-instance-trainin
<span class="n">model</span><span class="p">,</span> <span class="n">optimizer</span> <span class="o">=</span> <span class="n">ipex</span><span class="o">.</span><span class="n">optimize</span><span class="p">(</span><span class="n">model</span><span class="p">,</span> <span class="n">optimizer</span><span class="o">=</span><span class="n">optimizer</span><span class="p">)</span>
<span class="c1"># For BFloat16</span>
<span class="n">model</span><span class="p">,</span> <span class="n">optimizer</span> <span class="o">=</span> <span class="n">ipex</span><span class="o">.</span><span class="n">optimize</span><span class="p">(</span><span class="n">model</span><span class="p">,</span> <span class="n">optimizer</span><span class="o">=</span><span class="n">optimizer</span><span class="p">,</span> <span class="n">dtype</span><span class="o">=</span><span class="n">torch</span><span class="o">.</span><span class="n">bfloat16</span><span class="p">)</span>
<span class="c1"># Invoke the code below to enable experimental feature torch.compile</span>
<span class="c1"># Invoke the code below to enable beta feature torch.compile</span>
<span class="n">model</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">compile</span><span class="p">(</span><span class="n">model</span><span class="p">,</span> <span class="n">backend</span><span class="o">=</span><span class="s2">&quot;ipex&quot;</span><span class="p">)</span>
<span class="o">...</span>
<span class="n">optimizer</span><span class="o">.</span><span class="n">zero_grad</span><span class="p">()</span>
@@ -259,7 +259,7 @@ <h5>Float32<a class="headerlink" href="#float32" title="Permalink to this headin
<span class="n">model</span><span class="o">.</span><span class="n">train</span><span class="p">()</span>

<span class="n">model</span><span class="p">,</span> <span class="n">optimizer</span> <span class="o">=</span> <span class="n">ipex</span><span class="o">.</span><span class="n">optimize</span><span class="p">(</span><span class="n">model</span><span class="p">,</span> <span class="n">optimizer</span><span class="o">=</span><span class="n">optimizer</span><span class="p">)</span>
<span class="c1"># Uncomment the code below to enable experimental feature torch.compile</span>
<span class="c1"># Uncomment the code below to enable beta feature torch.compile</span>
<span class="c1"># model = torch.compile(model, backend=&quot;ipex&quot;)</span>

<span class="k">for</span> <span class="n">batch_idx</span><span class="p">,</span> <span class="p">(</span><span class="n">data</span><span class="p">,</span> <span class="n">target</span><span class="p">)</span> <span class="ow">in</span> <span class="nb">enumerate</span><span class="p">(</span><span class="n">train_loader</span><span class="p">):</span>
@@ -312,7 +312,7 @@ <h5>BFloat16<a class="headerlink" href="#bfloat16" title="Permalink to this head
<span class="n">model</span><span class="o">.</span><span class="n">train</span><span class="p">()</span>

<span class="n">model</span><span class="p">,</span> <span class="n">optimizer</span> <span class="o">=</span> <span class="n">ipex</span><span class="o">.</span><span class="n">optimize</span><span class="p">(</span><span class="n">model</span><span class="p">,</span> <span class="n">optimizer</span><span class="o">=</span><span class="n">optimizer</span><span class="p">,</span> <span class="n">dtype</span><span class="o">=</span><span class="n">torch</span><span class="o">.</span><span class="n">bfloat16</span><span class="p">)</span>
<span class="c1"># Uncomment the code below to enable experimental feature torch.compile</span>
<span class="c1"># Uncomment the code below to enable beta feature torch.compile</span>
<span class="c1"># model = torch.compile(model, backend=&quot;ipex&quot;)</span>

<span class="k">for</span> <span class="n">batch_idx</span><span class="p">,</span> <span class="p">(</span><span class="n">data</span><span class="p">,</span> <span class="n">target</span><span class="p">)</span> <span class="ow">in</span> <span class="nb">enumerate</span><span class="p">(</span><span class="n">train_loader</span><span class="p">):</span>
@@ -513,8 +513,8 @@ <h6>BERT<a class="headerlink" href="#id3" title="Permalink to this heading"><
</div>
</section>
</section>
<section id="torchdynamo-mode-experimental-new-feature-from-2-0-0">
<h5>TorchDynamo Mode (Experimental, <em>NEW feature from 2.0.0</em>)<a class="headerlink" href="#torchdynamo-mode-experimental-new-feature-from-2-0-0" title="Permalink to this heading"></a></h5>
<section id="torchdynamo-mode-beta-new-feature-from-2-0-0">
<h5>TorchDynamo Mode (Beta, <em>NEW feature from 2.0.0</em>)<a class="headerlink" href="#torchdynamo-mode-beta-new-feature-from-2-0-0" title="Permalink to this heading"></a></h5>
<section id="id4">
<h6>Resnet50<a class="headerlink" href="#id4" title="Permalink to this heading"></a></h6>
<p><strong>Note:</strong> You need to install <code class="docutils literal notranslate"><span class="pre">torchvision</span></code> Python package to run the following example.</p>
@@ -525,7 +525,7 @@
<span class="n">model</span><span class="o">.</span><span class="n">eval</span><span class="p">()</span>
<span class="n">data</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">rand</span><span class="p">(</span><span class="mi">128</span><span class="p">,</span> <span class="mi">3</span><span class="p">,</span> <span class="mi">224</span><span class="p">,</span> <span class="mi">224</span><span class="p">)</span>

<span class="c1"># Experimental Feature</span>
<span class="c1"># Beta Feature</span>
<span class="c1">#################### code changes #################### # noqa F401</span>
<span class="kn">import</span> <span class="nn">intel_extension_for_pytorch</span> <span class="k">as</span> <span class="nn">ipex</span>
<span class="n">model</span> <span class="o">=</span> <span class="n">ipex</span><span class="o">.</span><span class="n">optimize</span><span class="p">(</span><span class="n">model</span><span class="p">,</span> <span class="n">weights_prepack</span><span class="o">=</span><span class="kc">False</span><span class="p">)</span>
@@ -553,7 +553,7 @@ <h6>BERT<a class="headerlink" href="#id5" title="Permalink to this heading"><
<span class="n">seq_length</span> <span class="o">=</span> <span class="mi">512</span>
<span class="n">data</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">randint</span><span class="p">(</span><span class="n">vocab_size</span><span class="p">,</span> <span class="n">size</span><span class="o">=</span><span class="p">[</span><span class="n">batch_size</span><span class="p">,</span> <span class="n">seq_length</span><span class="p">])</span>

<span class="c1"># Experimental Feature</span>
<span class="c1"># Beta Feature</span>
<span class="c1">#################### code changes #################### # noqa F401</span>
<span class="kn">import</span> <span class="nn">intel_extension_for_pytorch</span> <span class="k">as</span> <span class="nn">ipex</span>
<span class="n">model</span> <span class="o">=</span> <span class="n">ipex</span><span class="o">.</span><span class="n">optimize</span><span class="p">(</span><span class="n">model</span><span class="p">,</span> <span class="n">weights_prepack</span><span class="o">=</span><span class="kc">False</span><span class="p">)</span>
@@ -685,7 +685,7 @@ <h6>BERT<a class="headerlink" href="#id12" title="Permalink to this heading">
</section>
</section>
<section id="id13">
<h5>TorchDynamo Mode (Experimental, <em>NEW feature from 2.0.0</em>)<a class="headerlink" href="#id13" title="Permalink to this heading"></a></h5>
<h5>TorchDynamo Mode (Beta, <em>NEW feature from 2.0.0</em>)<a class="headerlink" href="#id13" title="Permalink to this heading"></a></h5>
<section id="id14">
<h6>Resnet50<a class="headerlink" href="#id14" title="Permalink to this heading"></a></h6>
<p><strong>Note:</strong> You need to install <code class="docutils literal notranslate"><span class="pre">torchvision</span></code> Python package to run the following example.</p>
@@ -696,7 +696,7 @@
<span class="n">model</span><span class="o">.</span><span class="n">eval</span><span class="p">()</span>
<span class="n">data</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">rand</span><span class="p">(</span><span class="mi">128</span><span class="p">,</span> <span class="mi">3</span><span class="p">,</span> <span class="mi">224</span><span class="p">,</span> <span class="mi">224</span><span class="p">)</span>

<span class="c1"># Experimental Feature</span>
<span class="c1"># Beta Feature</span>
<span class="c1">#################### code changes #################### # noqa F401</span>
<span class="kn">import</span> <span class="nn">intel_extension_for_pytorch</span> <span class="k">as</span> <span class="nn">ipex</span>
<span class="n">model</span> <span class="o">=</span> <span class="n">ipex</span><span class="o">.</span><span class="n">optimize</span><span class="p">(</span><span class="n">model</span><span class="p">,</span> <span class="n">dtype</span><span class="o">=</span><span class="n">torch</span><span class="o">.</span><span class="n">bfloat16</span><span class="p">,</span> <span class="n">weights_prepack</span><span class="o">=</span><span class="kc">False</span><span class="p">)</span>
@@ -724,7 +724,7 @@ <h6>BERT<a class="headerlink" href="#id15" title="Permalink to this heading">
<span class="n">seq_length</span> <span class="o">=</span> <span class="mi">512</span>
<span class="n">data</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">randint</span><span class="p">(</span><span class="n">vocab_size</span><span class="p">,</span> <span class="n">size</span><span class="o">=</span><span class="p">[</span><span class="n">batch_size</span><span class="p">,</span> <span class="n">seq_length</span><span class="p">])</span>

<span class="c1"># Experimental Feature</span>
<span class="c1"># Beta Feature</span>
<span class="c1">#################### code changes #################### # noqa F401</span>
<span class="kn">import</span> <span class="nn">intel_extension_for_pytorch</span> <span class="k">as</span> <span class="nn">ipex</span>
<span class="n">model</span> <span class="o">=</span> <span class="n">ipex</span><span class="o">.</span><span class="n">optimize</span><span class="p">(</span><span class="n">model</span><span class="p">,</span> <span class="n">dtype</span><span class="o">=</span><span class="n">torch</span><span class="o">.</span><span class="n">bfloat16</span><span class="p">,</span> <span class="n">weights_prepack</span><span class="o">=</span><span class="kc">False</span><span class="p">)</span>
@@ -740,8 +740,8 @@ <h6>BERT<a class="headerlink" href="#id15" title="Permalink to this heading">
</section>
</section>
</section>
<section id="fast-bert-experimental">
<h4>Fast Bert (<em>Experimental</em>)<a class="headerlink" href="#fast-bert-experimental" title="Permalink to this heading"></a></h4>
<section id="fast-bert-beta">
<h4>Fast Bert (<em>Beta</em>)<a class="headerlink" href="#fast-bert-beta" title="Permalink to this heading"></a></h4>
<p><strong>Note:</strong> You need to install <code class="docutils literal notranslate"><span class="pre">transformers</span></code> Python package to run the following example.</p>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">torch</span>
<span class="kn">from</span> <span class="nn">transformers</span> <span class="kn">import</span> <span class="n">BertModel</span>
@@ -1061,4 +1061,4 @@ <h2>Intel® AI Reference Models<a class="headerlink" href="#intel-ai-reference-m
</script>

</body>
</html>
</html>
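The TorchDynamo-mode snippets relabeled above appear HTML-escaped in the diff; as a plain-Python restatement of the BF16 inference flow they show (torchvision assumed installed, random weights used to keep the sketch light):

```python
import torch
import torchvision.models as models
import intel_extension_for_pytorch as ipex

# Build a ResNet-50 in eval mode with dummy input data.
model = models.resnet50(weights=None)
model.eval()
data = torch.rand(128, 3, 224, 224)

# Beta feature: optimize with weight prepacking disabled, then compile the
# model with the "ipex" TorchDynamo backend.
model = ipex.optimize(model, dtype=torch.bfloat16, weights_prepack=False)
model = torch.compile(model, backend="ipex")

with torch.no_grad(), torch.cpu.amp.autocast():
    model(data)
```

Importing intel_extension_for_pytorch is what registers the "ipex" backend with TorchDynamo, so torch.compile can dispatch graph compilation to the extension.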