Commit

deploy: ee01102
lowrank committed Feb 15, 2024
1 parent f24ad3c commit 78df9a5
Showing 3 changed files with 47 additions and 16 deletions.
32 changes: 26 additions & 6 deletions Intro.html
@@ -350,7 +350,10 @@ <h2> Contents </h2>
<li class="toc-h3 nav-item toc-entry"><a class="reference internal nav-link" href="#convexity">Convexity</a></li>
<li class="toc-h3 nav-item toc-entry"><a class="reference internal nav-link" href="#fundamentals-of-unconstrained-optimization">Fundamentals of unconstrained optimization</a></li>
<li class="toc-h3 nav-item toc-entry"><a class="reference internal nav-link" href="#global-and-local-minimizer">Global and local minimizer</a></li>
<li class="toc-h3 nav-item toc-entry"><a class="reference internal nav-link" href="#overview-of-optimization-algorithms">Overview of optimization algorithms</a></li>
</ul>
</li>
<li class="toc-h2 nav-item toc-entry"><a class="reference internal nav-link" href="#overview-of-optimization-algorithms">Overview of optimization algorithms</a><ul class="nav section-nav flex-column">
<li class="toc-h3 nav-item toc-entry"><a class="reference internal nav-link" href="#two-strategies-line-search-and-trust-region">Two strategies: line search and trust region</a></li>
</ul>
</li>
</ul>
@@ -424,7 +427,7 @@ <h2>Formulation of Optimization Problems<a class="headerlink" href="#formulation
</div>
</div>
<div class="cell_output docutils container">
<div class="output text_plain highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>&lt;matplotlib.legend.Legend at 0x7fd2282e1fa0&gt;
<div class="output text_plain highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>&lt;matplotlib.legend.Legend at 0x7faa07f2dfa0&gt;
</pre></div>
</div>
<img alt="_images/4fbd4785c8f46d899e6c6f409adae52b9a649215d6feb0ddf820a9bb6d9d328a.png" src="_images/4fbd4785c8f46d899e6c6f409adae52b9a649215d6feb0ddf820a9bb6d9d328a.png" />
@@ -504,7 +507,7 @@ <h3>Fundamentals of unconstrained optimization<a class="headerlink" href="#funda
</div>
</div>
<div class="cell_output docutils container">
<div class="output text_plain highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>&lt;matplotlib.collections.PathCollection at 0x7fd229447d90&gt;
<div class="output text_plain highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>&lt;matplotlib.collections.PathCollection at 0x7faa0468dc10&gt;
</pre></div>
</div>
<img alt="_images/5f051da6c9abf2d9014dbfdfa4abc36725e6313c616292295207cda44ceaa6c1.png" src="_images/5f051da6c9abf2d9014dbfdfa4abc36725e6313c616292295207cda44ceaa6c1.png" />
@@ -531,7 +534,7 @@ <h3>Fundamentals of unconstrained optimization<a class="headerlink" href="#funda
</div>
</div>
<div class="cell_output docutils container">
<div class="output text_plain highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>&lt;matplotlib.legend.Legend at 0x7fd214694550&gt;
<div class="output text_plain highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>&lt;matplotlib.legend.Legend at 0x7fa9f92e67c0&gt;
</pre></div>
</div>
<img alt="_images/b23ef640ce15dba97b6036471797d3d96fa69c3e64afb2e85cf823a5d988ddde.png" src="_images/b23ef640ce15dba97b6036471797d3d96fa69c3e64afb2e85cf823a5d988ddde.png" />
@@ -559,8 +562,22 @@ <h3>Global and local minimizer<a class="headerlink" href="#global-and-local-mini
<p>Consider the function <span class="math notranslate nohighlight">\(f(x) = x^4\)</span>. The first derivative is zero at <span class="math notranslate nohighlight">\(x=0\)</span>, and the second derivative is also zero at <span class="math notranslate nohighlight">\(x=0\)</span>, so the second order sufficient condition fails. Nevertheless, the point <span class="math notranslate nohighlight">\(x=0\)</span> is a strict local minimizer.</p>
</section>
</div></section>
</section>
<section id="overview-of-optimization-algorithms">
<h2>Overview of optimization algorithms<a class="headerlink" href="#overview-of-optimization-algorithms" title="Link to this heading">#</a></h2>
<p>Optimization algorithms are <em>iterative</em>: they start from an initial point <span class="math notranslate nohighlight">\(x_0\in\mathbb{R}^n\)</span> and generate a sequence of points <span class="math notranslate nohighlight">\(x_k\)</span>, <span class="math notranslate nohighlight">\(k=1,2,\cdots\)</span>, that (ideally) converges to an optimal solution. To decide how to move from <span class="math notranslate nohighlight">\(x_k\)</span> to <span class="math notranslate nohighlight">\(x_{k+1}\)</span>, the algorithms typically use information about <span class="math notranslate nohighlight">\(f\)</span> at <span class="math notranslate nohighlight">\(x_k\)</span> and possibly at earlier points.</p>
<section id="two-strategies-line-search-and-trust-region">
<h3>Two strategies: line search and trust region<a class="headerlink" href="#two-strategies-line-search-and-trust-region" title="Link to this heading">#</a></h3>
<p>Here we introduce two classical strategies for optimization algorithms: <strong>line search</strong> and <strong>trust region</strong>.</p>
<ul>
<li><p><strong>Line search</strong>: the line search strategy selects a direction <span class="math notranslate nohighlight">\(p_k\)</span> and then searches along this direction from the current point to minimize the objective function. The distance to move is determined by the following one-dimensional optimization problem</p>
<div class="math notranslate nohighlight">
\[\min_{\alpha&gt;0} f(x_k + \alpha p_k).\]</div>
<p>Here <span class="math notranslate nohighlight">\(\alpha\)</span> is called the <strong>step length</strong>. Solving this minimization exactly can be very expensive and is usually unnecessary in practice. A practical line search algorithm instead generates a finite number of trial step lengths until it finds one that achieves a satisfactory decrease.</p>
<p>The line search strategy is widely used in optimization algorithms such as the steepest descent method, Newton's method, and quasi-Newton methods.</p>
</li>
<li><p><strong>Trust region</strong>: the trust region method does not minimize the objective function directly. Instead, it builds a simple model <span class="math notranslate nohighlight">\(m_k\)</span> that approximates the objective function near the current point <span class="math notranslate nohighlight">\(x_k\)</span>, and it minimizes <span class="math notranslate nohighlight">\(m_k\)</span> within a region around <span class="math notranslate nohighlight">\(x_k\)</span> in which the model is trusted; the region is expanded or shrunk depending on how well <span class="math notranslate nohighlight">\(m_k\)</span> predicts the actual decrease in <span class="math notranslate nohighlight">\(f\)</span>.</p></li>
</ul>
</section>
</section>
</section>
@@ -639,7 +656,10 @@ <h3>Overview of optimization algorithms<a class="headerlink" href="#overview-of-
<li class="toc-h3 nav-item toc-entry"><a class="reference internal nav-link" href="#convexity">Convexity</a></li>
<li class="toc-h3 nav-item toc-entry"><a class="reference internal nav-link" href="#fundamentals-of-unconstrained-optimization">Fundamentals of unconstrained optimization</a></li>
<li class="toc-h3 nav-item toc-entry"><a class="reference internal nav-link" href="#global-and-local-minimizer">Global and local minimizer</a></li>
<li class="toc-h3 nav-item toc-entry"><a class="reference internal nav-link" href="#overview-of-optimization-algorithms">Overview of optimization algorithms</a></li>
</ul>
</li>
<li class="toc-h2 nav-item toc-entry"><a class="reference internal nav-link" href="#overview-of-optimization-algorithms">Overview of optimization algorithms</a><ul class="nav section-nav flex-column">
<li class="toc-h3 nav-item toc-entry"><a class="reference internal nav-link" href="#two-strategies-line-search-and-trust-region">Two strategies: line search and trust region</a></li>
</ul>
</li>
</ul>
29 changes: 20 additions & 9 deletions _sources/Intro.md
@@ -109,23 +109,23 @@ These properties usually conflict with each other. For example, a rapidly

### Convexity

**Convexity** plays an important role in optimization. It usually implies benign properties of the optimization problem, and it applies to both sets and functions.

- For sets, a set $C\subseteq\mathbb{R}^n$ is called **convex** if the line segment between any two points in $C$ is also in $C$. Mathematically, it means

$$\lambda x + (1-\lambda)y\in C, \quad \forall x, y\in C,\quad \lambda\in[0,1].$$

- For functions, a function $f(x)$ is called **convex** if its domain is a convex set and the following inequality holds

$$f(\lambda x + (1-\lambda)y) \le \lambda f(x) + (1-\lambda)f(y), \quad \forall x, y\in\text{dom}f,\quad \lambda\in[0,1].$$

A function $f$ is called **concave** if $-f$ is convex. A function $f$ is called **strictly convex** if the inequality above holds strictly whenever $x\ne y$ and $\lambda\in(0,1)$.

Convex programming is a special case of mathematical optimization in which

- the objective function is convex.
- the equality constraints are affine.
- the inequality constraints are convex.
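
To make the definition concrete, the defining inequality can be checked numerically on random samples. The following is a minimal sketch, assuming NumPy and two illustrative test functions; a passing test is only evidence of convexity, not a proof:

```python
import numpy as np

def is_convex_on_samples(f, dim=3, trials=1000, seed=0):
    """Test f(l*x + (1-l)*y) <= l*f(x) + (1-l)*f(y) on random pairs (x, y)
    and random l in [0, 1]. Returns False on the first violation found."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        x, y = rng.normal(size=dim), rng.normal(size=dim)
        lam = rng.uniform()
        lhs = f(lam * x + (1 - lam) * y)
        rhs = lam * f(x) + (1 - lam) * f(y)
        if lhs > rhs + 1e-12:  # small tolerance for floating point error
            return False
    return True

print(is_convex_on_samples(lambda x: np.dot(x, x)))   # ||x||^2 is convex: True
print(is_convex_on_samples(lambda x: -np.dot(x, x)))  # its negation is concave: False
```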

### Fundamentals of unconstrained optimization

@@ -188,7 +188,7 @@ The sufficient conditions for $x^*$ to be a local minimizer are
- $\nabla f(x^*)=0$, i.e., the first order necessary condition.
- $\nabla^2 f(x^*)$ is positive definite, i.e., the second order sufficient condition.

A strict local minimizer may fail to satisfy the second order sufficient condition.

````{prf:example}
:label: ex-quadratic-function
@@ -197,9 +197,20 @@ Consider the function $f(x) = x^4$. The first derivative is zero at $x=0$, and the second derivative is also zero at $x=0$, so the second order sufficient condition fails. Nevertheless, $x=0$ is a strict local minimizer.
````
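
The derivative claims in the example are easy to verify symbolically; a minimal sketch, assuming SymPy is available:

```python
import sympy as sp

x = sp.symbols('x')
f = x**4

print(sp.diff(f, x).subs(x, 0))     # f'(0) = 0: first order necessary condition holds
print(sp.diff(f, x, 2).subs(x, 0))  # f''(0) = 0: second order sufficient condition fails
# Yet x = 0 is a strict (indeed global) minimizer, since x**4 > 0 for all x != 0.
```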

## Overview of optimization algorithms

Optimization algorithms are *iterative*: they start from an initial point $x_0\in\mathbb{R}^n$ and generate a sequence of points $x_k$, $k=1,2,\cdots$, that (ideally) converges to an optimal solution. To decide how to move from $x_k$ to $x_{k+1}$, the algorithms typically use information about $f$ at $x_k$ and possibly at earlier points.

### Two strategies: line search and trust region

Here we introduce two classical strategies for optimization algorithms: **line search** and **trust region**.

- **Line search**: the line search strategy selects a direction $p_k$ and then searches along this direction from the current point to minimize the objective function. The distance to move is determined by the following one-dimensional optimization problem

$$\min_{\alpha>0} f(x_k + \alpha p_k).$$

Here $\alpha$ is called the **step length**. Solving this minimization exactly can be very expensive and is usually unnecessary in practice. A practical line search algorithm instead generates a finite number of trial step lengths until it finds one that achieves a satisfactory decrease.

The line search strategy is widely used in optimization algorithms such as the steepest descent method, Newton's method, and quasi-Newton methods; a minimal backtracking sketch is given after this list.

- **Trust region**: the trust region method does not minimize the objective function directly. Instead, it builds a simple model $m_k$ that approximates the objective function near the current point $x_k$, and it minimizes $m_k$ within a region around $x_k$ in which the model is trusted; the region is expanded or shrunk depending on how well $m_k$ predicts the actual decrease in $f$ (see the sketch after this list).
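
To make the line search strategy concrete, here is a minimal backtracking sketch along steepest descent directions $p_k = -\nabla f(x_k)$. The Armijo constant, shrink factor, and test function are illustrative assumptions, not prescriptions from the text:

```python
import numpy as np

def backtracking_line_search(f, grad, x, p, alpha=1.0, rho=0.5, c=1e-4):
    """Shrink the trial step length until the Armijo sufficient decrease
    condition f(x + a*p) <= f(x) + c*a*grad(x)^T p holds."""
    fx, slope = f(x), np.dot(grad(x), p)
    while f(x + alpha * p) > fx + c * alpha * slope:
        alpha *= rho  # reject the trial step and try a shorter one
    return alpha

def gradient_descent(f, grad, x0, tol=1e-8, max_iter=1000):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:  # first order condition approximately met
            break
        p = -g                       # steepest descent direction
        alpha = backtracking_line_search(f, grad, x, p)
        x = x + alpha * p            # move to the next iterate
    return x

# Illustrative quadratic: f(x) = x1^2 + 10*x2^2, minimizer at the origin.
f = lambda x: x[0]**2 + 10 * x[1]**2
grad = lambda x: np.array([2 * x[0], 20 * x[1]])
print(gradient_descent(f, grad, [3.0, -2.0]))  # ~ [0, 0]
```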
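
A trust region iteration can be sketched in the same spirit. In this sketch the model $m_k$ is the quadratic Taylor expansion, minimized only along the steepest descent direction (the so-called Cauchy point, a deliberate simplification), and the radius update constants are common illustrative choices rather than requirements:

```python
import numpy as np

def trust_region(f, grad, hess, x0, radius=1.0, max_radius=10.0,
                 eta=0.15, tol=1e-8, max_iter=200):
    """Minimal trust region iteration: approximately minimize the quadratic
    model m_k(p) = f + g^T p + 0.5 p^T B p subject to ||p|| <= radius."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g, B = grad(x), hess(x)
        if np.linalg.norm(g) < tol:
            break
        # Cauchy point: minimize the model along -g within the region.
        gBg = g @ B @ g
        t = radius / np.linalg.norm(g)
        if gBg > 0:
            t = min(t, np.dot(g, g) / gBg)
        p = -t * g
        # Quality ratio: actual reduction over model-predicted reduction.
        pred = -(np.dot(g, p) + 0.5 * (p @ B @ p))
        rho = (f(x) - f(x + p)) / pred
        if rho < 0.25:
            radius *= 0.25                        # poor model: shrink the region
        elif rho > 0.75 and np.isclose(np.linalg.norm(p), radius):
            radius = min(2 * radius, max_radius)  # good model at the boundary: expand
        if rho > eta:
            x = x + p                             # accept the step
    return x

f = lambda x: x[0]**2 + 10 * x[1]**2
grad = lambda x: np.array([2 * x[0], 20 * x[1]])
hess = lambda x: np.diag([2.0, 20.0])
print(trust_region(f, grad, hess, [3.0, -2.0]))  # ~ [0, 0]
```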
