
Commit

Built site for gh-pages
Quarto GHA Workflow Runner committed Oct 16, 2024
1 parent bf805a7 commit d40d327
Showing 10 changed files with 985 additions and 280 deletions.
2 changes: 1 addition & 1 deletion .nojekyll
Original file line number Diff line number Diff line change
@@ -1 +1 @@
380ff1bf
deb4dbf2
4 changes: 2 additions & 2 deletions 2024/homework.html
Original file line number Diff line number Diff line change
Expand Up @@ -213,8 +213,8 @@ <h1 class="title">Homework Overview</h1>


<p>Here is an overview of all homework to date:</p>
<p><strong>Hw 1</strong>: <a href="../2024/homework/homework01.html">here</a></p>
<p>TBD</p>
<p><strong>Hw 1: Google Colab</strong>: <a href="../2024/homework/homework01.html">here</a></p>
<p><strong>Hw 2: Neural Networks</strong>: <a href="../2024/homework/homework02.html">here</a></p>



Expand Down
670 changes: 670 additions & 0 deletions 2024/homework/homework02.html

Large diffs are not rendered by default.

Binary file added 2024/weeks/week02/nn.png
Binary file added 2024/weeks/week02/nn_layers.png
Binary file added 2024/weeks/week02/nn_neurons.png
Binary file added 2024/weeks/week02/nn_perceptron.png
272 changes: 155 additions & 117 deletions 2024/weeks/week02/slides.html
Original file line number Diff line number Diff line change
Expand Up @@ -359,131 +359,163 @@ <h1 class="title"><font style="font-size:1em;">Week 02<br> Basics of Neural Netw
<nav role="doc-toc">
<h2 id="toc-title">What we will cover today:</h2>
<ul>
<li><a href="#/mathematical-concepts" id="/toc-mathematical-concepts">Mathematical concepts</a></li>
<li><a href="#/machine-learning" id="/toc-machine-learning">Machine learning</a></li>
<li><a href="#/neural-networks" id="/toc-neural-networks">Neural Networks</a></li>
</ul>
</nav>
</section>
<section>
<section id="mathematical-concepts" class="title-slide slide level1 smaller">
<h1>Mathematical concepts</h1>
<section class="slide level2">

</section>
<section id="section" class="slide level2 smaller">
<h2></h2>
<p><strong>Scalars</strong>: single number</p>
<p><span class="math display">\[
<!--
# Mathematical concepts {.smaller}
## {.smaller}
**Scalars**: single number
$$
x = 1
\]</span></p>
<p><strong>Vectors</strong>: sequence of numbers</p>
<p><span class="math display">\[
$$
**Vectors**: sequence of numbers
$$
v = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}
\]</span></p>
<p><strong>Matrix</strong>: 2D array of numbers</p>
<p><span class="math display">\[
M = \begin{bmatrix} 1 &amp; 2 &amp; 3 \\ 4 &amp; 5 &amp; 6 \end{bmatrix}
\]</span></p>
</section>
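The three objects above can be sketched in plain Python (lists stand in for what a numerical library such as numpy would normally provide):

```python
# Plain-Python stand-ins for scalar, vector, and matrix.
x = 1                     # scalar: a single number
v = [1, 2, 3]             # vector: a sequence of numbers
M = [[1, 2, 3],
     [4, 5, 6]]           # matrix: a 2D array with 2 rows and 3 columns

print(len(M), len(M[0]))  # 2 3  (rows, columns)
```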
<section id="section-1" class="slide level2 smaller">
<h2></h2>
<p><strong>Matrix multiplication</strong></p>
<p><span class="math display">\[
\begin{bmatrix} 1 &amp; 2 &amp; 3 \\ 4 &amp; 5 &amp; 6 \end{bmatrix} \times \begin{bmatrix} 1 &amp; 2 \\ 3 &amp; 4 \\ 5 &amp; 6 \end{bmatrix}
\]</span></p>
<p><span class="math display">\[
= \begin{bmatrix} 22 &amp; 28 \\ 49 &amp; 64 \end{bmatrix}
\]</span></p>
<ul>
<li>The first matrix has 2 rows and 3 columns, and the second matrix has 3 rows and 2 columns.</li>
<li>The number of columns in the first matrix should be equal to the number of rows in the second matrix.</li>
<li>The resulting matrix will have the same number of rows as the first matrix and the same number of columns as the second matrix.</li>
</ul>
$$
**Matrix**: 2D array of numbers
$$
M = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix}
$$
## {.smaller}
**Matrix multiplication**
$$
\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix} \times \begin{bmatrix} 1 & 2 \\ 3 & 4 \\ 5 & 6 \end{bmatrix}
$$
$$
= \begin{bmatrix} 22 & 28 \\ 49 & 64 \end{bmatrix}
$$
- The first matrix has 2 rows and 3 columns, and the second matrix has 3 rows and 2 columns.
- The number of columns in the first matrix should be equal to the number of rows in the second matrix.
- The resulting matrix will have the same number of rows as the first matrix and the same number of columns as the second matrix.
## {.smaller}
**Element-wise multiplication**
$$
\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix} \odot \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix}
$$
$$
= \begin{bmatrix} 1 & 4 & 9 \\ 16 & 25 & 36 \end{bmatrix}
$$
- The matrices should have the same dimensions.
- The resulting matrix will have the same dimensions as the input matrices.
- You multiply the corresponding elements of the matrices.
## {.smaller}
**Matrix addition**
$$
\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix} + \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix}
$$
$$
= \begin{bmatrix} 2 & 4 & 6 \\ 8 & 10 & 12 \end{bmatrix}
$$
- You add the corresponding elements of the matrices.
## {.smaller}
**Dot product**
$$
\begin{bmatrix} 1 & 2 & 3 \end{bmatrix} \cdot \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}
$$
$$
= 1 \times 1 + 2 \times 2 + 3 \times 3 = 14
$$
- The number of columns in the first matrix should be equal to the number of rows in the second matrix.
- The resulting matrix will have the same number of rows as the first matrix and the same number of columns as the second matrix.
- You multiply the corresponding elements of the matrices and sum them up. -->
</section>
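A minimal plain-Python sketch of the multiplication rule above, using the same matrices as the slide (a real project would use a numerical library instead):

```python
def matmul(A, B):
    # The number of columns in A must equal the number of rows in B.
    assert len(A[0]) == len(B)
    # The result has len(A) rows and len(B[0]) columns.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2, 3], [4, 5, 6]]    # 2 x 3
B = [[1, 2], [3, 4], [5, 6]]  # 3 x 2
print(matmul(A, B))           # [[22, 28], [49, 64]]
```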
<section id="section-2" class="slide level2 smaller">
<h2></h2>
<p><strong>Element-wise multiplication</strong></p>
<p><span class="math display">\[
\begin{bmatrix} 1 &amp; 2 &amp; 3 \\ 4 &amp; 5 &amp; 6 \end{bmatrix} \odot \begin{bmatrix} 1 &amp; 2 &amp; 3 \\ 4 &amp; 5 &amp; 6 \end{bmatrix}
\]</span></p>
<p><span class="math display">\[
= \begin{bmatrix} 1 &amp; 4 &amp; 9 \\ 16 &amp; 25 &amp; 36 \end{bmatrix}
\]</span></p>
<section id="machine-learning" class="title-slide slide level1">
<h1>Machine learning</h1>
<ul>
<li>The matrices should have the same dimensions.</li>
<li>The resulting matrix will have the same dimensions as the input matrices.</li>
<li>You multiply the corresponding elements of the matrices.</li>
</ul>
</section>
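Element-wise (Hadamard) multiplication can be sketched the same way, matching the slide's example:

```python
def hadamard(A, B):
    # Both matrices must have the same dimensions.
    assert len(A) == len(B) and all(len(ra) == len(rb) for ra, rb in zip(A, B))
    # Multiply corresponding elements.
    return [[a * b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[1, 2, 3], [4, 5, 6]]
print(hadamard(A, A))  # [[1, 4, 9], [16, 25, 36]]
```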
<section id="section-3" class="slide level2 smaller">
<h2></h2>
<p><strong>Matrix addition</strong></p>
<p><span class="math display">\[
\begin{bmatrix} 1 &amp; 2 &amp; 3 \\ 4 &amp; 5 &amp; 6 \end{bmatrix} + \begin{bmatrix} 1 &amp; 2 &amp; 3 \\ 4 &amp; 5 &amp; 6 \end{bmatrix}
\]</span></p>
<p><span class="math display">\[
= \begin{bmatrix} 2 &amp; 4 &amp; 6 \\ 8 &amp; 10 &amp; 12 \end{bmatrix}
\]</span></p>
<li>Using <strong>learning algorithms</strong> to learn from existing data and make predictions on new data.</li>
<li>We have seen two types of Machine Learning models:
<ul>
<li>You add the corresponding elements of the matrices.</li>
<li><strong>Statistical language models</strong></li>
<li><strong>Probabilistic language models</strong></li>
</ul></li>
<li>Today: Neural Networks</li>
</ul>
<!-- ->who remembers what a statistical/probabilistic language model is? What is their "learning algorithm"?-->
</section>
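Matrix addition as a plain-Python sketch, with the same numbers as the slide:

```python
def matadd(A, B):
    # Add corresponding elements; dimensions must match.
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[1, 2, 3], [4, 5, 6]]
print(matadd(A, A))  # [[2, 4, 6], [8, 10, 12]]
```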
<section id="section-4" class="slide level2 smaller">
<h2></h2>
<p><strong>Dot product</strong></p>
<p><span class="math display">\[
\begin{bmatrix} 1 &amp; 2 &amp; 3 \end{bmatrix} \cdot \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}
\]</span></p>
<p><span class="math display">\[
= 1 \times 1 + 2 \times 2 + 3 \times 3 = 14
\]</span></p>
<ul>
<li>The number of columns in the first matrix should be equal to the number of rows in the second matrix.</li>
<li>The resulting matrix will have the same number of rows as the first matrix and the same number of columns as the second matrix.</li>
<li>You multiply the corresponding elements of the matrices and sum them up.</li>
</ul>
</section></section>
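The dot product above can be sketched in a few lines of plain Python:

```python
def dot(u, v):
    # Multiply corresponding elements and sum them up.
    assert len(u) == len(v)
    return sum(a * b for a, b in zip(u, v))

print(dot([1, 2, 3], [1, 2, 3]))  # 1*1 + 2*2 + 3*3 = 14
```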

<section>
<section id="neural-networks" class="title-slide slide level1">
<section id="neural-networks" class="title-slide slide level1 smaller">
<h1>Neural Networks</h1>
<ul>
<li>Neural networks are a class of machine learning models inspired by the human brain.</li>
</ul>
<p><strong>How do neural networks work?</strong></p>
<ul>
<li><strong>Input</strong>: The network receives data (like an image or text).</li>
<li><strong>Processing</strong>: The data is processed through a series of layers.</li>
<li><strong>Output</strong>: The network produces an output (like a prediction or classification).</li>
</ul>
</section>
<section id="section-5" class="slide level2 smaller">
<h2></h2>
<p><strong>Learning</strong></p>
<p>Neural networks are a class of machine learning models inspired by the human brain.</p>
<p><br></p>
<p><strong>Learning Algorithm</strong></p>
<ul>
<li>Neural networks learn by looking at many examples.</li>
<li>They adjust their internal settings to improve their accuracy.</li>
<li>This process is called training.</li>
</ul>
<p><strong>Components of a neural network</strong></p>
<ul>
<li><strong>Neurons</strong>: Basic building blocks of a neural network.</li>
<li><strong>Layers</strong>: Neurons are organized into layers.</li>
<li><strong>Weights and biases</strong>: Parameters that the network learns during training.</li>
<li>They adjust their internal settings (= <strong>parameters</strong>) to improve their accuracy.</li>
<li>This process is called <strong>training</strong>.</li>
</ul>
<p><strong>Advantages of neural networks</strong></p>
<ul>
<li>Can learn complex patterns.</li>
<li>Can generalize to new data.</li>
<li>Can be used for a wide range of tasks (like image recognition, speech recognition, and natural language processing).</li>
<li>Can be used for a wide range of tasks (e.g., speech recognition and natural language processing).</li>
</ul>
</section>
<section id="section-6" class="slide level2 smaller">
<section id="architecture" class="slide level2">
<h2>Architecture</h2>
<p><br> <img src="nn.png"> <br> The input can be a vector, and the output a classification, such as the corresponding animal.</p>
</section>
<section id="section" class="slide level2">
<h2></h2>
<p><strong>Perceptron</strong></p>
<div class="cell" data-reveal="true" data-layout-align="center">
<div class="cell-output-display">
<div>
<p></p><figure class=""><p></p>
<div>
<p><br> <img src="nn_layers.png"> Every neural network has <strong>layers</strong>. Each layer is responsible for a specific operation, like addition, and passes information to the others. <br></p>
</section>
<section id="section-1" class="slide level2">
<h2></h2>
<p><br> <img src="nn_neurons.png"> Layers consist of <strong>neurons</strong>, each of which modifies the input in some way. <br></p>
</section>
<section id="section-2" class="slide level2">
<h2></h2>
<p><br> <img src="nn_perceptron.png"> The simplest neural network has only one layer with one neuron. This single neuron is called a <strong>perceptron</strong>. <br></p>
</section>
<section id="perceptron" class="slide level2">
<h2>Perceptron</h2>
<!--
::::::{.cell reveal=true layout-align="center"}
:::::{.cell-output-display}
::::{}
`<figure class=''>`{=html}
:::{}
<pre class="mermaid mermaid-js">graph LR
subgraph Inputs
x1((x1))
Expand Down Expand Up @@ -511,19 +543,25 @@ <h2></h2>
style act fill:#98FB98,stroke:#333,stroke-width:2px
style b fill:#FFFF00,stroke:#333,stroke-width:2px
</pre>
</div>
<p></p></figure><p></p>
</div>
</div>
</div>
<ul>
<li>Input Nodes (x1, x2, x3): Each input is a number.</li>
<li>Weights (w1, w2, w3): Each weight is a number that determines the importance of the corresponding input.</li>
<li>Bias (b): A constant value that shifts the output of the perceptron.</li>
<li>Sum Node (Σ): Calculates the weighted sum of the inputs and the bias.</li>
<li>Activation Function: Introduces non-linearity to the output of the perceptron.</li>
<li>Output Node: The final output of the perceptron.</li>
</ul>
:::
`</figure>`{=html}
::::
:::::
::::::
- Input Nodes (x1, x2, x3): Each input is a number.
- Weights (w1, w2, w3): Each weight is a number that determines the importance of the corresponding input.
- Bias (b): A constant value that shifts the output of the perceptron.
- Sum Node (Σ): Calculates the weighted sum of the inputs and the bias.
- Activation Function: Introduces non-linearity to the output of the perceptron.
- Output Node: The final output of the perceptron.
-->
</section>
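The perceptron's components can be sketched in plain Python. The inputs, weights, and bias below are hypothetical, chosen only to illustrate the flow; sigmoid is used as an example activation function:

```python
import math

def perceptron(xs, ws, b):
    # Weighted sum of the inputs plus the bias (the Σ node) ...
    z = sum(x * w for x, w in zip(xs, ws)) + b
    # ... passed through an activation function (sigmoid here).
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical inputs (x1, x2, x3), weights (w1, w2, w3), and bias b.
out = perceptron([1.0, 0.5, -1.0], [0.2, 0.4, 0.1], b=0.3)
print(round(out, 3))  # 0.646
```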
<section id="activation-functions" class="slide level2 smaller">
<h2>Activation functions</h2>
Expand All @@ -544,7 +582,7 @@ <h2>Activation functions</h2>
<li>It is used in the output layer of a binary classification problem.</li>
</ul>
</section>
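A short plain-Python sketch of the sigmoid function described above:

```python
import math

def sigmoid(x):
    # Squashes any real number into the open interval (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

print(sigmoid(0))    # 0.5
print(sigmoid(10))   # close to 1
print(sigmoid(-10))  # close to 0
```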
<section id="section-7" class="slide level2 smaller">
<section id="section-3" class="slide level2 smaller">
<h2></h2>
<p><strong>ReLU function</strong></p>
<p><span class="math display">\[
Expand All @@ -562,7 +600,7 @@ <h2></h2>
<li>It is a popular activation function used in deep learning models.</li>
</ul>
</section>
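ReLU is even simpler to sketch:

```python
def relu(x):
    # Passes positive values through unchanged, clips negatives to zero.
    return max(0.0, x)

print(relu(3.0))   # 3.0
print(relu(-2.0))  # 0.0
```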
<section id="section-8" class="slide level2 smaller">
<section id="section-4" class="slide level2 smaller">
<h2></h2>
<p><strong>Feedforward Neural Network</strong></p>
<div class="cell" data-reveal="true" data-fig-width="5" data-fig-height="3" data-layout-align="center">
Expand Down Expand Up @@ -639,7 +677,7 @@ <h2>Feedforward Neural Network</h2>
<li>The weights and biases are learned during the training process.</li>
</ul>
</section>
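The forward pass of a feedforward network can be sketched by chaining the pieces above. The weights and biases here are hypothetical placeholders for a tiny 2-3-1 network; in practice they would be learned during training, not written by hand:

```python
def relu(x):
    return max(0.0, x)

def layer(xs, W, b, act):
    # One dense layer: each neuron computes dot(inputs, weights) + bias,
    # then applies the activation function.
    return [act(sum(x * w for x, w in zip(xs, ws)) + bi)
            for ws, bi in zip(W, b)]

x = [1.0, 2.0]                                     # input vector
h = layer(x, W=[[0.5, -0.2], [0.3, 0.8], [-0.4, 0.1]],
          b=[0.0, 0.1, 0.2], act=relu)             # hidden layer (3 neurons)
y = layer(h, W=[[1.0, -1.0, 0.5]], b=[0.0],
          act=lambda z: z)                         # linear output layer
print(y)  # a single output value, roughly -1.9
```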
<section id="section-9" class="slide level2 smaller">
<section id="section-5" class="slide level2 smaller">
<h2></h2>
<p><strong>Loss function</strong></p>
<ul>
Expand Down
