A self-organizing map (SOM) is a type of artificial neural network that is trained using unsupervised learning to produce a low-dimensional, typically two-dimensional, representation of input data. It's like a map that organizes itself to represent different patterns or features found in the input data, arranging similar data close together and dissimilar data far apart. This makes it useful for visualizing complex data in a way that highlights its inherent similarities and differences.
SOMs can take any kind of input data as long as it can be represented as a vector of values. For example, a 28x28 MNIST image is internally represented as a flattened vector of 784 values. You can directly pass your unflattened values to a Somap SOM, so that it remembers the shape of your data during the rendering step.
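As a quick illustration of this flattening, here is a plain NumPy sketch (independent of Somap, with a zero array standing in for one MNIST image):

```python
import numpy as np

image = np.zeros((28, 28))   # stand-in for a single 28x28 MNIST image
vector = image.reshape(-1)   # the flattened form used internally
print(vector.shape)          # (784,) because 28 * 28 = 784
```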
To help you quickly test your SOMs against common datasets, you can load them via `somap.datasets`:
```python
import somap as smp

# Load the MNIST dataset as a Numpy array of shape (60000, 28, 28)
data = smp.datasets.MNIST().data
```
See this page for a list of integrated datasets.
In addition to the classic bottom-up driving inputs described above, future versions of Somap will support extra inputs for lateral contextual and top-down modulatory inputs.

In Somap, a SOM is defined by its 2D shape, its topography ("square" or "hex"), whether it is borderless, the shape of its input data, and the parameters of its algorithm, as the model definitions below illustrate.

The SOM algorithm involves initializing a grid of nodes, each with a randomly assigned weight vector. During training, each input data point is compared to all nodes, and the node with the weight vector most similar to the input (the "Best Matching Unit", aka BMU) is identified. The weights of this node and its neighbors are then adjusted to become more like the input data, gradually organizing the grid so that similar data points are mapped to nearby nodes.

Different kinds of SOMs can be imagined depending on the choice of distance, neighborhood, learning rate and update functions; a generic single step is sketched below.
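To make the roles of these functions concrete, here is a minimal single-step sketch in plain NumPy. It is only illustrative and does not reproduce Somap's internals; the array shapes and the Gaussian neighborhood are assumptions chosen for the example.

```python
import numpy as np

def som_step(weights, positions, x, sigma=1.0, alpha=0.5):
    """One illustrative SOM update: find the BMU, then pull its neighbors toward x.

    weights:   (n_nodes, dim) prototype vectors
    positions: (n_nodes, 2) grid coordinates of each node
    x:         (dim,) one input vector
    """
    # Distance function: Euclidean distance between the input and every prototype
    dists = np.linalg.norm(weights - x, axis=1)
    bmu = np.argmin(dists)  # index of the Best Matching Unit

    # Neighborhood function: Gaussian bump centered on the BMU, in grid space
    grid_d2 = np.sum((positions - positions[bmu]) ** 2, axis=1)
    nbh = np.exp(-grid_d2 / (2.0 * sigma**2))

    # Learning rate and update function: move every prototype toward the input
    return weights + alpha * nbh[:, None] * (x - weights)
```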
The simplest SOM available in Somap is a time-independent version of the Kohonen SOM, called `StaticKsom`. It is defined by 2 parameters:

- `sigma`: The width of the Gaussian neighborhood function around the Best Matching Unit (BMU). A larger sigma value means a wider neighborhood, where more nodes are influenced significantly during each training step. The influence of the BMU on neighboring nodes decreases with distance in a Gaussian manner, meaning nodes closer to the BMU are adjusted more significantly than those further away.
- `alpha`: The learning rate for the SOM. It dictates how much the weights of the nodes in the network are adjusted in response to each input data point.

```python
model = smp.StaticKsom(
    shape = (11, 13),
    topography = "hex",
    borderless = False,
    input_shape = (28, 28),
    params = smp.KsomParams(sigma=0.3, alpha=0.5)
)
```
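Once defined, such a model can for instance be trained directly on the MNIST array loaded earlier, using the training function described further down:

```python
# Train the map on the 60000 MNIST digits loaded above (see the training section below)
model, aux = smp.make_steps(model, {"bu_v": data})
```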
The classic Kohonen SOM is defined by the following parameters:

- `t_f`: The final time or iteration step of the training process. It represents when the training of the SOM will end. The training typically involves gradually decreasing the learning rate and the neighborhood radius over time.
- `sigma_i` and `sigma_f`: The initial and final values of the width of the Gaussian neighborhood function around the Best Matching Unit (BMU). A larger sigma value means a wider neighborhood, where more nodes are influenced significantly during each training step. `sigma_i` is larger to allow broader learning initially, and `sigma_f` is smaller, focusing the learning more locally towards the end of training.
- `alpha_i` and `alpha_f`: The initial and final learning rates. The learning rate controls how much the weights of the SOM nodes are adjusted during training. A higher initial learning rate allows the network to quickly adapt to the data, while the lower final rate allows for finer adjustments as the training progresses.

```python
model = smp.Ksom(
    shape = (11, 13),
    topography = "hex",
    borderless = False,
    input_shape = (28, 28),
    params = smp.KsomParams(
        t_f=60000,
        sigma_i=0.7,
        sigma_f=0.01,
        alpha_i=0.1,
        alpha_f=0.001
    )
)
```
+ +plasticity
: This parameter controls the overall ability of the network to adapt to new data over time. High plasticity allows the network to change rapidly in response to new data, making it more flexible but potentially less stable. Lower plasticity means slower adaptation, leading to more stability but less responsiveness to new or changing patterns in the data.alpha
: Similar to traditional SOMs, alpha in the Dynamic SOM represents the learning rate. It determines the extent to which the weights of the nodes are adjusted in response to each input data point. This parameter works in conjunction with the plasticity
parameter to regulate the network's adaptation to the input data over time.model = smp.Dsom(
+ shape = (11, 13),
+ topography = "hex",
+ borderless = False,
+ input_shape = (28, 28),
+ params = smp.DsomParams(alpha=0.001, plasticity=0.02)
+)
+
You can also define your custom SOM by choosing functions over the existing catalog:
+import somap as smp
+from jaxtyping import Array, Float
+
+class MyCustomSomParams(smp.AbstractSomParams):
+ sigma: float | Float[Array, "..."]
+ alpha: float | Float[Array, "..."]
+
+class MyCustomSom(smp.AbstractSom):
+
+ @staticmethod
+ def generate_algo(p: MyCustomSomParams) -> smp.SomAlgo:
+ return smp.SomAlgo(
+ f_dist=smp.EuclidianDist(),
+ f_nbh=smp.GaussianNbh(sigma=p.sigma),
+ f_lr=smp.ConstantLr(alpha=p.alpha),
+ f_update=smp.SomUpdate(),
+ )
+
If you need custom distance, neighborhood, learning rate and update functions for your SOM, you can define them by inheriting from smp.AbstractDist
, smp.AbstractNbh
, smp.AbstractLr
and smp.AbstractUpdate
. See the library source code for how to do it.
SOMs utilize online learning, continuously updating their weights after processing each input. Due to JAX's immutable nature, SOM models are generated as new objects at every step. Additionally, an auxiliary variable is returned, containing metrics or information for debugging purposes.
+For running a single step: +
data = ... # Array whose leading axis represents the different data examples
+
+# Do a single iteration on the first element of data
+model, aux = smp.make_step(model, {"bu_v": data[0]})
+
For running multiple steps when input data is known in advance, prefer the more optimized somap.make_steps
function:
+
data = ... # Array whose leading axis represents the different data examples
+
+# Iterate over all the elements in `data`
+model, aux = smp.make_steps(model, {"bu_v": data})
+
Somap
comes with several plotting backends to visualize a SOM.
The default plotting backend relies on the altair
plotting library. This enables the dynamic rendering of both square and hexagonal grids. Additionally, tooltips are provided to offer supplementary information when hovering the mouse over a node.
Show the prototypes: +
import matplotlib
+smp.plot(model, show_prototypes=True, show_activity=False, img_inverted_colors=True, img_cmap=matplotlib.cm.gnuplot2)
+
Show the activity of each node (how many times they have been activated): +
+ +Impact of the borderless
parameter
This is a map with borderless=False
. You can observe the unequal repartition of activity between the nodes, with a high biais toward corners and borders. A borderless map won't have this effect.
See this page for details about available options.
+This plotting backend leverages the array2image
library, offering a quicker and more direct method for rendering square grids.
To use this backend, set the following environment variable: SOMAP_PLOT_BACKEND=array2image
See this page for details about available options.
+Evaluating SOMs typically involves the following metrics:
+Those metrics are available in the auxilarry data returned by the somap.make_step
and somap.make_steps
functions.
# Iterate over all the elements in `data`
+model, aux = smp.make_steps(model, {"bu_v": data})
+
+# Retrieve the errors from all steps
+quantization_errors = aux["metrics"]["quantization_error"]
+topographic_errors = aux["metrics"]["topographic_error"]
+
You can save and load SOM models via the following functions (which act as a pass-through the corresponding Equinox functions because SOM models are Equinox modules).
+ + + + + + + +fSO^-c2D
zT|(d#@*omuYgHLd4{2v#nE9&?h0EG9$hscE%#X4G&SnY06}e?FKvwaK%SIZghZL|&
zWLlPmJn$#zm}WFm;k1*bg{k0UL6-o4n`j-vJQ}`mo-iZiC}ej3pbJLD8K?MhWD!mU
zcALuf5=cKH5v7Y-)Q8yE;KFPQ!n$r;q~xKZGK{4S=m0?sgit{&fDk *tnE7r0Y>M*1BAGfAa3_|%oPM|$rCAl~;wSSM=_f?4vWPf&
zX||#ClrVg6oUNqx2Fh5O-Q>QXGAKnTf79Kv4>o|q!tKWH)D-E*O#|p=5
z5w?Hz{(2BL7$
Ca1C~zM@FBY=dmAkNnufNlU-?SZG@7g6e|5Kb-NhP
zck?45Ih;k+8tOyw5IAMU+ted-u3bJ#8UIJb
*vBI2N|G0Kv+?V3Z3`RfpVsto!qrZ|dSt3?Lk^?hexfdhzmsAzpt
zU1K#F4{67@g3J=FkTj&{j(R45<^+?6pbq*fBSFd}{)J$#3Z4P)dazFiVjd>mRj{9~
z5=SlG74VBT{z_5?xvxsr3#;Tju?gN$Y?0#0)Tk25ny60VH_7=ixq(UT4nKSs-~
~4YX%xEtSGs(;#zX*y%TUAWV|sbXQSK*S3gd={fwdLCYCZl7&Y$)n$`=$BRdIE_
zbmg;{B0ioV$UJ#IhRN?Xi2Ww;HqZt6@?U`esMkkfeE>U
zT8zCp5$L3h7a?t4?{;tYkHuiiIi~au4bXt(=jsQ&JKly`95kU@8`dN@h;9P2OL}H<
z28hO}O>unDTED+tZSb&9SVIB;5Zm&1ZFP$0yhD$a!5yZbAO5N8#(=p{P*s=-?sNyO
zXH2ai8d7l5U^WEg1&&BbJGeefzY2~zFt!DD28@^_r@TvvZeX=|Go
zFhg^okN(!O#=l_k;!rS>&Dj%t1t>gT_kBa{zHfUHa`s7%1s+m&ihL~0(DmXK(EjL$
zXUlqA_ob
ncoXnb&SWC>^)vYIM4lq>g
zXKot*lXT8}GGj+1Ts6}CQYW~zytd!tX{MJW9OgkKA%E<{%B0_vCWW(Uq