
Commit

added ojas rule; sphinx makefile addition
seeholza committed Mar 2, 2016
1 parent 7856ae1 commit 19946fa
Showing 10 changed files with 247 additions and 2 deletions.
6 changes: 5 additions & 1 deletion Makefile
@@ -1,4 +1,8 @@
all: pypi
all: sphinx pypi conda

sphinx:
	sphinx-apidoc -o doc/modules neurodynex -f
	make -C doc html

pypi:
	rm -rf dist/*
2 changes: 2 additions & 0 deletions conda_build/meta.yaml
@@ -34,6 +34,7 @@ test:
    - neurodynex.hopfield_network
    - neurodynex.neuron_type
    - neurodynex.phase_plane_analysis
    - neurodynex.ojas_rule
    - neurodynex.test

  commands:
@@ -42,6 +43,7 @@ test:
    - nosetests --verbosity=2 neurodynex.test.test_hopfield
    - nosetests --verbosity=2 neurodynex.test.test_nagumo
    - nosetests --verbosity=2 neurodynex.test.test_neuron_type
    - nosetests --verbosity=2 neurodynex.test.test_oja

  requires:
    - nose
3 changes: 2 additions & 1 deletion doc/exercises/index.rst
@@ -12,4 +12,5 @@ Exercises
   hodgkin-huxley
   phase-plane-analysis
   neuron-type
   hopfield-network
   hopfield-network
   ojas-rule
58 changes: 58 additions & 0 deletions doc/exercises/ojas-rule.rst
@@ -0,0 +1,58 @@
Oja's Hebbian learning rule
===========================

**Book chapters**

See `Chapter 19 Section 2 <Chapter_>`_ on the learning rule of Oja.

.. _Chapter: http://neuronaldynamics.epfl.ch/online/Ch19.S2.html#SS1.p6


**Python classes**

The :mod:`.ojas_rule.oja` module contains all code required for this exercise.
At the beginning of your exercise solution file, import the contained functions by

.. code-block:: py

    from neurodynex.ojas_rule.oja import *

You can then simply run the exercise functions by executing, e.g.

.. code-block:: py

    cloud = make_cloud()  # generate data points
    wcourse = learn(cloud)  # learn weights and return timecourse

Exercise: Circular data
-----------------------

Use the functions :func:`make_cloud <.ojas_rule.oja.make_cloud>` and :func:`learn <.ojas_rule.oja.learn>` to get the time course of weights that are learned on a **circular** data cloud (``ratio=1``). Plot the time course
of both components of the weight vector. Repeat this many times (:func:`learn <.ojas_rule.oja.learn>` will choose random initial conditions on each run) and overlay the resulting curves in the same figure. Can you explain what happens?
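A possible starting point (a minimal sketch, not part of the exercise code; the number of repetitions and the plot colors are arbitrary choices):

.. code-block:: py

    import matplotlib.pyplot as plt
    from neurodynex.ojas_rule.oja import make_cloud, learn

    cloud = make_cloud(ratio=1)  # circular data cloud
    for _ in range(10):  # learn() picks random initial weights on each run
        wcourse = learn(cloud)
        plt.plot(wcourse[:, 0], color='blue', alpha=.4)    # first weight component
        plt.plot(wcourse[:, 1], color='orange', alpha=.4)  # second weight component
    plt.xlabel("datapoint (time step)")
    plt.ylabel("weight")
    plt.show()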


Exercise: Elliptic data
-----------------------


Repeat the previous question with an **elongated** elliptic data cloud (e.g. ``ratio=0.3``). Again, repeat this several times.

Question
~~~~~~~~

What difference in the learning do you observe compared to the circular data cloud?

Question
~~~~~~~~

Try changing the orientation of the ellipse (try several different angles). Can you explain what Oja's rule does?

.. note::
    To gain more insight, plot the learned weight vector in 2D space and relate its orientation to that of the elliptic data cloud.
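One way to do this (a minimal sketch; ``angle=30`` and the arrow styling are arbitrary choices, not part of the exercise code):

.. code-block:: py

    import matplotlib.pyplot as plt
    from neurodynex.ojas_rule.oja import make_cloud, learn

    cloud = make_cloud(ratio=0.3, angle=30)
    w_final = learn(cloud)[-1]  # weight vector after the last datapoint
    plt.scatter(cloud[:, 0], cloud[:, 1], marker='.', alpha=.2)
    plt.arrow(0, 0, w_final[0], w_final[1], color='red', width=.05)
    plt.axis('equal')
    plt.show()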

Exercise: Non-centered data
---------------------------

The above exercises assume that the input activities can be negative (indeed the inputs were always statistically centered). In actual neurons, if we think of their activity as their firing rate, this cannot be less than zero.

Repeat the previous exercise, but apply the learning rule to a non-centered data cloud. For example, use ``5 + make_cloud(...)``, which centers the data around ``(5, 5)``. What conclusions can you draw? Can you think of a modification to the learning rule?
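To get started (a minimal sketch; the ``ratio`` and ``angle`` values are arbitrary choices):

.. code-block:: py

    import matplotlib.pyplot as plt
    from neurodynex.ojas_rule.oja import make_cloud, learn

    cloud = 5 + make_cloud(ratio=0.3, angle=30)  # data centered around (5, 5)
    wcourse = learn(cloud)
    plt.plot(wcourse[:, 0], label="w1")
    plt.plot(wcourse[:, 1], label="w2")
    plt.legend()
    plt.show()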
22 changes: 22 additions & 0 deletions doc/modules/neurodynex.ojas_rule.rst
@@ -0,0 +1,22 @@
neurodynex.ojas_rule package
============================

Submodules
----------

neurodynex.ojas_rule.oja module
-------------------------------

.. automodule:: neurodynex.ojas_rule.oja
    :members:
    :undoc-members:
    :show-inheritance:


Module contents
---------------

.. automodule:: neurodynex.ojas_rule
    :members:
    :undoc-members:
    :show-inheritance:
1 change: 1 addition & 0 deletions doc/modules/neurodynex.rst
@@ -10,6 +10,7 @@ Subpackages
    neurodynex.hopfield_network
    neurodynex.leaky_integrate_and_fire
    neurodynex.neuron_type
    neurodynex.ojas_rule
    neurodynex.phase_plane_analysis
    neurodynex.test

8 changes: 8 additions & 0 deletions doc/modules/neurodynex.test.rst
@@ -44,6 +44,14 @@ neurodynex.test.test_neuron_type module
    :undoc-members:
    :show-inheritance:

neurodynex.test.test_oja module
-------------------------------

.. automodule:: neurodynex.test.test_oja
    :members:
    :undoc-members:
    :show-inheritance:


Module contents
---------------
1 change: 1 addition & 0 deletions neurodynex/ojas_rule/__init__.py
@@ -0,0 +1 @@
__all__ = ['oja']
140 changes: 140 additions & 0 deletions neurodynex/ojas_rule/oja.py
@@ -0,0 +1,140 @@
"""
This file implements Oja's hebbian learning rule.
Relevant book chapters:
- http://neuronaldynamics.epfl.ch/online/Ch19.S2.html#SS1.p6
"""

# This file is part of the exercise code repository accompanying
# the book: Neuronal Dynamics (see http://neuronaldynamics.epfl.ch)
# located at http://github.com/EPFL-LCN/neuronaldynamics-exercises.

# This free software: you can redistribute it and/or modify it under
# the terms of the GNU General Public License 2.0 as published by the
# Free Software Foundation. You should have received a copy of the
# GNU General Public License along with the repository. If not,
# see http://www.gnu.org/licenses/.

# Should you reuse and publish the code for your own purposes,
# please cite the book or point to the webpage http://neuronaldynamics.epfl.ch.

# Wulfram Gerstner, Werner M. Kistler, Richard Naud, and Liam Paninski.
# Neuronal Dynamics: From Single Neurons to Networks and Models of Cognition.
# Cambridge University Press, 2014.

import matplotlib.pyplot as plt
import numpy as np


def make_cloud(n=10000, ratio=1, angle=0):
    """Returns an oriented elliptic Gaussian cloud of 2D points.

    Args:
        n (int, optional): number of points in the cloud
        ratio (float, optional): (std along the short axis) /
            (std along the long axis)
        angle (float, optional): rotation angle [deg]

    Returns:
        numpy.ndarray: array of datapoints
    """

    if ratio > 1.:
        ratio = 1. / ratio

    x = np.random.randn(n, 1)
    y = ratio * np.random.randn(n, 1)
    z = np.concatenate((x, y), 1)
    radangle = (180. - angle) * np.pi / 180.
    transfo = [
        [np.cos(radangle), np.sin(radangle)],
        [-np.sin(radangle), np.cos(radangle)]
    ]
    return np.dot(transfo, z.T).T


def learn(cloud, initial_angle=None, eta=0.001):
    """Run one batch of Oja's learning over a cloud of datapoints.

    Args:
        cloud (numpy.ndarray): array of datapoints
        initial_angle (float, optional): angle of initial
            set of weights [deg]. If None, this is random.
        eta (float, optional): learning rate

    Returns:
        numpy.ndarray: time course of the weight vector
    """

    # pick a random initial orientation if none was given
    if initial_angle is None:
        initial_angle = np.random.rand() * 360.
    radangle = initial_angle * np.pi / 180.

    w = np.array([np.cos(radangle), np.sin(radangle)])
    wcourse = np.zeros((len(cloud), 2), float)
    for i in range(0, len(cloud)):
        wcourse[i] = w
        y = np.dot(w, cloud[i])  # output (postsynaptic activity)
        w = w + eta * y * (cloud[i] - y * w)  # Oja's rule: dw = eta * y * (x - y * w)
    return wcourse


def run_oja(n=10000, ratio=1., angle=0., do_plot=True):
    """Generates a point cloud and runs Oja's learning
    rule once. Optionally plots the result.

    Args:
        n (int, optional): number of points in the cloud
        ratio (float, optional): (std along the short axis) /
            (std along the long axis)
        angle (float, optional): rotation angle [deg]
        do_plot (bool, optional): plot the result
    """

    cloud = make_cloud(n=n, ratio=ratio, angle=angle)
    wcourse = learn(cloud)

    if do_plot:

        # plot data cloud
        plt.scatter(
            cloud[:, 0],
            cloud[:, 1],
            marker='.',
            facecolor='none',
            edgecolor='#222222',
            alpha=.2
        )

        # color time and plot with colorbar
        time = np.arange(len(wcourse))
        colors = plt.cm.cool(time / float(len(time)))
        sm = plt.cm.ScalarMappable(
            cmap=plt.cm.cool,
            norm=plt.Normalize(vmin=0, vmax=n)
        )
        sm.set_array(time)
        cb = plt.colorbar(sm)
        cb.set_label("Datapoints")
        plt.scatter(
            wcourse[:, 0],
            wcourse[:, 1],
            facecolor=colors,
            edgecolor='none',
            lw=2
        )

        # ensure rectangular plot
        x_min = cloud[:, 0].min()
        x_max = cloud[:, 0].max()
        y_min = cloud[:, 1].min()
        y_max = cloud[:, 1].max()
        lims = [min(x_min, y_min), max(x_max, y_max)]

        plt.xlim(lims)
        plt.ylim(lims)

        plt.show()
8 changes: 8 additions & 0 deletions neurodynex/test/test_oja.py
@@ -0,0 +1,8 @@
import matplotlib
matplotlib.use('Agg') # needed for plotting on travis


def test_oja():
    """Test if Oja learning rule is runnable."""
    from neurodynex.ojas_rule.oja import run_oja
    run_oja()  # this uses all functions in the module
