Normalisation of Learning StateVariables by an attribute #147

Open
ajc158 opened this issue Dec 6, 2017 · 0 comments
ajc158 commented Dec 6, 2017

Currently, normalisation for learning requires a State Variable to be passed out of a Weight Update, remapped via a connection, and then passed back in. This is both inefficient and opaque for such a commonly required operation.

Instead, normalisation should be defined as an attribute on Learning StateVariables, @norm={'none','post','pre'}, selecting no normalisation, postsynaptic normalisation, or presynaptic normalisation respectively. If @norm is absent, 'none' is assumed.
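For discussion, here is a minimal sketch of one possible reading of the proposed @norm semantics, using a NumPy weight matrix. The function name, the (post, pre) weight layout, and the choice of normalising sums to 1 are all assumptions for illustration, not part of the proposal:

```python
import numpy as np

def normalise(w, norm='none'):
    """Sketch of assumed @norm semantics for a weight matrix w of
    shape (post, pre): rows index postsynaptic neurons, columns
    index presynaptic neurons."""
    if norm == 'none':
        # No normalisation: weights pass through unchanged.
        return w
    if norm == 'post':
        # Postsynaptic normalisation: scale each row so the total
        # incoming weight onto each postsynaptic neuron is 1.
        return w / w.sum(axis=1, keepdims=True)
    if norm == 'pre':
        # Presynaptic normalisation: scale each column so the total
        # outgoing weight from each presynaptic neuron is 1.
        return w / w.sum(axis=0, keepdims=True)
    raise ValueError("norm must be 'none', 'post' or 'pre'")

w = np.array([[1.0, 3.0],
              [2.0, 2.0]])
print(normalise(w, 'post'))  # rows each sum to 1
print(normalise(w, 'pre'))   # columns each sum to 1
```

Under this reading, the attribute would let the simulator apply the rescaling internally after each weight update, avoiding the pass-out/remap/pass-back round trip described above.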

Any comments?

@ajc158 ajc158 self-assigned this Dec 6, 2017