Currently, normalisation for learning requires a State Variable to be passed out of a Weight Update, remapped using a connection, and then passed back in. This is both inefficient and opaque for such a commonly required operation.
Instead, normalisation should be defined as an attribute on Learning StateVariables, @norm={'none','post','pre'}, denoting no normalisation, postsynaptic normalisation, or presynaptic normalisation respectively. If @norm is absent, 'none' is assumed.
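To make the proposal concrete, a sketch of what the attribute might look like in the model XML (element names and attributes here are illustrative only, not taken from the current schema):

```xml
<!-- Hypothetical sketch of the proposed @norm attribute. -->
<WeightUpdate name="stdp">
  <!-- Normalised over the postsynaptic population -->
  <StateVariable name="w" norm="post"/>
  <!-- @norm absent: 'none' is assumed, no normalisation -->
  <StateVariable name="trace"/>
</WeightUpdate>
```

This would replace the current pass-out / remap / pass-back-in round trip with a single declarative attribute.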
Any comments?