Replies: 11 comments 51 replies
-
Hey @Dola47, your architecture is quite deep compared to most of those reported in SNN papers. I recommend either: […] Good luck.
-
Hi @jeshraghian, thanks a lot for your quick response, it's much appreciated. So, if I understood correctly, there is no problem with my SNN model implementation above and it should be functioning correctly.

A) What do you mean by surrogate gradient — do you mean the slope of the surrogate gradient? If so, can you briefly clarify what the slope of the surrogate gradient actually is?
B) For the network architecture I ran a lot of trials, and I have simpler architectures where the ANN also outperforms the SNN.
C) Can it be added to my current architecture? If so, can you please illustrate how that can be achieved with a simple coding example?
D) I will have a look.
E) Do we have any chat channel for the simulator, on Discord for example, where we can get in contact with you for urgent matters?

Thanks again, and looking forward to your further responses.
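For context (related to point A above), here is a minimal sketch of where I understand the slope is set in snnTorch, assuming the `surrogate` module API; the numbers are placeholders, not recommendations:

```python
import snntorch as snn
from snntorch import surrogate

# The surrogate gradient (and its slope) is chosen per layer via spike_grad;
# slope=25 is just a placeholder value, not a recommendation.
spike_grad = surrogate.fast_sigmoid(slope=25)

# A leaky integrate-and-fire layer that uses this surrogate in its backward pass.
lif = snn.Leaky(beta=0.9, spike_grad=spike_grad)
```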
-
Away from our previous discussion, here are some findings from my current work that I would like you to have a look at:

1- SNNs exceed the accuracy of ANNs on networks with very little computation, and only after a sufficient number of time steps (temporal dimensions).
2- ANNs perform better on networks with a high number of computations, regardless of the number of time steps.
3- As a conclusion from 1, there should be some temporal information learned within the SNNs, but it does not really become visible until we have a sufficient number of time steps.

Questions:

1- If ANNs are better for complex networks, and if we think about a digital implementation, the SNN adds extra computations inside the spiking neuron. What is the benefit of SNNs in such an approach?
2- If the temporal information learned by the SNNs only becomes noticeable after a large number of steps, why not go directly to GRUs, RNNs, or any other model that has some kind of memory?

I am looking forward to your thoughts. Thanks Jason :)
-
Hi @jeshraghian, shouldn't I be able to set […] after the latest merge? Sorry, but I don't completely get how to set […]. Thanks.

UPDATE: I just checked and the pull request has not been merged yet, sorry!! But it would still be very helpful if you could give a short example of how it can be implemented :)
-
@jeshraghian Just a follow-up question to our previous discussion. If we agree that the spiking neuron requires more computations than the classical neuron, then from a digital implementation perspective the SNN will not be more energy efficient than its ANN counterpart; it would only be energy efficient if we consider an analog implementation. In that case, I think the only benefit of having the SNN in the digital domain is to reach better accuracy than the ANN in fewer epochs. Do you agree?
-
@jeshraghian Aside from our discussion, could you let me know how I should cite the snnTorch simulator?
-
@jeshraghian Sorry for frequently raising questions, but I have a few urgent ones. I am aware that we work around the non-differentiability of the spikes by using the gradient of a surrogate function in the backward pass, but can you clarify what the slope of the surrogate gradient actually means? Is it a measure of how closely we approximate the true gradient of, for example, the sigmoid function? If so, what does it mean to set high or low values for the slope of the surrogate gradient? Thanks.
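For my own understanding, here is a small numerical sketch of how I think the slope acts in the fast-sigmoid surrogate; the functional form and the slope values below are assumptions for illustration, not something I am asserting about the library source:

```python
import torch

# Assumed form of the fast-sigmoid surrogate derivative used in the backward pass:
#   dS/dU ≈ 1 / (slope * |U - threshold| + 1)**2
def fast_sigmoid_grad(u_minus_thr, slope):
    return 1.0 / (slope * u_minus_thr.abs() + 1.0) ** 2

u = torch.linspace(-1.0, 1.0, 5)   # membrane potential minus threshold
for slope in (5, 25, 100):         # arbitrary illustrative values
    print(slope, fast_sigmoid_grad(u, slope))

# A large slope squeezes the gradient into a narrow band around the threshold
# (a tighter approximation of the true spike derivative, but neurons far from
# threshold receive almost no learning signal); a small slope spreads the
# gradient out, at the cost of a coarser approximation.
```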
-
Hi @jeshraghian, is it expected to have any kind of […]? Thanks.
-
Hi @jeshraghian, just a follow-up question related to our previous temporal-dimension discussion. I started to get things mixed up and would like you to make them clearer for me. I have my input in the shape of […]. For that, I send one frame at a time to my network, so my forward pass looks like […] (see the sketch after this list), where the […].

Following the above implementation (which is basically what is used in most of the tutorials), I think we are then doing the following:

1- A synchronous implementation of the neurons, as we send one frame at a time in sequence, while I think spiking neurons should by default be asynchronous? I think the same applies if we created an encoding of a number from […].
2- I checked the source code of the Synaptic neuron, and I found that we are not weighting the spikes, we are just doing […].
3- We do not really have integration in each […].
4- Also, I am sending my float data represented in […].
5- Finally, doesn't the current implementation of the neurons make the SNN the same as a BNN (binary neural network), except that BNNs do not have synaptic currents and membrane voltages?

Thanks a lot and looking forward to your response.
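Since my original snippet did not make it into this post, here is a minimal sketch of the tutorial-style loop I am referring to, assuming a single fully connected layer feeding an `snn.Synaptic` neuron; the shapes, decay values, and layer sizes are placeholders:

```python
import torch
import torch.nn as nn
import snntorch as snn

num_steps, batch_size, num_inputs, num_hidden = 25, 8, 100, 10  # placeholder sizes

fc = nn.Linear(num_inputs, num_hidden)       # weights the incoming frame
lif = snn.Synaptic(alpha=0.8, beta=0.9)      # synaptic current + membrane decay

x = torch.rand(num_steps, batch_size, num_inputs)   # [time, batch, features]
syn, mem = lif.init_synaptic()                       # hidden states

spk_rec = []
for step in range(num_steps):            # one frame per time step, in sequence
    cur = fc(x[step])                    # the weighting happens here, before the neuron
    spk, syn, mem = lif(cur, syn, mem)   # synaptic current accumulates cur; mem integrates syn
    spk_rec.append(spk)

spk_rec = torch.stack(spk_rec)           # [time, batch, hidden]
```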
-
Hi @Huizerd, is it right to summarize the surrogate gradient idea by stating that […]? Thanks.
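To make the question concrete, this is the summary I have in mind (the fast-sigmoid form of the surrogate is an assumption on my part), with U the membrane potential, θ the threshold, and k the slope:

```latex
% Forward pass: the spike is the non-differentiable Heaviside step of the membrane
% potential U relative to the threshold \theta
S = \Theta(U - \theta)

% Backward pass: the ill-defined derivative of \Theta is replaced by a smooth
% surrogate, e.g. a fast-sigmoid-style term whose sharpness is set by the slope k
\frac{\partial S}{\partial U} \approx \frac{1}{\left(k\,\lvert U - \theta\rvert + 1\right)^{2}}
```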
-
Hi @jeshraghian, do we consider the refractory period in our neurons? Moreover, is it expected to have any kind of recurrent-LIF implemented soon? Thanks.
-
Dear @jeshraghian,
I would like to know whether the SNNs are really learning something from the temporal dimension or not, as the ANNs most frequently outperform the SNNs even when I calculate the accuracy only from the last frame of my temporal dimension. This seems strange, because the SNNs should remember some information from the temporal domain while the ANNs should not!
Thanks, and really looking forward to your quick response.