Back-propagation is a fairly simple training algorithm, and if it is provided only for basic operation blocks (the "core1" domain I mentioned in the phasing issue), it shouldn't require significant effort to implement and test on the vendors' side.
The benefits seem fairly obvious, but let's briefly list what comes to mind:
lower entrance barrier to the ML world for JS developers (they just need to get familiar with the API and can quickly jump into ML)
easier phasing and quicker standard shipment - no need to cover all possible operation blocks from different ML environments; a small basic subset of blocks is enough
possibility to train or adjust NNs "on the fly", for example in the gaming industry
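To illustrate how little machinery basic back-propagation needs, here is a minimal sketch in plain JavaScript: gradient descent on a single linear "neuron" built from just a multiply and an add block. This is a hypothetical, standalone example of the algorithm, not any proposed API.

```javascript
// Back-propagation sketch for y = w*x + b with mean-squared-error loss,
// trained on a toy dataset sampled from y = 2x + 1.
// Only two "core" operation blocks are involved: multiply and add.

const data = [[0, 1], [1, 3], [2, 5], [3, 7]]; // samples of y = 2x + 1
let w = 0, b = 0;                              // trainable parameters
const lr = 0.05;                               // learning rate

for (let epoch = 0; epoch < 500; epoch++) {
  let gradW = 0, gradB = 0;
  for (const [x, y] of data) {
    const pred = w * x + b; // forward pass through the two blocks
    const err = pred - y;   // dLoss/dPred for the 0.5*err^2 loss
    gradW += err * x;       // chain rule back through the multiply block
    gradB += err;           // chain rule back through the add block
  }
  // gradient-descent update, averaged over the batch
  w -= lr * gradW / data.length;
  b -= lr * gradB / data.length;
}

console.log(w.toFixed(2), b.toFixed(2)); // converges close to 2.00 and 1.00
```

The backward pass is just the chain rule applied per operation block, which is why supporting training for a small "core" subset is a bounded amount of work for implementers.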
DanielMazurkiewicz changed the title from "Remove training capabilities from out of scope (shortly: add training capabilities :-) )" to "Remove training capabilities from out of scope (shortly: add training capabilities to scope :-) )" on Jan 12, 2019
Thanks for your input @DanielMazurkiewicz. Since there's an existing charter issue #3 to discuss training capabilities, I'd suggest you add your comment there to avoid forking the discussion across multiple issues and to make it easier to follow and contribute. Thanks!