Mod text util so that, when converting float bits to chars, it:
(a) uses 0.0 when <= 0.0
(b) uses 1.0 when >= 1.0
(c) otherwise rounds bits [to 0.0 or 1.0]
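A minimal sketch of the clamp-and-round rule above, assuming a standalone helper; the method name float_bit_to_binary is illustrative only, not the actual text util's API:

```crystal
# Hypothetical helper illustrating the rule above; not the actual text util API.
def float_bit_to_binary(value : Float64) : Float64
  return 0.0 if value <= 0.0 # (a) clamp low
  return 1.0 if value >= 1.0 # (b) clamp high
  value.round                # (c) round remaining bits to 0.0 or 1.0
end

puts float_bit_to_binary(-0.3) # => 0.0
puts float_bit_to_binary(1.7)  # => 1.0
puts float_bit_to_binary(0.6)  # => 1.0
```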
Add #(un)certainty methods for iod data
Add code/tests/benches for utf text files (for RNN usage)
test: errors should decrease (kinda depends on net and training data structure/size)
update this app and shards to be Crystal v1.0 compatible
Implement Bi-directional RNN (i.e.: RnnSimple pulls from inputs and previous time column.)
Switch to (or add associated classes to use) BigFloat instead of Float64
Reasons:
(a) Outputs are getting too big when using RELU for larger networks.
(b) I tried handling NaN and Infinity values by forcing them to 0, 1, etc., or by breeding and purging, but that starts to fail for larger networks.
(c) I tried auto-scaling down the initial weights (based on network params) to avoid (a), but when scaled down too much (i.e.: for larger networks), I get arithmetic errors.
(d) The workarounds I have tried (and the upgrade to Crystal 1.0) tend to lead to more memory usage (sometimes maxing out my memory).
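As a rough sketch of the proposed switch, assuming Crystal's stdlib BigFloat (GMP-backed, arbitrary precision, via require "big"); the variable names are illustrative, not ai4cr's actual code:

```crystal
require "big"

# Float64 overflows to Infinity once values grow past ~1e308,
# which is how large RELU outputs break down as in (a):
f64 = 1e200
overflow = f64 * f64 * f64
puts overflow.infinite? # non-nil: overflowed to +Infinity

# The same magnitude as BigFloat stays finite and exact:
big = BigFloat.new(10) ** 200
product = big * big * big
puts product.zero? # prints false: ~1e600, finite and nonzero
```

The trade-off noted in (d) still applies: BigFloat values are heap-allocated, so memory and speed costs per weight would need benchmarking before committing.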
(?) Phase 5: Add more examples and benchmarking (e.g.: simple wave form learning)
WHY is the last ti's value in outputs_guessed reset to '0.0' after training (but NOT after eval'ing)??? (and NOT reset to '0.0' after next round of training???)
(?) Convert RnnSimple's validation @errors to Hash(<Enum>, String). (I switched it from Hash(Symbol, String) to Hash(String, String) due to issues with #from_json; an Enum key would probably be more efficient than a String.)
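A sketch of what an Enum-keyed errors hash could look like; the enum and member names below are hypothetical, not RnnSimple's actual validation keys, and JSON round-tripping of enum keys would still need to be checked against the #from_json issue mentioned above:

```crystal
# Hypothetical validation-error enum; member names are illustrative only.
enum RnnValidationError
  HiddenLayerQty
  TimeColQty
  InputSize
end

errors = Hash(RnnValidationError, String).new
errors[RnnValidationError::TimeColQty] = "time_col_qty must be at least 1"

puts errors[RnnValidationError::TimeColQty]
puts errors.has_key?(RnnValidationError::InputSize) # prints false
```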
Convert all specs to Spectator format.
Likewise, split up Chain into chain_concerns and fix where applicable.
Add simple RNN functionality (See Issue: #15)
... type ... key?) (in a later PR, at least partly in PR: Drhuffman12/cmn basic rnn (part 2) #47)
RnnSimple#train(..) that loops thru the 'sequence of input and output data'
BreederUtils and Breeder (See PR: Drhuffman12/cmn basic rnn part 6 #54)
error_distance related code into ErrorStats (See PR: drhuffman12/add_team_utils (part 1) #56 and https://github.com/crystal-lang/crystal/tree/master/.github/workflows)
ErrorStats (See PR: drhuffman12/add_team_utils (part 1) #56)
BreedParent (and *Manager classes)
Breeder(T) (See PR: drhuffman12/add_team_utils (part 1) #56)
MiniNetBreeder < Breeder(MiniNet) (See PR: Drhuffman12/add team utils part 2 #57)
ChainBreeder < Breeder(Chain)
RnnSimpleBreeder < Breeder(RnnSimple) (See PR: Drhuffman12/add team utils part 3 #58)
BackproagationBreeder < Breeder(Backproagation)
TeamUtils (See PR: Drhuffman12/add team utils part 4 #59)
spec_bench/ai4cr/neural_network/rnn/rnn_simple_manager_spec.cr re training a team of RNN nets on a text file.