- Complete and fix some issues with current dataset
  1. Need to add dividends, lctq, loq, and capital expenditures
  2. Momentum calcs are the wrong size (i.e., 6mo is actually 5mo); see the
     sketch below
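  A minimal sketch of the intended fix for item 2, assuming the price history
  is a monthly pandas Series (the actual dataset layout may differ); the point
  is that a 6mo momentum should look back 6 rows, not 5:

    import pandas as pd

    def momentum(prices, months):
        # prices: pd.Series of monthly prices (hypothetical layout)
        # look back `months` rows, not months - 1, so 6mo really spans 6 months
        return prices / prices.shift(months) - 1.0

    # mom_6mo = momentum(monthly_close, 6)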
- Support multi-class output
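  A rough sketch of what a multi-class output head could look like in the TF
  graph; the function, tensor names, and shapes are placeholders, not the
  repo's actual API:

    import tensorflow as tf

    def multiclass_head(last_hidden, num_hidden, num_classes, targets):
        # last_hidden: final hidden-layer activation [batch_size, num_hidden]
        # targets: one-hot labels [batch_size, num_classes]
        w = tf.Variable(tf.truncated_normal([num_hidden, num_classes], stddev=0.1))
        b = tf.Variable(tf.zeros([num_classes]))
        logits = tf.matmul(last_hidden, w) + b
        loss = tf.reduce_mean(
            tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=targets))
        predictions = tf.argmax(logits, 1)  # predicted class index per example
        return loss, predictions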
- Allow hidden layers to have different sizes in both the MLP and RNN models.
  Right now all hidden layers of a NN have a fixed size given by --num_hidden
  (see the sketch below).
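  A minimal sketch of one way to do this, assuming a comma-separated flag
  (e.g. --hidden_sizes 256,128,64) replaces --num_hidden; the flag name and
  builder function are hypothetical:

    import tensorflow as tf

    def build_mlp(inputs, num_inputs, hidden_sizes):
        # hidden_sizes: e.g. [int(s) for s in args.hidden_sizes.split(',')]
        layer, prev_size = inputs, num_inputs
        for i, size in enumerate(hidden_sizes):
            w = tf.Variable(tf.truncated_normal([prev_size, size], stddev=0.1),
                            name="w%d" % i)
            b = tf.Variable(tf.zeros([size]), name="b%d" % i)
            layer = tf.nn.relu(tf.matmul(layer, w) + b)
            prev_size = size
        return layer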
- Change keep_prob for dropout to a tf placeholder so that we don't need two
  separate model objects which share weights for training and validation
  (see the sketch below).
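  A minimal sketch of the placeholder approach; the tensor names and shapes
  here are stand-ins for whatever activation the existing model applies
  dropout to:

    import tensorflow as tf

    # Stand-ins for the model's own tensors (hypothetical shapes/names)
    hidden = tf.placeholder(tf.float32, shape=[None, 128], name="hidden")
    keep_prob = tf.placeholder(tf.float32, shape=[], name="keep_prob")
    dropped = tf.nn.dropout(hidden, keep_prob)

    # One graph (one set of weights) covers both phases: feed the configured
    # rate while training and 1.0 during validation.
    # sess.run(train_op, feed_dict={keep_prob: 0.5, ...})    # training
    # sess.run(valid_loss, feed_dict={keep_prob: 1.0, ...})  # validation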
- Allow RNN model step function to exclude the first num_unrollings
from the loss function calculation
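  One possible way to implement the exclusion, assuming the model already has
  a per-step loss tensor of shape [num_unrollings, batch_size]; the names and
  layout here are assumptions, not the actual step-function API:

    import tensorflow as tf

    def masked_mean_loss(step_losses, num_unrollings, batch_size, warmup):
        # step_losses: [num_unrollings, batch_size]; the first `warmup` steps
        # are zeroed out so they don't contribute to the mean loss.
        mask = tf.constant([0.0] * warmup + [1.0] * (num_unrollings - warmup))
        mask = tf.reshape(mask, [num_unrollings, 1])  # broadcast over the batch
        total = tf.reduce_sum(step_losses * mask)
        return total / float((num_unrollings - warmup) * batch_size)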
- Redo configs.py so that it is not such a kludge, and give the user the
  ability to make a command line arg mandatory (see the argparse sketch below).
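  If configs.py were rebuilt on top of argparse (just one option; the flag
  names here are made up), mandatory args come for free via required=True:

    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument('--datafile', required=True,
                        help='training data file (user must supply this)')
    parser.add_argument('--num_hidden', type=int, default=128)
    args = parser.parse_args()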
- Make the training global step counter pick up where the last save left off
  instead of starting over at epoch #1 (see the sketch below).
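  A minimal sketch of the usual TF pattern, assuming the optimizer increments
  a global_step variable; the checkpoint directory is hypothetical:

    import tensorflow as tf

    # Keeping the step in a non-trainable variable means the Saver restores it,
    # so training resumes from the saved count instead of step/epoch 1.
    global_step = tf.Variable(0, trainable=False, name="global_step")
    # train_op = optimizer.minimize(loss, global_step=global_step)

    saver = tf.train.Saver()
    sess = tf.Session()
    sess.run(tf.initialize_all_variables())            # init op name in r0.10
    ckpt = tf.train.latest_checkpoint("./chkpts-dir")  # hypothetical directory
    if ckpt is not None:
        saver.restore(sess, ckpt)                      # also restores global_step
    start_step = sess.run(global_step)  # 0 on a fresh run, saved value otherwise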
- (DONE: These features were added to classify_examples.py)
  Create python script that takes predictions and targets and calculates
  performance metrics on out-of-sample data by date (this is done by
  time_plot.R, but it would be nice to have something that doesn't
  depend on installing R)
- (DONE) Test different size batch configurations; how to handle a large
  batch size x unrollings that produces zero steps? Use raise RuntimeError(msg)
  to handle the bad config (see the sketch below).
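  A sketch of the kind of check this implies; the function and argument names
  are illustrative, not the repo's actual validation code:

    def check_batching(num_examples, batch_size, num_unrollings):
        # Fail fast instead of silently running zero training steps per epoch.
        steps_per_epoch = num_examples // (batch_size * num_unrollings)
        if steps_per_epoch == 0:
            msg = ("batch_size=%d x num_unrollings=%d needs more than the %d "
                   "available examples; no training steps would run" %
                   (batch_size, num_unrollings, num_examples))
            raise RuntimeError(msg)
        return steps_per_epoch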
- (DONE) Test for bad configs, such as a training file that doesn't exist
- (DONE) Report network geometry in verbose mode before training begins
- (DONE) Force secret key to be input at stdin in download_data.py
- (DONE) Make the checkpoint directory if it doesn't exist.
- (DONE) Confirm that code works under tensorflow version r0.10
- (DONE) Create test suite that doesn't depend on proprietary data.
- (DONE) Create a sample experiment that doesn't depend on proprietary data
- (DONE) Implement non-rnn (simple multi-layer perceptron) model.
  Maybe a new class defined in deep_mlp_model.py