TensorLayer 1.8.0
We recommend that users update and report any bugs or issues.
Features
- Experimental support for the Command-Line Interface (CLI) module. (by @luomai @lgarithm)
- Support the tl train CLI command, which can bootstrap a GPU/CPU parallel training job.
- Use logging instead of print() to output logs (by @luomai @lgarithm); see the sketch after this list.
- Update the dropout implementation of RNN layers. (by @nebulaV)
- Layers support slicing and iteration:
>>> x = tf.placeholder("float32", [None, 100])
>>> n = tl.layers.InputLayer(x, name='in')
>>> n = tl.layers.DenseLayer(n, 80, name='d1')
>>> n = tl.layers.DenseLayer(n, 80, name='d2')
>>> print(n)
... Last layer is: DenseLayer (d2) [None, 80]
The outputs can be sliced as follows:
>>> n2 = n[:, :30]
>>> print(n2)
... Last layer is: Layer (d2) [None, 30]
The outputs of all layers can be iterated as follows:
>>> for l in n:
>>>     print(l)
... Tensor("d1/Identity:0", shape=(?, 80), dtype=float32)
... Tensor("d2/Identity:0", shape=(?, 80), dtype=float32)
APIs
- Simplify DeformableConv2dLayer into DeformableConv2d (by @zsdonghao)
- Merge tl.ops into tl.utils (by @luomai)
- DeConv2d no longer requires out_size for TensorFlow 1.3+ (by @zsdonghao)
- ElementwiseLayer supports activation (by @zsdonghao); see the sketch after this list
- DepthwiseConv2d supports rate (dilation) 91e5824 (by @zsdonghao)
- Add GroupConv2d #363 6ee4bca (by @Windaway)
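A minimal sketch of ElementwiseLayer with an activation, assuming the TL 1.x API; the act keyword is an assumption based on this changelog entry rather than documented usage:
>>> import tensorflow as tf
>>> import tensorlayer as tl
>>> x = tf.placeholder(tf.float32, [None, 100])
>>> # two parallel branches over the same input
>>> n1 = tl.layers.DenseLayer(tl.layers.InputLayer(x, name='in1'), 50, name='b1')
>>> n2 = tl.layers.DenseLayer(tl.layers.InputLayer(x, name='in2'), 50, name='b2')
>>> # combine element-wise, then apply the newly supported activation (act is assumed)
>>> n = tl.layers.ElementwiseLayer([n1, n2], combine_fn=tf.add, act=tf.nn.relu, name='combine')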
Others
- Address codebase issues suggested by Codacy. (by @luomai @zsdonghao @lgarithm)
- Optimize the layers folder structure. (by @zsdonghao @luomai)
- Many documentation fixes and improvements (by @zsdonghao @luomai @lgarithm)
- Add a mini contribution guide in 5 lines. (by @lgarithm @luomai)
- Set up many CI tests. (by @lgarithm @luomai)