Commit 2f316b0 ("doc")
Lingjun Liu committed Sep 14, 2019
2 parents 80c985c + ca882d6
Showing 10 changed files with 351 additions and 157 deletions.
8 changes: 4 additions & 4 deletions .travis.yml
@@ -26,7 +26,7 @@ env:
  # Backward compatibility is ensured for releases less than 1 year old.
# https://pypi.org/project/tensorflow/#history
matrix:
- _TF_VERSION=2.0.0b1
- _TF_VERSION=2.0.0-rc1
# - _TF_VERSION=1.12.0 # Remove on Oct 22, 2019
# - _TF_VERSION=1.11.0 # Remove on Sep 28, 2019
# - _TF_VERSION=1.10.1 # Remove on Aug 24, 2019
@@ -63,7 +63,7 @@ matrix:
install:
- |
if [[ -v _DOC_AND_YAPF_TEST ]]; then
pip install tensorflow==2.0.0b1
pip install tensorflow==2.0.0-rc1
pip install yapf
pip install -e .[doc]
else
@@ -101,7 +101,7 @@ deploy:
on:
tags: true
python: '3.6'
condition: '$_TF_VERSION = 2.0.0b1'
condition: '$_TF_VERSION = 2.0.0-rc1'
# condition: '$_TF_VERSION = 1.11.0'

# Documentation: https://docs.travis-ci.com/user/deployment/releases/
@@ -115,5 +115,5 @@ deploy:
on:
tags: true
python: '3.6'
condition: '$_TF_VERSION = 2.0.0b1'
condition: '$_TF_VERSION = 2.0.0-rc1'
# condition: '$_TF_VERSION = 1.11.0'
20 changes: 13 additions & 7 deletions CHANGELOG.md
@@ -79,7 +79,6 @@ To release a new version, please update the changelog as follows:
### Deprecated

### Fixed
- RNN updates: remove warnings, fix the seq_len=0 case, update unit tests (#PR 1033)

### Removed

@@ -88,21 +87,30 @@ To release a new version, please update the changelog as follows:
### Contributors


## [2.2.1]
## [2.2.0] - 2019-09-13

TensorLayer 2.2.0 is a maintenance release.
It contains numerous API improvements and bug fixes.
This release is compatible with TensorFlow 2 RC1.

### Added
- Support nested layer customization (#PR 1015)
- Support string dtype in InputLayer (#PR 1017)
- Support Dynamic RNN in RNN (#PR 1023)
- Add ResNet50 static model (#PR 1030); see the sketch after this list
- Add Transformer model (#PR 1027)
- Add performance test code in static model (#PR 1041)
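A minimal, hedged sketch (not part of the release notes) of the new ResNet50 static model from #PR 1030; the `pretrained` argument and the call pattern are assumed to follow the usual TensorLayer 2.x model API.

```python
# Sketch only: assumes tl.models.ResNet50 (#PR 1030) follows the standard
# TensorLayer 2.x static-model API; argument names are illustrative.
import numpy as np
import tensorflow as tf
import tensorlayer as tl

resnet = tl.models.ResNet50(pretrained=False)  # pretrained=False: skip downloading weights
resnet.eval()                                  # inference mode before calling the model directly

fake_images = tf.convert_to_tensor(np.random.rand(1, 224, 224, 3).astype(np.float32))
logits = resnet(fake_images)
print(logits.shape)                            # expected (1, 1000) for the default ImageNet head
```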

### Changed

- `SpatialTransform2dAffine` now determines `in_channels` automatically
- support TensorFlow 2.0.0-beta1
- support TensorFlow 2.0.0-rc1
- Update model weights property, now returns its copy (#PR 1010)

### Fixed
- RNN updates: remove warnings, fix the seq_len=0 case, update unit tests (#PR 1033)
- BN updates: fix BatchNorm1d for 2D data, refactor the implementation (#PR 1040); see the sketch after this list
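For the BatchNorm1d item, here is a hedged sketch (not from the release notes) of the 2D case that #PR 1040 targets: normalizing the (batch, features) output of a Dense layer. It assumes the standard TensorLayer 2.x static-model composition; layer names and sizes are illustrative.

```python
# Sketch only: BatchNorm1d applied to 2D (batch, features) data, the case fixed in #PR 1040.
import tensorflow as tf
import tensorlayer as tl

ni = tl.layers.Input([None, 32], name="bn_input")
nn = tl.layers.Dense(n_units=64, name="dense")(ni)          # output shape: (batch, 64)
nn = tl.layers.BatchNorm1d(act=tf.nn.relu, name="bn")(nn)   # 2D input to BatchNorm1d
net = tl.models.Model(inputs=ni, outputs=nn, name="bn_demo")

net.eval()
out = net(tf.random.uniform([16, 32]))
print(out.shape)                                            # expected (16, 64)
```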

### Dependencies Update

### Deprecated
@@ -116,6 +124,7 @@ To release a new version, please update the changelog as follows:
- Copy original model's `trainable_weights` and `nontrainable_weights` when initializing `LayerList` (#PR 1029)
- Remove redundant parts in `model.all_layers` (#PR 1029)
- Replace `tf.image.resize_image_with_crop_or_pad` with `tf.image.resize_with_crop_or_pad` (#PR 1032); see the example after this list
- Fix a bug in `ResNet50` static model (#PR 1041)
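A small illustration (not part of the release notes) of the renamed call above: `tf.image.resize_with_crop_or_pad` is the TensorFlow 2.x name for the older `tf.image.resize_image_with_crop_or_pad`.

```python
# Center-crop or zero-pad a batch of images to a fixed spatial size.
import tensorflow as tf

images = tf.random.uniform([4, 300, 400, 3])                # NHWC batch of four images
fixed = tf.image.resize_with_crop_or_pad(images, 224, 224)  # crops here; pads when inputs are smaller
print(fixed.shape)                                          # (4, 224, 224, 3)
```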

### Removed

@@ -199,15 +208,12 @@ A maintenance release.
- @warshallrho: #PR966
- @zsdonghao: #931
- @yd-yin: #963
- @Tokarev-TT-33: #995
- @initial-h: #995
- @quantumiracle: #995
- @Officium: #995
- @1FengL: #958
- @dvklopfenstein: #971


## [2.0.0] - 2019-05-04
@@ -560,7 +566,7 @@ Too many PRs for this update, please check [here](https://github.com/tensorlayer/t
@zsdonghao @luomai @DEKHTIARJonathan

[Unreleased]: https://github.com/tensorlayer/tensorlayer/compare/2.0....master
[2.1.1]: https://github.com/tensorlayer/tensorlayer/compare/2.1.1...2.1.1
[2.2.0]: https://github.com/tensorlayer/tensorlayer/compare/2.2.0...2.2.0
[2.1.0]: https://github.com/tensorlayer/tensorlayer/compare/2.1.0...2.1.0
[2.0.2]: https://github.com/tensorlayer/tensorlayer/compare/2.0.2...2.0.2
[2.0.1]: https://github.com/tensorlayer/tensorlayer/compare/2.0.1...2.0.1
111 changes: 45 additions & 66 deletions README.md
@@ -34,45 +34,43 @@

<br/>

TensorLayer is a novel TensorFlow-based deep learning and reinforcement learning library designed for researchers and engineers. It provides a large collection of customizable neural layers / functions that are key to building real-world AI applications. TensorLayer won the 2017 Best Open Source Software award from the [ACM Multimedia Society](https://twitter.com/ImperialDSI/status/923928895325442049).
TensorLayer is a novel TensorFlow-based deep learning and reinforcement learning library designed for researchers and engineers. It provides an extensive collection of customizable neural layers to build complex AI models. TensorLayer won the 2017 Best Open Source Software award from the [ACM Multimedia Society](https://twitter.com/ImperialDSI/status/923928895325442049).
TensorLayer can also be found at [iHub](https://code.ihub.org.cn/projects/328) and [Gitee](https://gitee.com/organizations/TensorLayer).

# News

🔥📰🔥 Reinforcement Learning Model Zoos: [Low-level APIs for Research](https://github.com/tensorlayer/tensorlayer/tree/master/examples/reinforcement_learning) and [High-level APIs for Production](https://github.com/tensorlayer/RLzoo)

🔥📰🔥 [Sipeed Maix-EMC](https://github.com/sipeed/Maix-EMC): Run TensorLayer models on the **low-cost AI chip** (e.g., K210) (Alpha Version)

🔥📰🔥 [NNoM](https://github.com/majianjia/nnom): Run TensorLayer quantized models on the **MCU** (e.g., STM32) (Coming Soon)


# Features

As deep learning practitioners, we have been looking for a library that can serve various development
purposes. This library is easy to adopt, providing diverse examples, tutorials and pre-trained models.
It also lets users easily fine-tune TensorFlow, while being suitable for production deployment. TensorLayer aims to satisfy all these purposes. It has three key features:
TensorLayer is a new deep learning library designed with simplicity, flexibility and high performance in mind.

- ***Simplicity*** : TensorLayer lifts the low-level dataflow interface of TensorFlow to *high-level* layers / models. It is very easy to learn through the rich [example code](https://github.com/tensorlayer/awesome-tensorlayer) contributed by a wide community.
- ***Flexibility*** : TensorLayer APIs are transparent: they do not mask TensorFlow from users, but leave massive hooks that support *low-level tuning* and *deep customization*.
- ***Zero-cost Abstraction*** : TensorLayer can achieve the *full power* of TensorFlow. The following table shows the training speeds of [VGG16](http://www.robots.ox.ac.uk/~vgg/research/very_deep/) using TensorLayer and native TensorFlow on a TITAN Xp.
- ***Simplicity*** : TensorLayer has a high-level layer/model abstraction which is effortless to learn. You can learn how deep learning can benefit your AI tasks in minutes through the massive [examples](https://github.com/tensorlayer/awesome-tensorlayer).
- ***Flexibility*** : TensorLayer APIs are transparent and flexible, inspired by the emerging PyTorch library. Compared to the Keras abstraction, TensorLayer makes it much easier to build and train complex AI models.
- ***Zero-cost Abstraction*** : Though simple to use, TensorLayer does not require you to make any compromise in the performance of TensorFlow (Check the following benchmark section for more details).

| Mode      | Lib             | Data Format  | Max GPU Memory Usage (MB) | Max CPU Memory Usage (MB) | Avg CPU Memory Usage (MB) | Runtime (sec) |
| :-------: | :-------------: | :----------: | :-----------------------: | :-----------------------: | :-----------------------: | :-----------: |
| AutoGraph | TensorFlow 2.0  | channel last | 11833                     | 2161                      | 2136                      | 74            |
|           | TensorLayer 2.0 | channel last | 11833                     | 2187                      | 2169                      | 76            |
| Graph     | Keras           | channel last | 8677                      | 2580                      | 2576                      | 101           |
| Eager     | TensorFlow 2.0  | channel last | 8723                      | 2052                      | 2024                      | 97            |
|           | TensorLayer 2.0 | channel last | 8723                      | 2010                      | 2007                      | 95            |
TensorLayer is NOT yet another library in the TensorFlow world. Other wrappers like Keras and TFLearn
hide many powerful features of TensorFlow and provide little support for writing custom, complex AI models. Inspired by PyTorch, TensorLayer APIs are simple, flexible and most importantly, pythonic.
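As a concrete taste of this layer/model abstraction, here is a minimal sketch of the static-model style in TensorLayer 2.x; the layer sizes and names are illustrative only.

```python
import tensorflow as tf
import tensorlayer as tl

# Wire layers functionally, then wrap them into a Model.
ni = tl.layers.Input([None, 784], name="input")
nn = tl.layers.Dense(n_units=256, act=tf.nn.relu, name="dense1")(ni)
nn = tl.layers.Dense(n_units=10, name="dense2")(nn)
net = tl.models.Model(inputs=ni, outputs=nn, name="mlp")

net.eval()                                  # inference mode; net.train() switches back to training behaviour
logits = net(tf.random.uniform([32, 784]))  # direct eager forward pass, no session needed
print(logits.shape)                         # (32, 10)
```

Because the model is an ordinary Python object, its layers and weights stay directly accessible for low-level tuning.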
TensorLayer has a fast-growing community. It has been used by researchers and engineers all over the world, including those from Peking University,
Imperial College London, UC Berkeley, Carnegie Mellon University, Stanford University, and companies like Google, Microsoft, Alibaba, Tencent, Xiaomi, and Bloomberg.

# Multilingual documents

TensorLayer stands at a unique spot in the library landscape. Other wrapper libraries like Keras and TFLearn also provide high-level abstractions. They, however, often
hide the underlying engine from users, which makes them hard to customize
and fine-tune. In contrast, TensorLayer APIs are generally lightweight, flexible and transparent.
Users often find it easy to start with the examples and tutorials, and then dive
into TensorFlow seamlessly. In addition, TensorLayer does not create library lock-in, thanks to native support for importing components from Keras.
TensorLayer has extensive documentation for both beginners and professionals. The documentation is available in
both English and Chinese.

TensorLayer has a fast-growing user base among top researchers and engineers, from universities like Peking University,
Imperial College London, UC Berkeley, Carnegie Mellon University, Stanford University, and
University of Technology of Compiegne (UTC), and companies like Google, Microsoft, Alibaba, Tencent, Xiaomi, and Bloomberg.
[![English Documentation](https://img.shields.io/badge/documentation-english-blue.svg)](https://tensorlayer.readthedocs.io/)
[![Chinese Documentation](https://img.shields.io/badge/documentation-%E4%B8%AD%E6%96%87-blue.svg)](https://tensorlayercn.readthedocs.io/)
[![Chinese Book](https://img.shields.io/badge/book-%E4%B8%AD%E6%96%87-blue.svg)](http://www.broadview.com.cn/book/5059/)

If you want to try the experimental features on the master branch, you can find the latest documentation
[here](https://tensorlayer.readthedocs.io/en/latest/).

# Tutorials and Real-World Applications
# Extensive examples

You can find a large collection of tutorials, examples and real-world applications using TensorLayer within [examples](examples/) or through the following space:

@@ -82,73 +80,42 @@ You can find a large collection of tutorials, examples and real-world applications
</div>
</a>

# Documentation

TensorLayer has extensive documentation for both beginners and professionals. The documentation is available in
both English and Chinese. Please click the following icons to find the documents you need:

[![English Documentation](https://img.shields.io/badge/documentation-english-blue.svg)](https://tensorlayer.readthedocs.io/)
[![Chinese Documentation](https://img.shields.io/badge/documentation-%E4%B8%AD%E6%96%87-blue.svg)](https://tensorlayercn.readthedocs.io/)
[![Chinese Book](https://img.shields.io/badge/book-%E4%B8%AD%E6%96%87-blue.svg)](http://www.broadview.com.cn/book/5059/)

If you want to try the experimental features on the master branch, you can find the latest documentation
[here](https://tensorlayer.readthedocs.io/en/latest/).

# Install
# Installing TensorLayer is easy

For the latest code of TensorLayer 2.0, please build from source. TensorLayer 2.0 has prerequisites including TensorFlow 2, numpy, and others. For GPU support, CUDA and cuDNN are required.
TensorLayer 2.0 relies on TensorFlow, numpy, and others. To use GPUs, CUDA and cuDNN are required.

Install TensorFlow:

```bash
pip3 install tensorflow-gpu==2.0.0-beta1 # specific version (YOU SHOULD INSTALL THIS ONE NOW)
pip3 install tensorflow-gpu # GPU version
pip3 install tensorflow-gpu==2.0.0-rc1 # TensorFlow GPU (version 2.0 RC1)
pip3 install tensorflow # CPU version
```

Install the stable version of TensorLayer:
Install the stable release of TensorLayer:

```bash
pip3 install tensorlayer
```

Install the latest version of TensorLayer:
Install the unstable development version of TensorLayer:

```bash
pip3 install git+https://github.com/tensorlayer/tensorlayer.git
# or
pip3 install https://github.com/tensorlayer/tensorlayer/archive/master.zip
```
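Whichever of the commands above you use, a quick optional sanity check is to print the versions Python actually picks up (this assumes the usual `__version__` attributes):

```python
# Confirm which TensorFlow / TensorLayer versions are installed.
import tensorflow as tf
import tensorlayer as tl

print(tf.__version__, tl.__version__)  # e.g. a 2.0.0-rc1 / 2.2.x pair, depending on the pins above
```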

For developers, clone the repository to your local machine and put it alongside your project scripts:

```bash
git clone https://github.com/tensorlayer/tensorlayer.git
```

If you want to install TensorLayer 1.X, the simplest way is to use the **Py**thon **P**ackage **I**ndex (PyPI):

```bash
# for last stable version of TensorLayer 1.X
pip3 install --upgrade tensorlayer==1.X

# for latest release candidate of TensorLayer 1.X
pip3 install --upgrade --pre tensorlayer

# if you want to install the additional dependencies, you can also run
pip3 install --upgrade tensorlayer[all] # all additional dependencies
pip3 install --upgrade tensorlayer[extra] # only the `extra` dependencies
pip3 install --upgrade tensorlayer[contrib_loggers] # only the `contrib_loggers` dependencies
```
<!---
Alternatively, you can install the latest or development version by directly pulling from github:

If you are a TensorFlow 1.X user, you can use TensorLayer 1.X:

```bash
pip3 install https://github.com/tensorlayer/tensorlayer/archive/master.zip
# or
# pip3 install https://github.com/tensorlayer/tensorlayer/archive/<branch-name>.zip
# For last stable version of TensorLayer 1.X
pip3 install --upgrade tensorlayer==1.X
```
--->

<!---
## Using Docker
Expand Down Expand Up @@ -182,6 +149,18 @@ nvidia-docker run -it --rm -p 8888:8888 -p 6006:6006 -e PASSWORD=JUPYTER_NB_PASS
```
--->

# Benchmark

The following table shows the training speeds of [VGG16](http://www.robots.ox.ac.uk/~vgg/research/very_deep/) using TensorLayer and native TensorFlow on a TITAN Xp.

| Mode      | Lib             | Data Format  | Max GPU Memory Usage (MB) | Max CPU Memory Usage (MB) | Avg CPU Memory Usage (MB) | Runtime (sec) |
| :-------: | :-------------: | :----------: | :-----------------------: | :-----------------------: | :-----------------------: | :-----------: |
| AutoGraph | TensorFlow 2.0  | channel last | 11833                     | 2161                      | 2136                      | 74            |
|           | TensorLayer 2.0 | channel last | 11833                     | 2187                      | 2169                      | 76            |
| Graph     | Keras           | channel last | 8677                      | 2580                      | 2576                      | 101           |
| Eager     | TensorFlow 2.0  | channel last | 8723                      | 2052                      | 2024                      | 97            |
|           | TensorLayer 2.0 | channel last | 8723                      | 2010                      | 2007                      | 95            |

# Contribute

Please read the [Contributor Guideline](CONTRIBUTING.md) before submitting your PRs.
@@ -201,4 +180,4 @@ If you use TensorLayer for any projects, please cite this paper:

# License

TensorLayer is released under the Apache 2.0 license. We also host TensorLayer on [iHub](https://code.ihub.org.cn/projects/328) and [Gitee](https://gitee.com/organizations/TensorLayer).
TensorLayer is released under the Apache 2.0 license.
4 changes: 4 additions & 0 deletions tensorlayer/files/utils.py
@@ -2666,6 +2666,10 @@ def _load_weights_from_hdf5_group(f, layers, skip=False):
elif isinstance(layer, tl.layers.Layer):
weight_names = [n.decode('utf8') for n in g.attrs['weight_names']]
for iid, w_name in enumerate(weight_names):
# FIXME : this is only for compatibility
if isinstance(layer, tl.layers.BatchNorm) and np.asarray(g[w_name]).ndim > 1:
assign_tf_variable(layer.all_weights[iid], np.asarray(g[w_name]).squeeze())
continue
assign_tf_variable(layer.all_weights[iid], np.asarray(g[w_name]))
else:
raise Exception("Only layer or model can be saved into hdf5.")
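The new branch above only fires for `BatchNorm` layers whose stored array has more than one dimension. Below is a standalone sketch of the shape fix it performs; the exact legacy on-disk layout is an assumption, the point is only that `squeeze` drops singleton axes before assignment.

```python
# Illustrates the compatibility squeeze applied to legacy BatchNorm weights.
import numpy as np

saved = np.ones((1, 1, 1, 64), dtype=np.float32)   # assumed legacy layout with singleton axes
target_shape = (64,)                               # shape of the current BatchNorm variable

restored = saved.squeeze() if saved.ndim > 1 else saved
assert restored.shape == target_shape              # now safe to assign to layer.all_weights[iid]
```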
