Flowformer (ICML 2022)

Flowformer: Linearizing Transformers with Conservation Flows

Transformers have achieved impressive success in various areas. However, the attention mechanism has quadratic complexity, which significantly impedes Transformers from handling long token sequences and scaling up to larger models. In pursuit of linear complexity and a task-universal foundation model, we propose Flowformer [paper](https://arxiv.org/pdf/2202.06258.pdf) with the following merits:

  • Linear complexity w.r.t. sequence length; can handle extremely long sequences (over 4k tokens)
  • No specific inductive bias; purely derived from flow network theory
  • Task-universal, showing strong performance in $\color{red}{\text{Long sequence, Vision, NLP, Time series, RL}}$.

Flow-Attention Design

We cast the attention mechanism into a flow network, where information flows from the sources (values) to the sinks (results) through learned flow capacities (attentions).

By enforcing conservation on both the source and sink sides, we bring competition into the Flow-Attention design and avoid trivial attention, in the spirit that "fixed resource will cause competition".
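
To make the design concrete, below is a minimal PyTorch sketch of non-causal Flow-Attention written directly from this description. It is an illustration under assumed tensor shapes and a sigmoid kernel, not a substitute for the reference implementation in Flow_Attention.py.

```python
import torch

def flow_attention(q, k, v, eps=1e-6):
    """Minimal sketch of non-causal Flow-Attention (illustrative only; see Flow_Attention.py for the reference code).

    q, k: (batch, heads, seq_len, dim); v: (batch, heads, seq_len, dim_v).
    A sigmoid kernel keeps all flow capacities non-negative.
    """
    q, k = torch.sigmoid(q), torch.sigmoid(k)
    # Incoming flow of each sink i:  I_i = phi(q_i) . sum_j phi(k_j)
    incoming = torch.einsum("bhld,bhd->bhl", q, k.sum(dim=2)) + eps
    # Outgoing flow of each source j: O_j = phi(k_j) . sum_i phi(q_i)
    outgoing = torch.einsum("bhsd,bhd->bhs", k, q.sum(dim=2)) + eps
    # Conservation: re-measure each side after normalizing the other side's flow to a fixed resource.
    conserved_sink = torch.einsum("bhld,bhd->bhl", q, (k / outgoing[..., None]).sum(dim=2))
    conserved_source = torch.einsum("bhsd,bhd->bhs", k, (q / incoming[..., None]).sum(dim=2))
    # Competition among sources (softmax, scaled by the number of sources) and allocation to sinks (sigmoid).
    competition = torch.softmax(conserved_source, dim=-1) * k.shape[2]
    allocation = torch.sigmoid(conserved_sink)
    # Linear-complexity aggregation: contract phi(K)^T with the re-weighted V first (a dim x dim_v matrix),
    # so the cost is O(seq_len * dim * dim_v) rather than the quadratic O(seq_len^2 * dim).
    kv = torch.einsum("bhsd,bhse->bhde", k, v * competition[..., None])
    out = torch.einsum("bhld,bhde->bhle", q / incoming[..., None], kv)
    return out * allocation[..., None]
```

The softmax over sources realizes competition under a fixed resource, the sigmoid gates how much of the aggregated flow each sink keeps, and contracting $\phi(K)^\top V$ first keeps the whole computation linear in sequence length.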



Figure 1. Flow-Attention with Competition and Allocation mechanisms.

Get Started

  1. Please refer to the corresponding folders for detailed experiment instructions.

    Note: Configuring the environments for the different tasks took us considerable effort. If you also run into environment problems, feel free to contact us and discuss them.

  2. List of benchmarks

  • Core code: see Flow_Attention.py (a toy usage example follows this list)
  • GPT-style Pytorch Module: see Flowformer_TorchModule
  • Long Sequence Modeling in LRA: see Flowformer_LRA
  • Vision Recognition in ImageNet-1K: see Flowformer_CV
  • Language Modeling in WikiText-103: see Flowformer_NLP
  • Time series classification in UEA: see Flowformer_TimeSeries
  • Reinforcement Learning in D4RL: see Flowformer_RL
  • CUDA speed-up version
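
As a quick smoke test of the sketch shown earlier (shapes are arbitrary and purely illustrative), the flow_attention function runs in linear time with respect to sequence length:

```python
import torch

# Arbitrary illustrative shapes: batch 2, 4 heads, 1024 tokens, head dim 32.
q = torch.randn(2, 4, 1024, 32)
k = torch.randn(2, 4, 1024, 32)
v = torch.randn(2, 4, 1024, 32)

out = flow_attention(q, k, v)  # `flow_attention` is the sketch from the Flow-Attention Design section
print(out.shape)               # torch.Size([2, 4, 1024, 32])
```

For the full multi-head layers, causal variants, and the CUDA speed-up, please use the modules in the folders listed above.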

Main Results

See the [paper] for detailed results, including comparisons with nearly 20 baselines.

| Task | Metric | Flowformer | Performer | Reformer | Vanilla Transformer |
|---|---|---|---|---|---|
| Long Sequence Modeling (LRA) | Avg Acc (%) $\uparrow$ | 56.48 | 51.41 | 50.67 | OOM |
| Vision Recognition (ImageNet-1K) | Top-1 Acc (%) $\uparrow$ | 80.6 | 78.1 | 79.6 | 78.7 |
| Language Modeling (WikiText-103) | Perplexity $\downarrow$ | 30.8 | 37.5 | 33.6 | 33.0 |
| Time Series Classification (UEA) | Avg Acc (%) $\uparrow$ | 73.0 | 71.5 | 71.9 | 71.9 |
| Offline RL (D4RL) | Avg Reward $\uparrow$ ($\pm$ Deviation $\downarrow$) | 73.5 $\pm$ 2.9 | 63.8 $\pm$ 7.6 | 63.9 $\pm$ 2.9 | 72.2 $\pm$ 2.6 |

In the RL row, Vanilla Transformer refers to the Decision Transformer.

Attention Visualization



Figure 2. Attention visualization. Flowformer successfully captures the essential parts.

Citation

If you find this repo useful, please cite our paper.

@inproceedings{wu2022flowformer,
  title={Flowformer: Linearizing Transformers with Conservation Flows},
  author={Haixu Wu and Jialong Wu and Jiehui Xu and Jianmin Wang and Mingsheng Long},
  booktitle={International Conference on Machine Learning},
  year={2022}
}

Contact

If you have any questions or want to use the code, please contact [email protected].
