Optimum Habana

πŸ€— Optimum Habana is the interface between the πŸ€— Transformers and Diffusers libraries and Habana's Gaudi processor (HPU). It provides a set of tools enabling easy model loading, training and inference on single- and multi-HPU settings for different downstream tasks. The list of officially validated models and tasks is available here. Users can try other models and tasks with only a few changes.

What is a Habana Processing Unit (HPU)?

HPUs offer fast model training and inference as well as a great price-performance ratio. Check out this blog post about BERT pre-training and this article benchmarking Habana Gaudi2 versus Nvidia A100 GPUs for concrete examples. If you are not familiar with HPUs and would like to know more about them, we recommend you take a look at our conceptual guide.

Install

To install the latest release of this package:

pip install optimum[habana]

To use DeepSpeed on HPUs, you also need to run the following command:

pip install git+https://github.com/HabanaAI/[email protected]
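
Multi-card DeepSpeed runs are then typically launched through the gaudi_spawn.py script shipped with this repository's examples. Here is a minimal sketch; the script name and flags are taken from those examples and may change between releases, and the training script and DeepSpeed config path are placeholders:

# Hypothetical launch: gaudi_spawn.py and its flags come from this
# repository's examples; your_training_script.py and the DeepSpeed JSON
# config path are placeholders to replace with your own.
python gaudi_spawn.py --world_size 8 --use_deepspeed \
    your_training_script.py --deepspeed path_to_ds_config.json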

Optimum Habana is a fast-moving project, and you may want to install it from source:

pip install git+https://github.com/huggingface/optimum-habana.git

Last but not least, don't forget to install the requirements for every example:

cd <example-folder>
pip install -r requirements.txt

How to use it?

Quick Start

πŸ€— Optimum Habana was designed with one goal in mind: to make training and inference straightforward for any πŸ€— Transformers and πŸ€— Diffusers user while leveraging the complete power of Gaudi processors.

Transformers Interface

There are two main classes one needs to know:

  • GaudiTrainer: the trainer class that takes care of compiling (lazy or eager mode) and distributing the model to run on HPUs, and performing training and evaluation.
  • GaudiConfig: the class that lets you configure Habana Mixed Precision and decide whether optimized operators and optimizers should be used.

The GaudiTrainer is very similar to the πŸ€— Transformers Trainer, and adapting a script that uses the Trainer to make it work with Gaudi mostly consists of swapping the Trainer class for the GaudiTrainer one. That is how most of the example scripts were adapted from their original counterparts.

Here is an example:

- from transformers import Trainer, TrainingArguments
+ from optimum.habana import GaudiConfig, GaudiTrainer, GaudiTrainingArguments

- training_args = TrainingArguments(
+ training_args = GaudiTrainingArguments(
  # training arguments...
+ use_habana=True,
+ use_lazy_mode=True,  # whether to use lazy or eager mode
+ gaudi_config_name=path_to_gaudi_config,
)

# A lot of code here

# Initialize our Trainer
- trainer = Trainer(
+ trainer = GaudiTrainer(
    model=model,
    args=training_args,  # Original training arguments.
    train_dataset=train_dataset if training_args.do_train else None,
    eval_dataset=eval_dataset if training_args.do_eval else None,
    compute_metrics=compute_metrics,
    tokenizer=tokenizer,
    data_collator=data_collator,
)

where gaudi_config_name is the name of a model from the Hub (Gaudi configurations are stored in model repositories) or a path to a local Gaudi configuration file (you can see here how to write your own).
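
If you would rather build and save a Gaudi configuration programmatically, here is a minimal sketch; the field names below are assumptions based on Gaudi configurations published on the Hub (e.g. Habana/bert-base-uncased), so check the documentation for the authoritative schema:

# Minimal sketch: field names are assumptions based on Gaudi configurations
# published on the Hub; check the documentation for the authoritative schema.
from optimum.habana import GaudiConfig

gaudi_config = GaudiConfig(
    use_habana_mixed_precision=True,  # train in bf16 with Habana Mixed Precision
    use_fused_adam=True,              # use Habana's fused AdamW implementation
    use_fused_clip_norm=True,         # use Habana's fused gradient-clipping op
)
# Writes gaudi_config.json to the given folder, whose path can then be
# passed as gaudi_config_name in GaudiTrainingArguments.
gaudi_config.save_pretrained("my_gaudi_config")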

Diffusers Interface

You can generate images from prompts using Stable Diffusion on Gaudi with the GaudiStableDiffusionPipeline class and the GaudiDDIMScheduler, both of which have been optimized for HPUs. Here is how to use them and how they differ from the πŸ€— Diffusers library:

- from diffusers import DDIMScheduler, StableDiffusionPipeline
+ from optimum.habana.diffusers import GaudiDDIMScheduler, GaudiStableDiffusionPipeline


model_name = "runwayml/stable-diffusion-v1-5"

- scheduler = DDIMScheduler.from_pretrained(model_name, subfolder="scheduler")
+ scheduler = GaudiDDIMScheduler.from_pretrained(model_name, subfolder="scheduler")

- pipeline = StableDiffusionPipeline.from_pretrained(
+ pipeline = GaudiStableDiffusionPipeline.from_pretrained(
    model_name,
    scheduler=scheduler,
+   use_habana=True,
+   use_hpu_graphs=True,
+   gaudi_config="Habana/stable-diffusion",
)

outputs = pipeline(
    ["An image of a squirrel in Picasso style"],
    num_images_per_prompt=16,
+   batch_size=4,
)
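
As in πŸ€— Diffusers, the generated images should then be accessible on the returned object. A short sketch, assuming the Gaudi pipeline output mirrors Diffusers' StableDiffusionPipelineOutput:

# Assuming the output mirrors Diffusers' StableDiffusionPipelineOutput,
# the generated PIL images live in the `images` attribute.
for i, image in enumerate(outputs.images):
    image.save(f"squirrel_{i}.png")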

Documentation

Check out the documentation of Optimum Habana for more advanced usage.

Validated Models

The following model architectures, tasks and device distributions have been validated for πŸ€— Optimum Habana:

| Architecture | Single Card | Multi Card | DeepSpeed | Tasks |
|:---|:---:|:---:|:---:|:---|
| BERT | βœ”οΈ | βœ”οΈ | βœ”οΈ | text classification, question answering, language modeling |
| RoBERTa | βœ”οΈ | βœ”οΈ | βœ”οΈ | question answering, language modeling |
| ALBERT | βœ”οΈ | βœ”οΈ | βœ”οΈ | question answering, language modeling |
| DistilBERT | βœ”οΈ | βœ”οΈ | βœ”οΈ | question answering, language modeling |
| GPT2 | βœ”οΈ | βœ”οΈ | βœ”οΈ | language modeling |
| T5 | βœ”οΈ | βœ”οΈ | βœ”οΈ | summarization, translation, question answering |
| ViT | βœ”οΈ | βœ”οΈ | βœ”οΈ | image classification |
| Swin | βœ”οΈ | βœ”οΈ | βœ”οΈ | image classification |
| Wav2Vec2 | βœ”οΈ | βœ”οΈ | βœ”οΈ | audio classification, speech recognition |
| Stable Diffusion | βœ”οΈ | βœ— | βœ— | text-to-image generation |
| CLIP | βœ”οΈ | βœ”οΈ | βœ”οΈ | contrastive image-text training |
| BLOOM(Z) | βœ— | βœ— | βœ”οΈ | text generation |
| StarCoder | βœ”οΈ | βœ— | βœ”οΈ | text generation |
| ESMFold | βœ”οΈ | βœ— | βœ— | protein folding |

Other models and tasks supported by the πŸ€— Transformers library may also work. You can refer to this section for using them with πŸ€— Optimum Habana. Besides, this page explains how to modify any example from the πŸ€— Transformers library to make it work with πŸ€— Optimum Habana.

If you find any issue while using those, please open an issue or a pull request.

Gaudi Setup

Please refer to Habana Gaudi's official installation guide.

Tests should be run in a Docker container based on Habana Docker images.

The current version has been validated for SynapseAI 1.9.
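
A quick way to verify that the container can see the HPUs is through Habana's PyTorch bridge. A minimal sketch, assuming the habana_frameworks package exposes these helpers in your SynapseAI version:

# Sanity check: module path and helpers come from Habana's PyTorch bridge;
# verify them against your installed SynapseAI version.
import habana_frameworks.torch.hpu as hthpu

print(hthpu.is_available())   # True if at least one HPU is visible
print(hthpu.device_count())   # number of HPUs on this machine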

Development

Check the contributor guide for instructions.
