
Background and Motivation

ML system papers targeting efficient training on heterogeneous clusters (clusters with different types of devices) are less studied than those targeting homogeneous clusters (clusters with the same type of devices). However, there is growing interest in this area. The motivations for using heterogeneous clusters in distributed training are:

  1. for data centers, the use of heterogeneous GPUs is inevitable due to the short release cycle of new GPU architectures
  2. for users, purchasing spot instances with a combination of available, cheap heterogeneous devices reduces both expense and the cost of failures (when one type of device is lost to out-bidding, i.e., the bidding price falls below the spot price, training can still continue on the other types of devices).

We have categorized the challenges brought by heterogeneous devices and the corresponding solutions (papers) in the following sections. If you have any papers to add, feel free to ping me ([email protected]).

Papers targeting inter-pipeline heterogeneity (each pipeline contains homogeneous devices, but different pipelines use different device types):

Main problem to solve: inter-pipeline heterogeneity leads to load imbalance.

Papers using batch distribution to balance the workload among pipelines
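
To illustrate the idea behind this category, here is a minimal sketch (not taken from any specific paper) of throughput-proportional batch distribution: each pipeline receives a share of the global batch proportional to its measured throughput, so faster pipelines do more work per step and all pipelines finish at roughly the same time. The function name and inputs are illustrative assumptions.

```python
def split_batch(global_batch_size, throughputs):
    """Split a global batch across pipelines proportionally to throughput.

    throughputs: measured samples/sec, one entry per pipeline (assumed profiled).
    """
    total = sum(throughputs)
    # Ideal (fractional) share per pipeline.
    shares = [global_batch_size * t / total for t in throughputs]
    sizes = [int(s) for s in shares]
    # Hand leftover samples to the pipelines with the largest fractional
    # remainders so the per-pipeline sizes sum back to the global batch size.
    remainder = global_batch_size - sum(sizes)
    order = sorted(range(len(shares)), key=lambda i: shares[i] - sizes[i], reverse=True)
    for i in order[:remainder]:
        sizes[i] += 1
    return sizes

# Example: one pipeline is roughly twice as fast as each of the other two.
print(split_batch(512, throughputs=[200.0, 100.0, 100.0]))  # -> [256, 128, 128]
```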

Papers using decentralized synchronization to improve overall throughput
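
As a rough illustration of this category (not the method of any particular paper), the sketch below shows gossip-style parameter averaging: instead of a global all-reduce that waits for the slowest device, each worker averages its parameters only with its neighbors on a communication graph, so fast and slow workers are loosely coupled. The graph, mixing weight, and function names are illustrative assumptions.

```python
import numpy as np

def gossip_step(params, neighbors, mixing_weight=0.5):
    """One round of neighbor averaging.

    params: list of per-worker parameter vectors (np.ndarray).
    neighbors: adjacency list; neighbors[i] are the peers worker i averages with.
    """
    new_params = []
    for i, p in enumerate(params):
        if neighbors[i]:
            neighbor_avg = np.mean([params[j] for j in neighbors[i]], axis=0)
            # Mix own parameters with the average of the neighbors' parameters.
            new_params.append((1 - mixing_weight) * p + mixing_weight * neighbor_avg)
        else:
            new_params.append(p.copy())
    return new_params

# Example: 4 workers on a ring; repeated gossip rounds drive all workers
# toward the global mean without any global synchronization barrier.
params = [np.full(3, float(i)) for i in range(4)]
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
for _ in range(10):
    params = gossip_step(params, ring)
print(params[0])  # close to the global mean [1.5, 1.5, 1.5]
```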

Papers targeting intra-pipeline heterogeneity (a single pipeline contains heterogeneous devices):

Main problem to solve: within a pipeline, the optimal layer-assignment problem on heterogeneous devices is NP-hard with respect to the number of device types.
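
To make the problem shape concrete, here is a minimal, hypothetical greedy heuristic (not the algorithm of any listed paper): split a sequence of layers into contiguous stages, giving each device a share of the total layer cost roughly proportional to its relative speed. Real systems typically use dynamic programming or ILP formulations instead; the function name and inputs are illustrative assumptions.

```python
def assign_layers(layer_costs, device_speeds):
    """Greedily partition layers into contiguous stages, one per device.

    layer_costs: per-layer compute cost; device_speeds: relative speed per device.
    Returns a list of (start, end) layer index ranges, one per device.
    """
    total_cost = sum(layer_costs)
    total_speed = sum(device_speeds)
    assignment, start = [], 0
    for d, speed in enumerate(device_speeds):
        # Target load for this device, proportional to its relative speed.
        target = total_cost * speed / total_speed
        load, end = 0.0, start
        while end < len(layer_costs):
            if d == len(device_speeds) - 1:
                # The last device absorbs all remaining layers.
                end += 1
                continue
            cost = layer_costs[end]
            # Take the next layer only if it brings this stage closer to its target.
            if abs(load + cost - target) <= abs(load - target):
                load += cost
                end += 1
            else:
                break
        assignment.append((start, end))
        start = end
    return assignment

# Example: 8 equal-cost layers over a fast GPU and a slow GPU (2:1 speed ratio).
print(assign_layers([1.0] * 8, device_speeds=[2.0, 1.0]))  # -> [(0, 5), (5, 8)]
```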

Other papers targeting heterogeneous clusters: