# Project Status
This is just a note to document that I'm getting close to a first "official" release
for the package.
## Things that are working
- Single TPU Core mode for CV (use `Learner.to_xla()` to set single TPU core mode; see the sketch after this list)
  - Batch transforms that use affine transforms execute on the CPU.
  - Everything works (more or less).
  - Note that once single TPU core mode is set, it cannot be switched back to multi TPU core mode (because the `xla_device` is set on the main process) unless you restart the kernel.
- Multiple TPU Core mode for CV (use the `Learner.xla_*` methods; see the sketch after this list)
  - Batch training methods have `xla_` equivalents, e.g. `xla_fit`.
  - LR Find (`xla_lr_find`) works, though it is somewhat buggy.
  - Batch transforms work, but they all run on the CPU, so they are slower than on a Tesla P100 GPU.
  - When affine batch transforms (zoom, rotate, resize, warp) are not used, training is much faster than on a K80 or P100 GPU (approx. 1.4x faster) on CIFAR and MNIST.
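A minimal sketch of single TPU core mode, based only on the `Learner.to_xla()` call mentioned above. The import path, the dataset, and the use of the regular `fit_one_cycle` afterwards are assumptions for illustration, not a confirmed API:

```python
# Sketch only: assumes a TPU runtime with torch_xla installed and the package
# importable as fastai_xla_extensions (name/path assumed -- check the README).
from fastai.vision.all import *
import fastai_xla_extensions.all  # assumed import that patches Learner with to_xla()

path = untar_data(URLs.MNIST_TINY)
dls = ImageDataLoaders.from_folder(path)
learn = cnn_learner(dls, resnet18, metrics=accuracy)

learn.to_xla()           # switch this Learner to single TPU core mode
learn.fit_one_cycle(1)   # training now runs on the single XLA device (assumed usage)
```

As noted above, once `to_xla()` is called the kernel has to be restarted to get back to multi TPU core mode.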
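A similar sketch for multi TPU core mode using the `xla_*` methods named above; the argument list of `xla_fit` is assumed to mirror `fit`, and the data/model setup is again illustrative:

```python
# Sketch only: multi TPU core mode via the xla_* Learner methods.
from fastai.vision.all import *
import fastai_xla_extensions.all  # assumed import that adds the xla_* methods

path = untar_data(URLs.CIFAR)
dls = ImageDataLoaders.from_folder(path, valid='test')
learn = cnn_learner(dls, resnet18, metrics=accuracy)

learn.xla_lr_find()        # LR Find across TPU cores (noted above as somewhat buggy)
learn.xla_fit(5, lr=2e-3)  # spawns one process per TPU core and trains on all of them
                           # (argument names assumed to match fit)
```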
## Things to be done
- Tabular and Collab may not work yet.
- For NLP:
  - AWD LSTM still has a bug (I'm not exactly sure whether AWD LSTM has a working torch XLA implementation, much less a pretrained fastai model).
## Plans
- Optimize the performance of the fastai dataloaders in multi TPU core mode (especially for larger image sizes).
- Try out alternative ways to sync actions across the spawned processes (e.g. cancel batch, etc.).
- Update the smoothed loss metric calculation on a per-batch basis (averaging across all ranks; see the sketch after this list).
- Test out the package on Kaggle.
- Update the documentation on how to use the multi TPU core mode.
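For the smoothed loss item above, here is a minimal sketch of what averaging a per-batch value across ranks could look like with `torch_xla`. This is not the package's actual code; the callback and attribute names are hypothetical:

```python
import torch_xla.core.xla_model as xm

def cross_rank_mean(value: float, tag: str = 'smooth_loss') -> float:
    # mesh_reduce gathers the Python scalar from every spawned process (rank)
    # and applies the reduction function to the list of gathered values.
    return xm.mesh_reduce(tag, value, lambda vals: sum(vals) / len(vals))

# Hypothetical use inside a fastai Callback:
# class SyncSmoothLoss(Callback):
#     def after_batch(self):
#         self.learn.smooth_loss = cross_rank_mean(self.smooth_loss.item())
```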