Focusing on the affine transforms (zoom, warp, rotate) plus the GPU random resized crop (`RandomResizedCropGPU`) - they seem to cause much of the slowdown.
Normalize and lighting (contrast and brightness) transforms don't seem to slow it down.
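For reference, a rough sketch of how each suspect transform could be timed in isolation on the TPU (the toy batch, the `p=1.0` settings, and the timing loop are assumptions for illustration, not from the notebook):

```python
import time
import torch
import torch_xla.core.xla_model as xm
from fastai.vision.all import TensorImage, Rotate, Zoom, Warp, RandomResizedCropGPU

device = xm.xla_device()
# Toy stand-in for one training batch (bs=64, 3x224x224).
xb = TensorImage(torch.rand(64, 3, 224, 224, device=device))

tfms = {
    'rotate': Rotate(p=1.0),
    'zoom': Zoom(p=1.0),
    'warp': Warp(p=1.0),
    'rrc_gpu': RandomResizedCropGPU(224),
}

for name, tfm in tfms.items():
    start = time.time()
    for _ in range(10):
        _ = tfm(xb, split_idx=0)  # split_idx=0 applies the transform as in training
        xm.mark_step()            # flush the lazy XLA graph so the work actually executes
    xm.wait_device_ops()          # wait for the device to finish before reading the clock
    print(f'{name}: {time.time() - start:.3f}s for 10 batches')
```

Note that the first call of each transform also pays XLA compilation cost, so a warm-up pass may be worth adding before trusting the numbers.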
Will start narrowing down where the slowdown is and do some profiling on the specific tensor operations where it's slow.
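One way to do that on XLA (a sketch using the stock `torch_xla` debug-metrics API, not something already in the notebook) is to dump the metrics report after a few augmented batches and look for `aten::` counters: those mark ops that were not lowered to XLA and fell back to CPU, each one forcing a device round-trip. The affine transforms are built on `affine_grid`/`grid_sample`, so those are natural suspects:

```python
import torch_xla.debug.metrics as met

# ... run a few training batches with batch_tfms enabled first ...

report = met.metrics_report()
# Counters prefixed with 'aten::' are ops torch_xla executed on the CPU
# instead of the TPU; frequent fallbacks would explain a big slowdown.
print('\n'.join(line for line in report.splitlines() if 'aten::' in line))
```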
Update: as of 2020/12/14, using the updated PyTorch 1.7 XLA build and the latest fastai (2.1.8) and fastai_xla_extensions (0.0.4) packages, training with batch transforms is still slower than training without them.
Confirming that batch transforms are slow (a sketch of the two configurations follows below):
Same notebook without batch tfms: each epoch runs in 1:34 to 2:25 mins.
Exact same notebook with batch tfms: each epoch takes noticeably longer.
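For reference, a sketch of the two configurations being compared (the PETS dataset, model, and batch size here are placeholders rather than the exact notebook, and the fastai_xla_extensions TPU setup is elided):

```python
from fastai.vision.all import (
    untar_data, URLs, get_image_files, ImageDataLoaders, Resize,
    aug_transforms, Normalize, imagenet_stats, cnn_learner, resnet18, accuracy,
)

path = untar_data(URLs.PETS)
fnames = get_image_files(path/'images')
pat = r'(.+)_\d+.jpg$'

# Without batch tfms: only an item-level resize, no per-batch augmentation.
dls_plain = ImageDataLoaders.from_name_re(
    path/'images', fnames, pat, item_tfms=Resize(224), bs=64)

# With batch tfms: affine + lighting augmentations plus normalize, applied per batch.
dls_aug = ImageDataLoaders.from_name_re(
    path/'images', fnames, pat, item_tfms=Resize(224), bs=64,
    batch_tfms=[*aug_transforms(), Normalize.from_stats(*imagenet_stats)])

for name, dls in [('no batch tfms', dls_plain), ('batch tfms', dls_aug)]:
    learn = cnn_learner(dls, resnet18, metrics=accuracy)
    learn.fit_one_cycle(1)  # compare the reported wall-clock time per epoch
```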