Incomplete Operation Support for torchdet Test Tool #61
I'm going to add an option for specifying the output file, since I'm pretty confident that @sanjif-shanmugavelu and @chrisculver aren't implementing that. ;)
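For reference, a minimal sketch of such an option using `argparse`. The flag name, default value, and choice of `argparse` are assumptions for illustration, not torchdet's actual CLI:

```python
import argparse

# Hypothetical sketch of the proposed output-file option. The flag name and
# default value are placeholders, not torchdet's real interface.
def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(prog="torchdet")
    parser.add_argument(
        "--output", "-o",
        default="results.csv",
        help="file to write test results to (default: results.csv)",
    )
    return parser

args = build_parser().parse_args(["--output", "report.csv"])
print(args.output)  # prints: report.csv
```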
I've added code on the feature branch to suppress a SciPy warning.
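A warning filter along these lines can do this. The exact SciPy warning text was not captured in the thread, so the message pattern and category below are placeholders:

```python
import warnings

# Hedged sketch: suppress a warning by message pattern. The actual SciPy
# warning text isn't shown above, so this pattern is a placeholder.
def suppress_warning(message_pattern: str, category: type = Warning) -> None:
    """Ignore warnings whose message matches the given regex prefix."""
    warnings.filterwarnings("ignore", message=message_pattern, category=category)

# Example: silence a hypothetical RuntimeWarning before running the tests.
with warnings.catch_warnings():
    suppress_warning("divide by zero", category=RuntimeWarning)
    warnings.warn("divide by zero encountered", RuntimeWarning)  # silenced
```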
Added column …
Also added a … I've also dropped in …
@sanjif-shanmugavelu, is there a benchmark you could have me work on to add? I'd start on the above list, but there's the risk I'd be duplicating your efforts.
Per our conversation on Slack, I'll work on implementing a benchmark for …
I have pushed a version of support for a …
My first contribution to the list: `bmm`. I'll keep it separate for now and will open a PR with other kernels as well.
Just merged my contributions.
List of Non-Deterministic Operations in PyTorch

The following operations in PyTorch exhibit non-deterministic behavior according to the `torch.use_deterministic_algorithms` documentation. We should ensure the testing tool supports runtime tests on the operations below. Note the list is scraped from the PyTorch 2.4 stable release, and we ideally want to support all ops from 1.6.0 to 2.3 to ensure compatibility with the scanner.

TODO: Add Tests for Non-Deterministic Operations

- `torch.nn.Conv1d` when called on a CUDA tensor
- `torch.nn.Conv2d` when called on a CUDA tensor
- `torch.nn.Conv3d` when called on a CUDA tensor
- `torch.nn.ConvTranspose1d` when called on a CUDA tensor
- `torch.nn.ConvTranspose2d` when called on a CUDA tensor
- `torch.nn.ConvTranspose3d` when called on a CUDA tensor
- `torch.nn.ReplicationPad2d` when attempting to differentiate a CUDA tensor
- `torch.bmm()` when called on sparse-dense CUDA tensors (Mathieu, done)
- `torch.Tensor.__getitem__()` when attempting to differentiate a CPU tensor and the index is a list of tensors
- `torch.Tensor.index_put()` with `accumulate=False`
- `torch.Tensor.index_put()` with `accumulate=True` when called on a CPU tensor
- `torch.Tensor.put_()` with `accumulate=True` when called on a CPU tensor
- `torch.Tensor.scatter_add_()` when called on a CUDA tensor
- `torch.gather()` when called on a CUDA tensor that requires grad
- `torch.index_add()` when called on a CUDA tensor
- `torch.index_select()` when attempting to differentiate a CUDA tensor
- `torch.repeat_interleave()` when attempting to differentiate a CUDA tensor
- `torch.Tensor.index_copy()` when called on a CPU or CUDA tensor
- `torch.Tensor.scatter()` when `src` type is Tensor and called on a CUDA tensor
- `torch.Tensor.scatter_reduce()` when `reduce='sum'` or `reduce='mean'` and called on a CUDA tensor
- `torch.nn.AvgPool3d` when attempting to differentiate a CUDA tensor @chrisculver
- `torch.nn.AdaptiveAvgPool2d` when attempting to differentiate a CUDA tensor @chrisculver
- `torch.nn.AdaptiveAvgPool3d` when attempting to differentiate a CUDA tensor @chrisculver
- `torch.nn.MaxPool3d` when attempting to differentiate a CUDA tensor @chrisculver
- `torch.nn.AdaptiveMaxPool2d` when attempting to differentiate a CUDA tensor @chrisculver
- `torch.nn.FractionalMaxPool2d` when attempting to differentiate a CUDA tensor @chrisculver
- `torch.nn.FractionalMaxPool3d` when attempting to differentiate a CUDA tensor @chrisculver
- `torch.nn.MaxUnpool1d` @chrisculver
- `torch.nn.MaxUnpool2d` @chrisculver
- `torch.nn.MaxUnpool3d` @chrisculver
- `torch.nn.functional.interpolate()` when attempting to differentiate a CUDA tensor and one of the following modes is used: `linear`, `bilinear`, `bicubic`, `trilinear`
- `torch.nn.ReflectionPad1d` when attempting to differentiate a CUDA tensor
- `torch.nn.ReflectionPad2d` when attempting to differentiate a CUDA tensor
- `torch.nn.ReflectionPad3d` when attempting to differentiate a CUDA tensor
- `torch.nn.ReplicationPad1d` when attempting to differentiate a CUDA tensor
- `torch.nn.ReplicationPad3d` when attempting to differentiate a CUDA tensor
- `torch.nn.NLLLoss` when called on a CUDA tensor
- `torch.nn.CTCLoss` when attempting to differentiate a CUDA tensor
- `torch.nn.EmbeddingBag` when attempting to differentiate a CUDA tensor when `mode='max'` @sanjif-shanmugavelu
- `torch.Tensor.put_()` when `accumulate=False` (@mtaillefumier)
- `torch.Tensor.put_()` when `accumulate=True` and called on a CUDA tensor (@mtaillefumier)
- `torch.histc()` when called on a CUDA tensor (@mtaillefumier)
- `torch.bincount()` when called on a CUDA tensor and weights tensor is given (@mtaillefumier)
- `torch.kthvalue()` when called on a CUDA tensor @sanjif-shanmugavelu
- `torch.median()` with indices output when called on a CUDA tensor
- `torch.nn.functional.grid_sample()` when attempting to differentiate a CUDA tensor @sanjif-shanmugavelu
- `torch.cumsum()` when called on a CUDA tensor when dtype is floating point or complex (@mtaillefumier)
- `torch.Tensor.scatter_reduce()` when `reduce='prod'` and called on a CUDA tensor
- `torch.Tensor.resize_()` when called with a quantized tensor @sanjif-shanmugavelu
- `torch.nn` backwards benchmarks @sanjif-shanmugavelu
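The runtime tests above all share one pattern: run an operation repeatedly on identical inputs and flag it when the results differ. A pure-Python sketch of that check, with illustrative names (`check_determinism`, `unstable_sum`) that are assumptions, not the tool's actual API:

```python
import random

# Illustrative sketch of a runtime determinism check: run an op repeatedly
# on fixed inputs and report whether every result is identical. The names
# here are hypothetical, not torchdet's real interface.
def check_determinism(op, trials: int = 10) -> bool:
    """Return True if `op()` returns the same value on every trial."""
    first = op()
    return all(op() == first for _ in range(trials - 1))

def unstable_sum(values, order):
    """Sum `values` in the given index order. Floating-point rounding makes
    the result order-dependent, mimicking a parallel (atomicAdd) reduction."""
    total = 0.0
    for i in order:
        total += values[i]
    return total

vals = [1e16, 1.0, -1e16, 1.0]

# Fixed accumulation order: deterministic, check passes.
assert check_determinism(lambda: unstable_sum(vals, [0, 1, 2, 3]))

# Shuffled accumulation order: the rounded result varies between runs.
results = {unstable_sum(vals, random.sample(range(4), 4)) for _ in range(200)}
assert len(results) > 1
```

For the CUDA ops in the list, the same harness would compare tensor outputs (e.g. with `torch.equal`) after seeding inputs identically on each trial.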