
The intent of SpecialSpmmFunctionFinal, and whether it can be done by PyTorch itself? #39

Open
zhangxin9988 opened this issue Dec 11, 2021 · 0 comments

Comments

@zhangxin9988

I think SpecialSpmmFunctionFinal's forward pass is intended to compute the row sums of a sparse matrix, and its backward pass returns the gradient with respect to the sparse matrix's values. However, I find that torch.sparse can already handle the backward of the row-sum operation, for example:
```python
import torch

i = torch.LongTensor([[0, 1, 1], [2, 0, 2]])  # row, col indices
v = torch.FloatTensor([3, 4, 5])              # nonzero values
v.requires_grad = True

m = torch.sparse_coo_tensor(i, v, torch.Size([2, 3]))
m.retain_grad()

m1 = torch.sparse.sum(m, dim=1)  # row sums, still sparse
m1.retain_grad()

m2 = torch.sparse.sum(m1)
m2.backward()
print(v.grad)  # v's gradient is tensor([1., 1., 1.])
```
So why did you write the custom autograd function, or is there something I have misunderstood?
Waiting for your reply, thanks.
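
For reference, here is my reading of what the custom function does, as a minimal hypothetical sketch (the class name, call signature, and shapes are my own assumptions, not the repository's exact code): the forward builds a sparse matrix from edge indices and values and multiplies it by a ones vector to get the row sums, and the backward routes each row's incoming gradient back to the values that sit in that row.

```python
import torch

class RowSumSpmm(torch.autograd.Function):
    """Hypothetical sketch of a sparse row-sum with a hand-written backward."""

    @staticmethod
    def forward(ctx, indices, values, shape):
        # Assemble the sparse matrix and multiply by a ones vector -> row sums, shape (N, 1).
        a = torch.sparse_coo_tensor(indices, values, shape)
        ctx.save_for_backward(indices)
        ones = torch.ones(shape[1], 1, dtype=values.dtype, device=values.device)
        return torch.sparse.mm(a, ones)

    @staticmethod
    def backward(ctx, grad_output):
        indices, = ctx.saved_tensors
        rows = indices[0]
        # d(row_sum_r)/d(value_k) = 1 when value_k lies in row r, so each
        # nonzero value simply receives the gradient of its own row.
        grad_values = grad_output[rows].view(-1)
        return None, grad_values, None

# This gives the same gradient as the torch.sparse example above:
i = torch.LongTensor([[0, 1, 1], [2, 0, 2]])
v = torch.FloatTensor([3, 4, 5]).requires_grad_()
out = RowSumSpmm.apply(i, v, torch.Size([2, 3]))
out.sum().backward()
print(v.grad)  # tensor([1., 1., 1.])
```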
