I would like to have PyTorch code call a subroutine that uses DLPack, such that the subroutine is generic across frameworks. However, I noticed that the interface provided by DLPack allows compliant implementations to output only a DLManagedTensor, not a DLTensor, requiring the consumer subroutine to take ownership of the input. Indeed, the documentation says: "The consumer must transfer ownership of the DLManagedTensor from the capsule to its own object."
This is no good if you want the input tensor to remain usable after the subroutine returns. Here is a small example to describe what I want to do:
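Something along these lines, as a minimal sketch with hypothetical names (the subroutine should accept anything exposing `__dlpack__`, and `t` should still be valid afterwards):

```python
import numpy as np
import torch

def generic_subroutine(x):
    # Framework-agnostic: works on anything implementing __dlpack__/__dlpack_device__.
    arr = np.from_dlpack(x)   # NumPy consumes the capsule produced by x.__dlpack__()
    return float(arr.sum())

t = torch.ones(3)
s = generic_subroutine(t)  # the subroutine only needs to "borrow" the data...
print(t + 1)               # ...but I still want t to be usable here
```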
Is DLPack just no good for this use case? `__dlpack__()` outputs a DLManagedTensor (well, a capsule referring to a DLManagedTensor), which forces the consumer to take ownership (i.e., it implements what C++ programmers call "move semantics"). I suppose one workaround might be:
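Roughly this kind of round trip (a sketch with illustrative names, assuming torch on both sides for brevity): the consumer takes ownership of the capsule, then re-exports a fresh capsule for the caller to re-import.

```python
import torch
from torch.utils.dlpack import from_dlpack, to_dlpack

def subroutine(capsule):
    x = from_dlpack(capsule)   # consumer takes ownership of the DLManagedTensor
    x.mul_(2)                  # work in place on the shared memory
    # Re-export so the caller gets a usable handle back. The new
    # DLManagedTensor's deleter keeps x alive, and x keeps the original
    # DLManagedTensor alive, so each round trip stacks one more deleter.
    return to_dlpack(x)

t = torch.arange(4, dtype=torch.float32)
cap = subroutine(to_dlpack(t))
t = from_dlpack(cap)           # re-import to keep using the data afterwards
```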
However, this causes the chain of deleters to grow every time this rigmarole happens (each new deleter function pointer has to call the original one), so it seems far from ideal.
Is there basically no way to access just a DLTensor instead of a DLManagedTensor if I want reference semantics instead of move semantics?
It is possible for the tensor to continue to be shared. The mechanism is to have the DLTensor export incref the underlying tensor, and have the DLManagedTensor's deleter decref the reference count rather than perform the deletion. I believe that is already the common implementation in many framework integrations.
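For example, with PyTorch's implementation the exported capsule keeps a reference to the original tensor, so the tensor stays valid after the consumer takes ownership (a small check, assuming a recent torch build):

```python
import torch
from torch.utils.dlpack import from_dlpack, to_dlpack

t = torch.zeros(3)
cap = to_dlpack(t)    # export: the DLManagedTensor holds a reference to t
u = from_dlpack(cap)  # consumer takes ownership of the DLManagedTensor
u[0] = 1.0
print(t)              # tensor([1., 0., 0.]) -- t still shares the same memory
del u                 # the deleter only drops the extra reference; t remains valid
print(t)
```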
We have developed a way to share a DLPack tensor without following the stringent ownership requirements placed by the DLPack protocol. We term the new protocol the "producer-only DLPack" protocol. You can read about it in Section IV-A of our arXiv paper (https://arxiv.org/pdf/2404.04118).