Replies: 5 comments 4 replies
-
Hi @Cfather, for the second-order dynamics derivatives, we were able to prove that the analytical gradients are faster. The following paper shows the results: https://ieeexplore.ieee.org/document/10449483. For the first-order derivatives, the Pinocchio developers have also shown that the analytical derivatives are faster; a comparison for some models can be found in their work. I couldn't find a comparison of compute times against Brax, but I wouldn't be surprised if they are more efficient. I believe the specific computations can be made more efficient by exploiting parallelism, but that can be done in both cases: automatic differentiation and analytical derivatives.
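For anyone who wants to get a feel for this locally, here is a minimal Python sketch (not a rigorous benchmark) that times Pinocchio's analytical forward-dynamics derivatives against a naive finite-difference baseline standing in for a black-box differentiation scheme. The sample humanoid model, the neutral configuration, the loop counts, and the step size are all arbitrary choices on my part:

```python
import time
import numpy as np
import pinocchio as pin

# Sample humanoid model (an assumption); any Pinocchio model works the same way.
model = pin.buildSampleModelHumanoid()
data = model.createData()

q = pin.neutral(model)
v = np.random.rand(model.nv)
tau = np.random.rand(model.nv)

# Analytical derivatives of forward dynamics (fills data.ddq_dq, data.ddq_dv, data.Minv).
t0 = time.perf_counter()
for _ in range(1000):
    pin.computeABADerivatives(model, data, q, v, tau)
t_analytical = (time.perf_counter() - t0) / 1000

# Naive baseline: finite differences of pin.aba over the configuration tangent space.
def fd_ddq_dq(q, v, tau, eps=1e-8):
    ddq0 = pin.aba(model, data, q, v, tau).copy()
    J = np.zeros((model.nv, model.nv))
    for k in range(model.nv):
        dq = np.zeros(model.nv)
        dq[k] = eps
        q_plus = pin.integrate(model, q, dq)  # perturbation on the manifold
        J[:, k] = (pin.aba(model, data, q_plus, v, tau) - ddq0) / eps
    return J

t0 = time.perf_counter()
for _ in range(100):
    fd_ddq_dq(q, v, tau)
t_fd = (time.perf_counter() - t0) / 100

print(f"analytical: {t_analytical * 1e6:.1f} us/call, finite differences: {t_fd * 1e6:.1f} us/call")
```

A proper comparison against Brax or IsaacGym would additionally need batching on an accelerator, which this single-threaded CPU loop does not capture.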
-
I could provide a benchmark, but the takeaway is that Pinocchio's gradients are almost 50x faster than auto-diff gradients, and still more than 10x faster when code generation is used, and this holds for small- to medium-sized robots. There is a joint paper between KU Leuven and my team at Inria that considers analytical derivatives within auto-diff frameworks such as CasADi, where we highlight the benefits of the analytical ones. Please see https://inria.hal.science/hal-03541487/file/Mixed%20Use%20of%20Analytical%20Derivatives%20and%20Algorithmic%20Differentiation%20for%20NMPC%20of%20Robot%20Manipulators.pdf.
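To illustrate the idea of the paper (mixing analytical derivatives into an algorithmic-differentiation pipeline) outside of CasADi, here is a rough PyTorch sketch where RNEA's analytical partial derivatives are exposed through a custom autograd Function. The sample manipulator model and the way the inertia matrix is symmetrized are my own assumptions, not the setup from the paper:

```python
import numpy as np
import torch
import pinocchio as pin

# Example model (an assumption); any fixed-base Pinocchio model works the same way.
model = pin.buildSampleModelManipulator()
data = model.createData()

class RNEA(torch.autograd.Function):
    """Inverse dynamics tau = RNEA(q, v, a), with analytical derivatives in backward()."""

    @staticmethod
    def forward(ctx, q, v, a):
        q_, v_, a_ = (x.detach().numpy().astype(np.float64) for x in (q, v, a))
        tau = pin.rnea(model, data, q_, v_, a_)
        # Analytical partial derivatives, filled into data.dtau_dq / data.dtau_dv.
        pin.computeRNEADerivatives(model, data, q_, v_, a_)
        ctx.dtau_dq = torch.from_numpy(data.dtau_dq.copy())
        ctx.dtau_dv = torch.from_numpy(data.dtau_dv.copy())
        # d tau / d a is the joint-space inertia matrix (symmetrized to be safe).
        M = pin.crba(model, data, q_)
        ctx.dtau_da = torch.from_numpy(np.triu(M) + np.triu(M, 1).T)
        return torch.from_numpy(tau.copy())

    @staticmethod
    def backward(ctx, grad_tau):
        # Vector-Jacobian products; note that the q-gradient lives on the tangent
        # space (size nv), which matches nq only for fixed-base models like this one.
        return (ctx.dtau_dq.T @ grad_tau,
                ctx.dtau_dv.T @ grad_tau,
                ctx.dtau_da.T @ grad_tau)

q = torch.tensor(pin.neutral(model), requires_grad=True)
v = torch.zeros(model.nv, dtype=torch.float64, requires_grad=True)
a = torch.ones(model.nv, dtype=torch.float64, requires_grad=True)
tau = RNEA.apply(q, v, a)
tau.sum().backward()  # gradients of sum(tau) now come from the analytical derivatives
print(q.grad, v.grad, a.grad)
```

The same pattern applies to forward dynamics through computeABADerivatives.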
-
Another advantage of Pinocchio's derivatives and their further extensions is that they operate on the right tangent space. In contrast, auto-diff frameworks tend to operate on the parametrization itself (e.g., differentiating quaternion components), while we should instead use the exponential map of SO(3), the Special Orthogonal group of dimension 3 that defines 3D rotations.
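To make the tangent-space point concrete, here is a small sketch (my own illustration, not Pinocchio documentation) where a free-flyer configuration is perturbed through pin.integrate, i.e. through the exponential map, so that finite differences use the nv tangent directions rather than the nq quaternion-bearing coordinates:

```python
import numpy as np
import pinocchio as pin

# Free-flyer model: the base orientation is stored as a unit quaternion,
# so nq = nv + 1 and naively perturbing q components leaves the manifold.
model = pin.buildSampleModelHumanoid()
data = model.createData()
q = pin.neutral(model)

print(model.nq, model.nv)  # nq = nv + 1 because of the quaternion

# Perturb along the tangent space (size nv) and map back to the configuration
# manifold with the exponential map (pin.integrate).
def fd_com_jacobian_tangent(q, eps=1e-8):
    c0 = pin.centerOfMass(model, data, q).copy()
    J = np.zeros((3, model.nv))
    for k in range(model.nv):
        dq = np.zeros(model.nv)
        dq[k] = eps
        q_plus = pin.integrate(model, q, dq)  # exp-map based perturbation
        J[:, k] = (pin.centerOfMass(model, data, q_plus) - c0) / eps
    return J

J_fd = fd_com_jacobian_tangent(q)
J_an = pin.jacobianCenterOfMass(model, data, q)  # analytical, also nv columns
print(np.max(np.abs(J_fd - J_an)))  # small residual
```

Perturbing the quaternion components of q directly would break the unit-norm constraint, which is exactly the pitfall described above.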
-
@jcarpent @shubhamsingh91 Thank you very much for your responses! I have read your work and it is amazing! I appreciate such detailed answers!
-
Just found a paper: https://arxiv.org/pdf/2109.06976
-
Hi, I have a very general question about the computation time of the gradients of robot kinematics and dynamics.
It is great that Pinocchio has implemented the analytical gradient and even the Hessian of the robot dynamics.
But recent advances seem to favor computing the gradient with automatic differentiation, mainly backed by PyTorch or JAX, as in Brax or Isaac Gym.
I wonder if anyone has run experiments to actually compare the speed of these different approaches, especially when the gradients need to be evaluated for many samples in parallel. Which one would be faster?
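To make the parallel-evaluation scenario concrete, what I have in mind on the auto-diff side is roughly the following JAX sketch, with a deliberately toy stand-in for the dynamics (not Brax or a real robot model), just to show how jacobian, vmap, and jit compose for batched gradient evaluation:

```python
import jax
import jax.numpy as jnp

# Toy stand-in for forward dynamics ddq = f(q, v, tau) of a 7-DoF arm.
# A real comparison would swap in an actual dynamics model.
n = 7

def toy_forward_dynamics(q, v, tau):
    # Placeholder: configuration-dependent diagonal "inertia" plus viscous damping.
    Minv = 1.0 / (1.0 + jnp.cos(q) ** 2)
    return Minv * (tau - 0.1 * v)

# Gradient of ddq w.r.t. q for one sample...
dfdq = jax.jacobian(toy_forward_dynamics, argnums=0)

# ...evaluated for a whole batch of samples in parallel with vmap, compiled once with jit.
batched_dfdq = jax.jit(jax.vmap(dfdq))

key = jax.random.PRNGKey(0)
Q = jax.random.normal(key, (4096, n))
V = jnp.zeros((4096, n))
TAU = jnp.ones((4096, n))

J = batched_dfdq(Q, V, TAU)  # shape (4096, n, n)
print(J.shape)
```

Whether something like this beats batched analytical derivatives presumably depends on the model size and the hardware, which is exactly the comparison I am asking about.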