I understand that Stacked Hourglass predictions (MPII format) are permuted to Human36 format. For inference, as mentioned in issue #15, it is possible to pre-process in-the-wild annotations into Human36 ground-truth 2D format.
I was wondering whether the same permutation work has been done for the COCO keypoint format? Any pointers are appreciated. Thanks!
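For context, the MPII-to-Human36 step mentioned above amounts to a single index permutation. Below is a minimal sketch, assuming MPII's documented 16-joint order and one common 16-joint H36M-style target order; the target ordering varies between codebases, so verify both against the repository you use.

```python
import numpy as np

# MPII 16-joint order (documented by the dataset):
# 0 r_ankle, 1 r_knee, 2 r_hip, 3 l_hip, 4 l_knee, 5 l_ankle, 6 pelvis,
# 7 thorax, 8 upper_neck, 9 head_top, 10 r_wrist, 11 r_elbow,
# 12 r_shoulder, 13 l_shoulder, 14 l_elbow, 15 l_wrist

# Assumed H36M-style target order (check against your codebase):
# 0 pelvis, 1 r_hip, 2 r_knee, 3 r_ankle, 4 l_hip, 5 l_knee, 6 l_ankle,
# 7 thorax, 8 neck, 9 head, 10 l_shoulder, 11 l_elbow, 12 l_wrist,
# 13 r_shoulder, 14 r_elbow, 15 r_wrist
MPII_TO_H36M = [6, 2, 1, 0, 3, 4, 5, 7, 8, 9, 13, 14, 15, 12, 11, 10]

def permute_mpii(joints_2d):
    """Reorder an (N, 16, 2) batch of MPII 2D joints into the
    H36M-style order above via fancy indexing."""
    return joints_2d[:, MPII_TO_H36M]

# Tiny usage example on dummy predictions:
preds = np.arange(2 * 16 * 2, dtype=float).reshape(2, 16, 2)
reordered = permute_mpii(preds)
```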
Hi @aitikgupta ,
Thanks for your interest in our work!
One possible solution for mapping COCO to H36M format can be found here: https://github.com/JimmySuen/integral-human-pose/blob/master/pytorch_projects/common_pytorch/dataset/hm36.py#L73
Alternatively, I think you can reduce both keypoint sets to the 13 joints they share and then map between them.
Best, Long
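To illustrate the suggestion above, here is a minimal sketch of a COCO-to-H36M conversion. The COCO 17-keypoint order is fixed by the dataset, but the H36M joints COCO lacks (pelvis, spine, thorax, head) must be synthesized; the midpoint recipe below is one common convention (similar in spirit to the linked `hm36.py`), not the definitive one, so adapt it to your target joint set.

```python
import numpy as np

# COCO 17-keypoint order (fixed by the dataset):
# 0 nose, 1 l_eye, 2 r_eye, 3 l_ear, 4 r_ear, 5 l_shoulder, 6 r_shoulder,
# 7 l_elbow, 8 r_elbow, 9 l_wrist, 10 r_wrist, 11 l_hip, 12 r_hip,
# 13 l_knee, 14 r_knee, 15 l_ankle, 16 r_ankle

def coco_to_h36m(coco):
    """Map a (17, 2) COCO keypoint array to a 17-joint H36M-style array.

    Joints COCO lacks are synthesized as midpoints of neighbouring
    keypoints; treat this as one reasonable convention, not a standard.
    """
    h36m = np.zeros_like(coco, dtype=float)
    h36m[0] = (coco[11] + coco[12]) / 2   # pelvis = mid-hip
    h36m[1] = coco[12]                    # right hip
    h36m[2] = coco[14]                    # right knee
    h36m[3] = coco[16]                    # right ankle
    h36m[4] = coco[11]                    # left hip
    h36m[5] = coco[13]                    # left knee
    h36m[6] = coco[15]                    # left ankle
    h36m[8] = (coco[5] + coco[6]) / 2     # thorax = mid-shoulder
    h36m[7] = (h36m[0] + h36m[8]) / 2     # spine = mid(pelvis, thorax)
    h36m[9] = coco[0]                     # neck/nose ~ nose
    h36m[10] = (coco[1] + coco[2]) / 2    # head ~ mid-eye
    h36m[11] = coco[5]                    # left shoulder
    h36m[12] = coco[7]                    # left elbow
    h36m[13] = coco[9]                    # left wrist
    h36m[14] = coco[6]                    # right shoulder
    h36m[15] = coco[8]                    # right elbow
    h36m[16] = coco[10]                   # right wrist
    return h36m

# Tiny usage example on a dummy pose:
dummy = np.arange(34, dtype=float).reshape(17, 2)
converted = coco_to_h36m(dummy)
```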
Hi @garyzhao, could you please recommend a good implementation of Stacked Hourglass Networks? Looking forward to your reply.
Hi @HDYYZDN ,
You can check this one: https://github.com/bearpaw/pytorch-pose