Weird results when applying SemGCN to 2D pose from image #32
@duckduck-sys I think there are two points to note about the data:
Hi, how do you calculate the spine point?
Hi @duckduck-sys
@develduan Hi Duan, can you share your solution? I'm also facing much the same problem.
@develduan Hi. I also face the same problem of how to figure out the spine point, because the stacked hourglass doesn't output a spine point.
@lisa676 @dandingol03 Hi, I'm sorry, but I stopped following this project because it didn't work very well on my dataset (an in-the-wild environment). In my dataset, all pedestrians stand upright, so I simply treated the midpoint of the neck and the pelvis as the thorax/spine. In my case, I want to get the 3D pose directly from the image instead of estimating a 2D pose first and then lifting it to 3D, and I got a better result by following the paper "End-to-end Recovery of Human Shape and Pose" by Angjoo Kanazawa, Michael J. Black, David W. Jacobs, and Jitendra Malik.
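A minimal sketch of that midpoint trick, assuming the detector emits MPII-style keypoints where the pelvis is joint 6 and the upper neck is joint 8 (the index names and array layout here are assumptions, not code from the thread):

```python
import numpy as np

# Assumed MPII-style joint indices; adjust to your detector's layout.
PELVIS, UPPER_NECK = 6, 8

def estimate_spine(pose_2d: np.ndarray) -> np.ndarray:
    """Approximate the missing thorax/spine keypoint as the midpoint of
    the upper neck and the pelvis. Only reasonable when the person
    stands roughly upright, as noted above.
    """
    return (pose_2d[UPPER_NECK] + pose_2d[PELVIS]) / 2.0
```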
@develduan Firstly, thanks for your kind reply. Secondly, the paper "End-to-end Recovery of Human Shape and Pose" is cool; I will delve into it soon. And lastly, here is my email: [email protected]. Maybe someday we can exchange ideas about 3D pose estimation~
Inference on in-the-wild images using SemGCN has been partially covered in this thread and others, but only the overall process has been made clear, i.e.:

Step 1: Generate a 2D pose in MPII format from the image.
Step 2: Reorder the joints from the stacked-hourglass (MPII) order to the ground-truth (Human3.6M) order.
Step 3: Normalize the 2D screen coordinates.
Step 4: Feed the normalized 2D pose into the SemGCN model to get the 3D pose.
Below I will follow each step, using the 300x600 test image shown on the left.
For Step 1, I use EfficientPose to generate the MPII-format 2D pose of the test image, as shown above on the right. Here's the numeric output:
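(The numeric output itself was posted as an attachment.) For readers unfamiliar with the format, this is the standard 16-joint MPII order that MPII-trained detectors such as EfficientPose emit; the listing below is background, not output from the thread:

```python
# Standard MPII 16-joint order (index: joint name).
MPII_JOINT_NAMES = [
    'right_ankle',     # 0
    'right_knee',      # 1
    'right_hip',       # 2
    'left_hip',        # 3
    'left_knee',       # 4
    'left_ankle',      # 5
    'pelvis',          # 6
    'thorax',          # 7
    'upper_neck',      # 8
    'head_top',        # 9
    'right_wrist',     # 10
    'right_elbow',     # 11
    'right_shoulder',  # 12
    'left_shoulder',   # 13
    'left_elbow',      # 14
    'left_wrist',      # 15
]
```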
For Step 2, I run this:
positions = positions[:, SH_TO_GT_PERM, :]
To get the output:
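(The reordered output was attached as an image.) For context, SH_TO_GT_PERM is the permutation that maps stacked-hourglass (MPII-style) joints into the 16-joint Human3.6M order the model expects. If it follows the convention of VideoPose3D's data/prepare_data_2d_h36m_sh.py, which SemGCN's data scripts mirror, its value is the one below; double-check against your copy:

```python
import numpy as np

# Reorders SH/MPII joints into Human3.6M order:
# Hip, RHip, RKnee, RFoot, LHip, LKnee, LFoot, Spine, Thorax, Head,
# LShoulder, LElbow, LWrist, RShoulder, RElbow, RWrist.
SH_TO_GT_PERM = np.array([6, 2, 1, 0, 3, 4, 5, 7, 8, 9, 13, 14, 15, 12, 11, 10])
```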
For Step 3, I run this:
positions[..., :2] = normalize_screen_coordinates(positions[..., :2], w=300, h=600)
To get the output:
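(The output was again attached as an image.) For reference, normalize_screen_coordinates is defined in VideoPose3D's common/camera.py, whose convention SemGCN's data pipeline follows; reproduced here from memory, so it is worth verifying against your copy of the repo:

```python
import numpy as np

def normalize_screen_coordinates(X, w, h):
    assert X.shape[-1] == 2
    # Map x from [0, w] to [-1, 1] and scale y by the same factor,
    # preserving the aspect ratio: y ends up in [-h/w, h/w].
    return X / w * 2 - [1, h / w]
```

Note that with w=300 and h=600 the y coordinates land in [-2, 2], a wider range than the roughly square Human3.6M frames produce; whether the pretrained model tolerates that may be worth checking.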
For Step 4, the above is used as input to the SemGCN SH model by running this:
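(The exact snippet was attached as an image and is not recoverable here.) A minimal hypothetical sketch of what SemGCN inference looks like, assuming model is a SemGCN instance already restored from the pretrained SH checkpoint (the variable names are mine, not from the thread):

```python
import torch

model.eval()
with torch.no_grad():
    # After Steps 2 and 3, positions has shape (1, 16, C) with the
    # normalized (x, y) coordinates in the first two channels.
    inputs = torch.from_numpy(positions[..., :2]).float()
    outputs = model(inputs)  # expected shape: (1, 16, 3)
pred_3d = outputs.squeeze(0).cpu().numpy()
```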
Which gives the output:
When visualized, this looks completely wrong (see the image below). Can anyone shed light on where the problem lies? Is it a problem with the pre-processing, or with the model?