
Ran the main_test_avatarposer.py but there was no response. #28

Open
Seuk-Lee opened this issue Apr 4, 2024 · 7 comments

Comments

@Seuk-Lee

Seuk-Lee commented Apr 4, 2024

Hello.

I've downloaded all the necessary packages to run AvatarPoser and executed "main_test_avatarposer.py" on Windows. I expected a new window to open and wait for user input, but there was no such response. Could you please provide additional instructions on how to proceed with the execution? Your assistance would be greatly appreciated.

[Screenshot "Capture" attached]

Best regards,
SU

@michaelneri

At the moment, main_test_avatarposer.py is the script that reproduces the paper's results on the test set, without any user input. If you want some visual output, i.e., a video of the animation, just set this flag to True:

save_animation = False
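For example, a minimal sketch of the change (this assumes the flag is a module-level variable near the top of main_test_avatarposer.py, as the line above suggests):

save_animation = True  # hypothetical edit: render and save a video of the test-set animation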

@Seuk-Lee
Author

Thank you for your kind advice!

Do you happen to know how to create a motion dataset directly with AvatarPoser? I have successfully visualized the provided default dataset. Currently, I am using an HTC VIVE VR device.

@michaelneri

At the moment, I am stuck on running inference with the model on another VR device (Meta Quest Pro); see issue #30, which I recently opened.

What do you mean by "create a motion dataset directly with AvatarPoser"?

@Seuk-Lee
Author

Thank you for your response. I apologize for my previous message, as I didn't get much sleep and it came out strange.

What I meant to ask was how to create new motion capture data with my VR device and use it with AvatarPoser, instead of using the existing AMASS motion capture dataset. I am not sure how to record motion data in a format that AvatarPoser can run with the device I have.

Thanks!

@michaelneri

There is no need to create a new motion capture dataset, as AMASS already has high-quality data to effectively train the model and to run inference on commercial VR headsets.

The main problem, which I am dealing with too, is how to correctly feed the data collected with Unity to the model and visualize the output on a humanoid avatar. Several conversions are needed (from left-handed to right-handed coordinate systems, rotation matrices, etc.), but I have not managed to get good poses yet.
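For reference, a hedged sketch of the kind of conversion involved (an illustration under assumptions, not the repository's code: it assumes Unity's left-handed, Y-up frame differs from the right-handed frame expected by SMPL/AMASS by a mirrored Z axis, and quaternions stored as (x, y, z, w); verify the mirrored axis against your own capture setup):

import numpy as np

# Mirror that flips the Z axis (left-handed <-> right-handed).
M = np.diag([1.0, 1.0, -1.0])

def position_lh_to_rh(p):
    # Negate the Z component of a 3D position.
    return M @ np.asarray(p, dtype=float)

def rotation_lh_to_rh(R):
    # Conjugate a 3x3 rotation matrix by the mirror: R' = M R M.
    return M @ np.asarray(R, dtype=float) @ M

def quaternion_lh_to_rh(q):
    # Mirroring Z maps the rotation axis (nx, ny, nz) to (-nx, -ny, nz)
    # and negates the angle, so (x, y, z, w) -> (-x, -y, z, w).
    x, y, z, w = q
    return np.array([-x, -y, z, w])

The same mirror has to be applied consistently to every tracked input (headset and both controllers) before feeding the model, otherwise the predicted poses come out mirrored or twisted.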

@ChrisGoettfert

ChrisGoettfert commented Jul 10, 2024 via email

@michaelneri

@ChrisGoettfert Thanks for the reference! However, it seems that they did not employ SMPL in the Unity environment.
