Ran main_test_avatarposer.py but there was no response #28
At the moment, main_test_avatarposer.py is the code to reproduce the results of the paper on the test set, without any user input. If you want some visual output, i.e., a video of the animation, just set this flag to True: AvatarPoser/main_test_avatarposer.py, line 24 at e2b16e0.
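For reference, here is a minimal sketch of that change (the flag name `save_animation` is an assumption on my part; use whatever boolean actually sits on line 24 of the file at that commit):

```python
# main_test_avatarposer.py (around line 24 at commit e2b16e0)
# By default the test script only evaluates the model and prints metrics.
# Turning this flag on makes it also render the predicted motion to a video
# instead of opening any interactive window.
save_animation = True  # flag name assumed; check line 24 of the file
```

After that, re-run `python main_test_avatarposer.py`; no window is expected to pop up either way, the rendered animation should simply be written to disk.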
Thank you for your kind advice! Do you happen to know how to create a motion dataset directly with AvatarPoser? I have successfully visualized the provided default dataset. Currently, I am using an HTC VIVE VR device.
At the moment, I am stuck on the inference of the model on another VR device (Meta Quest Pro); see Issue #30, which I recently opened. What do you mean by "create a motion dataset directly with AvatarPoser"?
Thank you for your response. I apologize for my previous message, as I didn't get much sleep and it came out strange. What I meant to ask was how to create new motion capture data with my VR device and use it with AvatarPoser, instead of using the existing AMASS motion capture dataset. I am unsure how to create the motion capture data so that it can be run in AvatarPoser with the VR device I have. Thanks!
There is no need to create a new motion capture dataset, as AMASS already has high-quality data to effectively train the model and to run inference on commercial VR headsets. The main problem, which I am dealing with too, is how to feed the data collected with Unity correctly to the model and visualize it on a humanoid avatar. Several conversions are needed (from left-handed to right-handed coordinate systems, rotation matrices, etc.), but I have not managed to get good poses yet.
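For anyone attempting the same thing, here is a minimal sketch (not code from this repo) of the handedness conversion I mean, assuming the trackers are sampled in Unity's left-handed, Y-up frame and that mirroring the Z axis is the right reflection for your export:

```python
import numpy as np

# Reflection that flips the Z axis; conjugating a rotation by it
# switches between left-handed and right-handed coordinates.
M = np.diag([1.0, 1.0, -1.0])

def unity_position_to_rhs(p_unity: np.ndarray) -> np.ndarray:
    """(px, py, pz) from Unity -> right-handed frame by negating Z."""
    return M @ p_unity

def unity_rotation_to_rhs(R_unity: np.ndarray) -> np.ndarray:
    """3x3 rotation matrix from Unity -> right-handed frame.

    A rotation R expressed in mirrored coordinates becomes M @ R @ M
    (M is its own inverse), which is again a proper rotation matrix.
    """
    return M @ R_unity @ M

def unity_quaternion_to_rhs(q_xyzw: np.ndarray) -> np.ndarray:
    """Same conversion for an (x, y, z, w) quaternion: negate x and y."""
    x, y, z, w = q_xyzw
    return np.array([-x, -y, z, w])
```

Even with this, getting correct poses still depends on matching the joint ordering and the global frame that AMASS/SMPL expects, which is the part that is still giving me trouble.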
There is a SIGGRAPH 2024 demo of a similar project here: https://github.com/sebastianstarke/AI4Animation, with a compiled Unity demo. You might want to check that one out. It won't help you integrate it into your own project (I also gave up on that, by the way), but it showcases how well this works.
@ChrisGoettfert Thanks for the reference! However, it seems that they did not employ SMPL in the Unity environment.
Hello.
I've downloaded all the necessary packages to run AvatarPoser and executed "main_test_avatarposer.py" on Windows. I expected a new window to open, waiting for user input, but there was no such response. Could you please provide additional instructions on how to proceed with the execution? Your assistance would be greatly appreciated.
Best regards,
SU