How to generate mocap.npz? #1
Hi,
How to generate mocap.npz? It seems not easy to me. Can you give a clue on how to generate mocap.npz from a public mocap dataset?
"The train_mvae.py script assumes the mocap data to be at environments/mocap.npz. The original training data is not included in this repo, but can be easily extracted from other public datasets."
Thanks very much!
BEST
Comments
+1, could you provide some details and a format description of the mocap data? Thanks! |
The raw data format is not complex; just read the code below. |
Below is some information about the data format. We also note the length of each mocap sequence (as mentioned above, at L141); this is so we don't sample invalid transitions for training. If the mocap clip is one long continuous sequence, then there is no reason to do this.
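As a rough illustration of that format (a sketch only: the "data" key name, the frame width, and the exact layout below are my assumptions, not confirmed by the repo; end_indices is the field discussed in this thread):

```python
import numpy as np

# Hypothetical layout. The 267-dim frame assumes 3 root features plus
# 22 joints x (3 positions + 3 velocities + 6D rotation), following my
# reading of the paper's pose description.
num_frames, frame_dim = 1000, 267
data = np.zeros((num_frames, frame_dim), dtype=np.float32)  # placeholder poses
end_indices = np.array([499, 999])  # last frame of each clip (two 500-frame clips)

np.savez("environments/mocap.npz", data=data, end_indices=end_indices)

# Why record end_indices: a training transition (frame t -> frame t+1) must
# not cross a clip boundary, so frames that end a clip are excluded as
# starting points when sampling.
valid_start = np.ones(num_frames - 1, dtype=bool)
valid_start[end_indices[end_indices < num_frames - 1]] = False
```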
For extracting training data from mocap datasets, I think fairmotion might be helpful. Based on the examples I have seen (though I haven't tested it), it should be something like the sketch below. Root deltas need some more processing: essentially, find the displacement vector and rotate it by the current facing direction of the character. The same goes for positions and velocities; they should be projected into the character space to make learning easier.
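A minimal, untested sketch of that pipeline, assuming fairmotion's bvh loader and the Motion.positions()/Motion.rotations() accessors (exact signatures may differ):

```python
import numpy as np
from fairmotion.data import bvh

# Untested sketch; assumes fairmotion's Motion accessors behave as commented.
motion = bvh.load("clip.bvh")

pos = motion.positions(local=False)  # (frames, joints, 3) global joint positions
rot = motion.rotations(local=True)   # (frames, joints, 3, 3) local joint rotations

# 6D orientations: keep the first two columns of each rotation matrix
# (Zhou et al.); note that two flattening orders are possible here.
sixd = rot[..., :, :2].swapaxes(-1, -2).reshape(rot.shape[0], rot.shape[1], 6)

# Finite-difference joint velocities, scaled to units per second.
vel = (pos[1:] - pos[:-1]) * motion.fps
```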
|
@belinghy, can you tell me more about how to get the root deltas? A sample, formula, or code snippet would help. |
I may have misunderstood the whole process, but since there isn't any sample of mocap.npz, I assume that mocap.npz should be like this: mocap.npz. It seems the mocap data has to include only 22 joints, so extracting from other public datasets may not work, as BVH files or other mocap data out there may have a different number of joints. Therefore, I think there are two ways to solve this: ...
I wasn't able to find which mocap database this project used, and it wasn't in the paper. :( |
Your understanding of the format is correct, except ... As you've noted, mocap_env.py could definitely be refactored. I think the only things to change if you are using a different input format are these lines and these lines. The second reference is only relevant if ... |
So, as mentioned above, if I get this right: end_indices might contain just one integer value if the input clip is one long continuous sequence, and that value would be a frame number? |
Yes, it's a frame number. end_indices contains exactly one integer value if there is exactly one input clip, i.e., one long continuous sequence. |
Hi, I have some confusion about ... Furthermore, can you provide some examples for ...? Thank you |
Maybe this will help: https://arxiv.org/pdf/2103.14274.pdf; see the pose representation section for the root information. I think the paper and the code differ slightly in which up-vector they use. |
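To make the root-delta recipe concrete, here is a small sketch (the helper and its axis conventions are hypothetical, not the repo's actual code): the displacement is projected onto the current facing direction and its perpendicular, and the turn rate is the change in facing angle:

```python
import numpy as np

def root_deltas(root_xz, facing, fps=30.0):
    """Hypothetical helper: root displacement expressed in the character frame.

    root_xz: (frames, 2) global root position on the ground plane
    facing:  (frames,) facing angle about the up axis, in radians
    Returns (frames - 1, 3): [forward, lateral, turn] rates per second.
    """
    dpos = (root_xz[1:] - root_xz[:-1]) * fps        # global displacement rate
    c, s = np.cos(facing[:-1]), np.sin(facing[:-1])  # current facing direction
    forward = c * dpos[:, 0] + s * dpos[:, 1]        # component along facing
    lateral = -s * dpos[:, 0] + c * dpos[:, 1]       # component perpendicular to it
    turn = np.diff(np.unwrap(facing)) * fps          # change of facing per second
    return np.stack([forward, lateral, turn], axis=-1)
```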
Hello @belinghy, when reshaping the rotation components returned by fairmotion, which ordering is correct?
version 1: ...
version 2: ...
|
Hi @Gabriel-Bercaru, I'm not sure what fairmotion's convention is. Are you rendering the character using joint orientations? If not, for the purpose of neural network input, the order shouldn't matter. |
Hello, indeed for the input training data it doesn't really matter, but I was trying to render a mesh over a trained model. As far as I have seen, rigging makes use of the joint orientations, and in order to get them I should convert those 6D orientation vectors to either Euler rotations or quaternions. |
The way it's indexed, e.g., ... |
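For reference, recovering a rotation matrix from the 6D representation (Zhou et al., "On the Continuity of Rotation Representations in Neural Networks") is a Gram-Schmidt step over the two encoded axes; the slicing below assumes the two matrix columns are stored contiguously:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def sixd_to_matrix(sixd):
    """Gram-Schmidt recovery of rotation matrices from 6D vectors.

    sixd: (..., 6); assumed to hold the first two columns of the rotation
    matrix contiguously -- swap the slices if your data is laid out row-wise.
    """
    a1, a2 = sixd[..., :3], sixd[..., 3:]
    b1 = a1 / np.linalg.norm(a1, axis=-1, keepdims=True)
    a2 = a2 - np.sum(b1 * a2, axis=-1, keepdims=True) * b1
    b2 = a2 / np.linalg.norm(a2, axis=-1, keepdims=True)
    b3 = np.cross(b1, b2)
    return np.stack([b1, b2, b3], axis=-1)  # columns: b1, b2, b3

# e.g., quaternions for rigging a 22-joint pose:
mats = sixd_to_matrix(np.random.randn(22, 6))
quats = Rotation.from_matrix(mats).as_quat()
```

From there, the quaternions (or Euler angles via as_euler) can be fed to the rig.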