I retrained the model. Why did I obtain metrics close to those reported in the paper on the THuman2 dataset, while the metrics on the CAPE dataset are not as good?
For example: cape-nfp (chamfer: 1.1177, p2s: 0.9627, nc: 0.0520), cape-fp (chamfer: 0.8776, p2s: 0.8042, nc: 0.0442)
My retrained model also has worse metrics, on both THuman2.0 and CAPE:
{'cape-easy-NC': 0.04301762208342552,
'cape-easy-chamfer': 0.9157772064208984,
'cape-easy-execution_time': 0.44610512733459473,
'cape-easy-p2s': 0.8151900768280029,
'cape-hard-NC': 0.050745654851198196,
'cape-hard-chamfer': 1.1736739873886108,
'cape-hard-execution_time': 0.45782910426457724,
'cape-hard-p2s': 1.015979290008545}
I also encountered another strange phenomenon: training actually takes only a few hours on a 4090, rather than the reported 2 days on a 3090. I am using THuman2.0 for training.
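For context when comparing numbers, here is a rough sketch of how these three metrics (Chamfer, P2S, normal consistency) are commonly computed with trimesh. This is not this repo's actual evaluation code; the sampling count, the absolute-cosine NC definition, and the lack of any unit scaling (e.g. to cm) are all assumptions, so absolute values may differ from the paper's script.

```python
# Minimal sketch of Chamfer / P2S / normal-consistency evaluation between a
# reconstructed mesh and a ground-truth mesh. Assumptions: 10k surface samples,
# NC defined as 1 - |cos| between matched normals, no unit rescaling.
import numpy as np
import trimesh


def eval_pair(pred_path, gt_path, n_samples=10000):
    pred = trimesh.load(pred_path, process=False)
    gt = trimesh.load(gt_path, process=False)

    # Sample points on both surfaces (also keep the face index of each sample).
    pred_pts, pred_fid = trimesh.sample.sample_surface(pred, n_samples)
    gt_pts, _ = trimesh.sample.sample_surface(gt, n_samples)

    # P2S: average distance from GT samples to the predicted surface.
    _, d_gt2pred, _ = trimesh.proximity.closest_point(pred, gt_pts)
    p2s = d_gt2pred.mean()

    # Chamfer: symmetric average of the two directed surface distances.
    _, d_pred2gt, gt_tri = trimesh.proximity.closest_point(gt, pred_pts)
    chamfer = 0.5 * (d_gt2pred.mean() + d_pred2gt.mean())

    # Normal consistency error: compare the normal of each predicted sample
    # with the normal of its nearest GT face (one common variant; lower is better).
    n_pred = pred.face_normals[pred_fid]
    n_gt = gt.face_normals[gt_tri]
    cos = np.abs((n_pred * n_gt).sum(axis=1))
    nc_error = 1.0 - cos.mean()

    return {"chamfer": chamfer, "p2s": p2s, "nc": nc_error}
```

If the paper reports Chamfer/P2S in centimeters, the distances above would additionally need to be scaled to the GT mesh's units, which is another place where retrained-vs-reported numbers can silently diverge.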