
About the dataset split of Grasp-Anything++ training and val #11

Open
unira-zwj opened this issue Oct 19, 2024 · 4 comments

@unira-zwj

Thanks for sharing such great work.

I noticed in the code that the number of entries in split/grasp-anything++/train/seen.obj is 14516, in split/grasp-anything++/test/seen.obj is 573, and in split/grasp-anything++/test/unseen.obj is 230.

Are these numbers the correct setting?

@andvg3
Collaborator

andvg3 commented Oct 25, 2024

Hi @unira-zwj ,

To some extent, yes. I kept the numbers small for testing. For more comprehensive results, you can increase the number of samples by appending more sample indices to seen.obj and unseen.obj.
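For anyone who wants to try this, a minimal sketch of appending indices to a split file. This assumes the .obj split files are pickled Python lists of sample IDs, which may not match the repo's actual serialization; check the data loader before relying on it. The demo runs against a throwaway file, not the real split.

```python
import os
import pickle
import tempfile

# Hypothetical helper: assumes a split file (e.g.
# split/grasp-anything++/test/seen.obj) is a pickled Python list of
# sample IDs -- verify the real format in the repo's data loader first.
def extend_split(path, extra_ids):
    with open(path, "rb") as f:
        ids = pickle.load(f)
    existing = set(ids)
    ids.extend(i for i in extra_ids if i not in existing)  # skip duplicates
    with open(path, "wb") as f:
        pickle.dump(ids, f)
    return len(ids)

# Self-contained demo against a temporary file instead of the real split:
tmp = tempfile.NamedTemporaryFile(delete=False, suffix=".obj")
pickle.dump(["sample_0001", "sample_0002"], tmp)
tmp.close()
print(extend_split(tmp.name, ["sample_0002", "sample_0003"]))  # → 3
os.unlink(tmp.name)
```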

@Pierre0089

Hi @unira-zwj, @andvg3

On my side, I also found that split/grasp-anything++/test/seen.obj contains 573 entries, and that there are 1376 seen grasp files. I evaluated GR-ConvNet and got an IoU result of 0.28, not the 0.37 reported in Table 1. What results do you get?

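For context on the metric being compared, here is a simplified sketch of rectangle IoU (Jaccard index) as commonly used in grasp-detection benchmarks, reduced to axis-aligned boxes for illustration. The actual evaluation typically uses rotated grasp rectangles and additionally requires the predicted angle to be within 30 degrees of the ground truth, so this is not the exact procedure used in the paper.

```python
# Simplified IoU for two axis-aligned rectangles; real grasp benchmarks
# use rotated rectangles plus an angle-difference threshold.
def rect_iou(a, b):
    """a, b: rectangles as (x1, y1, x2, y2) with x1 < x2 and y1 < y2."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# A predicted grasp is commonly counted as correct when IoU > 0.25:
print(rect_iou((0, 0, 2, 2), (1, 1, 3, 3)) > 0.25)  # 1/7 ≈ 0.14 → False
```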

@andvg3
Collaborator

andvg3 commented Oct 28, 2024

Hello @Pierre0089 ,

The slight drop in results may be due to mismatched dependency versions. We used an older version of PyTorch (1.12); could you please check yours?

@Pierre0089

Hi,

Thanks for your quick feedback.

I followed the installation instructions at https://github.com/Fsoft-AIC/LGD?tab=readme-ov-file#installation. I just checked, and the installed PyTorch version is 1.12.1.
