
TS-CRF #10

Closed
GaoQiyu opened this issue Nov 5, 2019 · 11 comments


GaoQiyu commented Nov 5, 2019

I trained ResUNetIN14 with Adam (lr=0.1, weight_decay=1e-4) on the S3DIS fold 1 split,
but it only achieves 30% mIoU.
Then I tested the trained model and found that points on the same object get multiple labels.
I think maybe the reason is the lack of a CRF?

@chrischoy
Owner

Reposting the same response:

Instance Norm is worse than BN; that is why I removed all IN instantiations and added them only for experiments. Also, SGD is better than Adam if you use BN. A CRF doesn't really help much for static scenes.

What batch_size are you using? If you can't squeeze many samples into a batch, try increasing iter_size, which is effectively gradient accumulation.
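For reference, iter_size-style gradient accumulation can be sketched in plain NumPy on a toy linear model (the variable names and the micro-batch split here are illustrative, not the repo's actual training loop): averaging gradients over iter_size micro-batches before a single update gives the same step as one pass over the full effective batch, as long as the weights stay fixed during accumulation.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))      # one "effective batch" of 8 samples
y = rng.normal(size=(8,))
w = np.zeros(3)
lr, iter_size = 0.1, 4           # 4 micro-batches of 2 samples each

def grad(w, Xb, yb):
    # gradient of the mean squared error 0.5 * mean((Xb @ w - yb)**2)
    return Xb.T @ (Xb @ w - yb) / len(yb)

# accumulate averaged gradients over iter_size micro-batches, then update once
acc = np.zeros_like(w)
for i in range(iter_size):
    Xb, yb = X[2 * i:2 * i + 2], y[2 * i:2 * i + 2]
    acc += grad(w, Xb, yb) / iter_size
w_accum = w - lr * acc

# identical to a single step on the full effective batch
w_full = w - lr * grad(w, X, y)
assert np.allclose(w_accum, w_full)
```

With BatchNorm the equivalence is only approximate, since batch statistics are computed per micro-batch, but the gradient signal still matches that of a larger batch.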

Also, it is important to either set the collision labels to ignore_label or use a higher resolution.
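A minimal NumPy sketch of what a collision label is (the 255 sentinel and the toy points are assumptions; the repo's voxelizer and its ignore value may differ): when points carrying different semantic labels fall into the same voxel, the voxel's label is ambiguous, so it is marked with ignore_label and excluded from the loss.

```python
import numpy as np

IGNORE_LABEL = 255          # assumed sentinel; the repo's actual value may differ
voxel_size = 0.02           # 2 cm voxels, as in the thread

# toy point cloud: xyz coordinates and per-point semantic labels
points = np.array([[0.001, 0.0, 0.0],   # these two points fall in the same voxel...
                   [0.005, 0.0, 0.0],   # ...but carry different labels (a collision)
                   [0.050, 0.0, 0.0]])
labels = np.array([1, 2, 1])

keys = np.floor(points / voxel_size).astype(np.int64)
_, inverse = np.unique(keys, axis=0, return_inverse=True)

voxel_labels = np.empty(inverse.max() + 1, dtype=np.int64)
for v in range(len(voxel_labels)):
    in_voxel = labels[inverse == v]
    # one voxel, multiple distinct labels -> mark the voxel as ignore
    voxel_labels[v] = in_voxel[0] if len(set(in_voxel)) == 1 else IGNORE_LABEL
```

Downstream, the sentinel can be excluded from training, e.g. via the ignore_index argument of PyTorch's CrossEntropyLoss.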


GaoQiyu commented Nov 5, 2019

I use batch_size = 8.
Sorry, I can't find how to set the collision labels.
Also, I've applied for the ScanNet dataset many times but have had no reply.
Do you know how to get this dataset?


chrischoy commented Nov 6, 2019

There's only one way to get ScanNet: through the official channel.

30% is really low. How long did you train for, and what was the voxel size?

Sorry, the collision labels were generated by the GPU voxelizer, which we removed for the final version. However, the performance drop is insignificant on ScanNet.


GaoQiyu commented Nov 6, 2019

With voxel size 2 cm, trained for 50 epochs on S3DIS (30 epochs reached 30%, and 50 epochs is still 30%).
Do you mean that if I use the voxelizer in this code, I don't need to set collision labels at all?
30% is really frustrating.


chrischoy commented Nov 6, 2019

Haha, the Stanford dataset is extremely small.

30 epochs on the Stanford dataset is about 1k iterations with batch_size 8. However, 30% at 50 epochs seems quite problematic.
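As a rough sanity check of the numbers above (the ~270-sample count is inferred from "30 epochs ≈ 1k iterations at batch_size 8", not taken from the dataset itself):

```python
# assumed sample count, back-solved from "30 epochs ~ 1k iterations at batch_size 8"
samples, batch_size, epochs = 270, 8, 30
iters_per_epoch = samples // batch_size   # 33 iterations per epoch
total_iters = iters_per_epoch * epochs    # 990, i.e. roughly 1k for 30 epochs
```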

I'm currently under a bit of load, but let me get back to you with the Stanford training scripts.


GaoQiyu commented Nov 6, 2019

Thanks very much!


chrischoy commented Nov 6, 2019

This is a pretty old log that I found. I could not find one for ResUNet18, which is what you are using, but the trend should look similar.

The score is the overall accuracy.

0-15k: https://pastebin.com/X1y7TSvR
15k-30k: https://pastebin.com/57i4rRD5
30k-45k: https://pastebin.com/22vgGMev
45k-60k: https://pastebin.com/F5wkyQcD


GaoQiyu commented Nov 7, 2019

I am wondering whether it's a problem with the S3DIS data,
because it looks quite dirty compared with ScanNet.
Have you tried training a model only on S3DIS before?


chrischoy commented Nov 7, 2019

I've never trained S3DIS together with ScanNet before. Look at the log.

Are you using the provided data augmentation at all?


GaoQiyu commented Nov 8, 2019

I mean: have you trained Res16UNet34C or another point cloud segmentation model with the Minkowski Engine only on S3DIS fold 1?
Yes, I've followed your code, including the dataset and voxelizer parts, and set the parameters according to the log.

If it's not a problem with the dataset, I will check my code again.

@chrischoy
Owner

The Stanford training script has been added in commit 75c33f5.
