very small miou #19
Comments
My guess is that something goes wrong during training, since your training loss becomes NaN. Did you get the
Yes, I get this message.
And I also get this message, which I don't understand:
CUDA out of memory. Tried to allocate 802.00 MiB (GPU 1; 10.76 GiB total capacity; 8.28 GiB already allocated; 678.12 MiB free; 8.48 GiB reserved in total by PyTorch) (malloc at /pytorch/c10/cuda/CUDACachingAllocator.cpp:289)
It seems like you don't have enough GPU memory for the training. You can try training the model with a smaller feature map, like
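As a general note (not part of this repository's code), two common ways to reduce GPU memory pressure in a PyTorch training loop are a smaller batch size and automatic mixed precision. The sketch below is a minimal, hypothetical example assuming PyTorch 1.6+ (for torch.cuda.amp); the model, data, and hyperparameters are placeholders, not the actual ones used here.

import torch
import torch.nn as nn
from torch.cuda.amp import autocast, GradScaler

# Placeholder model/data for illustration; substitute the repo's network and data loader.
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 20, 1)).cuda()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scaler = GradScaler()  # handles loss scaling for mixed precision

for step in range(10):
    # A batch size of 1 keeps peak activation memory low.
    inputs = torch.randn(1, 3, 256, 256, device='cuda')
    labels = torch.randint(0, 20, (1, 256, 256), device='cuda')

    optimizer.zero_grad()
    with autocast():                      # run the forward pass in float16 where safe
        loss = criterion(model(inputs), labels)
    scaler.scale(loss).backward()         # scale the loss to avoid float16 underflow
    scaler.step(optimizer)
    scaler.update()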
If I set the batch size to 1, will it affect the accuracy?
I haven't tried training with batch size 1, but it should give a similar result.
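If accuracy with batch size 1 is a concern, gradient accumulation is a common way to keep the effective batch size while only holding one sample in memory per step. This is a minimal sketch with placeholder model and data, not the repository's actual training code.

import torch
import torch.nn as nn

# Hypothetical stand-ins; swap in the real model, data loader, and loss.
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 20, 1)).cuda()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

accum_steps = 4  # four batches of size 1 behave roughly like one update with batch size 4

optimizer.zero_grad()
for step in range(100):
    inputs = torch.randn(1, 3, 256, 256, device='cuda')
    labels = torch.randint(0, 20, (1, 256, 256), device='cuda')

    loss = criterion(model(inputs), labels) / accum_steps  # average over the virtual batch
    loss.backward()                                        # gradients accumulate in .grad

    if (step + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()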
Thank you for your help and for this great work.
Closing this issue for now.
I trained this net but got a very small mIoU:
Validation per class iou:
car : 4.74%
bicycle : 0.03%
motorcycle : 0.03%
truck : 0.14%
bus : 0.55%
person : 0.03%
bicyclist : 0.05%
motorcyclist : 0.00%
road : 0.44%
parking : 0.85%
sidewalk : 12.27%
other-ground : 0.08%
building : 3.78%
fence : 1.65%
vegetation : 1.42%
trunk : 1.62%
terrain : 4.21%
pole : 0.37%
traffic-sign : 0.17%
Current val miou is 1.707 while the best val miou is 1.707
Current val loss is 3.895
epoch 6 iter 2610, loss: nan
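Since the log above ends with loss: nan, a common debugging step is to check that the loss is finite before back-propagating and to clip gradients so one bad batch cannot corrupt the weights. The sketch below is only a hypothetical illustration of that pattern; the model, data, learning rate, and clipping threshold are placeholders, not this repository's settings.

import torch
import torch.nn as nn

# Placeholder model/data; in practice use the repo's network and SemanticKITTI loader.
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 20, 1)).cuda()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for step in range(100):
    inputs = torch.randn(2, 3, 128, 128, device='cuda')
    labels = torch.randint(0, 20, (2, 128, 128), device='cuda')

    optimizer.zero_grad()
    loss = criterion(model(inputs), labels)

    if not torch.isfinite(loss):
        # Skip the update instead of propagating NaN/Inf into the weights.
        print(f'step {step}: non-finite loss {loss.item()}, skipping batch')
        continue

    loss.backward()
    # Clip gradients so a single extreme batch cannot blow up the parameters.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=10.0)
    optimizer.step()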