Train on a custom dataset #5
Comments
Hi, the repository you mentioned claims they successfully trained this model. If you have concerns, I suggest taking a subset of ~100 images from your new dataset and trying to overfit their segmentation masks, just to be confident that training works. Another issue is that the '0' label here is "background". Some people say it's not a good idea to use background as a class, and some datasets, like DeepDrive, don't even have it, so you may want to re-train on VOC after changing all background labels to the "void" label.
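A minimal sketch of the "overfit a small subset" sanity check suggested above. The directory layout, the helper name, and the choice of 255 as the void/ignore value are assumptions for illustration, not part of this repository:

```python
# Sketch of the overfitting sanity check: pick ~100 image/mask pairs and try
# to overfit them before training on the full dataset. Paths and the void
# value (255, the usual VOC ignore label) are assumptions.
import random
from pathlib import Path

import numpy as np
from PIL import Image

IMAGE_DIR = Path("VOCdevkit/VOC2012/JPEGImages")        # assumed layout
MASK_DIR = Path("VOCdevkit/VOC2012/SegmentationClass")  # assumed layout
VOID_LABEL = 255  # commonly used as the ignore/void label in VOC-style masks

def load_pair(stem: str):
    """Load one image/mask pair and map background (0) to the void label."""
    image = np.array(Image.open(IMAGE_DIR / f"{stem}.jpg"))
    mask = np.array(Image.open(MASK_DIR / f"{stem}.png"))
    mask = np.where(mask == 0, VOID_LABEL, mask)  # drop the background class
    return image, mask

# Take ~100 random pairs and check that the model can drive the loss to ~0 on them.
stems = sorted(p.stem for p in MASK_DIR.glob("*.png"))
subset = random.sample(stems, k=min(100, len(stems)))
pairs = [load_pair(s) for s in subset]
```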
I have modified the values of PASCAL_VOC_classes in utils.py with 16 new values for my own dataset. When I try to train I get the error below, which I find strange: the tensor order appears backwards as well as mismatched.
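For reference, a hypothetical sketch of what such a change to utils.py could look like. The index-to-name dictionary structure and the placeholder class names are assumptions; match whatever format PASCAL_VOC_classes actually uses in this repository:

```python
# Hypothetical excerpt of utils.py: swapping the 21 VOC classes for a custom
# 16-class mapping. The index -> name structure and the placeholder names are
# assumptions, not the repository's actual definition.
PASCAL_VOC_classes = {
    0: "background",
    1: "class_01",
    2: "class_02",
    # ... one entry per custom class ...
    15: "class_15",
}

NUM_CLASSES = len(PASCAL_VOC_classes)  # the model's output layer must match this
```

Note that changing only this class list is usually not enough: the number of channels in the model's output layer and the label IDs present in the masks all have to agree, which is the kind of mismatch the shape error above points at.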
@swarmt did you manage to solve this problem, please? I am trying to retrain the network with just one class label and I'm getting the same error you mentioned.
Could you tell me your Keras version and TF version? Thanks a lot!
@swarmt If you check the masks, they have label IDs from 0 to 21. Since you want to train the model on 16 labels, change the label ID for the excluded classes to void, i.e. 21. For example: np.where(mask == 19, 21, mask). Alternatively, you can simply remove the image/mask pairs containing a label ID you want to exclude. Download the mask dataset for Pascal VOC from here: https://www.dropbox.com/s/oeu149j8qtbs1x0/SegmentationClassAug.zip
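A short sketch of the remapping suggested above. The void value of 21 follows the comment; the set of kept label IDs and the file handling are assumptions for illustration:

```python
# Remap every label ID that is not in the kept set to the void label (21),
# as suggested above. np.isin handles all excluded IDs at once instead of
# chaining np.where(mask == 19, 21, mask) per excluded class.
import numpy as np
from PIL import Image

VOID_LABEL = 21
KEPT_LABELS = list(range(16))  # assumed 16-class subset; adjust to your dataset

def remap_mask(mask_path: str) -> np.ndarray:
    """Load a mask and send every excluded label ID to the void label."""
    mask = np.array(Image.open(mask_path))
    return np.where(np.isin(mask, KEPT_LABELS), mask, VOID_LABEL)
```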
Hello,
is it possible to train this model on a different dataset than VOC, or at least fine-tune it? Have you tried something similar?
As mentioned in this repository, there is a problem with Keras implementations and deeplab model training/fine-tuning.