
Error when reducing the number of classes from 32 to 8 #2

Open
YasarL opened this issue Dec 11, 2019 · 1 comment
YasarL commented Dec 11, 2019

Hey there,
has anyone tried to reduce the number of classes? In my case it's about reducing from 32 to 8. Is that possible at all with this implementation?

When I adjust the label_codes and label_names dictionaries to 8 entries and also change
model = get_small_unet(n_filters = 32)
to
model = get_small_unet(n_filters = 8),
training still fails.

I hope you understand my problem. If more information is required from my side, please tell me and I'll provide it! I'm new to posting questions on GitHub ;)

The error I get when training the network:

Epoch 1/2
Found 3312 images belonging to 1 classes.
Found 3312 images belonging to 1 classes.

ValueError Traceback (most recent call last)
in <module>()
6 #result = model.fit_generator(TrainAugmentGenerator(), steps_per_epoch=18 ,
7 validation_data = ValAugmentGenerator(),
----> 8 validation_steps = validation_steps, epochs=num_epochs, callbacks=callbacks)
9 model.save_weights("camvid_model_150_epochs.h5", overwrite=True)

/net/store/nbp/projects/affordance_prediction/alimberg/tf2/local/lib/python2.7/site-packages/tensorflow_core/python/keras/engine/training.pyc in fit_generator(self, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, validation_freq, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch)
1295 shuffle=shuffle,
1296 initial_epoch=initial_epoch,
-> 1297 steps_name='steps_per_epoch')
1298
1299 def evaluate_generator(self,

/net/store/nbp/projects/affordance_prediction/alimberg/tf2/local/lib/python2.7/site-packages/tensorflow_core/python/keras/engine/training_generator.pyc in model_iteration(model, data, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, validation_freq, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch, mode, batch_size, steps_name, **kwargs)
263
264 is_deferred = not model._is_compiled
--> 265 batch_outs = batch_function(*batch_data)
266 if not isinstance(batch_outs, list):
267 batch_outs = [batch_outs]

/net/store/nbp/projects/affordance_prediction/alimberg/tf2/local/lib/python2.7/site-packages/tensorflow_core/python/keras/engine/training.pyc in train_on_batch(self, x, y, sample_weight, class_weight, reset_metrics)
971 outputs = training_v2_utils.train_on_batch(
972 self, x, y=y, sample_weight=sample_weight,
--> 973 class_weight=class_weight, reset_metrics=reset_metrics)
974 outputs = (outputs['total_loss'] + outputs['output_losses'] +
975 outputs['metrics'])

/net/store/nbp/projects/affordance_prediction/alimberg/tf2/local/lib/python2.7/site-packages/tensorflow_core/python/keras/engine/training_v2_utils.pyc in train_on_batch(model, x, y, sample_weight, class_weight, reset_metrics)
251 x, y, sample_weights = model._standardize_user_data(
252 x, y, sample_weight=sample_weight, class_weight=class_weight,
--> 253 extract_tensors_from_dataset=True)
254 batch_size = array_ops.shape(nest.flatten(x, expand_composites=True)[0])[0]
255 # If model._distribution_strategy is True, then we are in a replica context

/net/store/nbp/projects/affordance_prediction/alimberg/tf2/local/lib/python2.7/site-packages/tensorflow_core/python/keras/engine/training.pyc in _standardize_user_data(self, x, y, sample_weight, class_weight, batch_size, check_steps, steps_name, steps, validation_split, shuffle, extract_tensors_from_dataset)
2536 # Additional checks to avoid users mistakenly using improper loss fns.
2537 training_utils.check_loss_and_target_compatibility(
-> 2538 y, self._feed_loss_fns, feed_output_shapes)
2539
2540 # If sample weight mode has not been set and weights are None for all the

/net/store/nbp/projects/affordance_prediction/alimberg/tf2/local/lib/python2.7/site-packages/tensorflow_core/python/keras/engine/training_utils.pyc in check_loss_and_target_compatibility(targets, loss_fns, output_shapes)
741 raise ValueError('A target array with shape ' + str(y.shape) +
742 ' was passed for an output of shape ' + str(shape) +
--> 743 ' while using as loss ' + loss_name + '. '
744 'This loss expects targets to have the same shape '
745 'as the output.')

ValueError: A target array with shape (5, 256, 256, 8) was passed for an output of shape (5, 256, 256, 32) while using as loss categorical_crossentropy. This loss expects targets to have the same shape as the output.
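Editorial note on the error above: the targets now have 8 channels (matching the reduced label_codes/label_names dictionaries), but the network's output still has 32, so it is the model's final layer, not n_filters, that determines the class count. A minimal NumPy sketch of the shape check that raises this ValueError, assuming the usual U-Net pattern where the last Conv2D's filter count equals the number of classes (the variable names here are illustrative, not from the repository):

```python
import numpy as np

n_classes = 8  # entries in label_codes / label_names after the reduction
# n_filters controls the width of the convolution blocks and is unrelated
# to the number of output classes.

# Shapes taken from the traceback: the generator one-hot encodes 8 classes,
# but the model's final layer still emits 32 channels.
batch, h, w = 5, 256, 256
targets = np.zeros((batch, h, w, n_classes))
model_output = np.zeros((batch, h, w, 32))  # final layer still hard-coded to 32

# categorical_crossentropy requires targets and outputs to share a shape;
# this mismatch is exactly what triggers the ValueError above.
assert targets.shape != model_output.shape

# After changing the final layer to produce n_classes channels
# (e.g. Conv2D(n_classes, (1, 1), activation='softmax') in a typical U-Net),
# the shapes line up and training can proceed.
fixed_output = np.zeros((batch, h, w, n_classes))
assert targets.shape == fixed_output.shape
```

In short: keep or tune n_filters independently, and make sure the channel count of the network's last layer equals the number of classes in the label dictionaries.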


YasarL commented Dec 12, 2019

Solved.
