Add dim check while calculating mIoU&pixAcc for segmentation test #991

Open · wants to merge 4 commits into base: master
2 changes: 1 addition & 1 deletion docs/model_zoo/segmentation.rst
@@ -31,7 +31,7 @@ Table of pre-trained models for semantic segmentation and their performance.
The test script :download:`Download test.py<../../scripts/segmentation/test.py>` can be used for
evaluating the models (VOC results are evaluated using the official server). For example ``fcn_resnet50_ade``::

-    python test.py --dataset ade20k --model-zoo fcn_resnet50_ade --eval
+    python test.py --dataset ade20k --pretrained --model fcn --backbone resnet50 --eval
Contributor:
why make this change?

Author (@chunhanl), Oct 18, 2019:
Hello,
According to parse_args() in test.py, the argument parser no longer defines a 'model-zoo' argument.

Contributor:
Please add that back instead. Not sure who removed that.

Author:
I'm not sure whether that would conflict with these settings:

model_prefix = args.model + '_' + args.backbone
if 'pascal' in args.dataset:
    model_prefix += '_voc'
    withQuantization = True if (args.backbone in ['resnet101']) else withQuantization
elif args.dataset == 'coco':
    model_prefix += '_coco'
    withQuantization = True if (args.backbone in ['resnet101']) else withQuantization
elif args.dataset == 'ade20k':
    model_prefix += '_ade'
elif args.dataset == 'citys':
    model_prefix += '_citys'
else:
    raise ValueError('Unsupported dataset {} used'.format(args.dataset))
if args.ngpus > 0:
    withQuantization = False
if withQuantization and args.quantized:
    model_prefix += '_int8'
if not args.deploy:
    if args.calibration:
        args.pretrained = True
    # create network
    if args.pretrained:
        model = get_model(model_prefix, pretrained=True)
        model.collect_params().reset_ctx(ctx=args.ctx)
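
For illustration, a minimal sketch of how the removed option could be restored without clashing with the prefix logic above; the flag name matches the old docs, but the precedence rule (an explicit model-zoo name bypassing the constructed prefix) is an assumption, not part of this PR:

import argparse

def parse_args():
    parser = argparse.ArgumentParser(description='segmentation test (sketch)')
    # Existing options elided; only the restored flag is shown.
    parser.add_argument('--model-zoo', type=str, default=None,
                        help='full model-zoo name, e.g. fcn_resnet50_ade; '
                             'when set, overrides --model/--backbone')
    return parser.parse_args()

args = parse_args()
# Assumed precedence (illustrative only): an explicit model-zoo name
# bypasses the dataset-suffix construction shown above.
if args.model_zoo is not None:
    model_prefix = args.model_zoo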


The training commands work with the script: :download:`Download train.py<../../scripts/segmentation/train.py>`

16 changes: 13 additions & 3 deletions gluoncv/utils/metrics/segmentation.py
@@ -7,15 +7,19 @@
__all__ = ['SegmentationMetric', 'batch_pix_accuracy', 'batch_intersection_union',
           'pixelAccuracy', 'intersectionAndUnion']


class SegmentationMetric(EvalMetric):
    """Computes pixAcc and mIoU metric scores
    """


    def __init__(self, nclass):
        super(SegmentationMetric, self).__init__('pixAcc & mIoU')
        self.nclass = nclass
        self.lock = threading.Lock()
        self.reset()


    def update(self, labels, preds):
        """Updates the internal evaluation result.

@@ -27,6 +31,8 @@ def update(self, labels, preds):
        preds : 'NDArray' or list of `NDArray`
            Predicted values.
        """


        def evaluate_worker(self, label, pred):
            correct, labeled = batch_pix_accuracy(
                pred, label)
@@ -38,18 +44,20 @@ def evaluate_worker(self, label, pred):
                self.total_inter += inter
                self.total_union += union


        if isinstance(preds, mx.nd.NDArray):
            evaluate_worker(self, labels, preds)
        elif isinstance(preds, (list, tuple)):
            threads = [threading.Thread(target=evaluate_worker,
                                        args=(self, label, pred),
                                        )
                       for (label, pred) in zip(labels, preds)]
            for thread in threads:
                thread.start()
            for thread in threads:
                thread.join()


    def get(self):
        """Gets the current evaluation result.

@@ -63,13 +71,15 @@
        mIoU = IoU.mean()
        return pixAcc, mIoU


    def reset(self):
        """Resets the internal evaluation result to initial state."""
        self.total_inter = 0
        self.total_union = 0
        self.total_correct = 0
        self.total_label = 0


def batch_pix_accuracy(output, target):
    """PixAcc"""
    # inputs are NDarray, output 4D, target 3D
@@ -79,7 +89,7 @@
    target = target.asnumpy().astype('int64') + 1

    pixel_labeled = np.sum(target > 0)
-    pixel_correct = np.sum((predict == target)*(target > 0))
+    pixel_correct = np.sum((predict == target) * (target > 0))

    assert pixel_correct <= pixel_labeled, "Correct area should be smaller than Labeled"
    return pixel_correct, pixel_labeled
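
Incidentally, the dim check named in the PR title could be made explicit at the top of batch_pix_accuracy. A minimal sketch, assuming MXNet NDArray inputs (the assert messages are illustrative, not code from this PR):

    # Illustrative dim check: output is the 4D prediction (N, C, H, W),
    # target the 3D ground-truth mask (N, H, W).
    assert output.ndim == 4, 'expected 4D output (N, C, H, W), got %dD' % output.ndim
    assert target.ndim == 3, 'expected 3D target (N, H, W), got %dD' % target.ndim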
@@ -119,7 +129,7 @@ def pixelAccuracy(imPred, imLab):
    # Remove classes from unlabeled pixels in gt image.
    # We should not penalize detections in unlabeled portions of the image.
    pixel_labeled = np.sum(imLab > 0)
-    pixel_correct = np.sum((imPred == imLab)*(imLab > 0))
+    pixel_correct = np.sum((imPred == imLab) * (imLab > 0))
    pixel_accuracy = 1.0 * pixel_correct / pixel_labeled
    return (pixel_accuracy, pixel_correct, pixel_labeled)
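
For reference, a minimal usage sketch of the metric this file defines; the shapes and class count are hypothetical, and the import path assumes gluoncv.utils.metrics re-exports SegmentationMetric:

import mxnet as mx
from gluoncv.utils.metrics import SegmentationMetric

# Hypothetical setup: batch of 2 images, 21 classes, 480x480 masks.
metric = SegmentationMetric(nclass=21)
preds = mx.nd.random.uniform(shape=(2, 21, 480, 480))      # 4D output (N, C, H, W)
labels = mx.nd.random.randint(0, 21, shape=(2, 480, 480))  # 3D ground truth (N, H, W)
metric.update(labels.astype('float32'), preds)
pixAcc, mIoU = metric.get()
print('pixAcc: %.4f, mIoU: %.4f' % (pixAcc, mIoU))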

2 changes: 1 addition & 1 deletion scripts/segmentation/test.py
@@ -120,7 +120,7 @@ def test(model, args, input_transform):
    tbar = tqdm(test_data)
    for i, (data, dsts) in enumerate(tbar):
        if args.eval:
-            predicts = [pred[0] for pred in evaluator.parallel_forward(data)]
+            predicts = [pred for pred in evaluator.parallel_forward(data)]
Contributor:
This is incorrect. The output of a GluonCV segmentation network is always a list.

        targets = [target.as_in_context(predicts[0].context) \
                   for target in dsts]
        metric.update(targets, predicts)
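
Given that comment, one way to reconcile both shapes is a type check at this call site; a sketch only (the isinstance unwrap is an assumption, not code from this PR):

        # Sketch: unwrap each per-device result only when it is a list/tuple,
        # so list-of-NDArray and list-of-list outputs are both handled.
        outputs = evaluator.parallel_forward(data)
        predicts = [pred[0] if isinstance(pred, (list, tuple)) else pred
                    for pred in outputs]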