
Anchors scaling for each feature map output #70

Open

rjtshrm opened this issue Oct 18, 2021 · 1 comment

Comments

rjtshrm commented Oct 18, 2021

anchors = [(a[0]/stride, a[1]/stride) for a in anchors]

I have a question: since YOLOv3 has anchors for three different scales (as mentioned in the paper), why do we need to scale down the anchors again for each output? The anchor scaling is a bit hard for me to understand, as I am new to anchor-based detection.

CHENHUI-X commented

Hi, let me try to answer. The anchor box sizes we supply are given in pixels of the original image, but predictions are made on the feature maps, which are smaller than the image. So we divide each anchor by the stride of that output (equivalently, multiply by a coefficient of 1/stride) to express the anchors in feature-map units for each of the different-sized feature maps.
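A minimal sketch of the idea, using the standard COCO anchors for YOLOv3's coarsest scale as an example (the exact anchor values and input size are assumptions, not taken from this repo):

```python
# YOLOv3-style anchor scaling sketch (illustrative values, not from this repo).
# Anchors are defined in pixels relative to the original input image.
anchors = [(116, 90), (156, 198), (373, 326)]  # large-object anchors (COCO defaults)

# For a 416x416 input, the 13x13 output has stride 416 / 13 = 32:
# one feature-map cell covers a 32x32 pixel region of the image.
stride = 32

# Divide by the stride so each anchor is expressed in feature-map cells,
# matching the grid on which the network predicts.
anchors = [(a[0] / stride, a[1] / stride) for a in anchors]
print(anchors)  # [(3.625, 2.8125), (4.875, 6.1875), (11.65625, 10.1875)]
```

The same conversion is repeated per output scale with that scale's own stride (e.g. 16 for the 26x26 map, 8 for the 52x52 map), which is why the line in question runs once for each feature map.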

2 participants