Predict bounding boxes on a new image? #4
Hi, if you download an image from the internet, it should actually work with the code... The input data to the localizer is an RGB image, usually resized to …
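For illustration, preparing an arbitrary RGB image for the localizer could look like the sketch below. The (224, 224) target resolution and the [0, 1] scaling are assumptions for the example, not values taken from this repository:

```python
import numpy as np

def resize_for_localizer(image: np.ndarray, size=(224, 224)) -> np.ndarray:
    """Nearest-neighbour resize of an H x W x 3 uint8 RGB array.

    The (224, 224) resolution is a placeholder assumption; use
    whatever input size the trained localizer actually expects.
    """
    h, w = image.shape[:2]
    # Map each output row/column back to a source row/column.
    rows = (np.arange(size[0]) * h // size[0]).clip(0, h - 1)
    cols = (np.arange(size[1]) * w // size[1]).clip(0, w - 1)
    resized = image[rows[:, None], cols]
    # Scale pixel values to [0, 1] floats, a common network-input convention.
    return resized.astype(np.float32) / 255.0
```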
Hi, if the localizer crops a bigger region containing the target object, the assessor will output a ratio less than 1, which helps the localizer crop a smaller region containing more of the target object. This will give the desired output. (PS: Any help to clarify this doubt is very much appreciated; please correct me if I understood it conceptually wrong.)
This case is handled by preparing the dataset in such a way that if the localizer crops only a portion of the object the assessor predicts a number lower than one. Remember: The objective of the assessor is to predict the intersection over union of the cropped image and the target object!
There are several reasons why this might be odd to you:
I hope this helps^^
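The assessor's training target described above (intersection over union of the crop and the target object) can be sketched as a plain IoU computation. The box format and example coordinates are illustrative, not taken from the repository:

```python
def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)

object_box = (20, 20, 60, 60)    # 40x40 target object
loose_crop = (0, 0, 80, 80)      # contains the whole object, plus background
partial_crop = (20, 20, 40, 60)  # covers only half of the object
```

Both a crop that is too loose and a crop that covers only part of the object score below 1 (here 0.25 and 0.5 respectively), so in either case the assessor signals the localizer to adjust toward a tighter, complete crop.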
Thanks @Bartzi for replying; your explanation really helped. But I would like to clarify the explanation you gave about the issue with the lion image.
Hmm, good questions 😉
Hi,
I read the paper and it is really interesting work.
I have a question regarding it: will the network be able to predict a bounding box on a new image (let's say a random image of a figure skater downloaded from the internet)?
If it is possible, is there a test script available that can use the weights trained on your figure-skating dataset?
Also, can you share details of what the input data to the assessor and localizer looks like?
Thanks,
Rahul