
how to measure the performance. #18

Open
ambl2357 opened this issue Feb 22, 2018 · 15 comments

Comments

@ambl2357

I do not know how to measure the performance. Could you point me to a file or a method?

@nqanh
Owner

nqanh commented Feb 22, 2018

You just need to merge all the outputted affordance maps into one map (please see the paper for details). Then use the Weighted F-measure code from here to evaluate the results against the ground truths.

@ambl2357
Author

ambl2357 commented Feb 22, 2018 via email

@ClaireTun

Hi, what is meant by "merge all the outputted affordance maps into one map"? How do I evaluate one kind of affordance, such as 'grasp'? There can be more than one affordance in one picture, so how do I calculate the F-measure for each one separately?

@nqanh
Owner

nqanh commented May 2, 2018

The network outputs one affordance map for each detected object (as in the demo). Since an image may have many objects, we need to merge them into one "predicted" image. Then the F-measure will calculate the accuracy for each affordance (on this "merged" output).
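For reference, a minimal sketch of what this merging step could look like, assuming each detected object's affordance mask has already been pasted back into full-image coordinates as an integer label map (0 = background, positive values = affordance ids). The function and variable names here are illustrative, not part of the repo:

```python
import numpy as np

def merge_affordance_maps(object_maps, image_shape):
    """Overlay per-object affordance label maps into a single full-image map.

    object_maps : list of (H, W) integer arrays, one per detected object,
                  where 0 is background and positive values are affordance ids.
    image_shape : (H, W) of the original image.
    """
    merged = np.zeros(image_shape, dtype=np.uint8)
    for obj_map in object_maps:
        # Non-background pixels of this object overwrite the merged map.
        mask = obj_map > 0
        merged[mask] = obj_map[mask]
    return merged
```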

@ClaireTun

Hi nqanh, about the F-measure for each affordance, I still have questions. In the F-measure code, the GT is of logical type, i.e. background vs. foreground. But in the foreground we have different objects and affordances. Do I have to separate all the affordances and make new GTs?

@nqanh
Owner

nqanh commented May 8, 2018

We can save all the affordances in one map. Then, for each affordance, we select its id against the background, so it becomes a logical map when compared with the GT.
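A minimal sketch of this id-selection step, assuming the merged prediction and the ground truth are both integer label maps (0 = background). The weighted F-measure itself is the MATLAB code linked earlier, so only the preparation of the logical masks it expects is shown; the function name is illustrative:

```python
import numpy as np

def binary_masks_for_affordance(pred_map, gt_map, affordance_id):
    """Turn full-image label maps into logical masks for one affordance.

    pred_map, gt_map : (H, W) integer arrays where 0 is background and
                       positive values are affordance ids.
    affordance_id    : the id of the affordance being evaluated
                       (e.g. the id assigned to 'grasp').
    """
    pred_mask = (pred_map == affordance_id)   # logical prediction
    gt_mask = (gt_map == affordance_id)       # logical ground truth
    return pred_mask, gt_mask

# One pair of logical maps per affordance id per image is what the
# Weighted F-measure code takes as input.
```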

@ClaireTun

Well, ok! Thank you for your answer and patience! But I am still a little confused. Would you please share this part of the code, like how to save everything in one map and select an id?

@nqanh
Owner

nqanh commented May 10, 2018

Please see #19
(Also, please use the search function; most of these problems have already been answered. Thanks!)

@ClaireTun

Wow~ Thank you!!

@chaundm

chaundm commented Aug 21, 2019

The network outputs one affordance map for each detected object (as in the demo). Since an image may have many objects, we need to merge them into one "predicted" image. Then the F-measure will calculate the accuracy for each affordance (on this "merged" output).

Hi nqanh,
Please help me by explaining how to "merge affordance maps" into one map. Could you give more details about that? For example, if we have 3 .png files (containing the affordance maps) and 3 .sm files, which file type (.sm or .png) should we use for the merge, and how do we merge them, please?

@ambl2357
Author

ambl2357 commented Aug 21, 2019 via email

@chaundm

chaundm commented Aug 30, 2019

Wow~ Thank you!!

Hi ClaireTun,
I have the same trouble as you. Could you tell me how to "merge all of the affordances into one map"? And which file type is used for the F_measure code (.sm or .png)? I found that the ground truth is an .sm file in black and white, while the prediction from demo_img.py is a .png file with each predicted mask in its affordance colour. How do I combine these files to run the F_measure code in MATLAB, please?

@ClaireTun

ClaireTun commented Aug 30, 2019 via email

@Rechardgu

Wow~ Thank you!!

Hi ClaireTun,
I have the same trouble as you. Could you tell me how to "merge all of the affordances into one map"? And which file type is used for the F_measure code (.sm or .png)? I found that the ground truth is an .sm file in black and white, while the prediction from demo_img.py is a .png file with each predicted mask in its affordance colour. How do I combine these files to run the F_measure code in MATLAB, please?

Hi chaundm, have you solved the problem?

@Rechardgu

Well, I finished the project half a year ago, and I have already forgotten some of the details. Next week I will return to school; then I can review the code and try to answer your questions.


Hi ClaireTun, my ground truth and predicted maps are .png files, with each mask in its affordance colour. How do I use the weighted F_measure?
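One possible way to get from colour-coded .png masks back to the per-affordance logical maps the weighted F-measure expects is to invert the colour table that was used when saving the masks. A minimal sketch, assuming the (R, G, B) values below are placeholders to be replaced with the actual colours used (e.g. the ones defined in demo_img.py); all names here are illustrative, not part of the repo:

```python
import numpy as np
from PIL import Image

# Illustrative colour table only -- replace with the actual (R, G, B) values
# used when the masks were saved.
COLOR_TO_ID = {
    (0, 0, 0): 0,        # background
    (255, 0, 0): 1,      # e.g. 'grasp'
    (0, 255, 0): 2,      # e.g. 'cut'
    # ... one entry per affordance colour
}

def color_png_to_label_map(path):
    """Convert a colour-coded mask PNG back to an integer affordance-id map."""
    rgb = np.array(Image.open(path).convert("RGB"))
    labels = np.zeros(rgb.shape[:2], dtype=np.uint8)
    for color, aff_id in COLOR_TO_ID.items():
        labels[np.all(rgb == color, axis=-1)] = aff_id
    return labels

# With both prediction and GT converted to id maps, each affordance can be
# binarised (labels == aff_id) and passed to the weighted F-measure code.
```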
