
How to determine the boundary area of the object of interest? #22

Open
YH-2023 opened this issue Oct 17, 2023 · 3 comments

Comments

@YH-2023

YH-2023 commented Oct 17, 2023

1. How do you determine the boundary area of the object of interest?
2. The paper mentions that the class-agnostic activation maps can be used to extract class-agnostic object bounding boxes for localization. I could not find the corresponding code; where is this implemented?

@Sierkinhane
Member

Hi, here is the code for extracting object bounding boxes.
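Roughly, the extraction thresholds the class-agnostic activation map and takes the bounding rectangle of the resulting foreground region. A minimal sketch of that idea (the function name, the 0.5 threshold, and the OpenCV-based implementation are assumptions for illustration, not this repository's exact code):

```python
import cv2
import numpy as np

def extract_bbox(activation_map: np.ndarray, threshold: float = 0.5):
    """activation_map: (H, W) array normalized to [0, 1]; returns (x1, y1, x2, y2) or None."""
    # Binarize the map: pixels above the threshold count as foreground.
    mask = (activation_map >= threshold).astype(np.uint8)
    # Find connected foreground regions (OpenCV >= 4 return signature).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    # Keep the largest region and return its axis-aligned bounding box.
    largest = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(largest)
    return x, y, x + w, y + h
```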

@YH-2023
Author

YH-2023 commented Oct 18, 2023

@Sierkinhane Hi,

  1. Instead of the bounding box extraction method in the code, can I perform conventional object detection, i.e., classification and regression, directly on the foreground features? Would that work for ordinary object detection?
    In other words, if I want to use CCAM for object detection, should I run classification and regression on the separated foreground features, or on the extracted object boundary regions?
  2. Can CCAM locate all object instances in an image, or only objects of the classes seen in the training set?
  3. A question about the code: is the `test` call below used for evaluation?
    best_CorLoc, best_threshold = test(config, test_loader, model, criterion, epoch)

    Thanks

@Sierkinhane
Member

Unfortunately, the proposed method currently only works well for foreground-background separation. We tried using the learned features for classification, but it did not achieve better results than traditional classification methods. We argue that the model cannot learn discriminative features via the proposed foreground-background contrastive loss alone. Maybe you can try involving more activation channels and building additional contrastive losses on top of the foreground-background contrast.
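For illustration, a minimal sketch of the kind of foreground-background contrastive term meant here (a simplified stand-in, not this repository's exact loss; the function name and the (N, D) pooled-embedding shapes are assumptions):

```python
import torch
import torch.nn.functional as F

def fg_bg_contrast(fg_feats: torch.Tensor, bg_feats: torch.Tensor) -> torch.Tensor:
    """fg_feats, bg_feats: (N, D) pooled foreground/background embeddings; assumes N > 1."""
    fg = F.normalize(fg_feats, dim=1)
    bg = F.normalize(bg_feats, dim=1)
    # Negative pairs: penalize any positive cosine similarity between
    # foreground and background embeddings.
    neg = (fg @ bg.t()).clamp(min=0).mean()
    # Positive pairs: encourage agreement among foreground embeddings
    # across the batch (off-diagonal fg-fg similarities).
    off_diag = ~torch.eye(fg.size(0), dtype=torch.bool, device=fg.device)
    pos = (1 - (fg @ fg.t())[off_diag]).mean()
    return pos + neg
```

Additional contrastive terms over extra activation channels could be built the same way.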

As for your third question: exactly, that `test` call is used for evaluation.
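For context, CorLoc typically counts an image as correctly localized when the predicted box overlaps a ground-truth box with IoU >= 0.5. A hedged sketch of that computation (box format and the 0.5 threshold follow common weakly-supervised localization practice, not necessarily this repo's exact code):

```python
def iou(a, b):
    """Boxes as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-8)

def corloc(pred_boxes, gt_boxes_per_image):
    """pred_boxes: one predicted box per image; gt_boxes_per_image: list of GT-box lists."""
    hits = sum(
        any(iou(pred, gt) >= 0.5 for gt in gts)
        for pred, gts in zip(pred_boxes, gt_boxes_per_image)
    )
    return hits / len(pred_boxes)
```

The `best_threshold` returned alongside `best_CorLoc` suggests the evaluation sweeps the binarization threshold and reports the best-scoring one.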
