Extended reach goal science-motivated metrics #13

Open · 4 tasks
aimalz opened this issue Mar 2, 2018 · 2 comments
aimalz commented Mar 2, 2018

Because people like thinking about ancillary, optional metrics for other potential challenge goals, I'm making this issue as a place for that discussion. Here are the ones that have come up already, and feel free to mention more in the comments:

  • Early lightcurve challenge (which may focus more on maximizing true positives)
  • Anomaly detection (which may focus more on minimizing false negatives)
  • Class-specific metrics (a "best in class" as opposed to "best in show" metric for those who only aim to classify one object type)
  • Hierarchical classes (distinguishing between sub-classes of a particular class)

However, this is just to get it out of your system -- please do not work on these until there is significant progress toward the main goal! We can implement them in follow-up challenges in the future*, but there won't even be a first version of the challenge unless we prioritize the single, agreed-upon goal of the full lightcurve challenge.

*We can also include at least some of them in the first version of the challenge on an opt-in basis, but we can't even progress with the Kaggle/Ramp process until we make a choice for the official metric, so please restrain yourselves for now.
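For concreteness, a "best in class" metric like the one in the list above could be computed per class from the confusion-matrix counts, alongside an overall "best in show" score. This is only an illustrative sketch using per-class F1 (the class names and the choice of F1 are hypothetical, not the official challenge metric):

```python
def per_class_f1(true_labels, pred_labels):
    """Per-class F1 ("best in class") from paired truth/prediction lists.

    Each class gets its own score, so a classifier targeting only one
    object type can still be ranked on that type.
    """
    classes = set(true_labels) | set(pred_labels)
    scores = {}
    for c in classes:
        # Confusion-matrix counts for class c, treated one-vs-rest.
        tp = sum(1 for t, p in zip(true_labels, pred_labels) if t == c and p == c)
        fp = sum(1 for t, p in zip(true_labels, pred_labels) if t != c and p == c)
        fn = sum(1 for t, p in zip(true_labels, pred_labels) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        scores[c] = (2 * precision * recall / (precision + recall)
                     if precision + recall else 0.0)
    return scores

# Toy example with made-up transient classes.
truth = ["SNIa", "SNIa", "SNII", "AGN", "SNII"]
preds = ["SNIa", "SNII", "SNII", "AGN", "SNIa"]
print(per_class_f1(truth, preds))
```

The same per-class counts could instead feed a recall-heavy score for the anomaly-detection variant (minimizing false negatives) or a precision-heavy one for the early-lightcurve variant (maximizing true positives).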

aimalz commented Mar 27, 2018

Another idea that came up in today's call: divide test-set objects by sky location and compare metrics between the subsets, to see how sensitive classifiers are to dense versus sparse areas of the sky.

  • Sky sensitivity (classification quality as a function of region of sky, i.e. proximity to galactic plane)
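One way to sketch such a sky-sensitivity comparison is to split test objects on absolute galactic latitude (near the crowded plane versus high-latitude sparse fields) and evaluate the same metric on each subset. The field names, latitude cut, and accuracy metric below are illustrative assumptions, not the challenge schema:

```python
def metric_by_sky_region(objects, metric, lat_cut_deg=15.0):
    """Evaluate `metric` separately near and away from the galactic plane.

    `objects` is a list of dicts with (hypothetical) keys:
    "gal_b" (galactic latitude, degrees), "true_class", "pred_class".
    """
    near_plane = [o for o in objects if abs(o["gal_b"]) < lat_cut_deg]
    off_plane = [o for o in objects if abs(o["gal_b"]) >= lat_cut_deg]

    def score(subset):
        if not subset:
            return None  # no objects in this region
        truths = [o["true_class"] for o in subset]
        preds = [o["pred_class"] for o in subset]
        return metric(truths, preds)

    return {"near_plane": score(near_plane), "off_plane": score(off_plane)}

def accuracy(truths, preds):
    """Simple stand-in metric: fraction of correct classifications."""
    return sum(t == p for t, p in zip(truths, preds)) / len(truths)

# Toy example: one object near the plane, two at high latitude.
objs = [
    {"gal_b": 3.0, "true_class": "SNIa", "pred_class": "SNII"},
    {"gal_b": 40.0, "true_class": "SNIa", "pred_class": "SNIa"},
    {"gal_b": -50.0, "true_class": "AGN", "pred_class": "AGN"},
]
print(metric_by_sky_region(objs, accuracy))
```

A large gap between the two regional scores would flag a classifier that degrades in dense fields; the same split logic works with whatever official metric is eventually chosen.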


aimalz commented Jul 24, 2018

Here's another idea from a dropped Slack discussion:

  • Per-class metric extrema (anticipate a better idea of how to do this after challenge results are in, but it can be considered for a future version)

@aimalz aimalz changed the title Reach goal bonus metrics Extended reach goal science-motivated metrics Jun 4, 2019