Precision/Recall #418
xinwei-sher asked this question in Q&A
-
Hi @xinwei-sher, the reported metrics are maximum values, each computed at its own best threshold, which is why they differ from the metrics implied by the confusion matrix. Maybe it would be worth adding one more table with metrics computed at the same threshold as the confusion matrix? What do you think? You can also optimize a different metric by setting:

```python
automl = AutoML(eval_metric="f1")
automl.fit(X, y)
```

Supported evaluation metrics (
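To make the threshold question concrete, here is a minimal sketch (using NumPy, with made-up labels and probabilities) of how precision and recall can be recomputed at a single fixed threshold, the same one used for the confusion matrix, instead of at each metric's own optimal threshold:

```python
import numpy as np

# Hypothetical data, for illustration only.
y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_prob = np.array([0.2, 0.6, 0.8, 0.4, 0.9, 0.3, 0.55, 0.1])

def metrics_at_threshold(y_true, y_prob, threshold):
    """Precision/recall at the same threshold as the confusion matrix."""
    y_pred = (y_prob >= threshold).astype(int)
    tp = int(np.sum((y_pred == 1) & (y_true == 1)))
    fp = int(np.sum((y_pred == 1) & (y_true == 0)))
    fn = int(np.sum((y_pred == 0) & (y_true == 1)))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

precision, recall = metrics_at_threshold(y_true, y_prob, 0.5)
```

With this approach, both metrics and the confusion matrix describe the same classifier decisions, so they cannot disagree.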
-
Hi there,
Really nice tool for finding the best model!
In the Readme.md file, precision and recall are completely different from what you can compute from the confusion matrix. I guess this is because a different threshold is used for each. My question is: why not use the same threshold for all of them?
AutoML selects the best model by logloss. Is it possible to use another metric, such as F1 or accuracy?
Metric details
Confusion matrix (at threshold=0.471619)
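The check described above, recomputing precision and recall directly from the confusion-matrix cells at its reported threshold, is just two ratios; the counts below are hypothetical, for illustration only:

```python
# Hypothetical confusion-matrix counts at the reported threshold.
tn, fp, fn, tp = 50, 10, 5, 35

# Precision: of everything predicted positive, how much was truly positive.
precision = tp / (tp + fp)
# Recall: of everything truly positive, how much was predicted positive.
recall = tp / (tp + fn)
```

If the report's precision/recall table uses a different threshold per metric, these ratios will not match the tabled values, which is exactly the discrepancy being asked about.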