fix: use proper eval default main eval metrics for text pair regressor
MattGPT-ai authored and helpmefindaname committed Aug 23, 2024
1 parent 3d8f078 commit ca73975
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion flair/models/pairwise_regression_model.py
@@ -278,7 +278,7 @@ def evaluate(
     out_path: Union[str, Path, None] = None,
     embedding_storage_mode: EmbeddingStorageMode = "none",
     mini_batch_size: int = 32,
-    main_evaluation_metric: Tuple[str, str] = ("micro avg", "f1-score"),
+    main_evaluation_metric: Tuple[str, str] = ("correlation", "pearson"),
     exclude_labels: Optional[List[str]] = None,
     gold_label_dictionary: Optional[Dictionary] = None,
     return_loss: bool = True,
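The change makes sense for a text pair regressor: its gold and predicted scores are continuous values, so a classification default like micro-average F1 does not apply, while Pearson correlation measures how well predictions track the gold scores. A minimal sketch of computing that metric with `scipy` (which Flair depends on); the score values and variable names below are illustrative, not Flair internals:

```python
# Sketch: why ("correlation", "pearson") is a sensible default metric
# for a regressor. Scores are continuous, so micro-avg F1 is undefined;
# Pearson's r measures linear agreement between gold and predicted values.
# The data here is hypothetical, purely for illustration.
from scipy import stats

gold_scores = [0.0, 1.5, 3.0, 4.5]       # hypothetical gold similarity scores
predicted_scores = [0.2, 1.4, 2.9, 4.6]  # hypothetical model predictions

pearson_r, p_value = stats.pearsonr(gold_scores, predicted_scores)
print(f"pearson correlation: {pearson_r:.3f}")
```

A near-linear relationship between predictions and gold scores yields an r close to 1.0, regardless of any constant offset or scale in the predictions, which is exactly the kind of agreement a similarity regressor is evaluated on.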
