
[FT] Is it possible to save the predictions to prevent rerunning expensive inference #396

Open
JoelNiklaus opened this issue Nov 19, 2024 · 1 comment
Labels
feature request New feature/request

Comments

@JoelNiklaus
Contributor

Issue encountered

When evaluating large models, inference can incur significant cost and delay, especially on larger datasets. I may also want to re-evaluate the same predictions with different metrics.

Solution/Feature

I would like the predictions to be saved in an inspectable cache that can be reused when the evaluation is run again.
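
A hypothetical sketch of what such a cache could look like, storing predictions as plain JSON files so they stay inspectable. The `PredictionCache` class, its file layout, and every name in it are invented here for illustration and are not part of lighteval:

```python
import json
from pathlib import Path
from typing import Optional


class PredictionCache:
    """Illustrative on-disk cache; plain JSON keeps entries human-inspectable."""

    def __init__(self, cache_dir: str = ".predictions_cache"):
        self.root = Path(cache_dir)
        self.root.mkdir(parents=True, exist_ok=True)

    def _path(self, model: str, task: str) -> Path:
        # One file per (model, task) pair, e.g. .predictions_cache/org__model/task.json
        return self.root / model.replace("/", "__") / f"{task}.json"

    def load(self, model: str, task: str) -> Optional[list]:
        path = self._path(model, task)
        return json.loads(path.read_text()) if path.exists() else None

    def save(self, model: str, task: str, predictions: list) -> None:
        path = self._path(model, task)
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(json.dumps(predictions, indent=2))
```

On a second run, the harness could call `load` first and skip inference entirely whenever it returns saved predictions, handing them straight to the metric computations.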

@JoelNiklaus JoelNiklaus added the feature request New feature/request label Nov 19, 2024
@clefourrier
Member

Hi, thanks for the issue!

If you use the different saving parameters (as indicated in the docs), your predictions (results and/or details) are saved and can be inspected later. The quickest way to get what you need is therefore to load the details file and recompute the metrics on it by hand.
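
For example, a minimal sketch of recomputing a metric from a saved details file; the path layout and the column names (`predictions`, `gold`) are assumptions here, so check the schema of your own details files before relying on them:

```python
import pandas as pd

# Assumed path layout; substitute the actual file from your output directory.
details = pd.read_parquet("output_dir/details/my_model/details_my_task.parquet")


def exact_match(pred: str, gold: str) -> float:
    # A deliberately simple metric, recomputed by hand on the saved predictions.
    return float(pred.strip() == gold.strip())


scores = [
    exact_match(str(row["predictions"]), str(row["gold"]))
    for _, row in details.iterrows()
]
print(f"exact_match: {sum(scores) / len(scores):.4f}")
```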

Since not all metrics use the same generation methods, we have not prioritized a cache for the moment (to avoid risks such as running a greedy eval, then a sampling one, and accidentally reusing the same results for metric computations), but we'll add your suggestion to our todo!
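
An illustrative way such a cache could sidestep that risk: fold the generation parameters into the cache key, so a greedy run and a sampling run can never share cached outputs. All names here are hypothetical, not lighteval API:

```python
import hashlib
import json


def cache_key(model: str, task: str, gen_config: dict) -> str:
    # Hash model, task, and the full generation config together so any
    # change in decoding strategy produces a distinct cache entry.
    payload = json.dumps(
        {"model": model, "task": task, "generation": gen_config},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode()).hexdigest()


greedy = cache_key("my_model", "my_task", {"do_sample": False})
sampled = cache_key("my_model", "my_task", {"do_sample": True, "temperature": 0.7})
assert greedy != sampled  # distinct keys, so results are never reused across modes
```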
