Where is `eval_causal_lm_metrics: ["perplexity"]` logged?
#1908
-
I believe that setting this value in your config leads to a perplexity being logged over the training set -- here are a few reference wandb screenshots from a run with this metric enabled:

[wandb screenshots]

Additionally, the perplexity here seems oddly high. Assuming that perplexity is calculated as the exponential of the average cross-entropy (the loss value at a given timestep), then we would have:
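In symbols, writing $\ell_i$ for the per-token cross-entropy loss and $N$ for the number of tokens:

$$\mathrm{PPL} = \exp\left(\frac{1}{N}\sum_{i=1}^{N}\ell_i\right)$$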
but at roughly the same timestep, the training perplexity is logged as a much higher value. I am new to this project and was wondering why the two disagree. Is that the expected behavior, or is something awry in my setup of the repo? Ideally, we could track these causal LM metrics for specified subsets of the data with something like the following sketch:
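Something along these lines, where `eval_causal_lm_metrics_subsets` is a hypothetical key I'm proposing here, not an existing axolotl option:

```yaml
# Hypothetical sketch of per-subset metric tracking -- the subsets key
# below does not exist in axolotl today; names and paths are made up.
eval_causal_lm_metrics: ["perplexity"]
eval_causal_lm_metrics_subsets:          # hypothetical option
  - name: code                           # label used when logging
    dataset: my_org/code_eval_split      # hypothetical dataset path
  - name: chat
    dataset: my_org/chat_eval_split
```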
@winglian – would you be open to this becoming a part of axolotl?
-
Hey @nazkhan-8451, the perplexity should show up in there with the changes in #1952. Don't forget to turn on the accompanying config flag.
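A minimal config sketch, assuming the flag in question is axolotl's `do_causal_lm_eval` option (check your version's config docs to confirm):

```yaml
# Assumed option names -- verify against your axolotl version's docs.
do_causal_lm_eval: true                   # enable the causal LM eval pass
eval_causal_lm_metrics: ["perplexity"]    # metrics to compute during that pass
```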
-
In my config to finetune llama-3.1, I put `eval_causal_lm_metrics: ["perplexity"]`. In `trainer_state.json`, here is the output at an eval step:
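A typical eval entry in `log_history` looks like this (values hypothetical); note that no perplexity key appears:

```json
{
  "epoch": 1.0,
  "eval_loss": 1.8423,
  "eval_runtime": 14.7,
  "eval_samples_per_second": 68.0,
  "eval_steps_per_second": 8.5,
  "step": 500
}
```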
Where is `perplexity` logged? I know I can calculate it from the cross-entropy loss, but I wanted to know this.
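For reference, a minimal sketch of deriving it yourself, assuming the standard Hugging Face Trainer `trainer_state.json` layout with a top-level `log_history` list:

```python
import json
import math

# Load the trainer state written by the Hugging Face Trainer.
with open("trainer_state.json") as f:
    state = json.load(f)

# Eval entries in log_history carry an "eval_loss" key (mean cross-entropy).
for entry in state["log_history"]:
    if "eval_loss" in entry:
        ppl = math.exp(entry["eval_loss"])  # perplexity = exp(mean cross-entropy)
        print(f"step {entry['step']}: eval_loss={entry['eval_loss']:.4f}, ppl={ppl:.2f}")
```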