Hi, one year later, it's me again, looking at your paper (Single-Stage Prediction Models Do Not Explain the
Magnitude of Syntactic Disambiguation Difficulty) :)
It seems that the soap-opera LSTM got the best results, right? Do you happen to have the pre-trained model available?
By the way, interestingly, the surprisal values from GPT-2 were useless here! (They aren't with many other datasets.)