ESSLLI2021

Welcome to the advanced course on "Analyzing the Cognitive Plausibility of Language Models" taught by Lisa Beinborn (Vrije Universiteit Amsterdam), Nora Hollenstein (University of Copenhagen), and Willem Zuidema (University of Amsterdam).

In this course, we analyze language models by comparing their internal representations with cognitive signals. We use interpretability methods to better understand the information flow in computational language models.
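
As a small illustration of what comparing internal representations with cognitive signals can look like in practice, here is a minimal Python sketch (not taken from the course notebooks). It extracts per-word representations from a pretrained transformer with the Hugging Face `transformers` library and correlates a simple property of those representations with per-word gaze durations. The model choice, the example sentence, and the gaze values are all made-up placeholders; the course covers real eye-tracking corpora and more principled analysis methods.

```python
# A minimal sketch of relating model-internal representations to a cognitive
# signal. NOTE: the model, the sentence, and the gaze durations below are
# illustrative placeholders, not data or code from the course materials.
import torch
from scipy.stats import spearmanr
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModel.from_pretrained("distilbert-base-uncased")

words = "The professor explained the experiment to the students".split()
# Hypothetical per-word gaze durations in ms (real studies use eye-tracking data).
gaze_durations = [210.0, 305.0, 280.0, 190.0, 320.0, 150.0, 170.0, 260.0]

enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**enc).last_hidden_state[0]  # (num_subtokens, hidden_dim)

# Pool sub-token vectors back into one vector per word.
word_ids = enc.word_ids()
word_vecs = [
    hidden[[j for j, w in enumerate(word_ids) if w == i]].mean(dim=0)
    for i in range(len(words))
]

# One crude comparison: correlate the norm of each word's representation with
# its gaze duration.
norms = [v.norm().item() for v in word_vecs]
rho, p = spearmanr(norms, gaze_durations)
print(f"Spearman rho = {rho:.2f} (p = {p:.2f})")
```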

The course takes place daily from July 26 to July 30 at 4 pm.

For the first day, we invite you to take a moment to think about the following question: “When is a language model cognitively plausible, in your opinion?” You can post your answer in the ESSLLI Slack channel (laco-beinborn-hollenstein-zuidema) or send it by e-mail. Please also add one sentence about your background (research discipline).

Monday:
Introduction: "Cognitive Plausibility of Language Models" (Lisa Beinborn)

Tuesday:
Lecture: "Cognitive Signals of Language Processing" (Nora Hollenstein)

Wednesday:
Tutorial: "Predicting Cognitive Signals with Language Models" (Nora Hollenstein)

Thursday:
Lecture: "Interpretability of Neural Language Models" (Willem Zuidema)

Friday:
Tutorial "Interpreting Neural Language Models" (Willem Zuidema)
General Discussion (all lecturers and participants)
