# model_interface.py
"""
The :class:`~brainscore_vision.model_interface.BrainModel` interface is the central communication point
between benchmarks and models.
"""
from typing import List, Tuple, Union

from brainio.assemblies import BehavioralAssembly, NeuroidAssembly
from brainio.stimuli import StimulusSet


class BrainModel:
"""
The BrainModel interface defines an API for models to follow.
Benchmarks will use this interface to treat models as an experimental subject
without needing to know about the details of the model implementation.
"""
    @property
    def identifier(self) -> str:
        """
        The unique identifier for this model.

        :return: e.g. `'CORnet-S'`, or `'alexnet'`
        """
        raise NotImplementedError()

    def visual_degrees(self) -> int:
        """
        The visual degrees this model covers as a single scalar.

        :return: e.g. `8`, or `10`
        """
        raise NotImplementedError()
    class Task:
        """ task to perform """

        passive = 'passive'
        """
        Passive fixation, i.e. do not perform any task, but fixate on the center of the screen.
        Does not output anything, but can be useful to fully specify the experimental setup.

        Example:

        Setting up passive fixation with `start_task(BrainModel.Task.passive)` and calling `look_at(...)` could output

        .. code-block:: python

            None
        """

        label = 'label'
        """
        Predict the label for each stimulus.
        Output a :class:`~brainio.assemblies.BehavioralAssembly` with labels as the values.
        The labeling domain can be specified in the second argument, e.g. `'imagenet'` for 1,000 ImageNet synsets,
        or an explicit list of label strings. The model choices must be part of the labeling domain.

        Example:

        Setting up a labeling task for ImageNet synsets with `start_task(BrainModel.Task.label, 'imagenet')`
        and calling `look_at(...)` could output

        .. code-block:: python

            <xarray.BehavioralAssembly (presentation: 3, choice: 1)>
            array([['n02107574'], ['n02123045'], ['n02804414']])  # the ImageNet synsets
            Coordinates:
              * presentation   (presentation) MultiIndex
              - stimulus_id    (presentation) object 'hash1' 'hash2' 'hash3'
              - stimulus_path  (presentation) object '/home/me/.brainio/demo_stimuli/image1.png' ...
              - logit          (presentation) int64 239 282 432
              - synset         (presentation) object 'n02107574' 'n02123045' 'n02804414'

        Example:

        Setting up a labeling task for 2 custom labels with `start_task(BrainModel.Task.label, ['dog', 'cat'])`
        and calling `look_at(...)` could output

        .. code-block:: python

            <xarray.BehavioralAssembly (presentation: 3, choice: 1)>
            array([['dog'], ['cat'], ['cat']])  # the labels
            Coordinates:
              * presentation   (presentation) MultiIndex
              - stimulus_id    (presentation) object 'hash1' 'hash2' 'hash3'
              - stimulus_path  (presentation) object '/home/me/.brainio/demo_stimuli/image1.png' ...
        """

        probabilities = 'probabilities'
        """
        Predict the per-label probabilities for each stimulus.
        Output a :class:`~brainio.assemblies.BehavioralAssembly` with probabilities as the values.
        The model must be supplied with `fitting_stimuli` in the second argument which allow it to train a readout
        for a particular set of labels and image distribution.
        The `fitting_stimuli` are a :class:`~brainio.stimuli.StimulusSet` and must include an `image_label` column
        which is used as the labels to fit to.

        Example:

        Setting up a probabilities task with `start_task(BrainModel.Task.probabilities, <fitting_stimuli>)`
        (where `fitting_stimuli` includes 5 distinct labels)
        and calling `look_at(<test_stimuli>)` could output

        .. code-block:: python

            <xarray.BehavioralAssembly (presentation: 3, choice: 5)>
            array([[0.9, 0.1, 0.0, 0.0, 0.0],
                   [0.0, 0.0, 0.8, 0.0, 0.2],
                   [0.0, 0.0, 0.0, 1.0, 0.0]])  # the probabilities
            Coordinates:
              * presentation   (presentation) MultiIndex
              - stimulus_id    (presentation) object 'hash1' 'hash2' 'hash3'
              - stimulus_path  (presentation) object '/home/me/.brainio/demo_stimuli/image1.png' ...
              - choice         (choice) object 'dog' 'cat' 'chair' 'flower' 'plane'
        """

        odd_one_out = 'odd_one_out'
        """
        Predict the odd-one-out elements for a list of triplets of stimuli.
        The model must be supplied with a list of stimuli where every three consecutive stimuli
        are considered to form a triplet. The model is expected to output a one-dimensional
        assembly with each value corresponding to the index (`0`, `1`, or `2`) of the triplet
        element that is different from the other two.
        Output a :class:`~brainio.assemblies.BehavioralAssembly` with the choices as the values.

        Example:

        Setting up an odd-one-out task with `start_task(BrainModel.Task.odd_one_out)` and calling

        .. code-block:: python

            look_at(['image1.png', 'image2.png', 'image3.png',    # triplet 1
                     'image1.png', 'image2.png', 'image4.png',    # triplet 2
                     'image2.png', 'image3.png', 'image4.png',    # triplet 3
                     ...
                     'image4.png', 'image8.png', 'image10.png'])  # triplet 50

        with 50 triplet trials and 10 unique stimuli could output

        .. code-block:: python

            <xarray.BehavioralAssembly (presentation: 50, choice: 1)>
            array([[0], [2], [2], ..., [1]])  # index of the odd-one-out per trial, i.e. 0, 1, or 2
                                              # (each trial is one triplet of images)
            Coordinates:
              * presentation   (presentation) MultiIndex
              - stimulus_id    (presentation) ['image1', 'image2', 'image3'], ..., ['image4', 'image8', 'image10']
              - stimulus_path  (presentation) object '/home/me/.brainio/demo_stimuli/image1.png' ...
        """
    def start_task(self, task: Task, fitting_stimuli) -> None:
        """
        Instructs the model to begin one of the tasks specified in
        :data:`~brainscore_vision.model_interface.BrainModel.Task`.
        For all subsequent calls of :meth:`~brainscore_vision.model_interface.BrainModel.look_at`,
        the model returns the expected outputs for the specified task.

        :param task: The task the model should perform, and thus which outputs it should return
        :param fitting_stimuli: A set of stimuli for the model to learn on, e.g. image-label pairs
        """
        raise NotImplementedError()
    class RecordingTarget:
        """ location to record from """

        V1 = 'V1'
        V2 = 'V2'
        V4 = 'V4'
        IT = 'IT'
    def start_recording(self, recording_target: RecordingTarget, time_bins: List[Tuple[int, int]]) -> None:
        """
        Instructs the model to begin recording in a specified
        :data:`~brainscore_vision.model_interface.BrainModel.RecordingTarget` and return the specified `time_bins`.
        For all subsequent calls of :meth:`~brainscore_vision.model_interface.BrainModel.look_at`, the model returns the
        corresponding recordings. These recordings are a :class:`~brainio.assemblies.NeuroidAssembly` with exactly
        3 dimensions:

        - `presentation`: the presented stimuli (cf. stimuli argument of
          :meth:`~brainscore_vision.model_interface.BrainModel.look_at`). If a :class:`~brainio.stimuli.StimulusSet`
          was passed, the recordings should contain all of the :class:`~brainio.stimuli.StimulusSet` columns as
          coordinates on this dimension. The `stimulus_id` coordinate is required in either case.
        - `neuroid`: the recorded neuroids (neurons or mixtures thereof). They should all be part of the
          specified :data:`~brainscore_vision.model_interface.BrainModel.RecordingTarget`. The coordinates of this
          dimension should again include as much information as is available, at the very least a `neuroid_id`.
        - `time_bins`: the time bins of each recording slice. This dimension should contain at least 2 coordinates:
          `time_bin_start` and `time_bin_end`, where one `time_bin` is the bin between start and end.
          For instance, a 70-170ms time_bin would be marked as `time_bin_start=70` and `time_bin_end=170`.
          If only one time_bin is requested, the model may choose to omit this dimension.

        :param recording_target: which location to record from
        :param time_bins: which time_bins to record as a list of integer tuples,
            e.g. `[(50, 100), (100, 150), (150, 200)]` or `[(70, 170)]`
        """
        raise NotImplementedError()
    def look_at(self, stimuli: Union[StimulusSet, List[str]], number_of_trials=1) \
            -> Union[BehavioralAssembly, NeuroidAssembly]:
        """
        Digest a set of stimuli and return requested outputs. Which outputs to return is instructed by the
        :meth:`~brainscore_vision.model_interface.BrainModel.start_task` and
        :meth:`~brainscore_vision.model_interface.BrainModel.start_recording` methods.

        :param stimuli: A set of stimuli, passed as either a :class:`~brainio.stimuli.StimulusSet`
            or a list of image file paths
        :param number_of_trials: The number of repeated trials of the stimuli that the model should average over.
            E.g. 10 or 35. Non-stochastic models can likely ignore this parameter.
        :return: task behaviors or recordings as instructed
        """
        raise NotImplementedError()
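To make the call sequence concrete, the following is a hypothetical minimal sketch (not part of the Brain-Score codebase) of a model implementing the `start_task`/`look_at` protocol for the `label` task only. The class name `ToyLabelModel` and its random readout are illustrative assumptions; a real implementation would subclass `BrainModel` and return a :class:`~brainio.assemblies.BehavioralAssembly` rather than a plain list.

```python
# Hypothetical sketch only: a toy subject implementing the BrainModel call
# protocol for the `label` task. A real model would subclass BrainModel and
# wrap its outputs in a brainio BehavioralAssembly; this sketch returns a
# plain list of labels to stay dependency-free.
import random
from typing import List


class ToyLabelModel:
    @property
    def identifier(self) -> str:
        # unique identifier, cf. BrainModel.identifier
        return 'toy-label-model'

    def visual_degrees(self) -> int:
        # field of view covered by the model's input, cf. BrainModel.visual_degrees
        return 8

    def start_task(self, task: str, fitting_stimuli=None) -> None:
        # for the `label` task, the second argument is the labeling domain,
        # e.g. ['dog', 'cat']; all later choices must come from this domain
        assert task == 'label', "this toy model only supports the label task"
        self.labeling_domain = fitting_stimuli

    def look_at(self, stimuli: List[str], number_of_trials: int = 1) -> List[str]:
        # a real model would run inference per stimulus; here we draw a label
        # from the labeling domain (seeded for deterministic illustration)
        rng = random.Random(0)
        return [rng.choice(self.labeling_domain) for _ in stimuli]
```

A benchmark would then drive the model in the order the interface prescribes: `start_task(...)` once, followed by one or more `look_at(...)` calls whose outputs obey the task contract.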