A framework for evaluating models on integrative brain and behavior measurements


Brain-Score is a platform to evaluate computational models of brain function on their match to brain measurements in primate vision. The intent of Brain-Score is to adopt many (ideally all) of the experimental benchmarks in the field for the purpose of model testing, falsification, and comparison. To that end, Brain-Score operationalizes experimental data into quantitative benchmarks that any model candidate following the BrainModel interface can be scored on.
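
For orientation, a scoreable model candidate exposes methods along these lines (a simplified sketch loosely following the BrainModel interface in brainscore.model_interface; see the Documentation for the authoritative definition):

class MyBrainModel:
    """Sketch of a model candidate; method names follow brainscore.model_interface.BrainModel."""

    def visual_degrees(self) -> int:
        # field of view, in degrees of visual angle, that the model assumes for its inputs
        return 8

    def start_task(self, task, fitting_stimuli=None):
        # prepare the model for a behavioral task (e.g. a label or probabilities readout)
        ...

    def start_recording(self, recording_target, time_bins):
        # prepare the model to report unit activity for a region such as V1, V2, V4, or IT
        ...

    def look_at(self, stimuli, number_of_trials=1):
        # present stimuli and return behavioral responses or recorded activity
        ...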

See the Documentation for more details and the Tutorial and Examples for submitting a model to Brain-Score.

Brain-Score is made by and for the community. To contribute, please send in a pull request.

Local installation

You will need Python >= 3.7 and pip >= 18.1. Note that you can only access public benchmarks when running locally. To score a model on all benchmarks, submit it via the brain-score.org website.

pip install git+https://github.com/brain-score/brain-score

Score a model on a public benchmark:

from brainscore.benchmarks import public_benchmark_pool

benchmark = public_benchmark_pool['dicarlo.MajajHong2015public.IT-pls']
model = my_model()
score = benchmark(model)
#>  <xarray.Score (aggregation: 2)>
#>  array([0.32641998, 0.0207475])
#>  Coordinates:
#>    * aggregation  (aggregation) <U6 'center' 'error'
#>  Attributes:
#>      raw:                   <xarray.Score (aggregation: 2)>\narray([0.4278365 ...
#>      ceiling:               <xarray.Score (aggregation: 2)>\narray([0.7488407 ...
#>      model_identifier:      my-model
#>      benchmark_identifier:  dicarlo.MajajHong2015public.IT-pls
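
Since a Score is an xarray DataArray, the ceiled score and its error estimate can be read out with standard xarray selection (a minimal sketch based on the output above):

center = score.sel(aggregation='center').item()  # ceiled score, ~0.33 in the output above
error = score.sel(aggregation='error').item()    # error estimate
raw_score = score.attrs['raw']                   # unceiled score
ceiling = score.attrs['ceiling']                 # estimated benchmark ceiling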

Some steps may take several minutes because data has to be downloaded on first use.
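
The snippet above assumes that `my_model()` returns an object implementing the BrainModel interface. One common route, sketched below under the assumption that the separate model-tools package (https://github.com/brain-score/model-tools) is installed, is to wrap a standard PyTorch network in a ModelCommitment; the layer names here are AlexNet-specific placeholders:

import functools
import torchvision.models
from model_tools.activations.pytorch import PytorchWrapper, load_preprocess_images
from model_tools.brain_transformation import ModelCommitment

def my_model():
    pytorch_model = torchvision.models.alexnet(pretrained=True)
    preprocessing = functools.partial(load_preprocess_images, image_size=224)
    activations_model = PytorchWrapper(identifier='my-model', model=pytorch_model,
                                       preprocessing=preprocessing)
    # map layer activations onto brain regions; layer names are module paths in the wrapped network
    return ModelCommitment(identifier='my-model', activations_model=activations_model,
                           layers=['features.2', 'features.7', 'features.12', 'classifier.5'])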

For more details, see the Documentation and the Examples [1] [2].

Environment Variables

Variable             Description
RESULTCACHING_HOME   directory in which to cache results (benchmark ceilings); ~/.result_caching by default (see https://github.com/brain-score/result_caching)
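
For example, to redirect the cache to another disk (a minimal sketch; the path is a placeholder, and the variable should be set before brainscore is first imported so that result_caching picks it up):

import os
os.environ['RESULTCACHING_HOME'] = '/data/brainscore_cache'  # placeholder path

import brainscore  # subsequent scoring runs cache under the new location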

License

MIT license

Troubleshooting

`ValueError: did not find HDF5 headers` during netcdf4 installation: pip seems to fail to properly set up the HDF5_DIR required by netcdf4. Use conda instead: `conda install netcdf4`.

Repeated runs of a benchmark / model do not change the outcome even though the code was changed: results (scores, activations) are cached on disk using https://github.com/mschrimpf/result_caching. Delete the corresponding file or directory to clear the cache.
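
To clear the cache programmatically, something along these lines works (a sketch assuming the default cache location; adjust the path if RESULTCACHING_HOME points elsewhere):

import os
import shutil

# delete all cached results; they are recomputed (and data re-downloaded) on the next run
cache_dir = os.path.expanduser(os.getenv('RESULTCACHING_HOME', '~/.result_caching'))
shutil.rmtree(cache_dir, ignore_errors=True)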

CI environment

Add CI-related build commands to test_setup.sh. The script is executed in the CI environment for the unit tests.

References

If you use Brain-Score in your work, please cite "Brain-Score: Which Artificial Neural Network for Object Recognition is most Brain-Like?" (technical) and "Integrative Benchmarking to Advance Neurally Mechanistic Models of Human Intelligence" (perspective) as well as the respective benchmark sources.

@article{SchrimpfKubilius2018BrainScore,
  title={Brain-Score: Which Artificial Neural Network for Object Recognition is most Brain-Like?},
  author={Martin Schrimpf and Jonas Kubilius and Ha Hong and Najib J. Majaj and Rishi Rajalingham and Elias B. Issa and Kohitij Kar and Pouya Bashivan and Jonathan Prescott-Roy and Franziska Geiger and Kailyn Schmidt and Daniel L. K. Yamins and James J. DiCarlo},
  journal={bioRxiv preprint},
  year={2018},
  url={https://www.biorxiv.org/content/10.1101/407007v2}
}

@article{Schrimpf2020integrative,
  title={Integrative Benchmarking to Advance Neurally Mechanistic Models of Human Intelligence},
  author={Schrimpf, Martin and Kubilius, Jonas and Lee, Michael J and Murty, N Apurva Ratan and Ajemian, Robert and DiCarlo, James J},
  journal={Neuron},
  year={2020},
  url={https://www.cell.com/neuron/fulltext/S0896-6273(20)30605-X}
}
