Merge pull request #62 from matthias-k/dev
version 0.2.22
matthias-k authored Apr 14, 2024
2 parents 0664dba + 607fa71 commit e87966a
Showing 90 changed files with 6,189 additions and 6,084 deletions.
43 changes: 21 additions & 22 deletions .github/workflows/test-package-conda.yml
@@ -1,6 +1,6 @@
name: Tests

on: [push]
on: [push, pull_request]

jobs:
build-linux:
@@ -9,29 +9,31 @@ jobs:
max-parallel: 5
matrix:
python-version:
- "3.7"
# - "3.7" # conda takes forever to install the dependencies
- "3.8"
- "3.9"

- "3.10"
- "3.11"
steps:
- uses: actions/checkout@v2
- name: Set up Python
uses: actions/setup-python@v2
- uses: conda-incubator/setup-miniconda@v2
with:
python-version: ${{ matrix.python-version }}
- name: Add conda to system path
run: |
# $CONDA is an environment variable pointing to the root of the miniconda directory
echo $CONDA/bin >> $GITHUB_PATH
channels: conda-forge
- name: Conda info
# the shell setting is necessary for loading profile etc which activates the conda environment
shell: bash -el {0}
run: conda info
- name: Install dependencies
shell: bash -el {0}
run: |
# conda env update --file environment.yml --name base
conda config --add channels conda-forge
conda install \
boltons \
cython \
deprecation \
dill \
diskcache \
h5py \
imageio \
natsort \
numba \
@@ -40,6 +42,7 @@ jobs:
pandas \
piexif \
pillow \
pip \
pkg-config \
pytorch \
requests \
@@ -48,23 +51,19 @@
scipy \
setuptools \
sphinx \
theano \
torchvision \
tqdm
pip install h5py # https://github.com/h5py/h5py/issues/1880
# - name: Lint with flake8
# run: |
# conda install flake8
# # stop the build if there are Python syntax errors or undefined names
# flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
# # exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
# flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
- name: Conda list
shell: bash -el {0}
run: conda list
- name: Test with pytest
shell: bash -el {0}
run: |
conda install pytest hypothesis
python setup.py build_ext --inplace
python -m pytest --nomatlab tests
python -m pytest --nomatlab --notheano --nodownload tests
- name: test build and install
shell: bash -el {0}
run: |
python setup.py sdist
pip install dist/*.tar.gz
28 changes: 27 additions & 1 deletion CHANGELOG.md
@@ -1,6 +1,32 @@
# Changelog

* 0.2.22 (dev):
* 0.2.22:
* Enhancement: New [Tutorial](notebooks/Tutorial.ipynb).
* Bugfix: `SaliencyMapModel.AUC` failed if some images didn't have any fixations.
* Feature: `StimulusDependentSaliencyMapModel`
* Bugfix: The NUSEF dataset did not scale some fixations correctly to image coordinates. We also now account for some typos in the
  dataset source data.
* Feature: CrossvalMultipleRegularizations and GeneralMixtureKernelDensityEstimator in baseline utils (names might change!)
* Feature: DVAAwareScanpathModel
* Feature: `ShuffledBaselineModel` is now much more efficient and able to handle large numbers of stimuli.
  Hence, `ShuffledSimpleBaselineModel` is no longer necessary and is now a deprecated alias for `ShuffledBaselineModel`.
* Feature: ShuffledBaselineModel can now compute predictions for very large numbers of stimuli without needing
to have all individual predictions in memory due to a recursive reduce logsumexp implementation.
* Feature: `plotting.plot_scanpath` to visualize scanpaths and saccades. WIP, expect the API to change!
* Feature: DeepGaze I and DeepGazeIIE models
* Feature: COCO Freeview dataset
* Feature: `optimize_for_information_gain(framework='torch', ...)` now supports a `cache_directory`,
  where intermediate steps are cached. This supports resuming crashed optimization runs.
* Bugfix: fixed some edge cases in `optimize_for_information_gain(framework='torch')`
* Feature: COCO Search18 dataset
* Feature: `FixationTrains.train_lengths`
* Feature: `FixationTrains.scanpath_fixation_attributes` allows handling per-fixation attributes on the scanpath level,
  e.g. fixation durations. Corresponding attributes as in a `Fixations` instance are created automatically;
  e.g. for durations there will be an attribute `durations` and an attribute `duration_hist`. Scanpath
  attributes (i.e., attributes applying to a whole scanpath, such as task) also generate an attribute
  for each fixation to make this information available in `Fixations` instances.
* Feature: `scanpaths_from_fixations` reconstructs a FixationTrains object from a Fixations instance
* Bugfix: `t_hist` got replaced with `y_hist` in Fixations instances (but luckily not in FixationTrains instances)
* Bugfix: torch code was broken due to changes in torch 1.11
* Bugfix: SALICON dataset download did not work anymore
* Bugfix: NUSEF dataset links changed
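
The "recursive reduce logsumexp" mentioned above combines log-space predictions pairwise, so no single step ever needs all n inputs materialized at once. A minimal sketch of the idea on plain numbers — pysaliency applies it to per-pixel log densities, and the function names here are illustrative, not pysaliency's actual internals:

```python
import math

def logsumexp_pair(a, b):
    """Numerically stable log(exp(a) + exp(b))."""
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

def reduce_logsumexp(values):
    """Recursively split the inputs in half and combine the two halves,
    so only a logarithmic number of intermediate results is live at once."""
    if len(values) == 1:
        return values[0]
    mid = len(values) // 2
    return logsumexp_pair(reduce_logsumexp(values[:mid]),
                          reduce_logsumexp(values[mid:]))

# A uniform mixture of n densities in log space: logsumexp(logs) - log(n).
logs = [math.log(p) for p in [0.1, 0.2, 0.3, 0.4]]
avg_log = reduce_logsumexp(logs) - math.log(len(logs))  # == log(0.25)
```

The same pairwise reduction works when each "value" is a whole prediction array, which is what makes averaging over very large numbers of stimuli feasible without holding every individual prediction in memory.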
67 changes: 35 additions & 32 deletions README.md
@@ -11,37 +11,6 @@ Pysaliency can evaluate most commonly used saliency metrics, including AUC, sAUC
image-based KL divergence, fixation based KL divergence and SIM for saliency map models and
log likelihoods and information gain for probabilistic models.

Pysaliency provides several important datasets:

* MIT1003
* MIT300
* CAT2000
* Toronto
* Koehler
* iSUN
* SALICON (both the 2015 and the 2017 edition and each with both the original mouse traces and the inferred fixations)
* FIGRIM
* OSIE
* NUSEF (the part with public images)

and some influential models:
* AIM
* SUN
* ContextAwareSaliency
* BMS
* GBVS
* GBVSIttiKoch
* Judd
* IttiKoch
* RARE2012
* CovSal


These models use the original code, which is often MATLAB.
Therefore, a MATLAB licence is required to make use of these models, although quite a few of them
work with Octave, too (see below).


Installation
------------

@@ -54,7 +23,7 @@ Quickstart
----------

import pysaliency

dataset_location = 'datasets'
model_location = 'models'

@@ -72,6 +41,40 @@ If you already have saliency maps for some dataset, you can import them into pysaliency
my_model = pysaliency.SaliencyMapModelFromDirectory(mit_stimuli, '/path/to/my/saliency_maps')
auc = my_model.AUC(mit_stimuli, mit_fixations)

Check out the [Tutorial](notebooks/Tutorial.ipynb) for a more detailed introduction!
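
The `AUC` call above ranks saliency values at fixated pixels against the rest of the image. A self-contained sketch of that idea in plain Python — a simplified illustration of the metric, not pysaliency's actual implementation:

```python
def saliency_auc(saliency_map, fixations):
    """AUC as the probability that a fixated pixel receives a higher
    saliency value than a non-fixated one (ties count as 0.5).

    saliency_map: 2D list of floats; fixations: list of (row, col) tuples.
    """
    fix_set = set(fixations)
    pos = [saliency_map[r][c] for (r, c) in fix_set]
    neg = [v for r, row in enumerate(saliency_map)
           for c, v in enumerate(row) if (r, c) not in fix_set]
    if not pos or not neg:
        raise ValueError("need both fixated and non-fixated pixels")
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

smap = [[0.1, 0.2, 0.9],
        [0.0, 0.8, 0.3],
        [0.1, 0.2, 0.1]]
print(saliency_auc(smap, [(0, 2), (1, 1)]))  # fixations hit the two peaks -> 1.0
```

A map that perfectly separates fixated from non-fixated pixels scores 1.0; a constant map scores 0.5, the chance level.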

Included datasets and models
----------------------------

Pysaliency provides several important datasets:

* MIT1003
* MIT300
* CAT2000
* Toronto
* Koehler
* iSUN
* SALICON (both the 2015 and the 2017 edition and each with both the original mouse traces and the inferred fixations)
* FIGRIM
* OSIE
* NUSEF (the part with public images)

and some influential models:
* AIM
* SUN
* ContextAwareSaliency
* BMS
* GBVS
* GBVSIttiKoch
* Judd
* IttiKoch
* RARE2012
* CovSal

These models use the original code, which is often MATLAB.
Therefore, a MATLAB licence is required to make use of these models, although quite a few of them
work with Octave, too (see below).


Using Octave
------------