Merge branch 'release/2.23.1'
mayofaulkner committed Jun 15, 2023
2 parents e71deaa + 790053c commit fa94b1a
Showing 8 changed files with 183 additions and 37 deletions.
109 changes: 106 additions & 3 deletions examples/atlas/atlas_swanson_flatmap.ipynb
@@ -30,13 +30,116 @@
"outputs": [],
"source": [
"import numpy as np\n",
"from ibllib.atlas.flatmaps import plot_swanson\n",
"from ibllib.atlas.flatmaps import swanson, plot_swanson\n",
"from ibllib.atlas import BrainRegions\n",
"\n",
"br = BrainRegions()\n",
"\n",
"plot_swanson(br=br, annotate=True)\n"
"# Plot Swanson map will default colors and acronyms\n",
"plot_swanson(br=br, annotate=True)"
]
},
{
"cell_type": "markdown",
"source": [
"### What regions are represented in the Swanson flatmap"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "markdown",
"source": [
"The Swanson map holds 318 brain region acronyms, some of which are an aggregate of distinct brain regions in the Allen or Beryl parcellation.\n",
"To find the acronyms of the regions represented in Swanson, use:"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "code",
"execution_count": null,
"outputs": [],
"source": [
"swanson_ac = np.sort(br.acronym[np.unique(swanson())])"
],
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%%\n"
}
}
},
{
"cell_type": "markdown",
"source": [
"Regions which are \"children\" of a Swanson region will not be included in the acronyms. For example `PTLp` is in Swanson, but its children `VISa` and `VISrl`(i.e. a finer parcellation of `PTLp`) are not:"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "code",
"execution_count": null,
"outputs": [],
"source": [
"# Example: Check if PTLp is in Swanson\n",
"np.isin(['PTLp'], swanson_ac)\n",
"# Example: Check if VISa and VISrl are in Swanson\n",
"np.isin(['VISa', 'VISrl'], swanson_ac)"
],
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%%\n"
}
}
},
{
"cell_type": "markdown",
"source": [
"As such, you can only plot value for a given region that is in Swanson. This was done to ensure there is no confusion about how data is aggregated and represented per region (for example, if you were to input values for both `VISa` and `VISrl`, it is unclear whether the mean, median or else should have been plotted onto the `PTLp` area - instead, we ask you to do the aggregation yourself and pass this into the plotting function).\n",
"\n",
"For example,"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "code",
"execution_count": null,
"outputs": [],
"source": [
"from ibllib.atlas.flatmaps import plot_swanson_vector\n",
"\n",
"# 'PTLp', 'CA1', 'VPM' as in Swanson and all 3 are plotted\n",
"acronyms = ['PTLp', 'CA1', 'VPM']\n",
"values = np.array([1.5, 3, 4])\n",
"plot_swanson_vector( acronyms, values, annotate=True, annotate_list=['PTLp', 'CA1', 'VPM'],empty_color='silver')\n",
"\n",
"# 'VISa','VISrl' are not in Swanson, only 'CA1', 'VPM' are plotted\n",
"acronyms = ['VISa','VISrl', 'CA1', 'VPM']\n",
"values = np.array([1, 2, 3, 4])\n",
"plot_swanson_vector( acronyms, values, annotate=True, annotate_list=['VISa','VISrl', 'CA1', 'VPM'],empty_color='silver')\n"
],
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%%\n"
}
}
},
{
"cell_type": "markdown",
"source": [],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "markdown",
"metadata": {},
@@ -180,4 +283,4 @@
},
"nbformat": 4,
"nbformat_minor": 1
}
}
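To make the aggregation point above concrete, here is a minimal sketch (not part of this commit) that averages the values of the finer regions `VISa` and `VISrl` and passes the result to their Swanson parent `PTLp`; the simple mean is an arbitrary choice and the plotting call mirrors the notebook example.

```python
import numpy as np
from ibllib.atlas.flatmaps import plot_swanson_vector

# values at the finer Allen/Beryl parcellation (VISa and VISrl are children of PTLp)
fine_acronyms = ['VISa', 'VISrl', 'CA1', 'VPM']
fine_values = np.array([1.0, 2.0, 3.0, 4.0])

# aggregate the two PTLp children into a single value (here, a simple mean)
ptlp_value = fine_values[:2].mean()

# plot using the Swanson-level acronyms only
acronyms = ['PTLp', 'CA1', 'VPM']
values = np.array([ptlp_value, fine_values[2], fine_values[3]])
plot_swanson_vector(acronyms, values, annotate=True, empty_color='silver')
```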
50 changes: 46 additions & 4 deletions examples/loading_data/loading_trials_data.ipynb
@@ -45,7 +45,45 @@
{
"cell_type": "markdown",
"source": [
"## Loading a session's trials"
"## Loading a single session's trials\n",
"\n",
"If you want to load the trials data for a single session, we recommend you use the `SessionLoader`:"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "code",
"execution_count": null,
"outputs": [],
"source": [
"'''\n",
"RECOMMENDED\n",
"'''\n",
"from brainbox.io.one import SessionLoader\n",
"from one.api import ONE\n",
"one = ONE()\n",
"eid = '4ecb5d24-f5cc-402c-be28-9d0f7cb14b3a'\n",
"sl = SessionLoader(eid=eid, one=one)\n",
"sl.load_trials()\n",
"\n",
"# The datasets are attributes of the sl.trials, for example probabilityLeft :\n",
"probabilityLeft = sl.trials['probabilityLeft']\n",
"# Find all of them using:\n",
"sl.trials.keys()"
],
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%%\n"
}
}
},
{
"cell_type": "markdown",
"source": [
"For completeness, we present below how to load the trials object using the `one.load_object` method, however we recommend you use the code above and use the `SessionLoader` instead."
],
"metadata": {
"collapsed": false
@@ -56,6 +94,9 @@
"execution_count": null,
"outputs": [],
"source": [
"'''\n",
"ALTERNATIVE - NOT RECOMMENDED\n",
"'''\n",
"from one.api import ONE\n",
"one = ONE()\n",
"eid = '4ecb5d24-f5cc-402c-be28-9d0f7cb14b3a'\n",
Expand All @@ -73,9 +114,10 @@
"id": "0514237a",
"metadata": {},
"source": [
"## Loading a subject's trials\n",
"This loads all trials data for a given subject (all session trials concatenated) into a DataFrame.\n",
"The subjectTraining table contains the training statuses."
"## Loading all the sessions' trials for a single subject at once\n",
"If you want to study several sessions for a single subject, we recommend you use the `one.load_aggregate` method rather than downloading each trials data individually per session.\n",
"This methods loads all the trials data `subjectTrials` for a given subject into a single DataFrame (i.e. all session trials are concatenated).\n",
"You can use the same method to load the `subjectTraining` table, which contains the training statuses."
]
},
{
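The code cell that follows this markdown is collapsed in the diff above; a minimal sketch of what such a call could look like, assuming the aggregate dataset names `_ibl_subjectTrials.table` and `_ibl_subjectTraining.table` and an arbitrary subject nickname:

```python
from one.api import ONE
one = ONE()

subject = 'SWC_043'  # hypothetical subject nickname

# all of the subject's session trials concatenated into a single DataFrame
trials = one.load_aggregate('subjects', subject, '_ibl_subjectTrials.table')

# the training statuses for the same subject
training = one.load_aggregate('subjects', subject, '_ibl_subjectTraining.table')

print(len(trials), 'trials across all sessions')
```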
2 changes: 1 addition & 1 deletion ibllib/__init__.py
@@ -2,7 +2,7 @@
import logging
import warnings

__version__ = '2.23.0'
__version__ = '2.23.1'
warnings.filterwarnings('always', category=DeprecationWarning, module='ibllib')

# if this becomes a full-blown library we should let the logging configuration to the discretion of the dev
24 changes: 14 additions & 10 deletions ibllib/atlas/flatmaps.py
@@ -137,13 +137,16 @@ def swanson(filename="swanson2allen.npz"):

def swanson_json(filename="swansonpaths.json"):

OLD_MD5 = ['f848783954883c606ca390ceda9e37d2']
OLD_MD5 = ['97ccca2b675b28ba9b15ca8af5ba4111', # errored map with FOTU and CUL4, 5 mixed up
'56daa7022b5e03080d8623814cda6f38', # old md5 of swanson json without CENT and PTLp
# and CUL4 split (on s3 called swansonpaths_56daa.json)
'f848783954883c606ca390ceda9e37d2']

json_file = AllenAtlas._get_cache_dir().joinpath(filename)
if not json_file.exists() or md5(json_file) in OLD_MD5:
json_file.parent.mkdir(exist_ok=True, parents=True)
_logger.info(f'downloading swanson paths from {aws.S3_BUCKET_IBL} s3 bucket...')
aws.s3_download_file(f'atlas/{json_file.name}', json_file)
aws.s3_download_file(f'atlas/{json_file.name}', json_file, overwrite=True)

with open(json_file) as f:
sw_json = json.load(f)
@@ -198,44 +201,45 @@ def plot_swanson_vector(acronyms=None, values=None, ax=None, hemisphere=None, br
color = empty_color

coords = reg['coordsReg']
reg_id = reg['thisID']

if reg['hole']:
vertices, codes = coords_for_poly_hole(coords)
if orientation == 'portrait':
vertices[:, [0, 1]] = vertices[:, [1, 0]]
plot_polygon_with_hole(ax, vertices, codes, color, **kwargs)
plot_polygon_with_hole(ax, vertices, codes, color, reg_id, **kwargs)
if hemisphere is not None:
color_inv = color if hemisphere == 'mirror' else empty_color
vertices_inv = np.copy(vertices)
vertices_inv[:, 0] = -1 * vertices_inv[:, 0] + (sw.shape[0] * 2)
plot_polygon_with_hole(ax, vertices_inv, codes, color_inv, **kwargs)
plot_polygon_with_hole(ax, vertices_inv, codes, color_inv, reg_id, **kwargs)
else:
plot_polygon_with_hole(ax, vertices, codes, color, **kwargs)
plot_polygon_with_hole(ax, vertices, codes, color, reg_id, **kwargs)
if hemisphere is not None:
color_inv = color if hemisphere == 'mirror' else empty_color
vertices_inv = np.copy(vertices)
vertices_inv[:, 1] = -1 * vertices_inv[:, 1] + (sw.shape[0] * 2)
plot_polygon_with_hole(ax, vertices_inv, codes, color_inv, **kwargs)
plot_polygon_with_hole(ax, vertices_inv, codes, color_inv, reg_id, **kwargs)
else:
coords = [coords] if type(coords) == dict else coords
for c in coords:

if orientation == 'portrait':
xy = np.c_[c['y'], c['x']]
plot_polygon(ax, xy, color, **kwargs)
plot_polygon(ax, xy, color, reg_id, **kwargs)
if hemisphere is not None:
color_inv = color if hemisphere == 'mirror' else empty_color
xy_inv = np.copy(xy)
xy_inv[:, 0] = -1 * xy_inv[:, 0] + (sw.shape[0] * 2)
plot_polygon(ax, xy_inv, color_inv, **kwargs)
plot_polygon(ax, xy_inv, color_inv, reg_id, **kwargs)
else:
xy = np.c_[c['x'], c['y']]
plot_polygon(ax, xy, color, **kwargs)
plot_polygon(ax, xy, color, reg_id, **kwargs)
if hemisphere is not None:
color_inv = color if hemisphere == 'mirror' else empty_color
xy_inv = np.copy(xy)
xy_inv[:, 1] = -1 * xy_inv[:, 1] + (sw.shape[0] * 2)
plot_polygon(ax, xy_inv, color_inv, **kwargs)
plot_polygon(ax, xy_inv, color_inv, reg_id, **kwargs)

if orientation == 'portrait':
ax.set_ylim(0, sw.shape[1])
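The `reg_id` threaded through `plot_swanson_vector` above ends up as a `gid` on every region patch (see `plots.py` below). A sketch, not part of this commit, of how those tags can be inspected and preserved in a vector export; the figure handling and filename are arbitrary:

```python
import numpy as np
import matplotlib.pyplot as plt
from ibllib.atlas.flatmaps import plot_swanson_vector

acronyms = ['PTLp', 'CA1', 'VPM']
values = np.array([1.5, 3, 4])

fig, ax = plt.subplots()
plot_swanson_vector(acronyms, values, ax=ax, empty_color='silver')

# each region polygon added by plot_polygon carries a gid of the form 'region_<id>'
gids = [p.get_gid() for p in ax.patches if p.get_gid() is not None]
print(gids[:5])

# gids survive vector export (they become `id` attributes in the SVG)
fig.savefig('swanson_flatmap.svg')
```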
8 changes: 4 additions & 4 deletions ibllib/atlas/plots.py
@@ -39,14 +39,14 @@ def get_bc_10():
return bc


def plot_polygon(ax, xy, color, edgecolor='k', linewidth=0.3, alpha=1):
p = Polygon(xy, facecolor=color, edgecolor=edgecolor, linewidth=linewidth, alpha=alpha)
def plot_polygon(ax, xy, color, reg_id, edgecolor='k', linewidth=0.3, alpha=1):
p = Polygon(xy, facecolor=color, edgecolor=edgecolor, linewidth=linewidth, alpha=alpha, gid=f'region_{reg_id}')
ax.add_patch(p)


def plot_polygon_with_hole(ax, vertices, codes, color, edgecolor='k', linewidth=0.3, alpha=1):
def plot_polygon_with_hole(ax, vertices, codes, color, reg_id, edgecolor='k', linewidth=0.3, alpha=1):
path = mpath.Path(vertices, codes)
patch = PathPatch(path, facecolor=color, edgecolor=edgecolor, linewidth=linewidth, alpha=alpha)
patch = PathPatch(path, facecolor=color, edgecolor=edgecolor, linewidth=linewidth, alpha=alpha, gid=f'region_{reg_id}')
ax.add_patch(patch)


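For reference, a hypothetical standalone call with the new signature — `reg_id` is now a required argument and becomes the patch `gid` (`'region_1'` here); the square coordinates are made up for illustration:

```python
import numpy as np
import matplotlib.pyplot as plt
from ibllib.atlas.plots import plot_polygon

fig, ax = plt.subplots()
xy = np.array([[0, 0], [1, 0], [1, 1], [0, 1]])  # arbitrary square outline
plot_polygon(ax, xy, color='tab:blue', reg_id=1)
ax.set_xlim(-1, 2)
ax.set_ylim(-1, 2)
plt.show()
```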
16 changes: 2 additions & 14 deletions ibllib/oneibl/data_handlers.py
@@ -1,6 +1,5 @@
import logging
import pandas as pd
import numpy as np
from pathlib import Path
import shutil
import os
@@ -11,7 +10,6 @@
from one.webclient import AlyxClient
from one.util import filter_datasets
from one.alf.files import add_uuid_string, session_path_parts
from iblutil.io.parquet import np2str
from ibllib.oneibl.registration import register_dataset, get_lab, get_local_data_repository
from ibllib.oneibl.patcher import FTPPatcher, SDSCPatcher, SDSC_ROOT_PATH, SDSC_PATCH_PATH

@@ -168,12 +166,7 @@ def setUp(self):
sess_path = Path(rel_sess_path).joinpath(d['rel_path'])
full_local_path = Path(self.globus.endpoints['local']['root_path']).joinpath(sess_path)
if not full_local_path.exists():

if one._index_type() is int:
uuid = np2str(np.r_[i[0], i[1]])
elif one._index_type() is str:
uuid = i

uuid = i
self.local_paths.append(full_local_path)
target_paths.append(sess_path)
source_paths.append(add_uuid_string(sess_path, uuid))
@@ -382,12 +375,7 @@ def setUp(self):
SDSC_TMP = Path(SDSC_PATCH_PATH.joinpath(self.task.__class__.__name__))
for i, d in df.iterrows():
file_path = Path(d['session_path']).joinpath(d['rel_path'])

if self.one._index_type() is int:
uuid = np2str(np.r_[i[0], i[1]])
elif self.one._index_type() is str:
uuid = i

uuid = i
file_uuid = add_uuid_string(file_path, uuid)
file_link = SDSC_TMP.joinpath(file_path)
file_link.parent.mkdir(exist_ok=True, parents=True)
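With ONE v2 the datasets table is indexed by the dataset UUID string, which is why the integer-index branch could be dropped and `uuid = i` used directly. A small sketch of what `add_uuid_string` then produces; the path and UUID below are made up:

```python
from pathlib import Path
from one.alf.files import add_uuid_string

uuid = 'd1b1c2ab-3a4b-4c5d-8e9f-0a1b2c3d4e5f'  # hypothetical dataset UUID (the DataFrame index)
rel_path = Path('alf/_ibl_trials.table.pqt')

# the UUID is inserted into the filename, e.g. alf/_ibl_trials.table.<uuid>.pqt
print(add_uuid_string(rel_path, uuid))
```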
4 changes: 3 additions & 1 deletion ibllib/pipes/training_status.py
@@ -19,7 +19,9 @@
import seaborn as sns


TRAINING_STATUS = {'not_computed': (-2, (0, 0, 0, 0)),
TRAINING_STATUS = {'untrainable': (-4, (0, 0, 0, 0)),
'unbiasable': (-3, (0, 0, 0, 0)),
'not_computed': (-2, (0, 0, 0, 0)),
'habituation': (-1, (0, 0, 0, 0)),
'in training': (0, (0, 0, 0, 0)),
'trained 1a': (1, (195, 90, 80, 255)),
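For context, the values in `TRAINING_STATUS` are `(rank, RGBA colour)` tuples, so statuses can be ordered by rank; the two new entries sort below `not_computed`. An illustrative, self-contained sketch (the dict copies only the entries visible in the diff):

```python
# subset of TRAINING_STATUS as shown in the diff: status -> (rank, RGBA colour)
TRAINING_STATUS = {
    'untrainable': (-4, (0, 0, 0, 0)),
    'unbiasable': (-3, (0, 0, 0, 0)),
    'not_computed': (-2, (0, 0, 0, 0)),
    'habituation': (-1, (0, 0, 0, 0)),
    'in training': (0, (0, 0, 0, 0)),
    'trained 1a': (1, (195, 90, 80, 255)),
}

# pick the most advanced status from a list by its rank
statuses = ['in training', 'untrainable', 'trained 1a']
latest = max(statuses, key=lambda s: TRAINING_STATUS[s][0])
print(latest)  # trained 1a
```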
7 changes: 7 additions & 0 deletions release_notes.md
@@ -1,4 +1,11 @@
## Release Notes 2.23
### Release Notes 2.23.1 2023-06-15
### features
- split Swanson areas
### bugfixes
- training plots
- fix data handler on SDSC for ONEv2

### Release Notes 2.23.0 2023-05-19
- quiescence period extraction
- ONEv2 requirement
Expand Down
