one.alf.path.ALFPath class - a pathlib-like object with ALF methods
(issue #149)
k1o0 committed Nov 11, 2024
1 parent a233b4b commit 2e07edb
Showing 16 changed files with 1,912 additions and 622 deletions.
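
This commit introduces one.alf.path.ALFPath, a pathlib-like path class with ALF-specific parsing methods. A minimal usage sketch based on the notebook changes below; ALFPath and parse_alf_name() both appear in this diff, while the exact dictionary keys returned are an assumption taken from the ALF specification:

    from one.alf.path import ALFPath

    # Parse an ALF filename into its constituent parts.
    path = ALFPath('_ibl_spikes.times_ephysClock.raw.npy')
    parts = path.parse_alf_name()
    # Assumed keys per the ALF spec: namespace, object, attribute,
    # timescale, extra, extension.
    print(parts['object'], parts['attribute'])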
8 changes: 8 additions & 0 deletions CHANGELOG.md
@@ -1,13 +1,19 @@
# Changelog
## [Latest](https://github.com/int-brain-lab/ONE/commits/main) [3.0.0]
This version drops support for python 3.9 and below, and ONE is now in remote mode by default.
+Also adds a new ALFPath class to replace alf path functions.

### Modified

- supports python >= 3.10 only
- OneAlyx uses remote mode by default, instead of auto
- OneAlyx.search now updates the cache tables in remote mode as paginated sessions are accessed
- datasets table file_size column nullable by default
+- one.alf.io.save_metadata now returns the saved filepath
+- paths returned by One methods and functions in one.alf.io are now ALFPath instances
+- bugfix: one.alf.path.full_path_parts didn't always raise when invalid path passed
+- one.alf.path module containing ALFPath class
+- one.alf.exceptions.InvalidALF exception

### Added

@@ -18,6 +24,8 @@ This version drops support for python 3.9 and below, and ONE is now in remote mode by default.
### Removed

- setup.py
+- one.alf.files; use one.alf.path instead
+- one.alf.io.remove_uuid_file

## [2.11.1]

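With one.alf.files removed (see the Removed entries above), imports of filename_parts need migrating to the new module. A sketch of the before and after, using only calls that appear elsewhere in this diff:

    # ONE < 3.0.0 (module removed in this release):
    # from one.alf.files import filename_parts
    # parts = filename_parts('_ibl_spikes.times_ephysClock.raw.npy', as_dict=True)

    # ONE >= 3.0.0:
    from one.alf.path import ALFPath
    parts = ALFPath('_ibl_spikes.times_ephysClock.raw.npy').parse_alf_name()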
166 changes: 86 additions & 80 deletions docs/notebooks/datasets_and_types.ipynb
@@ -16,37 +16,43 @@
},
{
"cell_type": "code",
"execution_count": 13,
"outputs": [],
"source": [
"from pprint import pprint\n",
"from one.alf import spec\n",
"from one.alf.files import filename_parts"
],
"execution_count": null,
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%%\n"
}
-}
+},
+"outputs": [],
+"source": [
+"from pprint import pprint\n",
+"from one.alf import spec\n",
+"from one.alf.path import ALFPath"
+]
},
{
"cell_type": "markdown",
"source": [
"## Datasets\n",
"\n",
"Print information about ALF objects"
],
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%% md\n"
}
-}
+},
+"source": [
+"## Datasets\n",
+"\n",
+"Print information about ALF objects"
+]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [
{
"name": "stdout",
@@ -73,83 +79,83 @@
],
"source": [
"spec.describe('object')"
-],
-"metadata": {
-"collapsed": false,
-"pycharm": {
-"name": "#%%\n"
-}
-}
+]
},
{
"cell_type": "markdown",
"source": [
"Check the file name is ALF compliant"
],
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%% md\n"
}
-}
+},
+"source": [
+"Check the file name is ALF compliant"
+]
},
{
"cell_type": "code",
"execution_count": 15,
"outputs": [],
"source": [
"assert spec.is_valid('spikes.times.npy')"
],
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%%\n"
}
-}
+},
+"outputs": [],
+"source": [
+"assert spec.is_valid('spikes.times.npy')"
+]
},
{
"cell_type": "markdown",
"source": [
"Safely construct an ALF dataset using the 'to_alf' function. This will ensure the correct\n",
"case and format"
],
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%% md\n"
}
-}
+},
+"source": [
+"Safely construct an ALF dataset using the 'to_alf' function. This will ensure the correct\n",
+"case and format"
+]
},
{
"cell_type": "code",
"execution_count": 16,
"outputs": [],
"source": [
"filename = spec.to_alf('spikes', 'times', 'npy',\n",
" namespace='ibl', timescale='ephys clock', extra='raw')"
],
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%%\n"
}
-}
+},
+"outputs": [],
+"source": [
+"filename = spec.to_alf('spikes', 'times', 'npy',\n",
+" namespace='ibl', timescale='ephys clock', extra='raw')"
+]
},
{
"cell_type": "markdown",
"source": [
"Parsing a new file into its constituent parts ensures the dataset is correct"
],
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%% md\n"
}
-}
+},
+"source": [
+"Parsing a new file into its constituent parts ensures the dataset is correct"
+]
},
{
"cell_type": "code",
"execution_count": 17,
"execution_count": null,
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [
{
"name": "stdout",
@@ -165,18 +171,18 @@
}
],
"source": [
"parts = filename_parts('_ibl_spikes.times_ephysClock.raw.npy', as_dict=True, assert_valid=True)\n",
"parts = ALFPath('_ibl_spikes.times_ephysClock.raw.npy').parse_alf_name()\n",
"pprint(parts)"
-],
+]
+},
+{
+"cell_type": "markdown",
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%%\n"
"name": "#%% md\n"
}
-}
-},
-{
-"cell_type": "markdown",
+},
"source": [
"## Dataset types\n",
"<div class=\"alert alert-info\">\n",
@@ -197,17 +203,17 @@
"\n",
"When registering files they must match exactly 1 dataset type.\n",
"</div>"
-],
-"metadata": {
-"collapsed": false,
-"pycharm": {
-"name": "#%% md\n"
-}
-}
+]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [
{
"name": "stdout",
@@ -218,7 +224,13 @@
},
{
"data": {
"text/plain": "{'id': '1427b6ba-6535-4f8f-9058-e3df63f0261e',\n 'name': 'spikes.times',\n 'created_by': None,\n 'description': '[nspi]. Times of spikes (seconds, relative to experiment onset). Note this includes spikes from all probes, merged together',\n 'filename_pattern': 'spikes.times*.npy'}"
"text/plain": [
"{'id': '1427b6ba-6535-4f8f-9058-e3df63f0261e',\n",
" 'name': 'spikes.times',\n",
" 'created_by': None,\n",
" 'description': '[nspi]. Times of spikes (seconds, relative to experiment onset). Note this includes spikes from all probes, merged together',\n",
" 'filename_pattern': 'spikes.times*.npy'}"
]
},
"execution_count": 18,
"metadata": {},
@@ -229,29 +241,29 @@
"from one.api import ONE\n",
"one = ONE(base_url='https://openalyx.internationalbrainlab.org')\n",
"one.describe_dataset('spikes.times') # Requires online version (an Alyx database connection)"
-],
-"metadata": {
-"collapsed": false,
-"pycharm": {
-"name": "#%%\n"
-}
-}
+]
},
{
"cell_type": "markdown",
"source": [
"Datasets and their types can be interconverted using the following functions (online mode only):"
],
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%% md\n"
}
-}
+},
+"source": [
+"Datasets and their types can be interconverted using the following functions (online mode only):"
+]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [
{
"name": "stdout",
@@ -269,13 +281,7 @@
"\n",
"dset_list = '\", \"'.join(datasets)\n",
"print(f'the dataset type \"{dataset_type}\" for {eid} comprises the datasets: \\n\"{dset_list}\"')"
-],
-"metadata": {
-"collapsed": false,
-"pycharm": {
-"name": "#%%\n"
-}
-}
+]
}
],
"metadata": {
@@ -299,4 +305,4 @@
},
"nbformat": 4,
"nbformat_minor": 0
-}
+}
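
Taken together, the updated notebook cells round-trip a dataset name: spec.to_alf builds a spec-compliant filename and ALFPath.parse_alf_name decomposes it again. A combined sketch of the cells above; the assertion on the constructed name is an addition here, not part of the notebook:

    from one.alf import spec
    from one.alf.path import ALFPath

    # Build an ALF-compliant filename; to_alf handles case and part order.
    filename = spec.to_alf('spikes', 'times', 'npy',
                           namespace='ibl', timescale='ephys clock', extra='raw')
    assert spec.is_valid(filename)  # expected: '_ibl_spikes.times_ephysClock.raw.npy'
    parts = ALFPath(filename).parse_alf_name()
    print(parts)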