Merge branch 'dev' into config
mavaylon1 authored Jan 31, 2024
2 parents 6a9c956 + 825e31f commit 9aa468d
Showing 38 changed files with 529 additions and 152 deletions.
2 changes: 1 addition & 1 deletion .github/ISSUE_TEMPLATE/bug_report.yml
@@ -24,7 +24,7 @@ body:
         Please copy and paste the code you were trying to run that caused the error.
         Feel free to include as little or as much as you think is relevant. This section will be automatically formatted into code, so no need for backticks.
-      render: shell
+      render: python
     validations:
       required: true
   - type: textarea
@@ -1,12 +1,12 @@
-name: Check Sphinx external links
+name: Check Sphinx links
 on:
   pull_request:
   schedule:
     - cron: '0 5 * * *'  # once per day at midnight ET
   workflow_dispatch:
 
 jobs:
-  check-external-links:
+  check-sphinx-links:
     runs-on: ubuntu-latest
     concurrency:
       group: ${{ github.workflow }}-${{ github.ref }}
@@ -29,5 +29,5 @@ jobs:
         python -m pip install -r requirements-doc.txt -r requirements-opt.txt
         python -m pip install .
-    - name: Check Sphinx external links
-      run: sphinx-build -b linkcheck ./docs/source ./test_build
+    - name: Check Sphinx internal and external links
+      run: sphinx-build -W -b linkcheck ./docs/source ./test_build
2 changes: 1 addition & 1 deletion .github/workflows/deploy_release.yml
@@ -47,7 +47,7 @@ jobs:
       run: |
         python -m pip install twine
         ls -1 dist
-        # twine upload --repository-url https://test.pypi.org/legacy/ -u ${{ secrets.BOT_PYPI_USER }} -p ${{ secrets.BOT_PYPI_PASSWORD }} --skip-existing dist/*
+        # twine upload --repository-url https://test.pypi.org/legacy/ -u ${{ secrets.BOT_PYPI_USER }} -p ${{ secrets.BOT_TEST_PYPI_PASSWORD }} --skip-existing dist/*
         twine upload -u ${{ secrets.BOT_PYPI_USER }} -p ${{ secrets.BOT_PYPI_PASSWORD }} --skip-existing dist/*
     - name: Publish wheel and source distributions as a GitHub release
2 changes: 1 addition & 1 deletion .pre-commit-config.yaml
@@ -18,7 +18,7 @@ repos:
 #  hooks:
 #    - id: black
   - repo: https://github.com/astral-sh/ruff-pre-commit
-    rev: v0.1.1
+    rev: v0.1.14
     hooks:
       - id: ruff
 #  - repo: https://github.com/econchick/interrogate
16 changes: 12 additions & 4 deletions CHANGELOG.md
@@ -1,25 +1,33 @@
 # HDMF Changelog
 
-## HDMF 3.12.0 (Upcoming)
+## HDMF 3.12.1 (Upcoming)
 
-### Enhancements
-- Add Data.set_data_io(), which allows for setting a `DataIO` to a data object after-the-fact. @bendichter and @CodyCBakerPhD [#1013](https://github.com/hdmf-dev/hdmf/pull/1013)
+### Bug fixes
+- Fixed retrieving the correct path for a `HERD` zip file on read. [#1046](https://github.com/hdmf-dev/hdmf/pull/1046)
+- Fixed internal links in docstrings and tutorials. @stephprince [#1031](https://github.com/hdmf-dev/hdmf/pull/1031)
+- Fixed issue with creating documentation links to classes in docval arguments. @rly [#1036](https://github.com/hdmf-dev/hdmf/pull/1036)
+
+## HDMF 3.12.0 (January 16, 2024)
+
+### Enhancements
+- Add Data.set_data_io(), which allows for setting a `DataIO` to a data object after-the-fact. @bendichter and @CodyCBakerPhD [#1013](https://github.com/hdmf-dev/hdmf/pull/1013)
 - Added `add_ref_termset`, updated helper methods for `HERD`, revised `add_ref` to support validations prior to populating the tables,
   and added `add_ref_container`. @mavaylon1 [#968](https://github.com/hdmf-dev/hdmf/pull/968)
 - Use `stacklevel` in most warnings. @rly [#1027](https://github.com/hdmf-dev/hdmf/pull/1027)
+- Fixed broken links in documentation and added internal link checking to workflows. @stephprince [#1031](https://github.com/hdmf-dev/hdmf/pull/1031)
 
 ### Minor Improvements
 - Updated `__gather_columns` to ignore the order of bases when generating columns from the super class. @mavaylon1 [#991](https://github.com/hdmf-dev/hdmf/pull/991)
 - Updated `get_key` to return all the keys if there are multiple within a `HERD` instance. @mavaylon1 [#999](https://github.com/hdmf-dev/hdmf/pull/999)
 - Improved HTML rendering of tables. @bendichter [#998](https://github.com/hdmf-dev/hdmf/pull/998)
 - Improved issue and PR templates. @rly [#1004](https://github.com/hdmf-dev/hdmf/pull/1004)
 - Added check during validation for whether a variable-length dataset is empty. @bendichter, @oruebel [#789](https://github.com/hdmf-dev/hdmf/pull/789)
 
 ### Bug fixes
 - Fixed issue with custom class generation when a spec has a `name`. @rly [#1006](https://github.com/hdmf-dev/hdmf/pull/1006)
 - Fixed issue with usage of deprecated `ruamel.yaml.safe_load` in `src/hdmf/testing/validate_spec.py`. @rly [#1008](https://github.com/hdmf-dev/hdmf/pull/1008)
 - Fixed issue where `ElementIdentifiers` data could be set to non-integer values. @rly [#1009](https://github.com/hdmf-dev/hdmf/pull/1009)
 - Fixed issue where string datasets/attributes with isodatetime-formatted values failed validation against a text spec. @rly [#1026](https://github.com/hdmf-dev/hdmf/pull/1026)
 
 ## HDMF 3.11.0 (October 30, 2023)
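For context on the `Data.set_data_io()` entry above, a minimal sketch of the after-the-fact wrapping it enables; the two-argument call (a `DataIO` subclass plus a dict of keyword arguments for it) is an assumption based on the #1013 entry, not a signature shown on this page:

    import numpy as np
    from hdmf.container import Data
    from hdmf.backends.hdf5 import H5DataIO

    data = Data(name='example', data=np.arange(10))
    # Hypothetical usage: request gzip compression for the next write
    # without rebuilding the container.
    data.set_data_io(H5DataIO, dict(compression='gzip'))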
2 changes: 1 addition & 1 deletion Legal.txt
@@ -1,4 +1,4 @@
-“hdmf” Copyright (c) 2017-2023, The Regents of the University of California, through Lawrence Berkeley National Laboratory (subject to receipt of any required approvals from the U.S. Dept. of Energy). All rights reserved.
+“hdmf” Copyright (c) 2017-2024, The Regents of the University of California, through Lawrence Berkeley National Laboratory (subject to receipt of any required approvals from the U.S. Dept. of Energy). All rights reserved.
 
 If you have questions about your rights to use or distribute this software, please contact Berkeley Lab's Innovation & Partnerships Office at [email protected].
4 changes: 2 additions & 2 deletions README.rst
@@ -94,7 +94,7 @@ Citing HDMF
 LICENSE
 =======
 
-"hdmf" Copyright (c) 2017-2023, The Regents of the University of California, through Lawrence Berkeley National Laboratory (subject to receipt of any required approvals from the U.S. Dept. of Energy). All rights reserved.
+"hdmf" Copyright (c) 2017-2024, The Regents of the University of California, through Lawrence Berkeley National Laboratory (subject to receipt of any required approvals from the U.S. Dept. of Energy). All rights reserved.
 
 Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
 
 (1) Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
@@ -110,7 +110,7 @@ You are under no obligation whatsoever to provide any bug fixes, patches, or upg
 COPYRIGHT
 =========
 
-"hdmf" Copyright (c) 2017-2023, The Regents of the University of California, through Lawrence Berkeley National Laboratory (subject to receipt of any required approvals from the U.S. Dept. of Energy). All rights reserved.
+"hdmf" Copyright (c) 2017-2024, The Regents of the University of California, through Lawrence Berkeley National Laboratory (subject to receipt of any required approvals from the U.S. Dept. of Energy). All rights reserved.
 
 If you have questions about your rights to use or distribute this software, please contact Berkeley Lab's Innovation & Partnerships Office at [email protected].
 
 NOTICE. This Software was developed under funding from the U.S. Department of Energy and the U.S. Government consequently retains certain rights. As such, the U.S. Government has been granted for itself and others acting on its behalf a paid-up, nonexclusive, irrevocable, worldwide license in the Software to reproduce, distribute copies to the public, prepare derivative works, and perform publicly and display publicly, and to permit others to do so.
2 changes: 1 addition & 1 deletion docs/Makefile
@@ -149,7 +149,7 @@ changes:
 	@echo "The overview file is in $(BUILDDIR)/changes."
 
 linkcheck:
-	$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
+	$(SPHINXBUILD) -W -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
 	@echo
 	@echo "Link check complete; look for any errors in the above output " \
 	      "or in $(BUILDDIR)/linkcheck/output.txt."
4 changes: 2 additions & 2 deletions docs/gallery/plot_external_resources.py
@@ -153,8 +153,8 @@ def __init__(self, **kwargs):
 # ------------------------------------------------------
 # It is important to keep in mind that when adding an :py:class:`~hdmf.common.resources.Object` to
 # the :py:class:`~hdmf.common.resources.ObjectTable`, the parent object identified by
-# :py:class:`~hdmf.common.resources.Object.object_id` must be the closest parent to the target object
-# (i.e., :py:class:`~hdmf.common.resources.Object.relative_path` must be the shortest possible path and
+# ``Object.object_id`` must be the closest parent to the target object
+# (i.e., ``Object.relative_path`` must be the shortest possible path and
 # as such cannot contain any objects with a ``data_type`` and associated ``object_id``).
 #
 # A common example would be with the :py:class:`~hdmf.common.table.DynamicTable` class, which holds
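To make the constraint in that hunk concrete, a hedged sketch following the `HERDManagerContainer` pattern defined earlier in this tutorial; the table, key, and entity values are illustrative, and the `add_ref` keyword names are an assumption based on the tutorial's other examples. Because the reference targets a `DynamicTable` column, the registered object is the column itself (the closest typed parent), so `relative_path` stays empty:

    from hdmf.common import DynamicTable
    from hdmf.common.resources import HERD
    from hdmf.container import Container, HERDManager

    class HERDManagerContainer(Container, HERDManager):
        """Illustrative file-level container that can own a HERD."""

    file = HERDManagerContainer(name='file')
    herd = HERD()

    table = DynamicTable(name='table', description='an example table')
    table.add_column(name='species', description='species names')
    table.add_row(species='Mus musculus')

    # The registered Object is the 'species' VectorData column (the closest
    # typed parent of the referenced data), so its relative_path is ''.
    herd.add_ref(file=file, container=table['species'], key='Mus musculus',
                 entity_id='NCBI_TAXON:10090',
                 entity_uri='https://www.ncbi.nlm.nih.gov/Taxonomy/Browser/wwwtax.cgi?id=10090')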
8 changes: 4 additions & 4 deletions docs/gallery/plot_generic_data_chunk_tutorial.py
@@ -119,10 +119,10 @@ def _get_dtype(self):
 # optimal performance (typically 1 MB or less). In contrast, a :py:class:`~hdmf.data_utils.DataChunk` in
 # HDMF acts as a block of data for writing data to a dataset, and spans multiple HDF5 chunks to improve performance.
 # This is achieved by avoiding repeat
-# updates to the same `Chunk` in the HDF5 file, :py:class:`~hdmf.data_utils.DataChunk` objects for write
-# should align with `Chunks` in the HDF5 file, i.e., the :py:class:`~hdmf.data_utils.DataChunk.selection`
-# should fully cover one or more `Chunks` in the HDF5 file to avoid repeat updates to the same
-# `Chunks` in the HDF5 file. This is what the `buffer` of the :py:class:`~hdmf.data_utils.GenericDataChunkIterator`
+# updates to the same ``Chunk`` in the HDF5 file: :py:class:`~hdmf.data_utils.DataChunk` objects for write
+# should align with ``Chunks`` in the HDF5 file, i.e., the ``DataChunk.selection``
+# should fully cover one or more ``Chunks`` in the HDF5 file to avoid repeat updates to the same
+# ``Chunks`` in the HDF5 file. This is what the ``buffer`` of the :py:class:`~hdmf.data_utils.GenericDataChunkIterator`
 # does, which upon each iteration returns a single
 # :py:class:`~hdmf.data_utils.DataChunk` object (by default > 1 GB) that perfectly spans many HDF5 chunks
 # (by default < 1 MB) to help reduce the number of small I/O operations
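Since this hunk discusses how the buffer relates to HDF5 chunks, a minimal sketch of a concrete iterator may help. It assumes the three abstract methods visible in the hunk header (`_get_data`, `_get_maxshape`, `_get_dtype`) and the `buffer_gb`/`chunk_mb` constructor options; the in-memory array stands in for a real out-of-core source:

    import numpy as np
    from hdmf.data_utils import GenericDataChunkIterator

    class NumpyChunkIterator(GenericDataChunkIterator):
        """Illustrative iterator over an in-memory array."""
        def __init__(self, array, **kwargs):
            self._array = array
            super().__init__(**kwargs)

        def _get_data(self, selection):
            # `selection` is a tuple of slices covering one buffer.
            return self._array[selection]

        def _get_maxshape(self):
            return self._array.shape

        def _get_dtype(self):
            return self._array.dtype

    data = np.random.rand(1000, 384)
    # buffer_gb sizes the DataChunk returned per iteration; chunk_mb sizes
    # the recommended HDF5 chunks that each buffer should span.
    for chunk in NumpyChunkIterator(data, buffer_gb=0.5, chunk_mb=1.0):
        print(chunk.selection)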
2 changes: 1 addition & 1 deletion docs/gallery/plot_term_set.py
@@ -107,7 +107,7 @@
 ######################################################
 # Viewing TermSet values
 # ----------------------------------------------------
-# :py:class:`~hdmf.term_set.TermSet` has methods to retrieve terms. The :py:func:`~hdmf.term_set.TermSet:view_set`
+# :py:class:`~hdmf.term_set.TermSet` has methods to retrieve terms. The :py:func:`~hdmf.term_set.TermSet.view_set`
 # method will return a dictionary of all the terms and the corresponding information for each term.
 # Users can index specific terms from the :py:class:`~hdmf.term_set.TermSet`. LinkML runtime will need to be installed.
 # You can do so by first running ``pip install linkml-runtime``.
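A short sketch of the two access patterns named above (the schema path is illustrative, and `linkml-runtime` must be installed first):

    from hdmf.term_set import TermSet

    terms = TermSet(term_schema_path='docs/gallery/example_term_set.yaml')
    print(terms.view_set)          # dictionary of every term and its metadata
    print(terms['Homo sapiens'])   # index one term from the set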
2 changes: 1 addition & 1 deletion docs/make.bat
@@ -183,7 +183,7 @@ if "%1" == "changes" (
 )
 
 if "%1" == "linkcheck" (
-	%SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck
+	%SPHINXBUILD% -W -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck
 	if errorlevel 1 exit /b 1
 	echo.
 	echo.Link check complete; look for any errors in the above output ^
11 changes: 10 additions & 1 deletion docs/source/conf.py
@@ -76,6 +76,7 @@
     "matplotlib": ("https://matplotlib.org/stable/", None),
     "h5py": ("https://docs.h5py.org/en/latest/", None),
     "pandas": ("https://pandas.pydata.org/pandas-docs/stable/", None),
+    "zarr": ("https://zarr.readthedocs.io/en/stable/", None),
 }
 
 # these links cannot be checked in github actions
@@ -84,6 +85,14 @@
     "https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request",
 ]
 
+nitpicky = True
+nitpick_ignore = [('py:class', 'Intracomm'),
+                  ('py:class', 'h5py.RegionReference'),
+                  ('py:class', 'h5py._hl.dataset.Dataset'),
+                  ('py:class', 'function'),
+                  ('py:class', 'unittest.case.TestCase'),
+                  ]
+
 # Add any paths that contain templates here, relative to this directory.
 templates_path = ["_templates"]
 
@@ -99,7 +108,7 @@
 
 # General information about the project.
 project = "HDMF"
-copyright = "2017-2023, Hierarchical Data Modeling Framework"
+copyright = "2017-2024, Hierarchical Data Modeling Framework"
12 changes: 6 additions & 6 deletions docs/source/overview_software_architecture.rst
@@ -81,19 +81,19 @@ Spec
 * Interface for writing extensions or custom specifications
 * There are several main specification classes:
 
-  * :py:class:`~hdmf.spec.AttributeSpec` - specification for metadata
-  * :py:class:`~hdmf.spec.GroupSpec` - specification for a collection of
+  * :py:class:`~hdmf.spec.spec.AttributeSpec` - specification for metadata
+  * :py:class:`~hdmf.spec.spec.GroupSpec` - specification for a collection of
     objects (i.e. subgroups, datasets, links)
-  * :py:class:`~hdmf.spec.DatasetSpec` - specification for a dataset (like
+  * :py:class:`~hdmf.spec.spec.DatasetSpec` - specification for a dataset (like
     an n-dimensional array). Specifies data type, dimensions, etc.
-  * :py:class:`~hdmf.spec.LinkSpec` - specification for a link (like a POSIX
+  * :py:class:`~hdmf.spec.spec.LinkSpec` - specification for a link (like a POSIX
     soft link)
   * :py:class:`~hdmf.spec.spec.RefSpec` - specification for references
     (references are like links, but stored as data)
-  * :py:class:`~hdmf.spec.DtypeSpec` - specification for compound data
+  * :py:class:`~hdmf.spec.spec.DtypeSpec` - specification for compound data
    types. Used to build complex data type specifications, e.g., to define
     tables (used only in :py:class:`~hdmf.spec.spec.DatasetSpec` and
-    correspondingly :py:class:`~hdmf.spec.DatasetSpec`)
+    correspondingly :py:class:`~hdmf.spec.spec.DatasetSpec`)
 
 * **Main Modules:** :py:class:`hdmf.spec`
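As a quick illustration of how the classes listed above compose, a sketch with invented names and docs:

    from hdmf.spec import AttributeSpec, DatasetSpec, GroupSpec

    # A text dataset with one attribute, nested in a group spec that
    # defines a new data type. All names here are illustrative.
    species = DatasetSpec(doc='A 1-D dataset of species names',
                          dtype='text',
                          name='species',
                          shape=(None,),
                          attributes=[AttributeSpec(name='source',
                                                    doc='taxonomy source',
                                                    dtype='text')])
    group = GroupSpec(doc='A group holding the species dataset',
                      data_type_def='SpeciesGroup',
                      datasets=[species])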
2 changes: 1 addition & 1 deletion docs/source/software_process.rst
@@ -19,7 +19,7 @@ inconsistencies.
 There are badges in the README_ file which show the current condition of the dev branch.
 
 .. _GitHub Actions: https://github.com/hdmf-dev/hdmf/actions
-.. _README: https://github.com/hdmf-dev/hdmf#readme
+.. _README: https://github.com/hdmf-dev/hdmf/blob/dev/README.rst
 
 --------
2 changes: 1 addition & 1 deletion docs/source/validation.rst
@@ -3,7 +3,7 @@
 Validating HDMF Data
 ====================
 
-Validation of NWB files is available through :py:mod:`~pynwb`. See the `PyNWB documentation
+Validation of NWB files is available through ``pynwb``. See the `PyNWB documentation
 <https://pynwb.readthedocs.io/en/stable/validation.html>`_ for more information.
 
 --------
2 changes: 1 addition & 1 deletion license.txt
@@ -1,4 +1,4 @@
-“hdmf” Copyright (c) 2017-2023, The Regents of the University of California, through Lawrence Berkeley National Laboratory (subject to receipt of any required approvals from the U.S. Dept. of Energy). All rights reserved.
+“hdmf” Copyright (c) 2017-2024, The Regents of the University of California, through Lawrence Berkeley National Laboratory (subject to receipt of any required approvals from the U.S. Dept. of Energy). All rights reserved.
 
 Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
19 changes: 11 additions & 8 deletions src/hdmf/backends/hdf5/h5_utils.py
@@ -77,7 +77,7 @@ def append(self, dataset, data):
         Append a value to the queue
 
         :param dataset: The dataset where the DataChunkIterator is written to
-        :type dataset: Dataset
+        :type dataset: :py:class:`~h5py.Dataset`
         :param data: DataChunkIterator with the data to be written
         :type data: AbstractDataChunkIterator
         """
@@ -86,7 +86,8 @@
 
 class H5Dataset(HDMFDataset):
     @docval({'name': 'dataset', 'type': (Dataset, Array), 'doc': 'the HDF5 file lazily evaluate'},
-            {'name': 'io', 'type': 'HDF5IO', 'doc': 'the IO object that was used to read the underlying dataset'})
+            {'name': 'io', 'type': 'hdmf.backends.hdf5.h5tools.HDF5IO',
+             'doc': 'the IO object that was used to read the underlying dataset'})
     def __init__(self, **kwargs):
         self.__io = popargs('io', kwargs)
         super().__init__(**kwargs)
@@ -175,7 +176,8 @@ def get_object(self, h5obj):
 class AbstractH5TableDataset(DatasetOfReferences):
 
     @docval({'name': 'dataset', 'type': (Dataset, Array), 'doc': 'the HDF5 file lazily evaluate'},
-            {'name': 'io', 'type': 'HDF5IO', 'doc': 'the IO object that was used to read the underlying dataset'},
+            {'name': 'io', 'type': 'hdmf.backends.hdf5.h5tools.HDF5IO',
+             'doc': 'the IO object that was used to read the underlying dataset'},
             {'name': 'types', 'type': (list, tuple),
              'doc': 'the IO object that was used to read the underlying dataset'})
     def __init__(self, **kwargs):
@@ -499,7 +501,7 @@ def __init__(self, **kwargs):
         # Check for possible collision with other parameters
         if not isinstance(getargs('data', kwargs), Dataset) and self.__link_data:
             self.__link_data = False
-            warnings.warn('link_data parameter in H5DataIO will be ignored')
+            warnings.warn('link_data parameter in H5DataIO will be ignored', stacklevel=2)
         # Call the super constructor and consume the data parameter
         super().__init__(**kwargs)
         # Construct the dict with the io args, ignoring all options that were set to None
@@ -523,7 +525,7 @@ def __init__(self, **kwargs):
             self.__iosettings.pop('compression', None)
             if 'compression_opts' in self.__iosettings:
                 warnings.warn('Compression disabled by compression=False setting. ' +
-                              'compression_opts parameter will, therefore, be ignored.')
+                              'compression_opts parameter will, therefore, be ignored.', stacklevel=2)
                 self.__iosettings.pop('compression_opts', None)
         # Validate the compression options used
         self._check_compression_options()
@@ -537,7 +539,8 @@ def __init__(self, **kwargs):
         # Check possible parameter collisions
         if isinstance(self.data, Dataset):
             for k in self.__iosettings.keys():
-                warnings.warn("%s in H5DataIO will be ignored with H5DataIO.data being an HDF5 dataset" % k)
+                warnings.warn("%s in H5DataIO will be ignored with H5DataIO.data being an HDF5 dataset" % k,
+                              stacklevel=2)
 
         self.__dataset = None
@@ -594,7 +597,7 @@ def _check_compression_options(self):
                 if self.__iosettings['compression'] not in ['gzip', h5py_filters.h5z.FILTER_DEFLATE]:
                     warnings.warn(str(self.__iosettings['compression']) + " compression may not be available "
                                   "on all installations of HDF5. Use of gzip is recommended to ensure portability of "
-                                  "the generated HDF5 files.")
+                                  "the generated HDF5 files.", stacklevel=3)
 
     @staticmethod
     def filter_available(filter, allow_plugin_filters):
@@ -603,7 +606,7 @@ def filter_available(filter, allow_plugin_filters):
         :param filter: String with the name of the filter, e.g., gzip, szip etc.
                        int with the registered filter ID, e.g. 307
-        :type filter: String, int
+        :type filter: str, int
         :param allow_plugin_filters: bool indicating whether the given filter can be dynamically loaded
         :return: bool indicating whether the given filter is available
         """
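The `stacklevel` additions above change only where these warnings point. A sketch of how the updated compression warning surfaces to a caller, using keyword names taken from the hunks above:

    import warnings
    import numpy as np
    from hdmf.backends.hdf5 import H5DataIO

    # compression=False together with compression_opts triggers the
    # "Compression disabled ..." warning edited above; stacklevel=2 now
    # attributes it to this call site instead of h5_utils.py internals.
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        H5DataIO(data=np.arange(10), compression=False, compression_opts=5)
    print(caught[0].message)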