All notable changes to this project will be documented in this file.
- Add `channelcmb` parameter to `connectivityanalysis` frontend to allow computing connectivity measures for a subset of the channels only, #563
- Add `channelcmb` parameter to `connectivityanalysis` frontend. This allows users to compute connectivity measures only between a subset of the channels instead of all channels, which can reduce the required computational cost dramatically. See #565 for details.
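A minimal sketch of how the new keyword might be used, assuming an existing `AnalogData` object `adata` with channels named "ch1", "ch2" and "ch3"; the pair-of-lists format of `channelcmb` shown here is an assumption, so check the `connectivityanalysis` docstring for the exact format:

```python
import syncopy as spy

# `adata`: an existing syncopy.AnalogData object (assumed)
# coherence restricted to combinations of "ch1" with "ch2"/"ch3" only,
# instead of the full all-to-all channel matrix
# (the pair-of-lists format of `channelcmb` is an assumption)
coh = spy.connectivityanalysis(adata, method="coh",
                               channelcmb=[["ch1"], ["ch2", "ch3"]])
```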
- Improvements to serializable dict: better error messages, add _serialize_value helper to try to turn some non-serializable data structures into compatible, serializable ones. #569
- Security fix: increase the minimum required tornado version to 6.3.3 to address a security issue in tornado.
- Add spike plotting, implements #434
- Conversion to and from MNE's `RawArray` and `EpochsArray`
- Add export to NWB format for `AnalogData`, `TimeLockData`, and `SpikeData`. #508
- Add support for reading NWB files containing SpikeData, related to #508
- Add support for concatenating Syncopy data objects (e.g., to add channels) with new `spy.concat` (PR 522)
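A hedged usage sketch, assuming two `AnalogData` objects `data_a` and `data_b` with identical trial layout and samplerate but different channels; the `dim` keyword name is an assumption, see the `spy.concat` docstring:

```python
import syncopy as spy

# `data_a`, `data_b`: existing, channel-disjoint syncopy.AnalogData objects (assumed)
# stack their channels into one combined object (keyword name `dim` is assumed)
combined = spy.concat(data_a, data_b, dim="channel")
print(combined.channel)
```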
- When reading/importing NWB files, support trials directly in addition to epochs (which were previously interpreted as trials); epochs are still supported.
- Fix SpikeData string rep, #511
- corrected slepian/dpss default taper settings, #559
- created dedicated `syncopy.synthdata` module
- FIR filters work around NaNs in the input via slower direct convolutions
- `red_noise` simulation as a 1/f surrogate
- synthetic data routines use generators instead of lists
- `.time` property returns either an iterable or a single time array when indexed
- substantial performance gains for selections with/from many (>1000) trials
- catch and delete virtual datasets from storage directory
- channel assignment for ContinuousData
- frontend `redefinetrial` to cut trials/move time axes
- add PPC connectivity measure
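A small sketch of the intended use of `redefinetrial`, with the keyword name borrowed from FieldTrip's `ft_redefinetrial` and therefore to be treated as an assumption:

```python
import syncopy as spy

# `adata`: an existing syncopy.AnalogData object with trials spanning -1 s to 2 s (assumed)
# keep only the 0-1 s window of every trial (`toilim` is an assumed keyword, check the docstring)
cut = spy.redefinetrial(adata, toilim=[0.0, 1.0])
```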
- add Jackknifing for coherence and Granger analysis
- add logging functionality and respective developer documentation, #208
- add waveform extra dataset to DiscreteData to store raw data, #238
- create syncopy data objects from Python generators (yeah!)
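For illustration, a minimal sketch of feeding a generator to a data-class constructor; the samples-by-channels trial shape assumed here follows `AnalogData`'s default `dimord` and should be checked against the documentation:

```python
import numpy as np
import syncopy as spy

def trial_generator(n_trials=10, n_samples=1000, n_channels=8):
    # yield one synthetic trial at a time instead of building a full list
    for _ in range(n_trials):
        yield np.random.randn(n_samples, n_channels)

# the constructor consumes the generator trial by trial
# (samples x channels shape convention is an assumption)
adata = spy.AnalogData(data=trial_generator(), samplerate=1000)
```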
- concatenation of syncopy data objects along trials
- spectral power for `mtmfft` now independent of padding as originally intended
- support unequal trial sizes for `load_ft_raw`
- major performance improvements for DiscreteData, #403, #418, #424
- fix bug #394 'Copying a spy.StructDict returns a dict'.
- serializable `.cfg`, #392
- single trial cross-corr bug #446
- fix bug #457, Syncopy does not warn about temp storage dir size exceeding reporting threshold at startup
- time dependent coherence analysis
- basic statistics (`spy.mean`, `spy.std`, `spy.var` and `spy.median`) for Syncopy data objects
- `spy.timelockanalysis` and new `TimeLockData` data type
- PSTH method for SpikeData - `spy.spike_psth`
- Welch's method for `spy.freqanalysis`
- inter trial coherence measure `spy.itc`
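A short sketch of the statistics helpers listed above, assuming an existing Syncopy object `spec` and assuming the averaging dimension is selected via a `dim` keyword (consult the docstrings for the exact name):

```python
import syncopy as spy

# `spec`: an existing Syncopy data object, e.g. SpectralData (assumed)
avg = spy.mean(spec, dim="trials")   # trial average (`dim` keyword assumed)
std = spy.std(spec, dim="trials")    # trial-wise standard deviation
```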
- support for performing connectivity analysis from SpectralData (#364).
- additional .info entries for Granger analysis, indicating details about the computation.
- additional .info entries for FOOOF results, e.g. Gaussian fit parameters.
- selectdata now has 'frequency' and 'latency' parameters instead of toi/toilim and foi/foilim
- a 'latency' selection will always either return a timelocked data selection or an error
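A hedged example of the new selection keywords, assuming an existing `SpectralData` object `spec` with a time axis and assuming both keywords accept `[start, stop]` intervals:

```python
import syncopy as spy

# `spec`: an existing syncopy.SpectralData object with a time axis (assumed)
# keep the 8-12 Hz band within the 0-0.5 s window of every trial
sub = spy.selectdata(spec, frequency=[8, 12], latency=[0.0, 0.5])
```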
- maximal brute force regularization parameter for Granger increased to 1e-1
- improved memory footprint of trial averaging (#380)
- bug #365, plotting supports custom dimords now
- support Python >=3.8
- Added down- and resampling algorithms for the new meta-function `resampledata`
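A minimal sketch, assuming an `AnalogData` object `adata` sampled at 1000 Hz and assuming the target rate is passed via a FieldTrip-style `resamplefs` keyword:

```python
import syncopy as spy

# `adata`: an existing syncopy.AnalogData object at 1000 Hz (assumed)
# downsample to 250 Hz (`resamplefs` keyword is an assumption)
down = spy.resampledata(adata, resamplefs=250)
```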
- Added FOOOF method as a post-processing option for the `freqanalysis` method `mtmfft`.
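A sketch of how the FOOOF post-processing might be requested, assuming it is selected via the `output` keyword of `freqanalysis`; variants such as 'fooof_aperiodic' or 'fooof_peaks' may also be available, see the documentation:

```python
import syncopy as spy

# `adata`: an existing syncopy.AnalogData object (assumed)
# mtmfft power spectrum post-processed with FOOOF
# (`output="fooof"` and the variants named above are assumptions)
fooof_spec = spy.freqanalysis(adata, method="mtmfft", output="fooof",
                              foilim=[1, 100], keeptrials=False)
```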
- Added `load_tdt` to import data from the TDT system, thanks to @kajal5888
- Added `.info` attribute for all data classes to store auxiliary meta information
- Added `zscore` normalization to `preprocessing`
- new global `spy.copy()` function which copies entire Syncopy objects on disk
- the `out.cfg` attached to an analysis result now allows to replay all analysis methods
- `connectivityanalysis` now has FT compliant output support for the coherence
- `spy.cleanup` now has exposed `interactive` parameter
- removed keyword `deep` from `copy()`, all our copies are in fact deep
- demeaning after tapering for granger analysis
- detrending is now possible without filtering in `preprocessing`
- `out.cfg` global side-effects (sorry again @kajal5888)
- `CrossSpectralData` plotting
- mixing of explicit keywords and `cfg` to control analysis
- fixed error on initializing SpikeData with empty ndarray (#257)
Bugfixes and feature additions for `EventData` objects.
- Added support for flexible columns in `EventData` (thanks to @KatharineShapcott)
- Include a specific example of how to create an "all-to-all" `trialdefinition` array by invoking `definetrial` without arguments in the function's docstring.
- Modified versioning scheme: use a date-based scheme instead of increasing version numbers
- Aligned padding API to FieldTrip in both `freqanalysis` and `connectivityanalysis`: use `pad` instead of `pad_to_length` with three supported modes ('maxperlen', float, 'nextpow2').
- Removed support for calling `freqanalysis` with a `toi` array as well as an input dataset that has an active in-place time-selection attached
- Improved legibility of `spy.__version__` for non-release installations
- Correctly process equidistant `toi` arrays with large spacing in `freqanalysis`
- Corrected `trialtime` for `DiscreteData` objects (thanks to @KatharineShapcott)
Feature update and bugfixes.
- Added preprocessing functionality
- Added experimental loading functionality for NWB 2.0 files
- Added experimental loading functionality for Matlab mat files
- Added support for "scalar" selections, i.e., things like
selectdata(trials=0)
ordata.selectdata(channels='mychannel')
- Added command line argument "--full" for more granular testing: the new default for running the testing pipeline is to execute a trimmed-down testing suite that does not probe all possible input permutations but focuses on the core functionality without sacrificing coverage.
- New meta-function parameter `taper_opt` to control arbitrary taper (e.g. kaiser) parameters
- Renamed `_selection` class property to `selection`
- Reworked plotting framework and made it matplotlib 3.5 compatible
- The output of `show` is now automatically squeezed (i.e., singleton dimensions are removed from the returned array).
- Enhanced online documentation, now also covering connectivity analysis
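To illustrate the squeezed `show` output described above, a small sketch assuming an existing `AnalogData` object `adata` with a channel named "ch1"; the selection keywords mirror `selectdata` and are assumptions here:

```python
import numpy as np
import syncopy as spy

# `adata`: an existing syncopy.AnalogData object (assumed)
# selecting a single trial and channel now yields a squeezed NumPy array
arr = adata.show(trials=0, channel="ch1")
print(np.asarray(arr).shape)  # singleton trial/channel axes are removed
```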
- Multi-tapering (`freqanalysis`, `connectivityanalysis`) is now switched on by only specifying the `tapsmofrq` parameter, removing the need for the additional and redundant setting of `taper='dpss'`
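For example, assuming an existing `AnalogData` object `adata`:

```python
import syncopy as spy

# `adata`: an existing syncopy.AnalogData object (assumed)
# multi-tapering is triggered by the smoothing bandwidth alone:
# +/- 2 Hz spectral smoothing, no explicit taper='dpss' required
spec = spy.freqanalysis(adata, method="mtmfft", tapsmofrq=2, keeptrials=False)
```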
- Granger-Geweke algorithm now matches the reference implementation (Dhamala 2008) with machine precision
- Do not parse scalars using `numbers.Number`, use `numpy.number` instead to catch Boolean values
- Do not raise a `SPYTypeError` if an arithmetic operation is performed using objects of different numerical types (real/complex; closes #199)
- Removed loading code for ESI binary format that is no longer supported
- Repaired top-level imports: renamed `connectivity` to `connectivityanalysis` and the "connectivity" module is now called "nwanalysis"
- Included `conda clean` in CD pipeline to avoid disk fillup by unused conda packages/cache
- Inverted `selectdata` messaging policy: only actual on-disk copy operations trigger a `SPYInfo` message (closes #197)
- Matched selector keywords and class attribute names, i.e., selecting channels is now done by using a `select` dictionary with key `'channel'` (not `'channels'` as before). See the documentation of `selectdata` for details.
- Retired Travis CI tests since free test runs are exhausted. Migrated to GitHub actions (and re-included codecov)
- The `trialdefinition` arrays constructed by the `Selector` class were incorrect for `SpectralData` objects without time-axis, resulting in "empty" trials. This has been fixed (closes #207)
- Repaired `array_parser` to adequately complain about mixed-type arrays (closes #211)
- The `show` routine now consistently returns a list of trials if and only if multiple trials are selected
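Relating to the selector-keyword change above, a hedged sketch of an in-place selection passed to a meta-function, assuming an existing `AnalogData` object `adata` with channels "ch1" and "ch2":

```python
import syncopy as spy

# `adata`: an existing syncopy.AnalogData object (assumed)
# note the singular key 'channel' (matching the class attribute), not 'channels'
spec = spy.freqanalysis(adata, method="mtmfft", tapsmofrq=2,
                        select={"channel": ["ch1", "ch2"]})
```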
Major Release
- Added Connectivity submodule with `csd`, `granger` and `coh` measures
- Added new `CrossSpectralData` class for connectivity data
- Added Superlet spectral estimation method to `freqanalysis`
- Added arithmetic operator overloading for SyNCoPy objects: it is now possible to perform simple arithmetic operations directly, e.g., `data1 + data2`.
- Added equality operator for SyNCoPy objects: two objects can be parsed for identical contents using the "==" operator
- Added full object padding functionality
- Added support for user-controlled in-place selections
- Added `show` class method for easy data access in all SyNCoPy objects
- Added de-trending support in `freqanalysis` via the `polyremoval` keyword
- New interface for synthetic data generation: using a list of NumPy arrays for instantiation interprets each array as `nChannels` x `nSamples` trial data which are combined to generate an `AnalogData` object
- Made SyNCoPy PEP 517 compliant: added pyproject.toml and modified setup.py accordingly
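To illustrate the list-based instantiation described above, a minimal sketch that follows the per-trial shape convention stated in that entry (treat the shape interpretation as an assumption and check the current documentation):

```python
import numpy as np
import syncopy as spy

# ten synthetic trials, each passed as a plain NumPy array; per the entry above,
# each array is interpreted as nChannels x nSamples (assumed convention)
trials = [np.random.randn(16, 1000) for _ in range(10)]
adata = spy.AnalogData(data=trials, samplerate=1000)
```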
- Added IBM POWER testing pipeline (via dedicated GitLab Runner)
- Multi-tapering now works with smoothing frequencies in Hz
- Streamlined padding interface
- Retired tox in `slurmtest` CI pipeline in favor of a "simple" pytest testing session due to file-locking problems of tox environments on NFS mounts
- Removed ACME from source repository: the submodule setup proved to be too unreliable and hard to maintain. ACME is now an optional (but recommended) dependency of SyNCoPy
- Non-standard `dimord` objects are now parsed and processed by `ComputationalRoutine`
- Impromptu padding performed by `freqanalysis` is done in a more robust way
- Stream-lined GitLab Runner setup: use cluster-wide conda instead of local installations (that differ slightly across runners) and leverage `tox-conda` to fetch pre-built dependencies
Housekeeping and maintenance release
- Included ACME as SyNCoPy submodule: all ESI-HPC cluster specific code has been migrated to the new ACME package, see https://github.com/esi-neuroscience/acme
- Better late than never: added this CHANGELOG file
- Modified GitLab CI Pipeline Setup + version handling: use `setuptools_scm` to populate `spy.__version__` instead of hard-coding a version string in the package `__init__.py`; this makes test-uploads to PyPI-Test infinitely easier since `setuptools_scm` takes care of generating non-conflicting package versions.
- Modified packaging setup and adapted modular layout to account for new submodule ACME
- Deleted ESI-specific `dask_helpers.py` module (migrated to ACME)
- Cleaned up dependencies: removed all `jupyter` packages from the dependency list to not cause (unnecessary) conflicts in existing Python environments
- Repaired CI pipelines
- Repaired h5py version mismatch: pin SyNCoPy to `h5py` versions greater than 2.9 but less than 3.x
- Pin SyNCoPy to Python 3.8.x (Python 3.9 currently triggers too many dependency conflicts)
First public pre-release of SyNCoPy on PyPI and GitHub.
- Included `selectdata` as a `computeFunction` that uses the parallelization framework in `ComputationalRoutine` to perform arbitrary data-selection tasks (including but not limited to unordered lists, repetitions and removals).
- Included time-frequency analysis routines `mtmconvol` and `wavelet`
- Added plotting functionality: functions `singlepanelplot` and `multipanelplot` allow quick visual inspection of `AnalogData` and `SpectralData` objects
- Added support to process multiple SyNCoPy objects in a single meta-function call (all decorators have been modified accordingly)
- Introduced standardized warning messages via new class `SPYWarning`
- Included (more or less) extensive developer docs
- Added Travis CI and included badges on GitHub landing page
- New convenience scripts to ease developing/testing
- New conda.yml file + script for consolidating conda/pip requirements: all of SyNCoPy's dependencies are now collected in `syncopy.yml`, the respective pip-specific requirements.txt and requirements-test.txt files are generated on the fly by a new function `conda2pip` that relies on ruamel.yaml (new required dependency for building SyNCoPy)
- New GitLab CI directive for uploading SyNCoPy to PyPI
- Included GitHub templates for new issues/pull requests
- SyNCoPy docu is now hosted on readthedocs (re-directed from syncopy.org)
- New logo + icon
- Made `cluster_cleanup` more robust (works with `LocalCluster` objects now)
- Made data-parser more feature-rich: check for emptiness, parse non-data datasets etc.
- Made `generate_artificial_data` more robust: change usage of random number seed to allow persistent comparisons across testing runs
- Updated CI dependencies (SyNCoPy now requires NumPy 1.18 and Matplotlib 3.3.x)
- All *.py-file headers have been removed
- Removed examples sub-module from main package (examples will be part of a separate repo)
- Wiped all hand-crafted array-matching routines; use `best_match` instead
- Do not use `pbr` in the build system any more; rely instead on up-to-date setuptools functionality
- Retired memory map support and raw binary data reading routines
- Improved temporary storage handling so that dask workers that import the package do not repeat all temp-dir I/O tests (and potentially run into dead-locks or race conditions)
Preview alpha release of SyNCoPy for first ESI-internal tryout workshop.
- Added routines `esi_cluster_setup` and `cluster_cleanup` to facilitate using SLURM from within SyNCoPy
- Included new `FauxTrial` class and `_preview_trial` class methods to permit quick and performant compute dry-runs
- Included a `select` keyword to allow for in-place selections that are applied on the fly in any meta-function via a new decorator. The heavy lifting is performed by a new `Selector` class
- Re-worked the `specest` package: `mtmfft` is now fully functional
- Overhauled HTML documentation
- New layout of SyNCoPy objects on disk: introduction of Spy-containers supporting multiple datasets/objects within the same folder
- First working implementation of `spy.load` and `spy.save`
- Use dask bags instead of arrays in parallelization engine to permit more flexible distribution of data across workers
- Re-worked `trialdefinition` mechanics: attach the full `trialdefinition` array to objects and fetch relevant information on the fly: `BaseData._trialdefinition` unifies `sampleinfo`, `t0` and `trialinfo` and calls `definetrial`
- Removed `dimlabels` property
- Retired Dask arrays in `ComputationalRoutine`; use dask bags instead
- Flipped sign of offsets in `trialdefinition` to be compatible w/FieldTrip
- Enforced PEP8 compliance
- Cleaned up constructor of `BaseData` to prohibit accessing uninitialized attributes
Internal pre-alpha release of SyNCoPy. Prototypes of data format, user-interface and parallelization framework are in place.
- Class structure is laid out, meta-functions are present but mostly place-holders
- Support FieldTrip-style calling syntax via `cfg` "structures" (the keys of which are "unwrapped" by a corresponding decorator)
- Preliminary I/O capabilities implemented, objects can be written/read to/from HDF5
- First prototype of parallelization framework based on Dask
- Custom traceback that is enabled as soon as SyNCoPy is imported: do not spill hundreds of lines to STDOUT, instead highlight the most probable cause of error and explain how to get to the full traceback if wanted
- Basic session management to ensure concurrent SyNCoPy sessions only access their own data