Merge pull request #22 from jsheunis/docs-edits
Documentation cleaning
LMBooth authored Oct 28, 2023
2 parents ae39891 + 120398a commit fb724b1
Showing 25 changed files with 74 additions and 66 deletions.
1 change: 1 addition & 0 deletions .gitignore
Original file line number Diff line number Diff line change
@@ -1,2 +1,3 @@

pybci/version.py
docs/build
2 changes: 1 addition & 1 deletion .readthedocs.yml
@@ -17,7 +17,7 @@ build:

# Build documentation in the docs/ directory with Sphinx
sphinx:
configuration: docs/conf.py
configuration: docs/source/conf.py

# If using Sphinx, optionally build your docs in additional formats such as PDF
# formats:
@@ -1,17 +1,16 @@
==================
Contributing to PyBCI
==================
=====================

Thank you for your interest in contributing to PyBCI! We value your contribution and aim to make the process of contributing as smooth as possible. Here are the guidelines:

Getting Started
----------------
---------------

- **Communication:** For general questions or discussions, please open an issue on the `GitHub repository <https://github.com/LMBooth/pybci>`_.
- **Code of Conduct:** Please follow the `Code of Conduct <https://github.com/LMBooth/pybci/blob/main/CODE_OF_CONDUCT.md>`_ to maintain a respectful and inclusive environment.

Contribution Process
------------------------
--------------------

1. **Fork the Repository:** Fork the `PyBCI repository <https://github.com/LMBooth/pybci>`_ on GitHub to your own account.
2. **Clone the Forked Repository:** Clone your fork locally on your machine.
@@ -24,7 +23,7 @@ Contribution Process
9. **Submit a Pull Request:** Submit a pull request from your fork to the PyBCI repository.

Development Environment
----------------------------
-----------------------

Ensure that you have installed the necessary dependencies by running:

@@ -33,7 +32,7 @@ Ensure that you have installed the necessary dependencies by running:
pip install -r requirements.txt
Running Tests
--------------------
-------------

To run the tests, execute:

@@ -42,27 +41,27 @@ To run the tests, execute:
pytest
Coding Standards and Conventions
-----------------------------------------
--------------------------------

Please adhere to the coding standards and conventions used throughout the PyBCI project. This includes naming conventions, comment styles, and code organization.

Documentation
--------------------
-------------

We use Sphinx with ReadTheDocs for documentation. Ensure that you update the documentation if you change the API or introduce new features.

Continuous Integration
-------------------------------
----------------------

We use AppVeyor for continuous integration to maintain the stability of the codebase. Ensure that your changes pass the build on AppVeyor before submitting a pull request. The configuration is located in the ``appveyor.yml`` file in the project root.

Licensing
-------------
---------

By contributing to PyBCI, you agree that your contributions will be licensed under the same license as the project, as specified in the LICENSE file.

Acknowledgements
-----------------------
----------------

Contributors will be acknowledged in a dedicated section of the documentation or project README.

@@ -1,4 +1,5 @@
.. _epoch_timing:

Epoch Timing
############

@@ -11,8 +12,9 @@ In relation to training models on set actions for brain computer interfaces, it
Setting the :py:data:`globalEpochSettings` with the :class:`GlobalEpochSettings()` class sets the target window length and overlap for the training time windows. It is desirable to have a single global window length that all epochs are sliced to match, as this gives a uniform array when passing to the classifier. When in testing mode, a continuous rolling window of data is sliced to this size and overlapped based on the windowOverlap; see :ref:`set_custom_epoch_times` for more info.

.. _set_custom_epoch_times:

Setting Custom Epoch Times
------------------------
--------------------------

The figure below illustrates when you may have epochs of differing lengths received on the LSL marker stream. A baseline marker may signify an extended period, in this case 10 seconds, while our motor task is only 1 second long. To account for this, set :py:data:`customEpochSettings` and :py:data:`globalEpochSettings` accordingly:

@@ -39,7 +41,7 @@ Highlighting these epochs on some pseudo EMG data looks like the following:


Overlapping Epoch Windows
------------------------
-------------------------

By setting :py:data:`baselineSettings.splitCheck` to True and :py:data:`gs.windowOverlap` to 0 we can turn one marker into 10 epochs, shown below:

@@ -58,5 +60,5 @@ By setting :py:data:`gs.windowOverlap` to 0.5 we can overlap 1 second epochs by
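The slicing arithmetic described above can be sketched in plain Python (a hedged illustration; this helper is not part of the PyBCI API):

```python
import math

def n_windows(epoch_len_s, window_len_s, window_overlap):
    """Number of windows sliced from one epoch: consecutive windows
    advance by window_len_s * (1 - window_overlap) seconds."""
    step = window_len_s * (1 - window_overlap)
    return 1 + math.floor((epoch_len_s - window_len_s) / step + 1e-9)

# A 10 s baseline epoch cut into 1 s windows:
print(n_windows(10, 1, 0.0))  # 10 epochs with no overlap
print(n_windows(10, 1, 0.5))  # 19 epochs with 50% overlap
```

This matches the figures described in the text: one 10 second baseline marker becomes 10 non-overlapping 1 second epochs, and 19 epochs at 50% overlap.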


Debugging Timing Errors
------------------------
When initialising the :class:`PyBCI()` class set :py:data:`loggingLevel` to “TIMING” to time the feature extraction time for each data inlet as well as classification testing and training times. These are the most computationally intensive tasks and will induce the most lag in the the system. Each printed time must be shorter then :py:data:`globalEpochSettings.windowLength`*(1- :py:data:`globalEpochSettings.windowOverlap`) to minimise delays from input data action to classification output.
-----------------------
When initialising the :class:`PyBCI()` class set :py:data:`loggingLevel` to "TIMING" to time the feature extraction for each data inlet as well as classification testing and training times. These are the most computationally intensive tasks and will induce the most lag in the system. Each printed time must be shorter than :py:data:`globalEpochSettings.windowLength` * (1 - :py:data:`globalEpochSettings.windowOverlap`) to minimise delays from input data action to classification output.
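The timing budget implied by this formula can be made concrete with a small sketch (not part of the PyBCI API):

```python
def processing_budget_s(window_length_s, window_overlap):
    """Seconds available per window before a backlog forms: new windows
    arrive every window_length_s * (1 - window_overlap) seconds, so each
    timed stage must finish within that interval."""
    return window_length_s * (1 - window_overlap)

# e.g. a 1 s window with 50% overlap must be processed within 0.5 s
print(processing_budget_s(1.0, 0.5))  # 0.5
```

Any stage whose printed time exceeds this budget will fall progressively further behind real time.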
File renamed without changes.
@@ -1,20 +1,23 @@
Feature Selection
############
#################

.. _feature-debugging:

Recommended Debugging
--------------------------------
When initialisaing the :class:`PyBCI()` class we can set :py:data:`loggingLevel` to "TIMING" to time our feature extraction time, note a warning will be produced if the feature extraction time is longer then the :py:data:`globalEpochSettings.windowLength`*(1-:py:data:`globalEpochSettings.windowOverlap`), if this is the case a delay will continuously grow as data builds in the queues. To fix this reduce channel count, feature count, feature complexity, or sample rate until the feature extraction time is acceptable, this will help create near-real-time classification.
---------------------
When initialising the :class:`PyBCI()` class we can set :py:data:`loggingLevel` to "TIMING" to time our feature extraction; note a warning will be produced if the feature extraction time is longer than :py:data:`globalEpochSettings.windowLength` * (1 - :py:data:`globalEpochSettings.windowOverlap`), in which case a delay will continuously grow as data builds in the queues. To fix this, reduce the channel count, feature count, feature complexity, or sample rate until the feature extraction time is acceptable; this will help create near-real-time classification.
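A minimal way to check an extractor against that budget yourself, independent of PyBCI's own "TIMING" logging (the toy extractor below is hypothetical):

```python
import time

def within_budget(extract, window, window_length_s, window_overlap):
    """Time one feature-extraction call and compare it against the
    update interval window_length_s * (1 - window_overlap)."""
    start = time.perf_counter()
    extract(window)
    elapsed = time.perf_counter() - start
    return elapsed, elapsed <= window_length_s * (1 - window_overlap)

# Toy extractor: per-channel mean of a [samples, channels] window.
def toy_extract(window):
    return [sum(col) / len(window) for col in zip(*window)]

elapsed, ok = within_budget(toy_extract, [[0.1, 0.2], [0.3, 0.4]], 1.0, 0.5)
print(ok)  # a trivial extractor comfortably fits a 0.5 s budget
```

If this check fails for a realistic window, reduce the channel count, feature complexity, or sample rate as described above.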


.. _generic-extractor:

Generic Time-Series Feature Extractor
--------------------------------
-------------------------------------

The `generic feature extractor class found here <https://github.com/LMBooth/pybci/blob/main/pybci/Utils/FeatureExtractor.py>`_ is the default feature extractor for obtaining generic time-series features for classification; note this is used if nothing is passed to :class:`streamCustomFeatureExtract` for its respective datastream. See :ref:`custom-extractor` and :ref:`raw-extractor` for other feature extraction methods.

The available features can be found below, each toggled with a boolean. The `FeatureSettings class GeneralFeatureChoices <https://github.com/LMBooth/pybci/blob/main/pybci/Configuration/FeatureSettings.py>`_ gives a quick method for selecting the time and/or frequency based feature extraction techniques - useful for reducing stored data and computational complexity.

The features can be selected by setting the respective attributes in the :class:`GeneralFeatureChoices` class to True. When initialising :class:`PyBCI()` we can pass :class:`GeneralFeatureChoices()` to :py:data:`featureChoices` which offers a list of booleans to decide the following features, not all options are set by default to reduce computation time:
The features can be selected by setting the respective attributes in the :class:`GeneralFeatureChoices` class to :py:data:`True`. When initialising :class:`PyBCI()` we can pass :class:`GeneralFeatureChoices()` to :py:data:`featureChoices`, which offers a list of booleans to decide the following features; not all options are enabled by default, to reduce computation time:

.. code-block:: python
@@ -39,8 +42,9 @@ If :class:`psdBand == True` we can also pass custom :class:`freqbands` when init
The `FeatureExtractor.py <https://github.com/LMBooth/pybci/blob/main/pybci/Utils/FeatureExtractor.py>`_ file is part of the pybci project and is used to extract various features from time-series data, such as EEG, EMG, EOG or other data with a consistent sample rate. The type of features to be extracted can be specified during initialisation, and the code supports extracting various types of entropy features, average power within specified frequency bands, root mean square, mean and median of power spectral density (PSD), variance, mean absolute value, waveform length, zero-crossings, and slope sign changes.

.. _custom-extractor:

Passing Custom Feature Extractor classes
--------------------------------
----------------------------------------
Due to the idiosyncratic nature of each LSL data stream and the potential pre-processing/filtering that may be required before data is passed to the machine learning classifier, it can be desirable to have custom feature extraction classes passed to :class:`streamCustomFeatureExtract` when initialising :class:`PyBCI()`.

:class:`streamCustomFeatureExtract` is a dict where the key is a string for the LSL datastream name and the value is the custom created class that will be used for data on that LSL type, example:
@@ -64,16 +68,17 @@ Due to the idiosyncratic nature of each LSL data stream and the potential pre-pr
NOTE: Every custom class for processing features requires the features to be processed in a function with the corresponding arguments as above, namely :class:`def ProcessFeatures(self, epochData, sr, epochNum):`; the epochNum may be handy for distinguishing baseline information and holding that baseline information in the class to use with features from other markers (pupil data: baseline diameter change compared to stimulus, ECG: resting heart rate vs stimulus, heart rate variability, etc.). Look at :ref:`examples` for more inspiration for custom class creation and integration.

:class:`epochData` is a 2D array in the shape of [samps,chs] where chs is the number of channels on the LSL datastream after any are dropped with the variable :class:`streamChsDropDict` and samps is the number of samples captured in the epoch time window depending on the :class:`globalEpochSettings` and :class:`customEpochSettings` - see :ref:`_epoch_timing` for more information on epoch time windows.
:class:`epochData` is a 2D array in the shape of [samps,chs] where chs is the number of channels on the LSL datastream after any are dropped with the variable :class:`streamChsDropDict` and samps is the number of samples captured in the epoch time window depending on the :class:`globalEpochSettings` and :class:`customEpochSettings` - see :ref:`epoch_timing` for more information on epoch time windows.

The above example returns a 1D array of features, but the target model may expect more dimensions. More dimensions may be desirable for some PyTorch and TensorFlow models, but are less applicable for sklearn classifiers; this is specific to the model selected.
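To make the required shape concrete, here is a hedged sketch of a custom extractor matching the :class:`ProcessFeatures` signature described above; the RMS feature choice is purely illustrative, not PyBCI's default behaviour:

```python
import numpy as np

class SimpleRMSExtractor:
    """Toy custom feature extractor: reduces each channel of a
    [samples, channels] epoch to its root mean square."""
    def ProcessFeatures(self, epochData, sr, epochNum):
        # sr and epochNum are unused here but kept to match the
        # expected signature; epochNum could e.g. flag baseline epochs.
        return np.sqrt(np.mean(np.asarray(epochData) ** 2, axis=0))

extractor = SimpleRMSExtractor()
epoch = [[3.0, 1.0], [-3.0, -1.0], [3.0, 1.0], [-3.0, -1.0]]
print(extractor.ProcessFeatures(epoch, sr=250, epochNum=0))  # [3. 1.]
```

Such a class could then be passed per stream, e.g. ``streamCustomFeatureExtract={"MyDataStream": SimpleRMSExtractor()}`` (the stream name here is hypothetical).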

A practical example of custom datastream decoding can be found in the `Pupil Labs example <https://github.com/LMBooth/pybci/tree/main/pybci/Examples/PupilLabsRightLeftEyeClose>`_, where the `bciGazeExample.py <https://github.com/LMBooth/pybci/blob/main/pybci/Examples/PupilLabsRightLeftEyeClose/bciGazeExample.py>`_ file contains a custom class, :class:`PupilGazeDecode()`, a very simple example that takes the mean pupil diameter of the left, right and both eyes as feature data; this is then used to classify whether someone has their right or left eye closed or both eyes open.


.. _raw-extractor:

Raw time-series
----------------
---------------
If the raw time-series data is wanted as the input for the classifier, we can pass a custom class that retains a 2D array of [samples, channels] as the input for our model, example given below:

.. code-block:: python
@@ -1,13 +1,13 @@
Getting Started
################
###############



Python Package Dependencies Version Minimums
=============================================
============================================
PyBCI is tested on Python versions 3.9, 3.10 and 3.11 (`defined via appveyor.yml <https://github.com/LMBooth/pybci/blob/main/appveyor.yml>`__)

The following package versions define the minimum supported by PyBCI, also defined in setup.py:
The following package versions define the minimum supported by PyBCI, also defined in ``setup.py``:

.. code-block:: console
@@ -27,7 +27,7 @@ If you are not using windows then there is a prerequisite stipulated on the `pyl

.. _installation:
Installation
===================
============

For stable releases use: :code:`pip install pybci-package`

@@ -41,7 +41,7 @@ For development versions use: :code:`pip install git+https://github.com/LMBooth/
Optional: Virtual Environment
----------------------------
-----------------------------
Optionally, install and run in a virtual environment:

Windows:
@@ -67,11 +67,11 @@ Linux/MacOS:
.. _simpleimplementation:

Simple Implementation:
===================
Simple Implementation
=====================
PyBCI requires an LSL marker stream for defining when time-series data should be attributed to an action/marker/epoch, and an LSL data stream to create time-series data.

If the user has no available LSL hardware to hand they can set `createPseudoDevice=True` when instantiating the PyBCI object to enable a pseudo LSL data stream to generate time-series data and LSL marker stream for epoching the data. More information on PyBCI's Pseudo Device class can be found here: :ref:`what-pseudo-device`.
If the user has no available LSL hardware to hand they can set ``createPseudoDevice=True`` when instantiating the PyBCI object to enable a pseudo LSL data stream to generate time-series data and LSL marker stream for epoching the data. More information on PyBCI's Pseudo Device class can be found here: :ref:`what-pseudo-device`.

The `example scripts <https://pybci.readthedocs.io/en/latest/BackgroundInformation/Examples.html>`_ illustrate various applied ML libraries (SKLearn, Tensorflow, PyTorch) or provide examples of how to integrate LSL hardware.

@@ -1,28 +1,29 @@
Pseudo Device
############
#############

.. _what-pseudo-device:

What is the Pseudo Device?
=========================================================
==========================
For ease of use, the boolean :py:data:`createPseudoDevice` can be set to True when instantiating :class:`PyBCI()` so the default PseudoDevice is run in another process, enabling examples to be run without the need for LSL enabled hardware.

The PseudoDevice class and PseudoDeviceController can be used when the user has no available LSL marker or data streams, allowing for quick and simple execution of the examples. The Pseudo Device enables testing pipelines without the need of configuring and running LSL enabled hardware.
The :class:`PseudoDevice` class and :py:data:`PseudoDeviceController` can be used when the user has no available LSL marker or data streams, allowing for quick and simple execution of the examples. The Pseudo Device enables testing pipelines without the need to configure and run LSL enabled hardware.

The PseudoDevice class holds marker information and generates signal data based on the given configuration set in :ref:`configuring-pseudo-device`.
The :class:`PseudoDevice` class holds marker information and generates signal data based on the given configuration set in :ref:`configuring-pseudo-device`.

The PseudoDeviceController can have the string "process" or "thread" set to decide whether the pseudo device should be a multiprocessed or threaded operation respectively, by default it is set to "process", then passes all the same configuration arguments to PseudoDevice.
The :py:data:`PseudoDeviceController` can be given the string "process" or "thread" to decide whether the pseudo device should run as a multiprocessed or threaded operation respectively; by default it is set to "process". It passes all the same configuration arguments on to :class:`PseudoDevice`.

Any generic LSL viewer can be used to view the generated data; `example viewers can be found at this link <https://labstreaminglayer.readthedocs.io/info/viewers.html>`_.

.. _configuring-pseudo-device:

Configuring the Pseudo Device
=========================================================
=============================

By default the PseudoDevice has 4 markers, "baseline", "Marker1", "Marker2", "Marker3" and "Marker4", each with peak frequencies of 3, 8, 10 and 12 Hz respectively.
By default the :class:`PseudoDevice` has a "baseline" marker and 4 markers, "Marker1", "Marker2", "Marker3" and "Marker4", with peak frequencies of 3, 8, 10 and 12 Hz respectively.
Each signal is modified for 1 second after the marker has occurred, and the markers are spaced 5 seconds apart.

Upon creating PyBCI object a dict of the following kwargs can be passed to dictate the behaviour of the pseudo device:
Upon creating the PyBCI object, a dict of the following ``kwargs`` can be passed to dictate the behaviour of the pseudo device:

.. code-block::