diff --git a/.gitignore b/.gitignore
index b0ce9b4..40e1ae6 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,2 +1,3 @@
 pybci/version.py
+docs/build
\ No newline at end of file
diff --git a/.readthedocs.yml b/.readthedocs.yml
index a83fae5..40b0875 100644
--- a/.readthedocs.yml
+++ b/.readthedocs.yml
@@ -17,7 +17,7 @@ build:
 # Build documentation in the docs/ directory with Sphinx
 sphinx:
-  configuration: docs/conf.py
+  configuration: docs/source/conf.py
 # If using Sphinx, optionally build your docs in additional formats such as PDF
 # formats:
diff --git a/docs/BackgroundInformation/Contributing.rst b/docs/source/BackgroundInformation/Contributing.rst
similarity index 93%
rename from docs/BackgroundInformation/Contributing.rst
rename to docs/source/BackgroundInformation/Contributing.rst
index cf8fc4a..22258cd 100644
--- a/docs/BackgroundInformation/Contributing.rst
+++ b/docs/source/BackgroundInformation/Contributing.rst
@@ -1,17 +1,16 @@
-==================
 Contributing to PyBCI
-==================
+=====================

 Thank you for your interest in contributing to PyBCI! We value your contribution and aim to make the process of contributing as smooth as possible. Here are the guidelines:

 Getting Started
-----------------
+---------------

 - **Communication:** For general questions or discussions, please open an issue on the `GitHub repository `_.
 - **Code of Conduct:** Please follow the `Code of Conduct `_ to maintain a respectful and inclusive environment.

 Contribution Process
-------------------------
+--------------------

 1. **Fork the Repository:** Fork the `PyBCI repository `_ on GitHub to your own account.
 2. **Clone the Forked Repository:** Clone your fork locally on your machine.
@@ -24,7 +23,7 @@ Contribution Process
 9. **Submit a Pull Request:** Submit a pull request from your fork to the PyBCI repository.

 Development Environment
-----------------------------
+-----------------------

 Ensure that you have installed the necessary dependencies by running:

 .. code-block:: console

    pip install -r requirements.txt

 Running Tests
--------------------
+-------------

 To run the tests, execute:

 .. code-block:: console

    pytest

 Coding Standards and Conventions
------------------------------------------
+--------------------------------

 Please adhere to the coding standards and conventions used throughout the PyBCI project. This includes naming conventions, comment styles, and code organization.

 Documentation
---------------------
+-------------

 We use Sphinx with ReadTheDocs for documentation. Ensure that you update the documentation if you change the API or introduce new features.

 Continuous Integration
--------------------------------
+----------------------

 We use AppVeyor for continuous integration to maintain the stability of the codebase. Ensure that your changes pass the build on AppVeyor before submitting a pull request. The configuration is located in the ``appveyor.yml`` file in the project root.

 Licensing
--------------
+---------

 By contributing to PyBCI, you agree that your contributions will be licensed under the same license as the project, as specified in the LICENSE file.

 Acknowledgements
------------------------
+----------------

 Contributors will be acknowledged in a dedicated section of the documentation or project README.
diff --git a/docs/BackgroundInformation/Epoch_Timing.rst b/docs/source/BackgroundInformation/Epoch_Timing.rst
similarity index 93%
rename from docs/BackgroundInformation/Epoch_Timing.rst
rename to docs/source/BackgroundInformation/Epoch_Timing.rst
index e91c59c..f3b543c 100644
--- a/docs/BackgroundInformation/Epoch_Timing.rst
+++ b/docs/source/BackgroundInformation/Epoch_Timing.rst
@@ -1,4 +1,5 @@
 .. _epoch_timing:
+
 Epoch Timing
 ############
@@ -11,8 +12,9 @@ In relation to training models on set actions for brain computer interfaces, it
 Setting the :py:data:`globalEpochSettings` with the :class:`GlobalEpochSettings()` class sets the target window length and overlap for the training time windows. It is desirable to have a single global window length that all epochs are sliced to match; this gives a uniform array when passing to the classifier. When in testing mode a continuous rolling window of data is sliced to this size and overlapped based on the windowOverlap, see :ref:`set_custom_epoch_times` for more info.

 .. _set_custom_epoch_times:
+
 Setting Custom Epoch Times
-------------------------
+--------------------------

 The figure below illustrates when you may have epochs of differing lengths received on the LSL marker stream. A baseline marker may signify an extended period, in this case 10 seconds, whereas our motor task is only 1 second long. To account for this, set :py:data:`customEpochSettings` and :py:data:`globalEpochSettings` accordingly:
@@ -39,7 +41,7 @@ Highlighting these epochs on some psuedo emg data looks like the following:

 Overlapping Epoch Windows
-------------------------
+-------------------------

 By setting splitCheck to True for :py:data:`baselineSettings.splitCheck` and :py:data:`gs.windowOverlap` to 0 we can turn one marker into 10 epochs, shown below:
@@ -58,5 +60,5 @@ By setting :py:data:`gs.windowOverlap` to 0.5 we can overlap 1 second epochs by

 Debugging Timing Errors
-------------------------
-When initialising the :class:`PyBCI()` class set :py:data:`loggingLevel` to “TIMING” to time the feature extraction time for each data inlet as well as classification testing and training times. These are the most computationally intensive tasks and will induce the most lag in the the system. Each printed time must be shorter then :py:data:`globalEpochSettings.windowLength`*(1- :py:data:`globalEpochSettings.windowOverlap`) to minimise delays from input data action to classification output.
+-----------------------
+When initialising the :class:`PyBCI()` class, set :py:data:`loggingLevel` to "TIMING" to time the feature extraction for each data inlet as well as the classification testing and training times. These are the most computationally intensive tasks and will induce the most lag in the system. Each printed time must be shorter than :py:data:`globalEpochSettings.windowLength` * (1 - :py:data:`globalEpochSettings.windowOverlap`) to minimise delays from input data action to classification output.
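+
+As a quick numeric check of that timing budget (the values here are purely illustrative; the formula is the one stated above):
+
+.. code-block:: python
+
+    # Illustrative timing-budget check for near-real-time classification.
+    windowLength = 1.0   # seconds, as in globalEpochSettings.windowLength
+    windowOverlap = 0.5  # fraction, as in globalEpochSettings.windowOverlap
+
+    # Feature extraction and classification must finish within this budget:
+    budget = windowLength * (1 - windowOverlap)
+    print(f"Per-window processing budget: {budget:.2f} s")  # 0.50 s here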
diff --git a/docs/BackgroundInformation/Examples.rst b/docs/source/BackgroundInformation/Examples.rst
similarity index 100%
rename from docs/BackgroundInformation/Examples.rst
rename to docs/source/BackgroundInformation/Examples.rst
diff --git a/docs/BackgroundInformation/Feature_Selection.rst b/docs/source/BackgroundInformation/Feature_Selection.rst
similarity index 88%
rename from docs/BackgroundInformation/Feature_Selection.rst
rename to docs/source/BackgroundInformation/Feature_Selection.rst
index 8a54bd8..eed38ba 100644
--- a/docs/BackgroundInformation/Feature_Selection.rst
+++ b/docs/source/BackgroundInformation/Feature_Selection.rst
@@ -1,20 +1,23 @@
 Feature Selection
-############
+#################
+
 .. _feature-debugging:
+
 Recommended Debugging
---------------------------------
-When initialisaing the :class:`PyBCI()` class we can set :py:data:`loggingLevel` to "TIMING" to time our feature extraction time, note a warning will be produced if the feature extraction time is longer then the :py:data:`globalEpochSettings.windowLength`*(1-:py:data:`globalEpochSettings.windowOverlap`), if this is the case a delay will continuously grow as data builds in the queues. To fix this reduce channel count, feature count, feature complexity, or sample rate until the feature extraction time is acceptable, this will help create near-real-time classification.
+---------------------
+When initialising the :class:`PyBCI()` class we can set :py:data:`loggingLevel` to "TIMING" to time our feature extraction. Note a warning will be produced if the feature extraction time is longer than :py:data:`globalEpochSettings.windowLength` * (1 - :py:data:`globalEpochSettings.windowOverlap`); if this is the case, a delay will continuously grow as data builds in the queues. To fix this, reduce the channel count, feature count, feature complexity, or sample rate until the feature extraction time is acceptable; this will help create near-real-time classification.

 .. _generic-extractor:
+
 Generic Time-Series Feature Extractor
---------------------------------
+-------------------------------------
 The `generic feature extractor class found here `_ is the default feature extractor for obtaining generic time-series features for classification; note this is used if nothing is passed to :class:`streamCustomFeatureExtract` for its respective datastream. See :ref:`custom-extractor` and :ref:`raw-extractor` for other feature extraction methods.

 The available features can be found below, each toggled with a boolean. The `FeatureSettings class GeneralFeatureChoices `_ gives a quick method for selecting the time and/or frequency based feature extraction techniques - useful for reducing stored data and computational complexity.

-The features can be selected by setting the respective attributes in the :class:`GeneralFeatureChoices` class to True. When initialising :class:`PyBCI()` we can pass :class:`GeneralFeatureChoices()` to :py:data:`featureChoices` which offers a list of booleans to decide the following features, not all options are set by default to reduce computation time:
+The features can be selected by setting the respective attributes in the :class:`GeneralFeatureChoices` class to :py:data:`True`. When initialising :class:`PyBCI()` we can pass :class:`GeneralFeatureChoices()` to :py:data:`featureChoices`, which offers a list of booleans to decide the following features; not all options are set by default, to reduce computation time:
 .. code-block:: python
@@ -39,8 +42,9 @@ If :class:`psdBand == True` we can also pass custom :class:`freqbands` when init
 The `FeatureExtractor.py `_ file is part of the pybci project and is used to extract various features from time-series data, such as EEG, EMG, EOG or other data with a consistent sample rate. The type of features to be extracted can be specified during initialisation, and the code supports extracting various types of entropy features, average power within specified frequency bands, root mean square, mean and median of power spectral density (PSD), variance, mean absolute value, waveform length, zero-crossings, and slope sign changes.

 .. _custom-extractor:
+
 Passing Custom Feature Extractor classes
---------------------------------
+----------------------------------------
 Due to the idiosyncratic nature of each LSL data stream and the potential pre-processing/filtering that may be required before data is passed to the machine learning classifier, it can be desirable to have custom feature extraction classes passed to :class:`streamCustomFeatureExtract` when initialising :class:`PyBCI()`. :class:`streamCustomFeatureExtract` is a dict where the key is a string for the LSL datastream name and the value is the custom created class that will be used for data on that LSL type, example:
@@ -64,7 +68,7 @@
 NOTE: Every custom class for processing features requires the features to be processed in a function labelled with corresponding arguments as above, namely :class:`def ProcessFeatures(self, epochData, sr, epochNum):`; the epochNum may be handy for distinguishing baseline information and holding that baseline information in the class to use with features from other markers (pupil data: baseline diameter change compared to stimulus, ECG: resting heart rate vs stimulus, heart rate variability, etc.). Look at :ref:`examples` for more inspiration for custom class creation and integration.

-:class:`epochData` is a 2D array in the shape of [samps,chs] where chs is the number of channels on the LSL datastream after any are dropped with the variable :class:`streamChsDropDict` and samps is the number of samples captured in the epoch time window depending on the :class:`globalEpochSettings` and :class:`customEpochSettings` - see :ref:`_epoch_timing` for more information on epoch time windows.
+:class:`epochData` is a 2D array in the shape of [samps,chs] where chs is the number of channels on the LSL datastream after any are dropped with the variable :class:`streamChsDropDict` and samps is the number of samples captured in the epoch time window depending on the :class:`globalEpochSettings` and :class:`customEpochSettings` - see :ref:`epoch_timing` for more information on epoch time windows.

 The above example returns a 1D array of features, but the target model may specify greater dimensions. More dimensions may be desirable for some pytorch and tensorflow models, but less applicable for sklearn classifiers; this is specific to the model selected.
@@ -72,8 +76,9 @@ A practical example of custom datastream decoding can be found in the `Pupil Lab

 .. _raw-extractor:
+
 Raw time-series
-----------------
+---------------
 If the raw time-series data is wanted as the input for the classifier, we can pass a custom class which allows us to retain a 2D array of [samples, channels] as the input for our model, example given below:
 .. code-block:: python
diff --git a/docs/BackgroundInformation/Getting_Started.rst b/docs/source/BackgroundInformation/Getting_Started.rst
similarity index 90%
rename from docs/BackgroundInformation/Getting_Started.rst
rename to docs/source/BackgroundInformation/Getting_Started.rst
index b16acc6..2f13da7 100644
--- a/docs/BackgroundInformation/Getting_Started.rst
+++ b/docs/source/BackgroundInformation/Getting_Started.rst
@@ -1,13 +1,13 @@
 Getting Started
-################
+###############

 Python Package Dependencies Version Minimums
-=============================================
+============================================

 PyBCI is tested on Python versions 3.9, 3.10 and 3.11 (`defined via appveyor.yml `__).

-The following package versions define the minimum supported by PyBCI, also defined in setup.py:
+The following package versions define the minimum supported by PyBCI, also defined in ``setup.py``:

 .. code-block:: console
@@ -27,7 +27,7 @@ If you are not using windows then there is a prerequisite stipulated on the `pyl
 .. _installation:

 Installation
-===================
+============

 For stable releases use: :code:`pip install pybci-package`
@@ -41,7 +41,7 @@ For development versions use: :code:`pip install git+https://github.com/LMBooth/

 Optional: Virtual Environment
-----------------------------
+-----------------------------
 Or optionally, install and run in a virtual environment:

 Windows:
@@ -67,11 +67,11 @@ Linux/MaxOS:

 .. _simpleimplementation:

-Simple Implementation:
-===================
+Simple Implementation
+=====================

 PyBCI requires an LSL marker stream for defining when time series data should be attributed to an action/marker/epoch and an LSL data stream to create time-series data.

-If the user has no available LSL hardware to hand they can set `createPseudoDevice=True` when instantiating the PyBCI object to enable a pseudo LSL data stream to generate time-series data and LSL marker stream for epoching the data. More information on PyBCI's Pseudo Device class can be found here: :ref:`what-pseudo-device`.
+If the user has no available LSL hardware to hand, they can set ``createPseudoDevice=True`` when instantiating the PyBCI object to enable a pseudo LSL data stream to generate time-series data and an LSL marker stream for epoching the data. More information on PyBCI's Pseudo Device class can be found here: :ref:`what-pseudo-device`.

 The `example scripts `_ illustrate various applied ML libraries (SKLearn, Tensorflow, PyTorch) or provide examples of how to integrate LSL hardware.
diff --git a/docs/BackgroundInformation/Pseudo_Device.rst b/docs/source/BackgroundInformation/Pseudo_Device.rst
similarity index 70%
rename from docs/BackgroundInformation/Pseudo_Device.rst
rename to docs/source/BackgroundInformation/Pseudo_Device.rst
index 8eb503f..16ee1c2 100644
--- a/docs/BackgroundInformation/Pseudo_Device.rst
+++ b/docs/source/BackgroundInformation/Pseudo_Device.rst
@@ -1,28 +1,29 @@
 Pseudo Device
-############
+#############

 .. _what-pseudo-device:

 What is the Pseudo Device?
-=========================================================
+==========================
 For ease of use, the boolean :py:data:`createPseudoDevice` can be set to True when instantiating :class:`PyBCI()` so the default PseudoDevice is run in another process, enabling examples to be run without the need of LSL enabled hardware.
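+
+A minimal sketch of this (the top-level ``PyBCI`` import mirrors the example scripts; treat the exact import path as illustrative):
+
+.. code-block:: python
+
+    from pybci import PyBCI
+
+    # Spins up a pseudo LSL data stream and marker stream in a separate
+    # process, so the pipeline can be tried without LSL hardware.
+    bci = PyBCI(createPseudoDevice=True)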
-The PseudoDevice class and PseudoDeviceController can be used when the user has no available LSL marker or data streams, allowing for quick and simple execution of the examples. The Pseudo Device enables testing pipelines without the need of configuring and running LSL enabled hardware.
+The :class:`PseudoDevice` class and :py:data:`PseudoDeviceController` can be used when the user has no available LSL marker or data streams, allowing for quick and simple execution of the examples. The Pseudo Device enables testing pipelines without the need of configuring and running LSL enabled hardware.

-The PseudoDevice class holds marker information and generates signal data based on the given configuration set in :ref:`configuring-pseudo-device`.
+The :class:`PseudoDevice` class holds marker information and generates signal data based on the given configuration set in :ref:`configuring-pseudo-device`.

-The PseudoDeviceController can have the string "process" or "thread" set to decide whether the pseudo device should be a multiprocessed or threaded operation respectively, by default it is set to "process", then passes all the same configuration arguments to PseudoDevice.
+The :py:data:`PseudoDeviceController` can have the string "process" or "thread" set to decide whether the pseudo device should be a multiprocessed or threaded operation respectively; by default it is set to "process". It then passes all the same configuration arguments to :class:`PseudoDevice`.

 Any generic LSLViewer can be used to view the generated data, `example viewers found on this link. `_

 .. _configuring-pseudo-device:
+
 Configuring the Pseudo Device
-=========================================================
+=============================

-By default the PseudoDevice has 4 markers, "baseline", "Marker1", "Marker2", "Marker3" and "Marker4", each with peak frequencies of 3, 8, 10 and 12 Hz respectively.
+By default the :class:`PseudoDevice` has a "baseline" marker and 4 task markers, "Marker1", "Marker2", "Marker3" and "Marker4", with peak frequencies of 3, 8, 10 and 12 Hz respectively.

 Each signal is modified for 1 second after the marker has occurred, and the markers are spaced 5 seconds apart.

-Upon creating PyBCI object a dict of the following kwargs can be passed to dictate the behaviour of the pseudo device:
+Upon creating the PyBCI object, a dict of the following ``kwargs`` can be passed to dictate the behaviour of the pseudo device:

 .. code-block::
diff --git a/docs/BackgroundInformation/Theory_Operation.rst b/docs/source/BackgroundInformation/Theory_Operation.rst
similarity index 82%
rename from docs/BackgroundInformation/Theory_Operation.rst
rename to docs/source/BackgroundInformation/Theory_Operation.rst
index af9615b..28f4d89 100644
--- a/docs/BackgroundInformation/Theory_Operation.rst
+++ b/docs/source/BackgroundInformation/Theory_Operation.rst
@@ -1,20 +1,20 @@
 Theory of Operation
-############
+###################

 Requirements Prior to Initialising with `bci = PyBCI()`
-=========================================================
-The bci must have ==1 LSL marker stream selected and >=1 LSL data stream/s selected - if more then one LSL marker stream is on the system it is recommended to set the desired ML training marker stream with :py:data:`markerStream` to :py:class:`PyBCI()`, otherwise the first in list is selected. If no set datastreams are selected with :py:data:`dataStreams` to :py:class:`PyBCI()` all available datastreams will be used and decoded with the :ref:`generic-extractor`.
+========================================================
+The BCI must have exactly one LSL marker stream selected and one or more LSL data streams selected - if more than one LSL marker stream is on the system, it is recommended to set the desired ML training marker stream with :py:data:`markerStream` to :py:class:`PyBCI()`; otherwise the first in the list is selected. If no datastreams are selected with :py:data:`dataStreams` to :py:class:`PyBCI()`, all available datastreams will be used and decoded with the :ref:`generic-extractor`.

 Thread Creation
-=========================================================
+===============
 Once configuration settings are set, 4 types of threaded operations are created: one classifier thread, one marker thread, and a feature thread and a data thread for each accepted LSL datastream.

 Marker Thread
-**********************************************
+*************
 The marker stream has its own thread which receives markers from the target LSL marker stream; when in train mode, the marker thread pushes the marker to all available data threads, informing them when to slice the data based on the marker's timestamp, see :ref:`set_custom_epoch_times`. Set the desired ML training marker stream with :py:data:`markerStream` to :py:class:`PyBCI()`.

 Data Threads
-**********************************************
+************
 Each data stream has two threads created, one data and one feature extractor. The data thread is responsible for setting pre-allocated numpy arrays for each data stream inlet which pulls chunks of data from the LSL. When in training mode it gathers data so many seconds before and after a marker to prepare for feature extraction, with the option of slicing and overlapping so many seconds before and after the marker appropriately based on the classes `GlobalEpochSettings `_ and `IndividualEpochSettings `_, set with :py:data:`globalEpochSettings` and :py:data:`customEpochSettings` when initialising :py:class:`PyBCI()`.

 Add desired dataStreams by passing a list of strings containing accepted data stream names with :py:data:`dataStreams`. By setting :py:data:`dataStreams` all other data inlets will be ignored except those in the list.
@@ -22,29 +22,29 @@
 Note: Data so many seconds before and after the LSL marker timestamp is decided by the corresponding LSL data timestamps. If the LSL data stream pushes chunks infrequently (> :py:data:`globalEpochSettings.windowLength` * (1 - :py:data:`globalEpochSettings.windowOverlap`)) and doesn't overwrite each sample with linear equidistant timestamps, errors in classification output will occur. The legacy data threads AsyncDataReceiver and DataReceiver are kept in the threads folder, in case modifications are needed that slice based on so many samples before and after the marker (decided by the expected sample rate), should this become an issue for certain devices.

 Feature Extractor Threads
-**********************************************
+*************************
 The feature extractor threads receive data from their corresponding data thread and prepare epoch data for re-unification in the classification thread with other devices in the same epoch. The feature extraction techniques used can vary drastically between devices; to resolve this, custom classes can be created to deal with specific stream types and passed to :py:data:`streamCustomFeatureExtract` when initialising :py:class:`PyBCI()`, discussed more in :ref:`custom-extractor`.
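+
+A rough sketch of such a custom class (the "EMG" stream name and the single RMS feature are illustrative; the required ``ProcessFeatures`` signature is the one described in :ref:`custom-extractor`):
+
+.. code-block:: python
+
+    import numpy as np
+
+    class RMSExtractor:
+        def ProcessFeatures(self, epochData, sr, epochNum):
+            # epochData arrives as [samples, channels]; return one
+            # root-mean-square value per channel as the feature vector.
+            return np.sqrt(np.mean(np.square(epochData), axis=0))
+
+    # Hypothetical usage, keyed by LSL stream name:
+    # bci = PyBCI(streamCustomFeatureExtract={"EMG": RMSExtractor()})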
-The default feature extraction used is :ref:`GenericFeatureExtractor` found in `FeatureSettings.py `_, with :ref:`GeneralFeatureChoices` found in `FeatureSettings.py `_, see :ref:`generic-extractor` for more details.
+The default feature extraction used is :py:data:`GenericFeatureExtractor` found in `FeatureExtractor.py `_, with :py:data:`GeneralFeatureChoices` found in `FeatureSettings.py `_; see :ref:`generic-extractor` for more details.

 Classifier Thread
-**********************************************
+*****************
 The Classifier thread is responsible for receiving data from the various feature extraction threads, synchronising based on the number of data streams, then using the features and target marker values for testing and training the selected machine learning pytorch, tensorflow or scikit-learn model or classifier.

 If a valid marker stream and datastream/s are available, :py:class:`PyBCI()` can start machine learning training by calling :py:meth:`TrainMode()`. In training mode strings are received on the selected LSL marker stream which signify that a machine learning target value has occurred. A minimum number of each string type is required before classification begins, which can be modified with :py:data:`minimumEpochsRequired` to :py:class:`PyBCI()` on initialisation. Only after this number has been received of each, and a suitable classification accuracy has been obtained, should the BCI start test mode. Call :py:meth:`TestMode()` on the :py:class:`PyBCI()` object to start testing the machine learning model.

-Once in test mode the data threads continuously slice time windows of data based on :py:data:`globalEpochSettings`.windowLength and optionally overlaps these windows according to :py:data:`globalEpochSettings`.windowOverlap when initialising :py:class:`PyBCI()`. These windows have features extracted the same as in test mode, then the extracted features are applied to the model/classifier to predict the current target.
+Once in test mode the data threads continuously slice time windows of data based on :py:data:`globalEpochSettings.windowLength` and optionally overlap these windows according to :py:data:`globalEpochSettings.windowOverlap` when initialising :py:class:`PyBCI()`. These windows have features extracted the same as in train mode, then the extracted features are applied to the model/classifier to predict the current target.

 If the model is not performing well the user can always swap back to training mode to gather more data with :py:meth:`TrainMode()`. It could also be worthwhile to record your setup and review it afterwards to adjust your epoch classifier timing windows accordingly. If the classifier output seems laggy, look at :ref:`feature-debugging`; setting :py:data:`loggingLevel` to "TIMING" when initialising :class:`PyBCI()` prints classification testing and training times.

 Custom scikit-learn classifiers and PyTorch models can be used, see the examples found `here for sklearn `_, and `here for PyTorch `_.
-Tensorflow can also be used `found here `_, (Should be noted in PyBCI there is currently no suppression for tensorflow text prompts and the model training and tsting time can be substantially longer then pytorch and sklearn. Any recommendations are welcome in the issues on the git!)
+Tensorflow can also be used `found here `_. (It should be noted that in PyBCI there is currently no suppression of tensorflow text prompts, and the model training and testing time can be substantially longer than pytorch and sklearn. Any recommendations are welcome in the issues on the GitHub repository!)
 Thread Overview
-**********************************************
+***************
 The figure below illustrates the general flow of data between threads on initialisation:

 .. image:: ../Images/flowchart/Flowchart.svg
@@ -57,12 +57,12 @@ Another representation is given here for data flor operation between processes:
    :alt: Pybci data connections

 Testing and Training the Model
-=========================================================
+==============================

 Training
-**********************************************
+********

 Retrieving current estimate
------------------------------------------
+----------------------------
 Before the classifier can be run, a minimum number of marker strings must be received for each type of target marker, set with the :py:data:`minimumEpochsRequired` variable (default: 10) to :py:class:`PyBCI()`.

 An sklearn classifier of the user's choosing can be passed with the :py:data:`clf` variable, a PyTorch model with :py:attr:`torchModel`, or a tensorflow model with :py:data:`model` when instantiating :py:class:`PyBCI()`; only one should be passed, the others will default to :class:`None`.
@@ -89,9 +89,9 @@ When in test mode data is captured :class:`tmin` seconds before the training mar

 Testing
-**********************************************
+*******

 Retrieving current estimate
------------------------------------------------
+----------------------------
 When in test mode the data threads will continuously pass time windows to the respective feature extractor threads.

 It is recommended to periodically query the current estimated marker with:
@@ -103,5 +103,5 @@ It is recommended to periodically query the current estimated marker with:
 where :class:`classGuess` is an integer relating to the marker value in the marker dict returned with :py:meth:`ReceivedMarkerCount()`. See the :ref:`examples` for reference on how to set up sufficient training before switching to test mode and querying live classification estimation.

 Resetting or Adding to Train mode Feature Data
------------------------------------------------
+----------------------------------------------
 The user can call :func:`PyBCI.TrainMode()` again to go back to training the model and add to the existing feature data, with new LSL markers signifying new epochs to be processed.
diff --git a/docs/BackgroundInformation/What_is_PyBCI.rst b/docs/source/BackgroundInformation/What_is_PyBCI.rst
similarity index 68%
rename from docs/BackgroundInformation/What_is_PyBCI.rst
rename to docs/source/BackgroundInformation/What_is_PyBCI.rst
index e2a2940..248e18c 100644
--- a/docs/BackgroundInformation/What_is_PyBCI.rst
+++ b/docs/source/BackgroundInformation/What_is_PyBCI.rst
@@ -1,24 +1,24 @@
 What is PyBCI?
-################
+##############

 Statement of need
-==========================
+=================
 PyBCI addresses the growing need for real-time Brain-Computer Interface (BCI) software capable of handling diverse physiological sensor data streams. By leveraging robust machine learning libraries such as PyTorch, SKLearn, and TensorFlow, alongside the Lab Streaming Layer protocol, PyBCI facilitates the integration of real-time data analysis and model training. This opens up avenues for researchers and practitioners to not only receive and analyze physiological sensor data but also develop, test, and deploy machine learning models seamlessly, fostering innovation in the rapidly evolving field of BCIs.
 General Overview
-==========================
+================
 PyBCI is a Python based brain computer interface software package designed to receive a varying number, be it singular or multiple, of Lab Streaming Layer enabled physiological sensor data streams. An understanding of time-series data analysis, the Lab Streaming Layer protocol, and machine learning techniques is a must to integrate innovative ideas with this interface.

 An LSL marker stream is required to train the model, where a received marker epochs the data received on the accepted datastreams based on a configurable time window around certain markers - custom marker strings can optionally be split and overlapped to count for more than one marker, for example:

-A baseline marker may have one marker sent for a 60 second window, where as target actions may only be ~0.5s long, so to conform when testing the model and giving a standardised window length would be desirable to split the 60s window after the received baseline marker in to ~0.5s windows. By overlapping windows we try to account for potential missed signal patterns/aliasing, as a rule of thumb it would be advised when testing a model to have an overlap >= than 50%, see Shannon nyquist criterion. `See here for more information on epoch timing `_.
+A baseline marker may have one marker sent for a 60 second window, whereas target actions may only be ~0.5s long; to conform when testing the model and give a standardised window length, it would be desirable to split the 60s window after the received baseline marker into ~0.5s windows. By overlapping windows we try to account for potential missed signal patterns/aliasing; as a rule of thumb, it would be advised when testing a model to have an overlap of 50% or more, see the Shannon-Nyquist criterion. `See here for more information on epoch timing `_.

 Once the data has been epoched it is sent for feature extraction; there is a general feature extraction class which can be configured for general time and/or frequency analysis based features, ideal for data stream types like "EEG" and "EMG". Since data analysis, preprocessing and feature extraction techniques can vary greatly between devices, a custom feature extraction class can be created for each data stream type. `See here for more information on feature extraction `_.

-Finally a passable, customisable sklearn or tensorflow classifier can be given to the bci class, once a defined number of epochs have been obtained for each received epoch/marker type the classifier can begin to fit the model. It's advised to use :py:meth:`ReceivedMarkerCount()` to get the number of received training epochs received, once the min num epochs received of each type is >= :py:attr:`minimumEpochsRequired` (default 10 of each epoch) the model will begin to fit. Once fit classifier info can be queried with :py:meth:`CurrentClassifierInfo()`, when a desired accuracy is met or number of epochs :py:meth:`TestMode()` can be called. Once in test mode you can query what pybci estimates the current bci epoch is (typically a "baseline" marker is given in the training period for no state). `Review the examples for sklearn and model implementations `_.
+Finally a passable, customisable sklearn or tensorflow classifier can be given to the BCI class; once a defined number of epochs have been obtained for each received epoch/marker type, the classifier can begin to fit the model. It's advised to use :py:meth:`ReceivedMarkerCount()` to get the number of received training epochs; once the minimum number of epochs received of each type is larger than or equal to :py:attr:`minimumEpochsRequired` (default 10 of each epoch), the model will begin to fit. Once fit, classifier info can be queried with :py:meth:`CurrentClassifierInfo()`; when a desired accuracy or number of epochs is met, :py:meth:`TestMode()` can be called. Once in test mode you can query what pybci estimates the current bci epoch is (typically a "baseline" marker is given in the training period for no state). `Review the examples for sklearn and model implementations `_.

-Finally a passable pytorch, sklearn or tensorflow classifier can be given to the bci class, once a defined number of epochs have been obtained for each received epoch/marker type the classifier can begin to fit the model. It's advised to use :py:meth:`ReceivedMarkerCount()` to get the number of received training epochs received, once the min num epochs received of each type is >= :py:attr:`minimumEpochsRequired` (default 10 of each epoch) the model will begin to fit. Once fit the classifier info can be queried with :py:meth:`CurrentClassifierInfo()`, this returns the model used and accuracy. If enough epochs are received or high enough accuracy is obtained :py:meth:`TestMode()` can be called. Once in test mode you can query what pybci estimates the current bci epoch is(typically baseline is used for no state). `Review the examples for sklearn and model implementations `_.
+Finally a passable pytorch, sklearn or tensorflow classifier can be given to the BCI class; once a defined number of epochs have been obtained for each received epoch/marker type, the classifier can begin to fit the model. It's advised to use :py:meth:`ReceivedMarkerCount()` to get the number of received training epochs; once the minimum number of epochs received of each type is larger than or equal to :py:attr:`minimumEpochsRequired` (default 10 of each epoch), the model will begin to fit. Once fit, the classifier info can be queried with :py:meth:`CurrentClassifierInfo()`; this returns the model used and its accuracy. If enough epochs are received or a high enough accuracy is obtained, :py:meth:`TestMode()` can be called. Once in test mode you can query what pybci estimates the current bci epoch is (typically baseline is used for no state). `Review the examples for sklearn and model implementations `_.

-All the `examples `__ found on the github not in a dedicated folder have a pseudo LSL data generator enabled by default, by setting `createPseudoDevice=True` so the examples can run without the need of LSL capable hardware.
+All the `examples `__ found on the GitHub that are not in a dedicated folder have a pseudo LSL data generator enabled by default, by setting ``createPseudoDevice=True``, so the examples can run without the need of LSL capable hardware.
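+
+A condensed sketch of that train-then-test flow (method and argument names are the ones referenced above; the top-level import and the layout of the dict returned by :py:meth:`ReceivedMarkerCount()` are assumptions based on the example scripts):
+
+.. code-block:: python
+
+    import time
+    from pybci import PyBCI  # import path assumed from the examples
+
+    bci = PyBCI(createPseudoDevice=True, minimumEpochsRequired=10)
+    bci.TrainMode()
+
+    while True:
+        time.sleep(1)
+        markers = bci.ReceivedMarkerCount()
+        # Assumed layout: {markerName: [markerId, receivedCount], ...}
+        if markers and min(m[1] for m in markers.values()) >= 10:
+            break
+
+    info = bci.CurrentClassifierInfo()  # returns the model used and accuracy
+    bci.TestMode()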
diff --git a/docs/Images/flowchart/Flowchart.svg b/docs/source/Images/flowchart/Flowchart.svg
similarity index 100%
rename from docs/Images/flowchart/Flowchart.svg
rename to docs/source/Images/flowchart/Flowchart.svg
diff --git a/docs/Images/operation.svg b/docs/source/Images/operation.svg
similarity index 100%
rename from docs/Images/operation.svg
rename to docs/source/Images/operation.svg
diff --git a/docs/Images/pyBCI.png b/docs/source/Images/pyBCI.png
similarity index 100%
rename from docs/Images/pyBCI.png
rename to docs/source/Images/pyBCI.png
diff --git a/docs/Images/pyBCITitle.png b/docs/source/Images/pyBCITitle.png
similarity index 100%
rename from docs/Images/pyBCITitle.png
rename to docs/source/Images/pyBCITitle.png
diff --git a/docs/Images/pyBCITitle.svg b/docs/source/Images/pyBCITitle.svg
similarity index 100%
rename from docs/Images/pyBCITitle.svg
rename to docs/source/Images/pyBCITitle.svg
diff --git a/docs/Images/splitEpochs/example1.png b/docs/source/Images/splitEpochs/example1.png
similarity index 100%
rename from docs/Images/splitEpochs/example1.png
rename to docs/source/Images/splitEpochs/example1.png
diff --git a/docs/Images/splitEpochs/example1split0.png b/docs/source/Images/splitEpochs/example1split0.png
similarity index 100%
rename from docs/Images/splitEpochs/example1split0.png
rename to docs/source/Images/splitEpochs/example1split0.png
diff --git a/docs/Images/splitEpochs/example1split50.png b/docs/source/Images/splitEpochs/example1split50.png
similarity index 100%
rename from docs/Images/splitEpochs/example1split50.png
rename to docs/source/Images/splitEpochs/example1split50.png
diff --git a/docs/api/Configurations.rst b/docs/source/api/Configurations.rst
similarity index 100%
rename from docs/api/Configurations.rst
rename to docs/source/api/Configurations.rst
diff --git a/docs/api/LSLScanner.rst b/docs/source/api/LSLScanner.rst
similarity index 100%
rename from docs/api/LSLScanner.rst
rename to docs/source/api/LSLScanner.rst
diff --git a/docs/api/PseudoDevice.rst b/docs/source/api/PseudoDevice.rst
similarity index 100%
rename from docs/api/PseudoDevice.rst
rename to docs/source/api/PseudoDevice.rst
diff --git a/docs/api/PseudoDeviceController.rst b/docs/source/api/PseudoDeviceController.rst
similarity index 100%
rename from docs/api/PseudoDeviceController.rst
rename to docs/source/api/PseudoDeviceController.rst
diff --git a/docs/api/PyBCI.rst b/docs/source/api/PyBCI.rst
similarity index 100%
rename from docs/api/PyBCI.rst
rename to docs/source/api/PyBCI.rst
diff --git a/docs/conf.py b/docs/source/conf.py
similarity index 100%
rename from docs/conf.py
rename to docs/source/conf.py
diff --git a/docs/index.rst b/docs/source/index.rst
similarity index 95%
rename from docs/index.rst
rename to docs/source/index.rst
index 8cf79a5..e989424 100644
--- a/docs/index.rst
+++ b/docs/source/index.rst
@@ -3,7 +3,7 @@ Welcome to the PyBCI documentation!
 **PyBCI** is a Python package to create a Brain Computer Interface (BCI) with data synchronisation and pipelining handled by the `Lab Streaming Layer `_, machine learning with `Pytorch `_, `scikit-learn `_ or `TensorFlow `_, leveraging packages like `Antropy `_, `SciPy `_ and `NumPy `_ for generic time and/or frequency based feature extraction, or optionally have the user's own custom feature extraction class used.
-The goal of PyBCI is to enable quick iteration when creating pipelines for testing human machine and brain computer interfaces, namely testing applied data processing and feature extraction techniques on custom machine learning models. Training the BCI requires LSL enabled devices and an LSL marker stream for training stimuli. All the `examples `__ found on the github not in a dedicated folder have a pseudo LSL data generator enabled by default, by setting `createPseudoDevice=True` so the examples can run without the need of LSL capable hardware.
+The goal of PyBCI is to enable quick iteration when creating pipelines for testing human machine and brain computer interfaces, namely testing applied data processing and feature extraction techniques on custom machine learning models. Training the BCI requires LSL enabled devices and an LSL marker stream for training stimuli. All the `examples `__ found on the GitHub that are not in a dedicated folder have a pseudo LSL data generator enabled by default, by setting ``createPseudoDevice=True``, so the examples can run without the need of LSL capable hardware.

 `Github repo here! `_
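+
+Continuing from a trained ``bci`` as in the earlier sketch, the live estimate can be polled once in test mode (the ``CurrentClassifierMarkerGuess()`` name follows the PyBCI example scripts; treat it and the marker-dict layout as assumptions):
+
+.. code-block:: python
+
+    import time
+
+    bci.TestMode()
+    markers = bci.ReceivedMarkerCount()  # assumed: {markerName: [markerId, count], ...}
+    while True:
+        time.sleep(0.5)
+        guess = bci.CurrentClassifierMarkerGuess()  # integer marker id
+        names = [name for name in markers if markers[name][0] == guess]
+        print("Current estimate:", names)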