- To use this template, click the green `Use this template` button and `Create new repository`.
- After GitHub initially creates the new repository, please wait an extra minute for the initialization scripts to finish organizing the repo.
- To enable the automatic semantic version increments: in the repository go to `Settings` and `Collaborators and teams`. Click the green `Add people` button. Add `svc-aindscicomp` as an admin. Modify the file `.github/workflows/tag_and_publish.yml` and remove the if statement in line 10. The semantic version will now be incremented every time code is committed into the main branch.
- To publish to PyPI, enable semantic versioning and uncomment the publish block in `.github/workflows/tag_and_publish.yml`. The code will now be published to PyPI every time code is committed into the main branch.
- The `.github/workflows/test_and_lint.yml` file will run automated tests and style checks every time a Pull Request is opened. If the checks are undesired, the `test_and_lint.yml` file can be deleted. The strictness of the code coverage level, etc., can be modified by altering the configurations in the `pyproject.toml` file and the `.flake8` file (see the sketch below).
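For instance, coverage strictness can be set in `pyproject.toml`. The snippet below is a hypothetical illustration; the `fail_under` value is an assumption, not this repo's actual setting:

```toml
[tool.coverage.report]
# Fail the coverage check when total coverage drops below this percentage
# (hypothetical threshold; check pyproject.toml for the real value).
fail_under = 100
```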
To use the software, in the root directory, run
```bash
pip install -e .
```

To develop the code, run
```bash
pip install -e .[dev]
```
To create a dataframe of licks annotated with licking bout starts/stops, cue-responsive licks, reward-triggered licks, and intertrial choices:
```python
import aind_dynamic_foraging_basic_analysis.licks.annotation as annotation

df_licks = annotation.annotate_licks(nwb)
```
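As a quick check, you might inspect the annotations the function added. This is a minimal sketch; the specific column name `bout_start` is an assumption based on the annotations described above, not a documented API:

```python
# List the annotation columns present on the dataframe.
print(df_licks.columns.tolist())

# Count licks flagged as bout starts ('bout_start' is an assumed column name).
bout_starts = df_licks[df_licks['bout_start']]
print(f"{len(bout_starts)} licking bouts detected")
```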
You can then plot interlick interval analyses with:
```python
import aind_dynamic_foraging_basic_analysis.licks.plot_interlick_interval as pii

# Plot interlick interval of all licks
pii.plot_interlick_interval(df_licks)

# Plot interlick interval for left and right licks separately
pii.plot_interlick_interval(df_licks, categories='event')
```
To create a figure with several licking pattern analyses:
```python
import aind_dynamic_foraging_basic_analysis.licks.lick_analysis as lick_analysis

lick_analysis.plot_lick_analysis(nwb)
```
To annotate the trials dataframe with trial-by-trial metrics:
```python
import aind_dynamic_foraging_basic_analysis.metrics.trial_metrics as tm

df_trials = tm.compute_all_trial_metrics(nwb)
```
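One of the computed metrics can then be plotted across trials. This is a minimal sketch; it assumes `compute_all_trial_metrics` adds a `response_rate` column, the metric referenced in the scroller example later in this README:

```python
import matplotlib.pyplot as plt

# Plot an assumed metric column across trials.
plt.plot(df_trials.index, df_trials['response_rate'])
plt.xlabel('trial')
plt.ylabel('response rate')
plt.show()
```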
To create an interactive visualization of a session:
```python
import aind_dynamic_foraging_basic_analysis.plot.plot_session_scroller as pss

pss.plot_session_scroller(nwb)
```
To disable lick bout and other annotations:
```python
pss.plot_session_scroller(nwb, plot_bouts=False)
```
This function will automatically plot FIP data if available. To change which processing method is plotted, use:
```python
pss.plot_session_scroller(nwb, processing="bright")
```
To change which trial-by-trial metrics are plotted:
```python
pss.plot_session_scroller(nwb, metrics=['response_rate'])
```
You can use the `plot_fip` module to compute and plot PSTHs for the FIP data.

To compare one channel to multiple event types:
```python
from aind_dynamic_foraging_basic_analysis.plot import plot_fip as pf

channel = 'G_1_dff-poly'
rewarded_go_cues = nwb.df_trials.query('earned_reward == 1')['goCue_start_time_in_session'].values
unrewarded_go_cues = nwb.df_trials.query('earned_reward == 0')['goCue_start_time_in_session'].values

pf.plot_fip_psth_compare_alignments(
    nwb,
    {'rewarded goCue': rewarded_go_cues, 'unrewarded goCue': unrewarded_go_cues},
    channel,
    censor=True,
)
```
To compare multiple channels to the same event type:
```python
pf.plot_fip_psth(nwb, 'goCue_start_time')
```
There are several libraries used to run linters, check documentation, and run tests.

- Please test your changes using the `coverage` library, which will run the tests and log a coverage report:
```bash
coverage run -m unittest discover && coverage report
```
- Use `interrogate` to check that modules, methods, etc. have been documented thoroughly:
```bash
interrogate .
```
- Use `flake8` to check that code is up to standards (no unused imports, etc.):
```bash
flake8 .
```
- Use `black` to automatically format the code into PEP standards:
```bash
black .
```
- Use `isort` to automatically sort import statements:
```bash
isort .
```
For internal members, please create a branch. For external members, please fork the repository and open a pull request from the fork. We'll primarily use Angular style for commit messages. Roughly, they should follow the pattern:
```text
<type>(<scope>): <short summary>
```
where scope (optional) describes the packages affected by the code changes and type (mandatory) is one of:
- build: Changes that affect build tools or external dependencies (example scopes: pyproject.toml, setup.py)
- ci: Changes to our CI configuration files and scripts (examples: .github/workflows/ci.yml)
- docs: Documentation only changes
- feat: A new feature
- fix: A bug fix
- perf: A code change that improves performance
- refactor: A code change that neither fixes a bug nor adds a feature
- test: Adding missing tests or correcting existing tests
The table below, from semantic release, shows which commit message gets you which release type when `semantic-release` runs (using the default configuration):

| Commit message | Release type |
|---|---|
| `fix(pencil): stop graphite breaking when too much pressure applied` | Patch Fix Release |
| `feat(pencil): add 'graphiteWidth' option` | Minor Feature Release |
| `perf(pencil): remove graphiteWidth option`<br><br>`BREAKING CHANGE: The graphiteWidth option has been removed. The default graphite width of 10mm is always used for performance reasons.` | Major Breaking Release<br>(Note that the `BREAKING CHANGE:` token must be in the footer of the commit) |
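Written out in full, the breaking-change commit from the last row places the token in the footer like this:

```text
perf(pencil): remove graphiteWidth option

BREAKING CHANGE: The graphiteWidth option has been removed.
The default graphite width of 10mm is always used for performance reasons.
```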
To generate the rst source files for documentation, run
```bash
sphinx-apidoc -o doc_template/source/ src
```
Then to create the documentation HTML files, run
```bash
sphinx-build -b html doc_template/source/ doc_template/build/html
```
More info on Sphinx installation can be found in the Sphinx documentation.