All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
- Remove numba dependency, PR #139 @benkrikler
- Fix for uproot 3.13.0, PR #138 @benkrikler
- Fix call to pandas sys-info function for book-keeping, PR #130 @bundocka
- Put a tarball with book-keeping data in output directory, PR #131 @BenKrikler
- Add Dask and Parsl backends from Coffea and support execution configuration via YAML, PR #129 @BenKrikler and @asnaylor
- Pin alphatwirl explicitly to 0.25.5, PR #130
- Removed help module from carpenter and moved to fast-flow, PR #131 @BenKrikler
- Add option to BinnedDataframe to apply weights to data like MC, PR #127 @BenKrikler
- Fix binned_dataframe.explode for object-level and non-initial data chunks, PR #125
- Add broadcasting between variables of different jaggedness in expressions, PR #122 @BenKrikler
- Drop testing against Python <= 3.5, PR #124
- Fix handling of empty data chunks in BinnedDataframe stage, PR #124 @BenKrikler
- Add `observed` option to BinnedDataframe for a speed boost with many bins, PR #118 @BenKrikler
- Pin the version for the Mantichora package that AlphaTwirl depends on
- `pad_missing` was replacing bin contents when set to True, PR #116 @BenKrikler
- Allow SystematicWeights stage to be used twice, PR #115 @BenKrikler
- Upgrade atsge to v0.2.1
- Update the version in setup.py to mark a new release
- Upgrade atsge to v0.2.0
- New stage: Event-by-Event dataframes (like a skim), PR #108 @davignon
- Fix a unit test that was broken by Pandas >1.0.0
- Fix bug in the explode function when a dimension contains strings, issue #109, PR #110 @BenKrikler
- Add an API for different "backends" which run the actual data processing
- Add a backend for Coffea with a local-multiprocessing executor, PR #102 @BenKrikler
- Refactor AlphaTwirl code to use the "backend" API, PR #101 @BenKrikler
- Added support for variables with multiple dots in the name (nested branches). Issue #95, PR #97 @kreczko
- Fix JaggedNth to work with arbitrary depth jagged arrays, Issue #87, PR #99 @benkrikler
- Add protection against multiple dimensions using the same output name in a BinnedDataframe stage, Issue #92, PR #100 @benkrikler
- Existing collectors may now return their results within Python, in addition to writing them to disk, PR #90 @lgray.
  - This behavior can, at present, only be controlled within Python, and is meant for exposing certain aspects of FAST-carpenter plumbing to Coffea.
- Fix bug in BinnedDataframe stage, issue #89, PR #93 @benkrikler
- Pin atuproot to v0.1.13, PR #91
- Tidy the printout at the end of processing, PR #94.
- Give fast-flow the "fast_carpenter" backend, issue #84, PR #85 @lorenafreitas
- JaggedNth supports negative indexing, PR #81 @pmk21
- Added version flag to CLI, PR #79. @maikefischer
- Prohibit ncores < 1, PR #76 @annakau
- Binned dataframes can now be produced from ND jagged arrays
- Support for ND jagged array in expressions, PR #73
- Automatic conversion of ObjectArrays from uproot to JaggedArrays, PR #73
- Fixed interpretation of user-defined variables for uproot, issue #67, PR #71 @benkrikler
- Add changes to support uproot 3.9.1 and greater, issue #68 @benkrikler
- Protect against overwriting branches and add tests, pull request #66 @benkrikler
- Add support for masking variables in their definition, issue #59 @benkrikler
- Add several constants for variable expressions, issue #21 @benkrikler
- Added this changelog @benkrikler
- Use `pandas.groupby(..).counts()` to make binned dataframes, issue #51 @benkrikler
- Add support for multiple output file types for binned dataframes, issue #57 @asnaylor
- Fix issue with binned dataframe expressions and multiple similar branch names, issue #60 @benkrikler
- Enable multiple cut-flow stages and make the input chunk.tree have consistent array and arrays methods, issue #61 @benkrikler