
parse table metadata.configuration as TableProperties #453

Merged: 27 commits merged into delta-io:main from the table-properties branch on Nov 25, 2024

Conversation

@zachschuermann (Collaborator) commented Nov 5, 2024

What changes are proposed in this pull request?

The new TableProperties struct parses the Metadata action's configuration into a strongly-typed struct. Instead of manually querying specific keys (e.g. checking the column mapping key), we can leverage the configuration to handle DVs, column mapping, etc. in a structured manner.

Pragmatically, the changes are:

  1. New TableProperties struct along with a new Snapshot::get_table_properties() API
    a. Note that every field is optional; additionally, there is an unknown_properties field containing a HashMap of all properties that did not parse as a known table property.
  2. New table_properties module, including a deserialize submodule that implements functions to deserialize the fields of TableProperties (a rough sketch of the struct's shape is shown below)
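
For illustration, a rough sketch of the shape this gives. The field names and types below are illustrative assumptions drawn from this PR's description, not the kernel's exact definition:

    use std::collections::HashMap;
    use std::time::Duration;

    /// Illustrative sketch only; the real struct has many more optional fields.
    #[derive(Debug, Default, Clone, PartialEq, Eq)]
    pub struct TableProperties {
        /// delta.appendOnly; an Option so that an omitted property stays omitted on write
        pub append_only: Option<bool>,
        /// delta.deletedFileRetentionDuration, e.g. "interval 1 week"
        pub deleted_file_retention_duration: Option<Duration>,
        /// Everything that did not parse as a known table property
        pub unknown_properties: HashMap<String, String>,
    }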

Future work

This PR affects the following public APIs

  • New TableProperties and Snapshot::get_table_properties API

How was this change tested?

New unit tests.

Credit: @roeap for the original PR (#222) on which this is based.

@github-actions bot added the breaking-change label (Change that will require a version bump) on Nov 5, 2024

codecov bot commented Nov 5, 2024

Codecov Report

Attention: Patch coverage is 94.67085% with 17 lines in your changes missing coverage. Please review.

Please upload report for BASE (main@266e9b4). Learn more about missing BASE report.
Report is 3 commits behind head on main.

Files with missing lines Patch % Lines
kernel/src/table_properties.rs 91.58% 7 Missing and 2 partials ⚠️
kernel/src/table_properties/deserialize.rs 97.01% 1 Missing and 5 partials ⚠️
kernel/src/expressions/column_names.rs 0.00% 1 Missing ⚠️
kernel/src/snapshot.rs 75.00% 0 Missing and 1 partial ⚠️
Additional details and impacted files
@@           Coverage Diff           @@
##             main     #453   +/-   ##
=======================================
  Coverage        ?   78.51%           
=======================================
  Files           ?       57           
  Lines           ?    11914           
  Branches        ?    11914           
=======================================
  Hits            ?     9354           
  Misses          ?     2053           
  Partials        ?      507           


@zachschuermann (Collaborator Author) commented:

Need to make all properties Options. Why? For propagating them unchanged on write: e.g. if appendOnly is omitted we want to omit it in the write (not write its default).
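
For illustration, a rough sketch of that write-side reasoning. The helper below is hypothetical; it builds on the illustrative struct sketched in the PR description above and is not code from this PR:

    use std::collections::HashMap;

    // Only properties that were actually set are written back; a None (omitted)
    // appendOnly stays omitted instead of being rewritten as its default.
    fn to_configuration(props: &TableProperties) -> HashMap<String, String> {
        let mut config = HashMap::new();
        if let Some(append_only) = props.append_only {
            config.insert("delta.appendOnly".to_string(), append_only.to_string());
        }
        config
    }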

kernel/src/actions/mod.rs (outdated review thread, resolved)
kernel/src/expressions/column_names.rs (outdated review thread, resolved)
@@ -95,7 +95,7 @@ impl ScanBuilder {
let (all_fields, read_fields, have_partition_cols) = get_state_info(
logical_schema.as_ref(),
&self.snapshot.metadata().partition_columns,
self.snapshot.column_mapping_mode,
self.snapshot.table_properties().get_column_mapping_mode(),

Collaborator:

We don't normally use get_ for field accessors?

Suggested change
self.snapshot.table_properties().get_column_mapping_mode(),
self.snapshot.table_properties().column_mapping_mode(),

(rust can handle structs with both foo: Foo and fn foo(&self) -> &Foo -- even if both are public)
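
A small sketch of that parenthetical, using hypothetical types rather than this PR's code; the field and the method live in different namespaces, so both can be public:

    pub struct Foo;

    pub struct Snapshot {
        pub foo: Foo,
    }

    impl Snapshot {
        // Same name as the field: callers write snapshot.foo for the field
        // and snapshot.foo() for the accessor.
        pub fn foo(&self) -> &Foo {
            &self.foo
        }
    }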

Collaborator Author:

I'm curious to hear everyone's thoughts on this.. I think we need two things here (and the API above was attempting to answer the second part)

  1. a struct to parse the table properties into (TableProperties struct) which lets users (or engines) examine the table properties that are set
  2. an API for checking table properties/feature enablement. This likely needs to check if table features are enabled in the protocol and check table properties being set and perhaps check on other dependent features. (i.e. we need somewhere that we can embed this logic)

this probably isn't the PR for tackling (2) - but would love to hear some thoughts so we could get started on a follow-up. For now I'll just do column_mapping_mode() to unblock this and we can iterate?

Collaborator:

I think it makes a lot of sense. These are checks that are going to be made for every table feature, so it helps to have all of it in one place.

Collaborator:

Yeah. Likely we don't need a method for each one and should rather check a struct of some sort. The API design is a bit subtle though, as some features can be set to multiple modes (i.e. column mapping) and others can just be on/off.

Collaborator Author:

just gonna take it as a follow up #508

and as @nicklan suggested: probably just wait a little longer until we have more use cases and then can inform how we want to build something that unifies protocol/table properties

kernel/src/snapshot.rs (outdated review thread, resolved)
kernel/src/table_properties.rs (outdated review thread, resolved)
kernel/src/table_properties/deserialize.rs (4 outdated review threads, resolved)
Comment on lines +146 to +153
"nanosecond" | "nanoseconds" => Duration::from_nanos(number),
"microsecond" | "microseconds" => Duration::from_micros(number),
"millisecond" | "milliseconds" => Duration::from_millis(number),
"second" | "seconds" => Duration::from_secs(number),
"minute" | "minutes" => Duration::from_secs(number * SECONDS_PER_MINUTE),
"hour" | "hours" => Duration::from_secs(number * SECONDS_PER_HOUR),
"day" | "days" => Duration::from_secs(number * SECONDS_PER_DAY),
"week" | "weeks" => Duration::from_secs(number * SECONDS_PER_WEEK),

Collaborator:

Is this list coming from somewhere specific that we need to stay in sync with?
(I wonder e.g. if we need to support e.g. nanos and micros)

Probably worth documenting the connection.

Collaborator Author:

this came from delta-rs (recycled from Robert's old PR) but I've documented that in the code and pointed out the differences between this and delta-spark's behavior. also created a follow-up to improve this but i think we can leave this here for now and improve in follow-up? #507

kernel/src/actions/mod.rs (outdated review thread, resolved)
kernel/src/snapshot.rs (review thread, resolved)
kernel/src/table_properties/deserialize.rs (3 outdated review threads, resolved)
}
let mut it = value.split_whitespace();
let _ = it.next(); // skip "interval"
let number = parse_int(it.next().ok_or_else(not_an_interval)?)?;

Collaborator:

is there a particular thing we're trying to catch by parsing as a signed number and then returning an error if it's negative? would it not make more sense to just parse as a u64 and fail if we fail to parse?

I guess the error message is a little nicer in this case. Not sure it's worth the extra code.

Collaborator Author:

yea i made even nicer errors, lmk what you think

Collaborator Author:

happy to delete if not worth it
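
For reference, a minimal sketch of how the interval-parsing pieces quoted above fit together. This is illustrative only; the kernel's version reports richer errors rather than returning Option:

    use std::time::Duration;

    const SECONDS_PER_MINUTE: u64 = 60;
    const SECONDS_PER_HOUR: u64 = 60 * SECONDS_PER_MINUTE;
    const SECONDS_PER_DAY: u64 = 24 * SECONDS_PER_HOUR;
    const SECONDS_PER_WEEK: u64 = 7 * SECONDS_PER_DAY;

    /// Parse strings like "interval 5 days" into a Duration.
    fn parse_interval(value: &str) -> Option<Duration> {
        let mut it = value.split_whitespace();
        if it.next()? != "interval" {
            return None;
        }
        let number: u64 = it.next()?.parse().ok()?;
        let duration = match it.next()? {
            "nanosecond" | "nanoseconds" => Duration::from_nanos(number),
            "microsecond" | "microseconds" => Duration::from_micros(number),
            "millisecond" | "milliseconds" => Duration::from_millis(number),
            "second" | "seconds" => Duration::from_secs(number),
            "minute" | "minutes" => Duration::from_secs(number * SECONDS_PER_MINUTE),
            "hour" | "hours" => Duration::from_secs(number * SECONDS_PER_HOUR),
            "day" | "days" => Duration::from_secs(number * SECONDS_PER_DAY),
            "week" | "weeks" => Duration::from_secs(number * SECONDS_PER_WEEK),
            _ => return None,
        };
        // Anything trailing after the unit makes the whole value invalid.
        it.next().is_none().then_some(duration)
    }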

kernel/src/actions/mod.rs (outdated review thread, resolved)
/// existing records cannot be deleted, and existing values cannot be updated.
#[serde(rename = "delta.appendOnly")]
#[serde(deserialize_with = "deserialize_bool")]
pub append_only: Option<bool>,

Collaborator:

This is quite a bit of work, but it may be worth linking to the protocol directly here. We do it elsewhere in the codebase, and I think it's useful. Ex:

/// true for this Delta table to be append-only. If append-only,
/// existing records cannot be deleted, and existing values cannot be updated.
///
/// [Append-only Tables]: https://github.com/delta-io/delta/blob/master/PROTOCOL.md#append-only-tables

Collaborator Author:

still need to go through and do this.. will try to finish this in the morning

Collaborator Author:

i'm leaving this for future work lol - it'll be a good first issue #519

@@ -113,6 +114,11 @@ impl Metadata {
pub fn schema(&self) -> DeltaResult<StructType> {
Ok(serde_json::from_str(&self.schema_string)?)
}

/// Parse the metadata configuration HashMap<String, String> into a TableProperties struct.
pub fn parse_table_properties(&self) -> DeltaResult<TableProperties> {

Collaborator:

I wonder if we should just eagerly parse TableProperties instead doing the HashMap<String,String> to TableProperties separately. Do we split it up because of the Schema derive?

#[derive(Debug, Default, Clone, PartialEq, Eq, Schema)]
pub struct Metadata {

Collaborator Author:

yup exactly. for now this seemed like the best way to separate the schema (Metadata struct) from the actual parsing of TableProperties. perhaps in the future we can look into unifying these? could just omit the derive and impl Schema ourselves (or add some new fancy mechanism that lets us annotate fields with #[derive(Schema)])
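
For illustration, hypothetical engine-side usage of the API being discussed here (names taken from this PR's diff; exact signatures may differ):

    // Parse the raw configuration HashMap into the typed struct and consult a
    // property; anything that didn't parse as a known key stays in unknown_properties.
    fn table_is_append_only(metadata: &Metadata) -> DeltaResult<bool> {
        let props: TableProperties = metadata.parse_table_properties()?;
        Ok(props.append_only.unwrap_or(false))
    }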

kernel/src/table_properties/deserialize.rs (outdated review thread, resolved)

kernel/src/table_properties.rs (outdated review thread, resolved)
pub mod schema;
pub mod snapshot;
pub mod table;
pub mod table_properties;

Collaborator Author:

only meaningful change is adding pub mod table_properties, other changes are shifting to colocate module declarations


codecov bot commented Nov 18, 2024

Codecov Report

Attention: Patch coverage is 91.86992% with 30 lines in your changes missing coverage. Please review.

Project coverage is 81.02%. Comparing base (db29db0) to head (437b8db).
Report is 1 commit behind head on main.

Files with missing lines Patch % Lines
kernel/src/table_properties/deserialize.rs 91.44% 0 Missing and 19 partials ⚠️
kernel/src/table_properties.rs 93.68% 2 Missing and 4 partials ⚠️
kernel/src/snapshot.rs 50.00% 3 Missing ⚠️
ffi/src/lib.rs 0.00% 1 Missing ⚠️
kernel/src/table_features/column_mapping.rs 97.61% 0 Missing and 1 partial ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #453      +/-   ##
==========================================
+ Coverage   80.58%   81.02%   +0.44%     
==========================================
  Files          63       65       +2     
  Lines       13762    14110     +348     
  Branches    13762    14110     +348     
==========================================
+ Hits        11090    11433     +343     
+ Misses       2113     2096      -17     
- Partials      559      581      +22     


@@ -12,6 +12,11 @@ fn reader_test(path: &Path) -> datatest_stable::Result<()> {
path.parent().unwrap().to_str().unwrap()
);

// TODO(zach): skip iceberg_compat_v1 test until DAT is fixed

@nicklan (Collaborator) left a comment:

lgtm! just please do fix how you skip the iceberg test, and use the existing stuff

acceptance/tests/dat_reader.rs (outdated review thread, resolved)

@scovich (Collaborator) left a comment:

Mostly nits, but the question about fallback behavior for properties that exist but fail to parse seems important?

kernel/src/table_properties/deserialize.rs (7 outdated review threads, resolved)

Collaborator:

Do these custom deserializers have a fallback for cases where the table property exists but cannot parse to the expected type? Ideally, those should land in the unknown_properties map along with everything else, with the parsed version being None.

Needed because many table properties are not necessarily in the Delta spec, and so different clients may use them differently?

Collaborator Author (@zachschuermann), Nov 21, 2024:

oh I currently implemented that as a failure: see the fail_known_keys test. You're saying the behavior should instead be something like this?

    #[test]
    fn known_key_unknown_val() {
        let properties = HashMap::from([("delta.appendOnly".to_string(), "wack".to_string())]);
        let de = MapDeserializer::<_, de::value::Error>::new(properties.clone().into_iter());
        let table_properties = TableProperties::deserialize(de).unwrap();
        let unknown_properties = HashMap::from([("delta.appendOnly".to_string(), "wack".to_string())]);
        let expected = TableProperties {
            unknown_properties,
            ..Default::default()
        };
        assert_eq!(table_properties, expected);
    }

Collaborator:

Yes, exactly. Otherwise I fear we'll eventually add yet another parsing failure ticket to the pile, along with all the CommitInfo issues.

Collaborator Author:

ah yea fair point.. let me see how to do this :)

Collaborator Author:

welp this was the straw that broke serde's back.. instead of implementing a custom Deserialize I just decided to do the naive thing and implement a big match by hand. Actually it isn't that gross, and we can always try serde again or just write our own macro to make everything nice
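
A rough sketch of that hand-written approach, reusing the illustrative TableProperties sketched in the PR description above (assumed shape only; the real code lives in kernel/src/table_properties/deserialize.rs):

    /// Minimal stand-in for the kernel's parse_bool helper.
    fn parse_bool(s: &str) -> Option<bool> {
        match s {
            "true" => Some(true),
            "false" => Some(false),
            _ => None,
        }
    }

    /// Known keys whose values fail to parse fall through into unknown_properties
    /// instead of erroring, alongside genuinely unknown keys.
    fn parse_table_properties<K, V>(unparsed: impl IntoIterator<Item = (K, V)>) -> TableProperties
    where
        K: AsRef<str> + Into<String>,
        V: AsRef<str> + Into<String>,
    {
        let mut props = TableProperties::default();
        for (k, v) in unparsed {
            let parsed = match k.as_ref() {
                "delta.appendOnly" => {
                    parse_bool(v.as_ref()).map(|b| props.append_only = Some(b))
                }
                _ => None,
            };
            if parsed.is_none() {
                props.unknown_properties.insert(k.into(), v.into());
            }
        }
        props
    }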

Comment on lines +98 to +104
/// This value should be large enough to ensure that:
///
/// * It is larger than the longest possible duration of a job if you run VACUUM when there are
/// concurrent readers or writers accessing the Delta table.
/// * If you run a streaming query that reads from the table, that query does not stop for
/// longer than this value. Otherwise, the query may not be able to restart, as it must still
/// read old files.

Collaborator:

aside: will we have a check that blocks too-small intervals? ie less than one day?

Collaborator Author:

we could! is that in the protocol / implemented in delta-spark? (i'll check too)

Collaborator:

I think it's in the delta-spark impl? You have to "break glass" to vacuum with an interval less than X?

Collaborator Author:

perhaps take this as a follow-up? (not implementing vacuum currently and other implementations that consume this could check themselves?)

created #522

kernel/src/table_properties.rs (outdated review thread, resolved)
Comment on lines 90 to 91
.unknown_properties
.insert(k.as_ref().to_string(), v.as_ref().to_string());

Collaborator Author:

now that I think about it maybe we could actually consume the original hashmap, removing items as we parse them and storing the remainder in unknown_properties.. but gonna leave that for future work/probably not really worth optimizing

Collaborator:

See other comment above -- consuming the original allows a pretty significant code simplification

@nicklan (Collaborator) left a comment:

restamping. lgtm

kernel/src/actions/mod.rs (outdated review thread, resolved)
kernel/src/error.rs (outdated review thread, resolved)
Comment on lines 34 to 38
"delta.autoOptimize.optimizeWrite" => {
parse_bool(v.as_ref()).map(|val| props.optimize_write = Some(val))
}
"delta.checkpointInterval" => parse_positive_int(v.as_ref())
.map(|val| props.checkpoint_interval = Some(val)),

Collaborator:

aside: I'm confused.. how does cargo fmt decide to only sometimes use {} for these structurally identical match arms?

Collaborator Author:

yea I noticed the same thing and was unhappy haha

kernel/src/table_properties/deserialize.rs (outdated review thread, resolved)
Comment on lines 25 to 28
for (k, v) in unparsed {
let parsed =
match k.as_ref() {
"delta.appendOnly" => {

Collaborator:

Does this loop have quadratic behavior because the match arms are checked sequentially?
Or is rust smart enough to build a perfect hash of some kind?

With 23 table properties so far, a linear string of comparisons translates to 23*22/2 = 253 expected string comparisons in the worst case where all the properties are set. And a bunch more are coming because almost every table feature has a table property that decides whether it's active.

Still tho... 256 short string comparisons would be a few microseconds at most, so prob not worth the trouble to be smarter.

Collaborator Author (@zachschuermann), Nov 23, 2024:

Ah haha this is funny that I had the same thought and dismissed it as I felt like I was being too performance-concerned! Glad I'm not crazy :)

I think you're correct and the rust compiler doesn't (at least in 1.73 stable) implement anything fancy here - according to a quick chatGPT inquiry

really i think our options are:

  • leverage custom Deserialize from serde for constant-time lookup (/linear-time loop).
    • pros: well-known efficient solution, elegant annotations on the struct for e.g. renaming the field for deserialization
    • cons: Extra overhead from serde considering we have such a constrained case of only ever deserializing from a HashMap<String, String>, code complexity of custom Deserialize
  • leverage HashMap-based lookup instead of the match for constant-time. could even use a crate like phf to get compile-time generation of perfect hash functions for string matching.
    • pros: relatively well-known efficient solution, straightforward/understandable
    • cons: increasing code size/complexity increase (similar to serde approach above)
  • leave it as-is
    • pros: simplest/fastest
    • cons: worst performance

given these options I propose we move forward as-is since this isn't really a blocker, but open an issue to implement option (1) or (2) in the future - this would be a great 'first-issue' for someone to tackle and ramp up on a little piece of the project.
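
For concreteness, a rough sketch of what option (2) might look like. This assumes the illustrative TableProperties and parse_bool from the sketches above; LazyLock needs Rust 1.80+ (older code could use once_cell, and phf would be the compile-time perfect-hash variant):

    use std::collections::HashMap;
    use std::sync::LazyLock;

    type Setter = fn(&mut TableProperties, &str) -> Option<()>;

    fn set_append_only(props: &mut TableProperties, v: &str) -> Option<()> {
        parse_bool(v).map(|b| props.append_only = Some(b))
    }

    // One hash lookup per incoming key instead of walking every match arm.
    static SETTERS: LazyLock<HashMap<&'static str, Setter>> =
        LazyLock::new(|| HashMap::from([("delta.appendOnly", set_append_only as Setter)]));

    fn apply(props: &mut TableProperties, k: &str, v: &str) -> bool {
        SETTERS.get(k).is_some_and(|set| set(props, v).is_some())
    }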

kernel/src/table_properties/deserialize.rs (outdated review thread, resolved)

kernel/src/table_properties.rs (review thread, resolved)

#[test]
fn allow_unknown_keys() {
let properties = [("some_random_unknown_key".to_string(), "test".to_string())];

Collaborator:

Could be "fun" to do:

Suggested change
let properties = [("some_random_unknown_key".to_string(), "test".to_string())];
let properties = [("unknown_properties".to_string(), "messing with your mind".to_string())];

Collaborator Author:

haha will add!

Collaborator Author:

did something slightly shorter to keep on one line

@scovich (Collaborator) left a comment:

Nice. Just a couple questions (but could be follow-up items if you want)


/// Modes of column mapping a table can be in
#[derive(Serialize, Deserialize, Debug, Copy, Clone, PartialEq)]
#[derive(Debug, Default, EnumString, Serialize, Deserialize, Copy, Clone, PartialEq, Eq)]
#[strum(serialize_all = "camelCase")]
#[serde(rename_all = "camelCase")]

Collaborator:

aside: Do these classes still need serde support?

Collaborator Author:

unfortunately yea, since it is still a field in GlobalScanState (which derives Serialize/Deserialize)

kernel/src/table_properties.rs (review thread, resolved)
kernel/src/table_properties/deserialize.rs (outdated review thread, resolved)
@zachschuermann merged commit d2bb3b1 into delta-io:main on Nov 25, 2024 (22 checks passed)
@zachschuermann deleted the table-properties branch on November 25, 2024 at 17:02
@@ -99,7 +102,9 @@ pub(crate) fn parse_bool(s: &str) -> Option<bool> {
/// Deserialize a comma-separated list of column names into an `Option<Vec<ColumnName>>`. Returns
/// `Some` if successfully parses, and `None` otherwise.
pub(crate) fn parse_column_names(s: &str) -> Option<Vec<ColumnName>> {
ColumnName::parse_column_name_list(s).ok()
ColumnName::parse_column_name_list(s)
.map_err(|e| warn!("column name list failed to parse: {e}"))

Collaborator:

This looks like a job for inspect_err?

Collaborator Author:

ah yep good catch thanks - I'll just squeeze that into the next small PR
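
For reference, the inspect_err variant being suggested would look roughly like this (sketch; Result::inspect_err has been stable since Rust 1.76):

    pub(crate) fn parse_column_names(s: &str) -> Option<Vec<ColumnName>> {
        ColumnName::parse_column_name_list(s)
            .inspect_err(|e| warn!("column name list failed to parse: {e}"))
            .ok()
    }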

Comment on lines +196 to +205
fn test_parse_column_names() {
assert_eq!(
parse_column_names("`col 1`, col.a2,col3").unwrap(),
vec![
ColumnName::new(["col 1"]),
column_name!("col.a2"),
column_name!("col3")
]
);
}

Collaborator:

test_parse_column_name_list in column_names.rs already covers this pretty thoroughly?

Collaborator Author:

hm yea maybe that was unnecessary - i'll remove since this is basically just a pass-through to the column name parsing

Labels: breaking-change (Change that will require a version bump)
Projects: None yet
4 participants