8102: Bug fix: binlog heartbeat nextLogPosition field
Heartbeat binlog events must have a correct NextLogPosition field that matches up with the previous events sent in the stream; if it doesn't, the replica will shut down the binlog stream. #8087 fixed this issue, but didn't account for a heartbeat event sent after the initial Format Description event but before any user-initiated requests. The way to trigger this is to start replication on the replica, then run no commands on the primary and let the first heartbeat go out after the binlog stream has been up for 30s.
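A minimal sketch of the invariant the fix enforces, not Dolt's actual binlog code: every heartbeat carries the NextLogPosition implied by the events already sent, including when only the Format Description event has gone out. All type and field names below are hypothetical.

```go
// Hypothetical sketch: a streamer that records the position of every event it
// sends and stamps that position onto heartbeats.
package binlogsketch

type binlogEvent struct {
	EventType       byte
	NextLogPosition uint32 // position the replica expects the next event to start at
	Payload         []byte
}

type binlogStreamer struct {
	// lastLogPosition is updated every time an event (including the initial
	// Format Description event) is written to the replica.
	lastLogPosition uint32
}

// sendEvent writes an event and records its NextLogPosition so that later
// heartbeats stay consistent with the stream.
func (s *binlogStreamer) sendEvent(ev binlogEvent) {
	s.lastLogPosition = ev.NextLogPosition
	// ... serialize ev and write it to the replica's connection ...
}

// sendHeartbeat emits a heartbeat stamped with the current stream position.
// The failure mode described above was a heartbeat sent before any
// user-initiated events carrying a position inconsistent with the stream,
// which made the replica shut down the binlog connection.
func (s *binlogStreamer) sendHeartbeat() {
	s.sendEvent(binlogEvent{
		EventType:       0x1b, // HEARTBEAT_LOG_EVENT in the MySQL protocol
		NextLogPosition: s.lastLogPosition,
	})
}
```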
8101: go.mod: Migrate from gopkg.in/square/go-jose.v2 to gopkg.in/go-jose/go-jose.v2 and bump the version, picking up the fix for CVE-2024-28180.
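An illustrative go.mod excerpt for the swap; the pinned version shown is an assumption, not necessarily the exact string in Dolt's go.mod.

```go
// go.mod (illustrative excerpt; version is assumed to include the CVE-2024-28180 fix)
require gopkg.in/go-jose/go-jose.v2 v2.6.3 // replaces the archived gopkg.in/square/go-jose.v2
```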
8088: [no-release-info] Add additional tests for manipulating large JSON documents and fix corner case bugs in JSON_LOOKUP and JSON_INSERT
This PR adds additional tests for calling JSON_INSERT on large JSON documents. It also fixes three issues with IndexedJsonDocuments:
Some operations, such as wildcards on array paths (e.g. $[*]), are not supported by the new optimized implementation of JSON_LOOKUP. Instead of returning an error, we detect the unsupported path and fall back to the original implementation (see the sketch after this list).
Attempting to insert a value into a document could cause an infinite loop.
We would fail to read some keys from an IndexedJsonDocument's StaticMap if the document contained arrays.
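A minimal sketch of the fallback pattern from the first item above, using hypothetical function names rather than go-mysql-server's actual API: if the optimized IndexedJsonDocument lookup rejects a path feature it does not support, such as the array wildcard $[*], we fall back to the original implementation instead of surfacing the error.

```go
// Hypothetical sketch of "try the optimized lookup, fall back on unsupported paths".
package jsonsketch

import (
	"errors"
	"strings"
)

var errUnsupportedPath = errors.New("unsupported path for indexed lookup")

// indexedLookup stands in for the optimized, index-backed JSON_LOOKUP.
func indexedLookup(doc, path string) (interface{}, error) {
	if strings.Contains(path, "[*]") {
		// Wildcard array paths are not handled by the optimized walk.
		return nil, errUnsupportedPath
	}
	// ... resolve path using the document's index ...
	return nil, nil
}

// legacyLookup stands in for the original, unoptimized implementation.
func legacyLookup(doc, path string) (interface{}, error) {
	// ... fully materialize the document and resolve path ...
	return nil, nil
}

// lookup prefers the optimized path and degrades gracefully when it can't be used.
func lookup(doc, path string) (interface{}, error) {
	v, err := indexedLookup(doc, path)
	if errors.Is(err, errUnsupportedPath) {
		return legacyLookup(doc, path)
	}
	return v, err
}
```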
go-mysql-server
2583: [stats] Disable histogram bucket merging for now because it mutated shared memory
Merging buckets in the current format is unsafe:
we collect statistics for an index where two buckets have overlapping values
we execute a join using the index with the overlapping values, and use a merge algorithm to combine those buckets. The merged bucket is synthetic, but the statistics used for the join are also synthetic, so this all works as expected.
a future index scan selects the compressed range from before, accessing one of the synthetic buckets created by the join
at the end of the index scan, we get the error "invalid bucket type: *stats.Bucket" when adding the filtered histogram, which now contains a synthetic bucket, back to the implementor-type statistic
Edited mergeOverlappingBuckets so it no longer shares memory, but I'm also not sure merging buckets is a common performance win in most cases, so it is disabled for now.
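A minimal sketch of the no-shared-memory direction, with hypothetical types rather than the real stats package: the merged, synthetic buckets are built from copies so they never alias memory owned by the statistic that produced them.

```go
// Hypothetical sketch: merge overlapping histogram buckets without aliasing the inputs.
package statssketch

type bucket struct {
	UpperBound []interface{} // upper boundary key of the bucket
	RowCount   uint64
}

// mergeOverlappingBuckets combines runs of overlapping buckets into freshly
// allocated buckets; the input slice and its buckets are never mutated.
func mergeOverlappingBuckets(buckets []bucket) []bucket {
	if len(buckets) == 0 {
		return nil
	}
	merged := make([]bucket, 0, len(buckets))
	cur := copyBucket(buckets[0])
	for _, b := range buckets[1:] {
		if overlaps(cur, b) {
			cur.RowCount += b.RowCount
			cur.UpperBound = append([]interface{}(nil), b.UpperBound...)
			continue
		}
		merged = append(merged, cur)
		cur = copyBucket(b)
	}
	return append(merged, cur)
}

// copyBucket deep-copies a bucket so the result owns its boundary slice.
func copyBucket(b bucket) bucket {
	return bucket{
		UpperBound: append([]interface{}(nil), b.UpperBound...),
		RowCount:   b.RowCount,
	}
}

// overlaps is a placeholder for the real boundary comparison.
func overlaps(a, b bucket) bool {
	// ... compare a's upper boundary against b's lower boundary ...
	return false
}
```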
2580: Remove a duplicate column from information_schema
Just what it says on the tin. This duplicate column causes problems for DuckDB when attempting to connect to doltdb databases.