Rather than limiting the number of blocks that can be part of a bundle, we should limit the total acceptable output size of the bundle. This way we get more consistent throughput during periods of high traffic on Fuel.
Consider changing what we accumulate to bytes instead of blocks, i.e. start the bundling process once we have accumulated, e.g., 2MB of block data (instead of 3600 blocks) or 1h has passed since the last bundle.
So, we might configure the committer to:
Wait for 4MB of block data or 1h, and generate bundles only up to 2MB in size (11-12 blobs).