client: add getBlobsV1 to the client to support CL blob import #3711

Open
wants to merge 1 commit into base: master
Conversation

@g11tech (Contributor) commented Sep 29, 2024

@jochem-brouwer (Member) left a comment

Strong support for the API, since we need to bring the bandwidth requirements of running a node down 😄

Two comments.

```js
pruneBlobsAndProofsCache() {
  const pruneLength = this.blobsAndProofsByHash.size - BLOBS_AND_PROOFS_CACHE_LENGTH
  let pruned = 0
  for (const versionedHash of this.blobsAndProofsByHash.keys()) {
```

Map.keys() returns keys in insertion order, right? Could this get a comment noting that it deletes in insertion order? It is not immediately clear: at first glance it looks like entries are deleted "randomly", but since keys() iterates in insertion order the loop actually evicts the first-inserted (oldest) entries.

@g11tech (Contributor, Author) replied

Yes, it deletes based on first-seen.
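The behavior the reviewer describes can be shown in isolation. A minimal sketch (not the PR's code) illustrating that a JS `Map` iterates `keys()` in insertion order, so deleting while iterating evicts the oldest entries first:

```typescript
// Sketch of FIFO pruning over a Map; names and cap are illustrative only.
const cache = new Map<string, number>()
cache.set('a', 1)
cache.set('b', 2)
cache.set('c', 3)

const CACHE_MAX = 2
let pruneLength = cache.size - CACHE_MAX
for (const key of cache.keys()) {
  if (pruneLength <= 0) break
  // keys() yields in insertion order, so 'a' (first-seen) is deleted first
  cache.delete(key)
  pruneLength--
}
```

After the loop, only the two most recently inserted entries (`'b'`, `'c'`) remain, which is exactly the "deletes based on first-seen" semantics discussed above.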

```diff
@@ -34,6 +35,8 @@ const MIN_GAS_PRICE = BigInt(100000000) // .1 GWei
 const TX_MAX_DATA_SIZE = 128 * 1024 // 128KB
 const MAX_POOL_SIZE = 5000
 const MAX_TXS_PER_ACCOUNT = 100
+// keep for 1 epoch deep to handle reorgs
+const BLOBS_AND_PROOFS_CACHE_LENGTH = 6 * 32 * 1
```
@jochem-brouwer (Member) commented

The comment seems to imply we cache blobs for one epoch, but that is not the case: we cap the txpool blob cache at 6 (max blobs per block) × 32 (epoch length) entries. These can include both pending and (in the future) included transactions.

Also a slight nit: maybe define 32 as an epoch-length constant (if we don't have one) and pull the 6 from the max-blobs-per-block param?
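The nit could look something like the following. Note that `SLOTS_PER_EPOCH` and `MAX_BLOBS_PER_BLOCK` are hypothetical constant names chosen for illustration, not identifiers from the diff:

```typescript
// Hypothetical named constants replacing the magic numbers 32 and 6;
// in practice these would come from the chain/common params.
const SLOTS_PER_EPOCH = 32
const MAX_BLOBS_PER_BLOCK = 6

// cache up to one epoch's worth of blobs-and-proofs to handle reorgs
const BLOBS_AND_PROOFS_CACHE_LENGTH = MAX_BLOBS_PER_BLOCK * SLOTS_PER_EPOCH
```

This keeps the computed value identical (192) while making the derivation of the cap self-documenting.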

2 participants