
feat: cost tracking #146

Open · wants to merge 30 commits into master
Conversation

MujkicA (Contributor) commented Oct 24, 2024

Closes #110

This PR introduces cost tracking and retrieval of bundle cost information through the API.

Implementation

The cost contribution of each transaction belonging to a bundle is accumulated incrementally each time a transaction is finalized. This avoids recomputing the total cost of all transactions on every API call, which would lead to inefficient response times.

Response Object

pub struct BundleCost {
    pub cost: u128,            // total cost of the bundle
    pub size: u64,             // total size of the data contained in the bundle
    pub da_block_height: u64,  // DA height of the final transaction
    pub start_height: u64,     // starting height of the contained block range
    pub end_height: u64,       // ending height of the contained block range
}

Method

async fn get_costs(&self, from_height: u32, limit: usize) -> Result<Vec<BundleCost>>;

Endpoint

GET /v1/costs?from_block_height=<HEIGHT>&limit=<LIMIT>

@MujkicA MujkicA self-assigned this Oct 24, 2024
@MujkicA MujkicA marked this pull request as ready for review October 28, 2024 00:22
Review threads on committer/src/api.rs, e2e/src/committer.rs, and packages/storage/src/postgres.rs were marked outdated and resolved.
MujkicA (Author) commented Oct 28, 2024

@rymnc let me know if this will work on the core side.

Regarding the fields total_cost and total_size, I assume this wasn't a hard requirement. It would require some additional effort on our side to reconstruct these fields if we lose the DB and to guarantee they are always accurate. From what I gather, they don't directly feed into the pricing algorithm, so I omitted them for now.

Instead of requesting the n last bundles, I went with a different approach: you tell us up to which height you have data, and the committer responds with whatever came after.

rymnc (Member) commented Oct 28, 2024

@rymnc let me know if this will work on the core side.

Regarding the fields total_cost and total_size, I assume this wasn't a hard requirement. It would require some additional effort on our side to reconstruct these fields if we lose the DB and to guarantee they are always accurate. From what I gather, they don't directly feed into the pricing algorithm, so I omitted them for now.

Instead of requesting the n last bundles, I went with a different approach: you tell us up to which height you have data, and the committer responds with whatever came after.

Hmm, we could track total_cost and total_size on our own, but we would hope the block committer keeps track of this information, because we use it as a data source for new fuel-core nodes, especially ones that need this pricing information.

cc'ing: @MitchTurner @xgreenx

MitchTurner (Member) commented:

Instead of requesting the n last bundles I've went with a different approach where you tell us up until which height you have data and the committer respond with whatever came after.

@MujkicA
This seems fine to me. When we rehydrate the algorithm from an old state, we will have to request "old" DA costs that we have possibly received before.

On a related note, it sounds like you aren't making any guarantees to the decompression guys about the order in which the blocks are recorded. I'm not sure what our conversation was before, but I think it might (with some small modifications) be fine for us to receive things out of order; however, that complicates what we query for.
E.g. if we are up to date through block 30, we tell you 30 and you might return a bundle of [32, 33, 36, 37]. We could record that, but we would still report 30 afterwards because we still need 31, which means you might send us two bundles next, [31, 34, 35] and [32, 33, 36, 37]. That seems a little wasteful, but maybe we don't care!

MujkicA commented Oct 29, 2024

@MitchTurner
Currently, the only way fragments can end up posted out of order is if the L2 block lookback window was increased (e.g. because of misconfiguration) such that blocks that were ignored earlier become targets for posting.
There are also some optimisations we plan to tackle in the near future that could lead to out-of-order fragments if an L1 re-org occurs.

Note that both of these scenarios are edge cases, and since bundles usually consist of multiple fragments, they're even less likely to be finalised out of order. The API will only report the cost for a block range once the corresponding bundle has been marked as finalized (meaning all transactions carrying its fragments have settled). I guess this doesn't absolve you from having to implement support for it, but maybe the details are relevant to you.

Alternatively, we could report costs only up to the point where we have a contiguous range. This would delay reporting more recent costs in those edge cases, so it's up to you.

At any rate, I'll deploy whatever version you prefer to the test networks so that we can test the integration.

MitchTurner commented:

Alternatively, we could only report back costs up until we have a contiguous range. This would cause a delay in reporting more recent costs in those edge cases so it's up to you.

I'd rather receive them ASAP than in order.

@MujkicA MujkicA added the enhancement New feature or request label Oct 30, 2024
@@ -678,6 +685,136 @@ impl Postgres {
        Ok(())
    }

    pub(crate) async fn _update_costs(
        &self,
        cost_per_tx: Vec<TransactionCostUpdate>,
A reviewer (Contributor) commented:

I'd change this fn so it is called for each TransactionCostUpdate separately. Reason being that now that you're no longer using db transactions, one bundle could pass, another could revert, and you'd be left in a partially updated state.

If you process them tx by tx, then a failure won't update anything on the db side.

The reviewer added:

Another suggestion to make it even more robust is to merge together the tx state updating and the cost updating into one method seeing as they're always called together in the state listener.

That way you can wrap everything in one transaction and you cannot have a situation where you've updated the state but failed to update the costs. If anything fails you'll get another chance the next time the state listener runs.

The longer db tx locking time might be worth the retry-ability we'd get.

digorithm (Member) commented:

Clippy seems to be complaining about something 👀

Labels: enhancement (New feature or request)
Successfully merging this pull request may close: Provide endpoint for querying L1 costs
5 participants