
Update then burn in verifreg's universal_receiver_hook #1219

Open
Stebalien opened this issue Feb 23, 2023 · 0 comments
This should be safe right now, since `burn` calls a trusted actor, but it could become an issue if we're not careful. Ideally we'd:

  1. Update the state in a transaction.
  2. Burn, rolling back the state update on failure.

See:

```rust
pub fn universal_receiver_hook(
    rt: &mut impl Runtime,
    params: UniversalReceiverParams,
) -> Result<AllocationsResponse, ActorError> {
    // Accept only the data cap token.
    rt.validate_immediate_caller_is(&[DATACAP_TOKEN_ACTOR_ADDR])?;
    let my_id = rt.message().receiver().id().unwrap();
    let curr_epoch = rt.curr_epoch();

    // Validate receiver hook payload.
    let tokens_received = validate_tokens_received(&params, my_id)?;
    let client = tokens_received.from;

    // Extract and validate allocation request from the operator data.
    let reqs: AllocationRequests =
        deserialize(&tokens_received.operator_data, "allocation requests")?;
    let mut datacap_total = DataCap::zero();

    // Construct new allocation records.
    let mut new_allocs = Vec::with_capacity(reqs.allocations.len());
    for req in &reqs.allocations {
        validate_new_allocation(req, rt.policy(), curr_epoch)?;
        // Require the provider for new allocations to be a miner actor.
        // This doesn't matter much, but is more ergonomic to fail rather than lock up datacap.
        check_miner_id(rt, req.provider)?;
        new_allocs.push(Allocation {
            client,
            provider: req.provider,
            data: req.data,
            size: req.size,
            term_min: req.term_min,
            term_max: req.term_max,
            expiration: req.expiration,
        });
        datacap_total += DataCap::from(req.size.0);
    }

    let st: State = rt.state()?;
    let mut claims = st.load_claims(rt.store())?;
    let mut updated_claims = Vec::<(ClaimID, Claim)>::new();
    let mut extension_total = DataCap::zero();
    for req in &reqs.extensions {
        // Note: we don't check the client address here, by design.
        // Any client can spend datacap to extend an existing claim.
        let claim = state::get_claim(&mut claims, req.provider, req.claim)?
            .with_context_code(ExitCode::USR_NOT_FOUND, || {
                format!("no claim {} for provider {}", req.claim, req.provider)
            })?;
        let policy = rt.policy();
        validate_claim_extension(req, claim, policy, curr_epoch)?;
        // The claim's client is not changed to be the address of the token sender.
        // It remains the original allocation client.
        updated_claims.push((req.claim, Claim { term_max: req.term_max, ..*claim }));
        datacap_total += DataCap::from(claim.size.0);
        extension_total += DataCap::from(claim.size.0);
    }

    // Allocation size must match the tokens received exactly (we don't return change).
    let tokens_as_datacap = tokens_to_datacap(&tokens_received.amount);
    if datacap_total != tokens_as_datacap {
        return Err(actor_error!(
            illegal_argument,
            "total allocation size {} must match data cap amount received {}",
            datacap_total,
            tokens_as_datacap
        ));
    }

    // Burn the received datacap tokens spent on extending existing claims.
    // The tokens spent on new allocations will be burnt when claimed later, or refunded.
    burn(rt, &extension_total)?;

    // Partial success isn't supported yet, but these results make space for it in the future.
    let allocation_results = BatchReturn::ok(new_allocs.len() as u32);
    let extension_results = BatchReturn::ok(updated_claims.len() as u32);

    // Save new allocations and updated claims.
    let ids = rt.transaction(|st: &mut State, rt| {
        let ids = st.insert_allocations(rt.store(), client, new_allocs)?;
        st.put_claims(rt.store(), updated_claims)?;
        Ok(ids)
    })?;

    Ok(AllocationsResponse { allocation_results, extension_results, new_allocations: ids })
}
```
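The two steps above (update in a transaction, then burn, rolling back on failure) can be sketched outside the actor framework. This is a minimal, self-contained model, not the real `fil_actors_runtime` API: `State`, `Runtime`, `transaction`, and the `burn_ok` flag are hypothetical stand-ins, and the explicit snapshot/restore stands in for the VM reverting state when the call returns an error.

```rust
// Hypothetical stand-in for actor state; only `Clone` is needed for snapshots.
#[derive(Clone)]
struct State {
    allocations: u32,
}

// Hypothetical stand-in for the actor runtime.
struct Runtime {
    state: State,
}

impl Runtime {
    /// Apply `f` to a staged copy of the state and commit it only on success,
    /// mirroring how an actor transaction stages a state update.
    fn transaction<F>(&mut self, f: F) -> Result<(), String>
    where
        F: FnOnce(&mut State) -> Result<(), String>,
    {
        let mut staged = self.state.clone();
        f(&mut staged)?;
        self.state = staged;
        Ok(())
    }
}

/// The proposed ordering: commit the state update first, then burn.
/// `burn_ok` models whether the burn send succeeds.
fn update_then_burn(rt: &mut Runtime, burn_ok: bool) -> Result<(), String> {
    let snapshot = rt.state.clone();

    // 1. Update the state in a transaction.
    rt.transaction(|st| {
        st.allocations += 1;
        Ok(())
    })?;

    // 2. Burn; on failure, roll back the state update and propagate the
    //    error (in the real actor, aborting the call has the same effect).
    if !burn_ok {
        rt.state = snapshot;
        return Err("burn failed".to_string());
    }
    Ok(())
}
```

With this ordering, a failed burn can never leave the allocations and claims committed, regardless of what the (possibly untrusted) callee does.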

anorth added the P2 label Feb 26, 2023
github-project-automation bot moved this to 📋 Backlog in Network nv19 Feb 26, 2023
anorth added the good first issue label Mar 2, 2023
anorth added the cleanup label May 27, 2024