Fix #6102 (PR #6384)

Merged: 8 commits, Nov 6, 2024
97 changes: 39 additions & 58 deletions polkadot/runtime/parachains/src/assigner_coretime/mod.rs
@@ -236,17 +236,9 @@ pub mod pallet {
#[pallet::error]
pub enum Error<T> {
AssignmentsEmpty,
/// Assignments together exceeded 57600.
OverScheduled,
/// Assignments together less than 57600
UnderScheduled,
/// assign_core is only allowed to append new assignments at the end of already existing
/// ones.
/// ones or update the last entry.
DisallowedInsert,
/// Tried to insert a schedule for the same core and block number as an existing schedule
DuplicateInsert,
/// Tried to add an unsorted set of assignments
AssignmentsNotSorted,
}
}

@@ -387,67 +379,56 @@ impl<T: Config> Pallet<T> {

/// Append another assignment for a core.
///
/// Important only appending is allowed. Meaning, all already existing assignments must have a
/// begin smaller than the one passed here. This restriction exists, because it makes the
/// insertion O(1) and the author could not think of a reason, why this restriction should be
/// causing any problems. Inserting arbitrarily causes a `DispatchError::DisallowedInsert`
/// error. This restriction could easily be lifted if need be and in fact an implementation is
/// available
/// [here](https://github.com/paritytech/polkadot-sdk/pull/1694/commits/c0c23b01fd2830910cde92c11960dad12cdff398#diff-0c85a46e448de79a5452395829986ee8747e17a857c27ab624304987d2dde8baR386).
/// The problem is that insertion complexity then depends on the size of the existing queue,
/// which makes determining weights hard and could lead to issues like overweight blocks (at
/// least in theory).
/// Important: Only appending new entries or updating the last entry is allowed. Meaning,
/// all already existing assignments must have a `begin` smaller than or equal to the one
/// passed here.
/// Updating the last entry is supported so that a core assignment can be built up over
/// multiple calls to `assign_core`. Thus, if there is too much interlacing to fit into e.g. a
/// single UMP message, it can be split across multiple messages, each triggering a call to
/// `assign_core`, together forming the total assignment.
///
/// Inserting arbitrarily causes a `DispatchError::DisallowedInsert` error.
// With this restriction this function allows for O(1) complexity. It could easily be lifted
// if need be; in fact, an implementation is available
// [here](https://github.com/paritytech/polkadot-sdk/pull/1694/commits/c0c23b01fd2830910cde92c11960dad12cdff398#diff-0c85a46e448de79a5452395829986ee8747e17a857c27ab624304987d2dde8baR386).
// The problem is that insertion complexity then depends on the size of the existing queue,
// which makes determining weights hard and could lead to issues like overweight blocks (at
// least in theory).
pub fn assign_core(
core_idx: CoreIndex,
begin: BlockNumberFor<T>,
assignments: Vec<(CoreAssignment, PartsOf57600)>,
mut assignments: Vec<(CoreAssignment, PartsOf57600)>,
end_hint: Option<BlockNumberFor<T>>,
) -> Result<(), DispatchError> {
// There should be at least one assignment.
ensure!(!assignments.is_empty(), Error::<T>::AssignmentsEmpty);

// Checking for sort and unique manually, since we don't have access to iterator tools.
// This way of checking uniqueness only works since we also check sortedness.
assignments.iter().map(|x| &x.0).try_fold(None, |prev, cur| {
if prev.map_or(false, |p| p >= cur) {
Err(Error::<T>::AssignmentsNotSorted)
} else {
Ok(Some(cur))
}
})?;

// Check that the total parts between all assignments are equal to 57600
let parts_sum = assignments
.iter()
.map(|assignment| assignment.1)
.try_fold(PartsOf57600::ZERO, |sum, parts| {
sum.checked_add(parts).ok_or(Error::<T>::OverScheduled)
})?;
ensure!(parts_sum.is_full(), Error::<T>::UnderScheduled);
Review comment (Member): What happens if the schedule is split up, but we only receive one message in time? Will it break any assumptions if we don't have the full parts scheduled?

Reply (Member Author): No. I checked, the code does not really care whether the schedule is full or not.

Reply (Contributor): Probably worth a unit test in case this behaviour is changed in future. Should be very quick to add - just an additional core that is not fully assigned in the test.
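Below is a minimal sketch of the kind of test suggested above. It is not part of this PR; it mirrors the structure of the existing tests in tests.rs and only relies on `assign_core` now accepting a schedule whose parts do not sum to the full 57600.

```rust
#[test]
fn assign_core_accepts_partial_assignments() {
	let core_idx = CoreIndex(0);

	new_test_ext(GenesisConfigBuilder::default().build()).execute_with(|| {
		run_to_block(1, |n| if n == 1 { Some(Default::default()) } else { None });

		// Only half of the core (28800 of 57600 parts) is assigned. With the
		// former `UnderScheduled` check removed, this is expected to succeed.
		assert_ok!(CoretimeAssigner::assign_core(
			core_idx,
			BlockNumberFor::<Test>::from(11u32),
			vec![(CoreAssignment::Pool, PartsOf57600(28800))],
			None,
		));
	});
}
```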


CoreDescriptors::<T>::mutate(core_idx, |core_descriptor| {
let new_queue = match core_descriptor.queue {
Some(queue) => {
ensure!(begin > queue.last, Error::<T>::DisallowedInsert);

CoreSchedules::<T>::try_mutate((queue.last, core_idx), |schedule| {
if let Some(schedule) = schedule.as_mut() {
debug_assert!(schedule.next_schedule.is_none(), "queue.end was supposed to be the end, so the next item must be `None`!");
schedule.next_schedule = Some(begin);
ensure!(begin >= queue.last, Error::<T>::DisallowedInsert);

// Update queue if we are appending:
if begin > queue.last {
CoreSchedules::<T>::mutate((queue.last, core_idx), |schedule| {
if let Some(schedule) = schedule.as_mut() {
debug_assert!(schedule.next_schedule.is_none(), "queue.end was supposed to be the end, so the next item must be `None`!");
schedule.next_schedule = Some(begin);
} else {
defensive!("Queue end entry does not exist?");
}
});
}

CoreSchedules::<T>::mutate((begin, core_idx), |schedule| {
let assignments = if let Some(mut old_schedule) = schedule.take() {
old_schedule.assignments.append(&mut assignments);
old_schedule.assignments
} else {
defensive!("Queue end entry does not exist?");
}
CoreSchedules::<T>::try_mutate((begin, core_idx), |schedule| {
// It should already be impossible to overwrite an existing schedule due
// to strictly increasing block number. But we check here for safety and
// in case the design changes.
ensure!(schedule.is_none(), Error::<T>::DuplicateInsert);
*schedule =
Some(Schedule { assignments, end_hint, next_schedule: None });
Ok::<(), DispatchError>(())
})?;
Ok::<(), DispatchError>(())
})?;
assignments
};
*schedule = Some(Schedule { assignments, end_hint, next_schedule: None });
});

QueueDescriptor { first: queue.first, last: begin }
},
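To make the relaxed semantics concrete, here is a short test-style sketch (not part of this PR) that builds one fully interlaced schedule out of two `assign_core` calls targeting the same block. The block number and the 50/50 Pool/Idle split are example values only.

```rust
let core_idx = CoreIndex(0);

// First call inserts a new (partial) schedule entry at block 10.
assert_ok!(CoretimeAssigner::assign_core(
	core_idx,
	BlockNumberFor::<Test>::from(10u32),
	vec![(CoreAssignment::Pool, PartsOf57600(28800))],
	None,
));

// Second call targets the same block, i.e. the current last entry, so its
// assignments are appended to the existing schedule instead of being rejected
// with `DisallowedInsert`, completing the full 57600 parts.
assert_ok!(CoretimeAssigner::assign_core(
	core_idx,
	BlockNumberFor::<Test>::from(10u32),
	vec![(CoreAssignment::Idle, PartsOf57600(28800))],
	None,
));
```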
86 changes: 28 additions & 58 deletions polkadot/runtime/parachains/src/assigner_coretime/tests.rs
@@ -235,10 +235,7 @@ fn assign_core_works_with_prior_schedule() {
}

#[test]
// Invariants: We assume that CoreSchedules is append only and consumed. In other words new
// schedules inserted for a core must have a higher block number than all of the already existing
// schedules.
fn assign_core_enforces_higher_block_number() {
fn assign_core_enforces_higher_or_equal_block_number() {
let core_idx = CoreIndex(0);

new_test_ext(GenesisConfigBuilder::default().build()).execute_with(|| {
@@ -255,7 +252,7 @@ fn assign_core_enforces_higher_block_number() {
assert_ok!(CoretimeAssigner::assign_core(
core_idx,
BlockNumberFor::<Test>::from(15u32),
default_test_assignments(),
vec![(CoreAssignment::Idle, PartsOf57600(28800))],
None,
));

@@ -281,32 +278,27 @@ fn assign_core_enforces_higher_block_number() {
),
Error::<Test>::DisallowedInsert
);
// Calling assign_core again on the last entry should work:
assert_eq!(
CoretimeAssigner::assign_core(
core_idx,
BlockNumberFor::<Test>::from(15u32),
vec![(CoreAssignment::Pool, PartsOf57600(28800))],
None,
),
Ok(())
);
});
}

#[test]
fn assign_core_enforces_well_formed_schedule() {
let para_id = ParaId::from(1u32);
let core_idx = CoreIndex(0);

new_test_ext(GenesisConfigBuilder::default().build()).execute_with(|| {
run_to_block(1, |n| if n == 1 { Some(Default::default()) } else { None });

let empty_assignments: Vec<(CoreAssignment, PartsOf57600)> = vec![];
let overscheduled = vec![
(CoreAssignment::Pool, PartsOf57600::FULL),
(CoreAssignment::Task(para_id.into()), PartsOf57600::FULL),
];
let underscheduled = vec![(CoreAssignment::Pool, PartsOf57600(30000))];
let not_unique = vec![
(CoreAssignment::Pool, PartsOf57600::FULL / 2),
(CoreAssignment::Pool, PartsOf57600::FULL / 2),
];
let not_sorted = vec![
(CoreAssignment::Task(para_id.into()), PartsOf57600(19200)),
(CoreAssignment::Pool, PartsOf57600(19200)),
(CoreAssignment::Idle, PartsOf57600(19200)),
];

// Attempting assign_core with malformed assignments such that all error cases
// are tested
@@ -319,42 +311,6 @@
),
Error::<Test>::AssignmentsEmpty
);
assert_noop!(
CoretimeAssigner::assign_core(
core_idx,
BlockNumberFor::<Test>::from(11u32),
overscheduled,
None,
),
Error::<Test>::OverScheduled
);
assert_noop!(
CoretimeAssigner::assign_core(
core_idx,
BlockNumberFor::<Test>::from(11u32),
underscheduled,
None,
),
Error::<Test>::UnderScheduled
);
assert_noop!(
CoretimeAssigner::assign_core(
core_idx,
BlockNumberFor::<Test>::from(11u32),
not_unique,
None,
),
Error::<Test>::AssignmentsNotSorted
);
assert_noop!(
CoretimeAssigner::assign_core(
core_idx,
BlockNumberFor::<Test>::from(11u32),
not_sorted,
None,
),
Error::<Test>::AssignmentsNotSorted
);
});
}

@@ -374,7 +330,14 @@ fn next_schedule_always_points_to_next_work_plan_item() {
Schedule { next_schedule: Some(start_4), ..default_test_schedule() };
let expected_schedule_4 =
Schedule { next_schedule: Some(start_5), ..default_test_schedule() };
let expected_schedule_5 = default_test_schedule();
let expected_schedule_5 = Schedule {
next_schedule: None,
end_hint: None,
assignments: vec![
(CoreAssignment::Pool, PartsOf57600(28800)),
(CoreAssignment::Idle, PartsOf57600(28800)),
],
};

// Call assign_core for each of five schedules
assert_ok!(CoretimeAssigner::assign_core(
Expand Down Expand Up @@ -408,7 +371,14 @@ fn next_schedule_always_points_to_next_work_plan_item() {
assert_ok!(CoretimeAssigner::assign_core(
core_idx,
BlockNumberFor::<Test>::from(start_5),
default_test_assignments(),
vec![(CoreAssignment::Pool, PartsOf57600(28800))],
None,
));
// Test updating last entry once more:
assert_ok!(CoretimeAssigner::assign_core(
core_idx,
BlockNumberFor::<Test>::from(start_5),
vec![(CoreAssignment::Idle, PartsOf57600(28800))],
None,
));

12 changes: 12 additions & 0 deletions prdoc/pr_6384.prdoc
@@ -0,0 +1,12 @@
title: Relax requirements on `assign_core`.
doc:
- audience: Runtime Dev
description: |-
Relax requirements for `assign_core` so that it accepts updates for the last scheduled entry.
This will allow the coretime chain to split up assignments into multiple
messages, which allows for interlacing down to single block granularity.

Fixes: https://github.com/paritytech/polkadot-sdk/issues/6102
crates:
- name: polkadot-runtime-parachains
bump: major
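As a rough sketch of the calling pattern this enables: a large interlaced assignment can be split into chunks that each fit into a single message, all targeting the same `begin`. The helper below is hypothetical (the name, the chunk size parameter, and the assumption that it lives alongside the pallet in mod.rs so the pallet's types are in scope are all made up for illustration); in practice the coretime chain sends each chunk as a separate UMP message rather than calling the pallet directly.

```rust
// Hypothetical helper, assumed to live alongside the pallet so that `Config`,
// `Pallet`, `CoreIndex`, `CoreAssignment` and `PartsOf57600` are in scope.
fn assign_core_in_chunks<T: Config>(
	core_idx: CoreIndex,
	begin: BlockNumberFor<T>,
	assignments: Vec<(CoreAssignment, PartsOf57600)>,
	chunk_size: usize,
) -> Result<(), DispatchError> {
	// Each chunk targets the same `begin`, so the first call inserts a new
	// schedule entry and every later call updates that (last) entry.
	for chunk in assignments.chunks(chunk_size.max(1)) {
		Pallet::<T>::assign_core(core_idx, begin, chunk.to_vec(), None)?;
	}
	Ok(())
}
```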