This repository has been archived by the owner on Nov 15, 2023. It is now read-only.

Use same fmt and clippy configs as in Polkadot (#3004)
* Copy rustfmt.toml from Polkadot master

Signed-off-by: Oliver Tale-Yazdi <[email protected]>

* Format with new config

Signed-off-by: Oliver Tale-Yazdi <[email protected]>

* Add Polkadot clippy config

Signed-off-by: Oliver Tale-Yazdi <[email protected]>

* Update Cargo.lock

Looks like paritytech/polkadot#7611 did not
correctly update the lockfile. Maybe a different Rust Version?!

Signed-off-by: Oliver Tale-Yazdi <[email protected]>

---------

Signed-off-by: Oliver Tale-Yazdi <[email protected]>
ggwpez authored Aug 14, 2023
1 parent 1b890d8 commit 234b821
Showing 60 changed files with 332 additions and 232 deletions.
1 change: 1 addition & 0 deletions .cargo/config.toml
@@ -29,4 +29,5 @@ rustflags = [
"-Aclippy::needless_option_as_deref", # false positives
"-Aclippy::derivable_impls", # false positives
"-Aclippy::stable_sort_primitive", # prefer stable sort
"-Aclippy::extra-unused-type-parameters", # stylistic
]
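As an illustration of why `stable_sort_primitive` is allowed rather than fixed: the lint flags stable sorts on primitive slices because `sort_unstable` is faster and allocation-free, but for primitives both produce identical output, so suppressing it is purely a style choice. A minimal sketch (hypothetical function names, not from this repository):

```rust
// `clippy::stable_sort_primitive` would flag the `v.sort()` call below and
// suggest `sort_unstable`; the config above allows it ("prefer stable sort").
fn sort_stable(mut v: Vec<u32>) -> Vec<u32> {
    v.sort(); // stable sort: what the lint flags on primitive element types
    v
}

fn sort_unstable(mut v: Vec<u32>) -> Vec<u32> {
    v.sort_unstable(); // what the lint suggests instead
    v
}

fn main() {
    // For primitives the two orders are identical; only performance differs.
    assert_eq!(sort_stable(vec![3, 1, 2]), vec![1, 2, 3]);
    assert_eq!(sort_unstable(vec![3, 1, 2]), vec![1, 2, 3]);
}
```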
12 changes: 8 additions & 4 deletions .rustfmt.toml
@@ -11,14 +11,18 @@ reorder_imports = true
# Consistency
newline_style = "Unix"

# Format comments
comment_width = 100
wrap_comments = true

# Misc
binop_separator = "Back"
chain_width = 80
match_arm_blocks = false
spaces_around_ranges = false
binop_separator = "Back"
reorder_impl_items = false
match_arm_leading_pipes = "Preserve"
match_arm_blocks = false
match_block_trailing_comma = true
reorder_impl_items = false
spaces_around_ranges = false
trailing_comma = "Vertical"
trailing_semicolon = false
use_field_init_shorthand = true
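To make a few of these options concrete, here is a hypothetical snippet (not from the repository) already formatted the way `use_field_init_shorthand = true` and `match_block_trailing_comma = true` would render it:

```rust
struct Point {
    x: i64,
    y: i64,
}

fn classify(p: &Point) -> &'static str {
    // `trailing_comma = "Vertical"` keeps a comma after each vertical arm.
    match (p.x, p.y) {
        (0, 0) => "origin",
        (_, 0) => "x-axis",
        (0, _) => "y-axis",
        _ => "elsewhere",
    }
}

fn main() {
    let x = 3;
    let y = 0;
    // field-init shorthand: `Point { x, y }` instead of `Point { x: x, y: y }`
    let p = Point { x, y };
    assert_eq!(classify(&p), "x-axis");
}
```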
3 changes: 2 additions & 1 deletion client/cli/src/lib.rs
@@ -307,7 +307,8 @@ pub struct RunCmd {
}

impl RunCmd {
/// Create a [`NormalizedRunCmd`] which merges the `collator` cli argument into `validator` to have only one.
/// Create a [`NormalizedRunCmd`] which merges the `collator` cli argument into `validator` to
/// have only one.
pub fn normalize(&self) -> NormalizedRunCmd {
let mut new_base = self.base.clone();

3 changes: 2 additions & 1 deletion client/collator/src/service.rs
@@ -175,7 +175,8 @@ where

/// Fetch the collation info from the runtime.
///
/// Returns `Ok(Some(_))` on success, `Err(_)` on error or `Ok(None)` if the runtime api isn't implemented by the runtime.
/// Returns `Ok(Some(_))` on success, `Err(_)` on error or `Ok(None)` if the runtime api isn't
/// implemented by the runtime.
pub fn fetch_collation_info(
&self,
block_hash: Block::Hash,
20 changes: 11 additions & 9 deletions client/consensus/common/src/lib.rs
@@ -53,12 +53,14 @@ pub struct ParachainCandidate<B> {
pub proof: sp_trie::StorageProof,
}

/// A specific parachain consensus implementation that can be used by a collator to produce candidates.
/// A specific parachain consensus implementation that can be used by a collator to produce
/// candidates.
///
/// The collator will call [`Self::produce_candidate`] every time there is a free core for the parachain
/// this collator is collating for. It is the job of the consensus implementation to decide if this
/// specific collator should build a candidate for the given relay chain block. The consensus
/// implementation could, for example, check whether this specific collator is part of a staked set.
/// The collator will call [`Self::produce_candidate`] every time there is a free core for the
/// parachain this collator is collating for. It is the job of the consensus implementation to
/// decide if this specific collator should build a candidate for the given relay chain block. The
/// consensus implementation could, for example, check whether this specific collator is part of a
/// staked set.
#[async_trait::async_trait]
pub trait ParachainConsensus<B: BlockT>: Send + Sync + dyn_clone::DynClone {
/// Produce a new candidate at the given parent block and relay-parent blocks.
@@ -94,8 +96,8 @@ impl<B: BlockT> ParachainConsensus<B> for Box<dyn ParachainConsensus<B> + Send +
/// Parachain specific block import.
///
/// This is used to set `block_import_params.fork_choice` to `false` as long as the block origin is
/// not `NetworkInitialSync`. The best block for parachains is determined by the relay chain. Meaning
/// we will update the best block, as it is included by the relay-chain.
/// not `NetworkInitialSync`. The best block for parachains is determined by the relay chain.
/// Meaning we will update the best block, as it is included by the relay-chain.
pub struct ParachainBlockImport<Block: BlockT, BI, BE> {
inner: BI,
monitor: Option<SharedData<LevelMonitor<Block, BE>>>,
@@ -232,8 +234,8 @@ pub struct PotentialParent<B: BlockT> {
/// a set of [`PotentialParent`]s which could be potential parents of a new block with this
/// relay-parent according to the search parameters.
///
/// A parachain block is a potential parent if it is either the last included parachain block, the pending
/// parachain block (when `max_depth` >= 1), or all of the following hold:
/// A parachain block is a potential parent if it is either the last included parachain block, the
/// pending parachain block (when `max_depth` >= 1), or all of the following hold:
/// * its parent is a potential parent
/// * its relay-parent is within `ancestry_lookback` of the targeted relay-parent.
/// * the block number is within `max_depth` blocks of the included block
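The rules in the doc comment above can be sketched as a single predicate. Everything here is a hypothetical stand-in (the `Candidate` struct and field names are invented for illustration, not the crate's real types):

```rust
// Hypothetical flattened view of a parachain block candidate.
struct Candidate {
    parent_is_potential: bool,
    relay_parent_distance: u32, // distance from the targeted relay-parent
    depth_from_included: u32,   // para blocks since the last included block
}

// Sketch of the textual rules: included block, or pending block (when
// max_depth >= 1), or all three recursive conditions hold.
fn is_potential_parent(
    c: &Candidate,
    is_included_block: bool,
    is_pending_block: bool,
    max_depth: u32,
    ancestry_lookback: u32,
) -> bool {
    is_included_block ||
        (is_pending_block && max_depth >= 1) ||
        (c.parent_is_potential &&
            c.relay_parent_distance <= ancestry_lookback &&
            c.depth_from_included <= max_depth)
}

fn main() {
    // The included block is always a potential parent.
    let c = Candidate { parent_is_potential: false, relay_parent_distance: 9, depth_from_included: 9 };
    assert!(is_potential_parent(&c, true, false, 0, 0));
    // A descendant too far from the relay-parent is rejected.
    let far = Candidate { parent_is_potential: true, relay_parent_distance: 2, depth_from_included: 1 };
    assert!(!is_potential_parent(&far, false, false, 1, 1));
}
```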
4 changes: 2 additions & 2 deletions client/consensus/common/src/parachain_consensus.rs
@@ -176,8 +176,8 @@ where
///
/// # Note
///
/// This will access the backend of the parachain and thus, this future should be spawned as blocking
/// task.
/// This will access the backend of the parachain and thus, this future should be spawned as
/// blocking task.
pub async fn run_parachain_consensus<P, R, Block, B>(
para_id: ParaId,
parachain: Arc<P>,
4 changes: 2 additions & 2 deletions client/consensus/proposer/src/lib.rs
@@ -62,8 +62,8 @@ pub trait ProposerInterface<Block: BlockT> {
/// `ParachainInherentData`.
///
/// Also specify any required inherent digests, the maximum proposal duration,
/// and the block size limit in bytes. See the documentation on [`sp_consensus::Proposer::propose`]
/// for more details on how to interpret these parameters.
/// and the block size limit in bytes. See the documentation on
/// [`sp_consensus::Proposer::propose`] for more details on how to interpret these parameters.
///
/// The `InherentData` and `Digest` are left deliberately general in order to accommodate
/// all possible collator selection algorithms or inherent creation mechanisms,
3 changes: 2 additions & 1 deletion client/consensus/relay-chain/src/lib.rs
@@ -23,7 +23,8 @@
//!
//! 1. Each node that sees itself as a collator is free to build a parachain candidate.
//!
//! 2. This parachain candidate is send to the parachain validators that are part of the relay chain.
//! 2. This parachain candidate is send to the parachain validators that are part of the relay
//! chain.
//!
//! 3. The parachain validators validate at most X different parachain candidates, where X is the
//! total number of parachain validators.
13 changes: 7 additions & 6 deletions client/network/src/lib.rs
@@ -87,7 +87,8 @@ impl Decode for BlockAnnounceData {
impl BlockAnnounceData {
/// Validate that the receipt, statement and announced header match.
///
/// This will not check the signature, for this you should use [`BlockAnnounceData::check_signature`].
/// This will not check the signature, for this you should use
/// [`BlockAnnounceData::check_signature`].
fn validate(&self, encoded_header: Vec<u8>) -> Result<(), Validation> {
let candidate_hash =
if let CompactStatement::Seconded(h) = self.statement.unchecked_payload() {
@@ -192,9 +193,9 @@ pub type BlockAnnounceValidator<Block, RCInterface> =

/// Parachain specific block announce validator.
///
/// This is not required when the collation mechanism itself is sybil-resistant, as it is a spam protection
/// mechanism used to prevent nodes from dealing with unbounded numbers of blocks. For sybil-resistant
/// collation mechanisms, this will only slow things down.
/// This is not required when the collation mechanism itself is sybil-resistant, as it is a spam
/// protection mechanism used to prevent nodes from dealing with unbounded numbers of blocks. For
/// sybil-resistant collation mechanisms, this will only slow things down.
///
/// This block announce validator is required if the parachain is running
/// with the relay chain provided consensus to make sure each node only
@@ -472,8 +473,8 @@ impl AssumeSybilResistance {
/// announcements which come tagged with seconded messages.
///
/// This is useful for backwards compatibility when upgrading nodes: old nodes will continue
/// to broadcast announcements with seconded messages, so these announcements shouldn't be rejected
/// and the peers not punished.
/// to broadcast announcements with seconded messages, so these announcements shouldn't be
/// rejected and the peers not punished.
pub fn allow_seconded_messages() -> Self {
AssumeSybilResistance(true)
}
19 changes: 10 additions & 9 deletions client/pov-recovery/src/lib.rs
@@ -19,18 +19,19 @@
//! A parachain needs to build PoVs that are send to the relay chain to progress. These PoVs are
//! erasure encoded and one piece of it is stored by each relay chain validator. As the relay chain
//! decides on which PoV per parachain to include and thus, to progess the parachain it can happen
//! that the block corresponding to this PoV isn't propagated in the parachain network. This can have
//! several reasons, either a malicious collator that managed to include its own PoV and doesn't want
//! to share it with the rest of the network or maybe a collator went down before it could distribute
//! the block in the network. When something like this happens we can use the PoV recovery algorithm
//! implemented in this crate to recover a PoV and to propagate it with the rest of the network.
//! that the block corresponding to this PoV isn't propagated in the parachain network. This can
//! have several reasons, either a malicious collator that managed to include its own PoV and
//! doesn't want to share it with the rest of the network or maybe a collator went down before it
//! could distribute the block in the network. When something like this happens we can use the PoV
//! recovery algorithm implemented in this crate to recover a PoV and to propagate it with the rest
//! of the network.
//!
//! It works in the following way:
//!
//! 1. For every included relay chain block we note the backed candidate of our parachain. If the
//! block belonging to the PoV is already known, we do nothing. Otherwise we start
//! a timer that waits for a randomized time inside a specified interval before starting to recover
//! the PoV.
//! a timer that waits for a randomized time inside a specified interval before starting to
//! recover the PoV.
//!
//! 2. If between starting and firing the timer the block is imported, we skip the recovery of the
//! PoV.
@@ -39,8 +40,8 @@
//!
//! 4a. After it is recovered, we restore the block and import it.
//!
//! 4b. Since we are trying to recover pending candidates, availability is not guaranteed. If the block
//! PoV is not yet available, we retry.
//! 4b. Since we are trying to recover pending candidates, availability is not guaranteed. If the
//! block PoV is not yet available, we retry.
//!
//! If we need to recover multiple PoV blocks (which should hopefully not happen in real life), we
//! make sure that the blocks are imported in the correct order.
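Steps 1 and 2 of the recovery flow above amount to "schedule a randomized delay, and cancel if the block arrives first". A minimal sketch under stated assumptions: the delay derivation from the candidate hash is a deterministic stand-in for illustration, not the crate's actual RNG, and `recovery_delay`/`should_recover` are hypothetical names:

```rust
use std::time::Duration;

// Pick a delay inside [min, max] so that collators don't all start
// recovering the same PoV at the same instant (step 1).
fn recovery_delay(candidate_hash: u64, min: Duration, max: Duration) -> Duration {
    let window = (max - min).as_millis() as u64;
    min + Duration::from_millis(candidate_hash % (window + 1))
}

// Step 2: if the block was imported while the timer was pending, skip recovery.
fn should_recover(block_already_imported: bool) -> bool {
    !block_already_imported
}

fn main() {
    let d = recovery_delay(12345, Duration::from_millis(100), Duration::from_millis(600));
    assert!(d >= Duration::from_millis(100) && d <= Duration::from_millis(600));
    assert!(should_recover(false));
    assert!(!should_recover(true));
}
```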
14 changes: 8 additions & 6 deletions client/relay-chain-inprocess-interface/src/lib.rs
@@ -44,7 +44,8 @@ use sp_state_machine::{Backend as StateBackend, StorageValue};
/// The timeout in seconds after that the waiting for a block should be aborted.
const TIMEOUT_IN_SECONDS: u64 = 6;

/// Provides an implementation of the [`RelayChainInterface`] using a local in-process relay chain node.
/// Provides an implementation of the [`RelayChainInterface`] using a local in-process relay chain
/// node.
#[derive(Clone)]
pub struct RelayChainInProcessInterface {
full_client: Arc<FullClient>,
@@ -188,8 +189,8 @@ impl RelayChainInterface for RelayChainInProcessInterface {

/// Wait for a given relay chain block in an async way.
///
/// The caller needs to pass the hash of a block it waits for and the function will return when the
/// block is available or an error occurred.
/// The caller needs to pass the hash of a block it waits for and the function will return when
/// the block is available or an error occurred.
///
/// The waiting for the block is implemented as follows:
///
@@ -199,10 +200,11 @@ impl RelayChainInterface for RelayChainInProcessInterface {
///
/// 3. If the block isn't imported yet, add an import notification listener.
///
/// 4. Poll the import notification listener until the block is imported or the timeout is fired.
/// 4. Poll the import notification listener until the block is imported or the timeout is
/// fired.
///
/// The timeout is set to 6 seconds. This should be enough time to import the block in the current
/// round and if not, the new round of the relay chain already started anyway.
/// The timeout is set to 6 seconds. This should be enough time to import the block in the
/// current round and if not, the new round of the relay chain already started anyway.
async fn wait_for_block(&self, hash: PHash) -> RelayChainResult<()> {
let mut listener =
match check_block_in_chain(self.backend.clone(), self.full_client.clone(), hash)? {
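The listen-then-poll-with-timeout pattern described in the `wait_for_block` doc comment can be sketched with std primitives (the real implementation is async and listens to import notifications; `wait_for_flag` and the condvar pair are illustrative stand-ins):

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;
use std::time::Duration;

// Wait until the "imported" flag is set or the timeout fires, whichever
// comes first; returns whether the block arrived in time.
fn wait_for_flag(pair: &(Mutex<bool>, Condvar), timeout: Duration) -> bool {
    let (lock, cvar) = pair;
    let guard = lock.lock().unwrap();
    let (guard, res) = cvar
        .wait_timeout_while(guard, timeout, |imported| !*imported)
        .unwrap();
    *guard && !res.timed_out()
}

fn main() {
    let pair = Arc::new((Mutex::new(false), Condvar::new()));
    let pair2 = Arc::clone(&pair);
    thread::spawn(move || {
        // Simulate the block being imported shortly afterwards.
        *pair2.0.lock().unwrap() = true;
        pair2.1.notify_all();
    });
    assert!(wait_for_flag(&pair, Duration::from_secs(5)));
}
```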
3 changes: 2 additions & 1 deletion client/relay-chain-rpc-interface/src/lib.rs
@@ -184,7 +184,8 @@ impl RelayChainInterface for RelayChainRpcInterface {

/// Wait for a given relay chain block
///
/// The hash of the block to wait for is passed. We wait for the block to arrive or return after a timeout.
/// The hash of the block to wait for is passed. We wait for the block to arrive or return after
/// a timeout.
///
/// Implementation:
/// 1. Register a listener to all new blocks.
@@ -403,9 +403,11 @@ impl ReconnectingWebsocketWorker {

/// Run this worker to drive notification streams.
/// The worker does the following:
/// - Listen for [`RpcDispatcherMessage`], perform requests and register new listeners for the notification streams
/// - Distribute incoming import, best head and finalization notifications to registered listeners.
/// If an error occurs during sending, the receiver has been closed and we remove the sender from the list.
/// - Listen for [`RpcDispatcherMessage`], perform requests and register new listeners for the
/// notification streams
/// - Distribute incoming import, best head and finalization notifications to registered
/// listeners. If an error occurs during sending, the receiver has been closed and we remove
/// the sender from the list.
/// - Find a new valid RPC server to connect to in case the websocket connection is terminated.
/// If the worker is not able to connect to an RPC server from the list, the worker shuts down.
async fn run(mut self) {
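The distribution step described above ("if an error occurs during sending, the receiver has been closed and we remove the sender from the list") maps directly onto std channels, where `Sender::send` errors exactly when the receiving end is gone. A minimal sketch with a hypothetical `distribute` helper:

```rust
use std::sync::mpsc::{channel, Sender};

// Send one notification to every registered listener, dropping senders
// whose receiver has been closed.
fn distribute<T: Clone>(listeners: &mut Vec<Sender<T>>, notification: T) {
    listeners.retain(|tx| tx.send(notification.clone()).is_ok());
}

fn main() {
    let (alive_tx, alive_rx) = channel();
    let (dead_tx, dead_rx) = channel::<u32>();
    drop(dead_rx); // this listener went away
    let mut listeners = vec![alive_tx, dead_tx];
    distribute(&mut listeners, 7u32);
    assert_eq!(alive_rx.try_recv().unwrap(), 7);
    assert_eq!(listeners.len(), 1); // the closed listener was removed
}
```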
3 changes: 2 additions & 1 deletion client/relay-chain-rpc-interface/src/rpc_client.rs
@@ -399,7 +399,8 @@ impl RelayChainRpcClient {
.await
}

/// Fetch the hash of the validation code used by a para, making the given `OccupiedCoreAssumption`.
/// Fetch the hash of the validation code used by a para, making the given
/// `OccupiedCoreAssumption`.
pub async fn parachain_host_validation_code_hash(
&self,
at: RelayHash,
6 changes: 4 additions & 2 deletions client/service/src/lib.rs
@@ -377,7 +377,8 @@ where
})
}

/// Creates a new background task to wait for the relay chain to sync up and retrieve the parachain header
/// Creates a new background task to wait for the relay chain to sync up and retrieve the parachain
/// header
fn warp_sync_get<B, RCInterface>(
para_id: ParaId,
relay_chain_interface: RCInterface,
@@ -413,7 +414,8 @@
receiver
}

/// Waits for the relay chain to have finished syncing and then gets the parachain header that corresponds to the last finalized relay chain block.
/// Waits for the relay chain to have finished syncing and then gets the parachain header that
/// corresponds to the last finalized relay chain block.
async fn wait_for_target_block<B, RCInterface>(
sender: oneshot::Sender<<B as BlockT>::Header>,
para_id: ParaId,
10 changes: 5 additions & 5 deletions pallets/aura-ext/src/lib.rs
@@ -23,9 +23,9 @@
//! check the constructed block on the relay chain.
//!
//! ```
//!# struct Runtime;
//!# struct Executive;
//!# struct CheckInherents;
//! # struct Runtime;
//! # struct Executive;
//! # struct CheckInherents;
//! cumulus_pallet_parachain_system::register_validate_block! {
//! Runtime = Runtime,
//! BlockExecutor = cumulus_pallet_aura_ext::BlockExecutor::<Runtime, Executive>,
@@ -75,8 +75,8 @@ pub mod pallet {
/// Serves as cache for the authorities.
///
/// The authorities in AuRa are overwritten in `on_initialize` when we switch to a new session,
/// but we require the old authorities to verify the seal when validating a PoV. This will always
/// be updated to the latest AuRa authorities in `on_finalize`.
/// but we require the old authorities to verify the seal when validating a PoV. This will
/// always be updated to the latest AuRa authorities in `on_finalize`.
#[pallet::storage]
pub(crate) type Authorities<T: Config> = StorageValue<
_,
32 changes: 16 additions & 16 deletions pallets/collator-selection/src/weights.rs
@@ -85,12 +85,12 @@ impl<T: frame_system::Config> WeightInfo for SubstrateWeight<T> {
/// Storage: Session NextKeys (r:1 w:0)
/// Proof Skipped: Session NextKeys (max_values: None, max_size: None, mode: Measured)
/// Storage: CollatorSelection Invulnerables (r:1 w:1)
/// Proof: CollatorSelection Invulnerables (max_values: Some(1), max_size: Some(641), added: 1136, mode: MaxEncodedLen)
/// Storage: CollatorSelection Candidates (r:1 w:1)
/// Proof: CollatorSelection Candidates (max_values: Some(1), max_size: Some(4802), added: 5297, mode: MaxEncodedLen)
/// Storage: System Account (r:1 w:1)
/// Proof: System Account (max_values: None, max_size: Some(128), added: 2603, mode: MaxEncodedLen)
/// The range of component `b` is `[1, 19]`.
/// Proof: CollatorSelection Invulnerables (max_values: Some(1), max_size: Some(641), added:
/// 1136, mode: MaxEncodedLen) Storage: CollatorSelection Candidates (r:1 w:1)
/// Proof: CollatorSelection Candidates (max_values: Some(1), max_size: Some(4802), added: 5297,
/// mode: MaxEncodedLen) Storage: System Account (r:1 w:1)
/// Proof: System Account (max_values: None, max_size: Some(128), added: 2603, mode:
/// MaxEncodedLen) The range of component `b` is `[1, 19]`.
/// The range of component `c` is `[1, 99]`.
fn add_invulnerable(b: u32, c: u32) -> Weight {
// Proof Size summary in bytes:
@@ -109,8 +109,8 @@ impl<T: frame_system::Config> WeightInfo for SubstrateWeight<T> {
.saturating_add(Weight::from_parts(0, 53).saturating_mul(c.into()))
}
/// Storage: CollatorSelection Invulnerables (r:1 w:1)
/// Proof: CollatorSelection Invulnerables (max_values: Some(1), max_size: Some(3202), added: 3697, mode: MaxEncodedLen)
/// The range of component `b` is `[1, 100]`.
/// Proof: CollatorSelection Invulnerables (max_values: Some(1), max_size: Some(3202), added:
/// 3697, mode: MaxEncodedLen) The range of component `b` is `[1, 100]`.
fn remove_invulnerable(b: u32) -> Weight {
// Proof Size summary in bytes:
// Measured: `119 + b * (32 ±0)`
@@ -172,12 +172,12 @@ impl WeightInfo for () {
/// Storage: Session NextKeys (r:1 w:0)
/// Proof Skipped: Session NextKeys (max_values: None, max_size: None, mode: Measured)
/// Storage: CollatorSelection Invulnerables (r:1 w:1)
/// Proof: CollatorSelection Invulnerables (max_values: Some(1), max_size: Some(641), added: 1136, mode: MaxEncodedLen)
/// Storage: CollatorSelection Candidates (r:1 w:1)
/// Proof: CollatorSelection Candidates (max_values: Some(1), max_size: Some(4802), added: 5297, mode: MaxEncodedLen)
/// Storage: System Account (r:1 w:1)
/// Proof: System Account (max_values: None, max_size: Some(128), added: 2603, mode: MaxEncodedLen)
/// The range of component `b` is `[1, 19]`.
/// Proof: CollatorSelection Invulnerables (max_values: Some(1), max_size: Some(641), added:
/// 1136, mode: MaxEncodedLen) Storage: CollatorSelection Candidates (r:1 w:1)
/// Proof: CollatorSelection Candidates (max_values: Some(1), max_size: Some(4802), added: 5297,
/// mode: MaxEncodedLen) Storage: System Account (r:1 w:1)
/// Proof: System Account (max_values: None, max_size: Some(128), added: 2603, mode:
/// MaxEncodedLen) The range of component `b` is `[1, 19]`.
/// The range of component `c` is `[1, 99]`.
fn add_invulnerable(b: u32, c: u32) -> Weight {
// Proof Size summary in bytes:
@@ -196,8 +196,8 @@ impl WeightInfo for () {
.saturating_add(Weight::from_parts(0, 53).saturating_mul(c.into()))
}
/// Storage: CollatorSelection Invulnerables (r:1 w:1)
/// Proof: CollatorSelection Invulnerables (max_values: Some(1), max_size: Some(3202), added: 3697, mode: MaxEncodedLen)
/// The range of component `b` is `[1, 100]`.
/// Proof: CollatorSelection Invulnerables (max_values: Some(1), max_size: Some(3202), added:
/// 3697, mode: MaxEncodedLen) The range of component `b` is `[1, 100]`.
fn remove_invulnerable(b: u32) -> Weight {
// Proof Size summary in bytes:
// Measured: `119 + b * (32 ±0)`