feat(katana): retain transactions in pool until mined #2630
Conversation
Walkthrough
Ohayo, sensei! This pull request introduces significant changes to the Katana transaction pool, retaining transactions until they are mined.

Changes
Possibly related PRs
Suggested reviewers
Actionable comments posted: 4
🧹 Outside diff range and nitpick comments (11)
crates/katana/pool/src/lib.rs (2)
4-4: Ohayo sensei! New modules align well with the architecture.

The addition of the `pending` and `subscription` modules provides a clean separation of concerns for managing transaction visibility and state tracking. This modular approach will make it easier to:
- Track transaction states throughout their lifecycle
- Maintain visibility of transactions during mining
- Handle subscription-based notifications of state changes

Also applies to: 6-6
15-15: Ohayo! Stream-based approach improves transaction handling.

The switch from `PendingTx` to `PendingTransactions` enables better transaction state management through streaming, which is crucial for maintaining transaction visibility during the mining process. This change aligns perfectly with the PR's objective by ensuring transactions remain accessible until they're fully mined.
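For illustration, a minimal sketch of how a consumer might drive such a pending-transaction stream, in the same style as the crate's tests (using `futures::executor::block_on_stream`; `MockTx` is a hypothetical stand-in, not the real katana type):

```rust
// Sketch only: consuming a pending-transaction stream without removing
// entries from the pool. `MockTx` is a placeholder type.
use futures::executor::block_on_stream;
use futures::stream;

#[derive(Debug)]
struct MockTx {
    hash: u64,
}

fn main() {
    // Stand-in for `pool.pending_transactions()`: yields txs for execution
    // while the pool itself keeps them until they are mined.
    let pending = stream::iter(vec![MockTx { hash: 1 }, MockTx { hash: 2 }]);

    for tx in block_on_stream(pending) {
        println!("pending tx {}", tx.hash);
    }
}
```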
crates/katana/pipeline/src/stage/sequencing.rs (2)
56-59: Ohayo sensei! The changes align well with the PR objectives.

The switch to `pending_transactions()` instead of `add_listener()` effectively addresses the transaction visibility issue during mining. This ensures transactions remain queryable until they're fully mined.

Consider adding debug-level logging when transactions transition between states (pending → mining → mined) to aid in troubleshooting potential timing issues.
Line range hint 82-91: Enhance error logging for unexpected task completion.

The error logging could be more detailed to help with debugging production issues. Consider including task-specific context in the error messages.

```diff
- error!(target: "pipeline", reason = ?res, "Messaging task finished unexpectedly.");
+ error!(target: "pipeline", reason = ?res, task = "messaging", "Task finished unexpectedly with configuration: {:?}", self.messaging_config);

- error!(target: "pipeline", reason = ?res, "Block production task finished unexpectedly.");
+ error!(target: "pipeline", reason = ?res, task = "block_production", "Task finished unexpectedly with miner state");
```

crates/katana/pool/src/ordering.rs (1)
150-154: Consider improving test readability with collect, sensei!

While the async transformation looks good, we could make the test more readable by collecting the stream into a Vec first:

```diff
-let pendings = executor::block_on_stream(pool.pending_transactions());
-pendings.into_iter().zip(txs).for_each(|(pending, tx)| {
+let pendings = executor::block_on_stream(pool.pending_transactions()).collect::<Vec<_>>();
+pendings.into_iter().zip(txs).for_each(|(pending, tx)| {
     assert_eq!(pending.tx.as_ref(), &tx);
 });
```

crates/katana/pool/src/subscription.rs (1)
13-16: Ohayo, sensei! Consider using an async mutex for `txs`.

Since the `Subscription` struct operates within asynchronous contexts, using a synchronous mutex (`parking_lot::Mutex`) may lead to blocking the thread. An asynchronous mutex like `tokio::sync::Mutex` is more suitable for this scenario.

Suggestion: update the mutex to `tokio::sync::Mutex`:

```diff
- use parking_lot::Mutex;
+ use tokio::sync::Mutex;

  pub struct Subscription<T, O: PoolOrd> {
      txs: Mutex<BTreeSet<PendingTx<T, O>>>,
      receiver: mpsc::UnboundedReceiver<PendingTx<T, O>>,
  }
```

Remember to adjust the locking calls accordingly:

```diff
- let mut txs = this.txs.lock();
+ let mut txs = this.txs.lock().await;
```

Note that since `poll_next` cannot be an `async` function (due to the `Stream` trait requirements), you may need to use `tokio::sync::Mutex::blocking_lock()` if necessary.

crates/katana/pool/src/pending.rs (3)
11-12: Fix duplicated word in documentation comment.

Ohayo, sensei! There's a minor typo in the documentation comment: the word "by" is duplicated in "sorted by by its priority."

91-91: Update comment to reference the correct batch.

Ohayo, sensei! The comment should refer to the "second batch" of transactions instead of the "first batch."

97-98: Correct grammar in comment.

Ohayo, sensei! The comment should say "transactions" instead of "transaction" to reflect the plural.
crates/katana/pool/src/pool.rs (2)
157-158: Address TODO: implement cache for rejected transactions.

Ohayo, sensei! The TODO comment indicates the need to create a cache for rejected transactions to comply with the RPC spec for `getTransactionStatus`. Implementing this will allow clients to retrieve the status of rejected transactions as per the specification.

Would you like assistance in implementing the cache for rejected transactions, or should we open a new GitHub issue to track this task?

Line range hint 387-393: Update test to reflect non-destructive `pending_transactions`.

Ohayo, sensei! The test `pool_operations` assumes that calling `pending_transactions` removes transactions from the pool. With the new implementation, transactions are not removed when retrieved. The assertions checking for an empty pool will fail. Please update the test to reflect the current behavior.

Here's the suggested change:

```diff
  // take all transactions
  let _ = pool.pending_transactions();

- // all txs should've been removed
- assert!(pool.size() == 0);
- assert!(pool.inner.transactions.read().is_empty());
- assert!(txs.iter().all(|tx| pool.get(tx.hash()).is_none()));
+ // transactions should still be in the pool
+ assert_eq!(pool.size(), txs.len());
+ assert!(!pool.inner.transactions.read().is_empty());
+ assert!(txs.iter().all(|tx| pool.get(tx.hash()).is_some()));
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
⛔ Files ignored due to path filters (1)
- Cargo.lock is excluded by !**/*.lock

📒 Files selected for processing (9)
- crates/katana/core/src/service/mod.rs (5 hunks)
- crates/katana/pipeline/src/stage/sequencing.rs (1 hunks)
- crates/katana/pool/Cargo.toml (1 hunks)
- crates/katana/pool/src/lib.rs (4 hunks)
- crates/katana/pool/src/ordering.rs (3 hunks)
- crates/katana/pool/src/pending.rs (1 hunks)
- crates/katana/pool/src/pool.rs (12 hunks)
- crates/katana/pool/src/subscription.rs (1 hunks)
- crates/katana/rpc/rpc/src/starknet/mod.rs (4 hunks)
🔇 Additional comments (19)
crates/katana/pool/Cargo.toml (2)
16-16: Ohayo! The tokio sync feature addition looks good, sensei!
The explicit inclusion of the "sync" feature is essential for the transaction pool management improvements, particularly for coordinating transaction visibility during mining.

20-20: The futures-util addition is well-placed, sensei!
Adding futures-util as a dev dependency is appropriate for testing the async transaction handling. Let's verify its usage:
✅ Verification successful
Ohayo! The futures-util dependency is perfectly placed and used correctly, sensei!
The verification confirms that:
- futures-util is correctly added as a dev-dependency in Cargo.toml
- It's being used in testing code within pending.rs for poll assertions
- The dependency placement aligns with its test-focused usage
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify the usage of futures-util in tests
# Expected: Find test files using futures-util utilities
rg -t rust "use futures_util" crates/katana/pool/src
Length of output: 54
Script:
#!/bin/bash
# Let's check for any futures-util usage patterns more broadly
rg -t rust "futures_util::" crates/katana/pool/src
# Also check if it's used in tests directory specifically
rg -t rust "futures_util" crates/katana/pool/src/tests
# Let's see what test files exist to understand the structure
fd . crates/katana/pool/src/tests
# Check the Cargo.toml to see if it's actually a dev-dependency
cat crates/katana/pool/Cargo.toml
Length of output: 1019
crates/katana/pool/src/lib.rs (2)
50-52: Ohayo! Well-documented streaming interface.
The new `pending_transactions` method with its clear documentation explains the streaming behavior, which is essential for maintaining transaction visibility during mining.

62-64: Documentation and error handling still needed.
The previous review comment about adding documentation and error handling for `remove_transactions` is still applicable.
crates/katana/pipeline/src/stage/sequencing.rs (1)
56-59: Verify error handling for pending transactions stream.
The pending transactions stream should be resilient to temporary disruptions.
Let's check how errors are handled in the pending transactions implementation:
✅ Verification successful
Let me search for more specific error handling implementation in the pending transactions stream.
Let me check one more aspect of the implementation to see how the subscription part works when the iterator is exhausted.
Let me check one final aspect - the error handling in the transaction pool implementation.
Let me check one final aspect of the implementation - specifically the poll_next method.
Ohayo sensei! After analyzing the pending transactions implementation, I can now provide a conclusive response.
The pending transactions stream implementation is properly resilient to disruptions.
The implementation shows a robust two-phase approach:
- The first phase uses an iterator over existing transactions (`this.all.next()`)
- If exhausted, it falls back to subscription-based polling (`this.subscription.poll_next_unpin(cx)`)

The error handling is comprehensive:
- Pool-level errors are properly categorized and handled via the `PoolError` enum
- Transaction validation errors are propagated through `PoolResult`
- The stream implementation gracefully handles transitions between phases (see the sketch below)
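To make the two-phase pattern concrete, here is a minimal sketch of a `poll_next` that first drains a pre-collected iterator and then defers to a subscription stream. The types are deliberately simplified assumptions, not the actual `PendingTransactions` implementation:

```rust
use std::pin::Pin;
use std::task::{Context, Poll};

use futures_util::{Stream, StreamExt};

/// Sketch: yields already-known items first, then falls back to a live
/// subscription for items added later.
struct PendingStream<T, S> {
    all: std::vec::IntoIter<T>,
    subscription: S,
}

impl<T, S> Stream for PendingStream<T, S>
where
    S: Stream<Item = T> + Unpin,
{
    type Item = T;

    fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
        let this = self.get_mut();
        // Phase 1: drain the snapshot of transactions that existed at call time.
        if let Some(item) = this.all.next() {
            return Poll::Ready(Some(item));
        }
        // Phase 2: poll the subscription for transactions added afterwards.
        this.subscription.poll_next_unpin(cx)
    }
}
```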
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for error handling in pending transactions implementation
rg -A 5 "pending_transactions|PendingTransactions"
Length of output: 16113
Script:
#!/bin/bash
# Search for error handling in the pending transactions stream implementation
ast-grep --pattern 'impl$$$Stream for PendingTransactions$$$'
# Also check the implementation details
rg -A 10 "impl.*Stream.*PendingTransactions"
Length of output: 835
Script:
#!/bin/bash
# Search for the full implementation of poll_next and subscription handling
rg -A 20 "fn poll_next.*PendingTransactions"
# Also check the Subscription implementation
rg -A 10 "impl.*Subscription"
Length of output: 1442
Script:
#!/bin/bash
# Search for error handling in the pool implementation
rg -A 10 "PoolError|PoolResult"
# Also check the full implementation of poll_next for PendingTransactions
ast-grep --pattern 'fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'\''_'\''>) -> Poll<Option<Self::Item>> {
$$$
}'
Length of output: 7986
Script:
#!/bin/bash
# Search for the poll_next implementation in pending.rs
rg -A 15 "fn poll_next.*self: Pin<&mut Self>, cx: &mut Context" crates/katana/pool/src/pending.rs
Length of output: 491
crates/katana/pool/src/ordering.rs (3)
128-129: Ohayo sensei! The futures executor import looks good!
The addition of `futures::executor` is necessary for the new async transaction handling approach, aligning well with the PR's objective of improving transaction visibility during mining.

182-182: Excellent implementation of async transaction collection, sensei!
The use of `block_on_stream` with immediate collection to Vec is clean and maintains the thorough verification of tip-based ordering. The implementation aligns perfectly with the PR's goal of improving transaction handling.

Line range hint 128-182: Verify consistent usage of async transaction handling across tests.
Let's ensure all test files have been updated to use the new async pattern consistently.
✅ Verification successful
The previous results show the migration to async, but we need to verify the actual implementations and test patterns to ensure consistency. This will help us confirm if all test files are properly using the async pattern.
Async transaction handling is consistently implemented across the codebase.
The verification shows that:
- All transaction handling uses the new async pattern via `pending_transactions()`
- No instances of the old synchronous `take_transactions()` were found
- The async implementation is consistently used across:
  - Pool implementation (`pool.rs`)
  - Transaction ordering (`ordering.rs`)
  - Pipeline sequencing (`sequencing.rs`)
  - RPC interfaces (`dev.rs`, `torii.rs`)
- Test files properly use `block_on_stream` for async handling
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for consistent usage of async transaction handling in tests
# Look for any remaining synchronous transaction collection patterns that might have been missed
# Search for old synchronous patterns
echo "Checking for potential remaining synchronous transaction collection patterns..."
rg -l 'take_transactions\(\)' 'crates/katana'
# Search for new async patterns to verify consistent implementation
echo "Verifying consistent async implementation..."
rg -l 'pending_transactions\(\)' 'crates/katana'
Length of output: 542
Script:
#!/bin/bash
# Let's examine the implementation of pending_transactions() to verify consistency
echo "Checking pending_transactions implementations..."
rg -A 5 'pending_transactions' crates/katana
# Let's also check for any test files to ensure they're using the async pattern
echo -e "\nChecking test files for transaction handling patterns..."
fd -e rs test crates/katana -x rg -l 'transactions|pending' {}
Length of output: 10558
crates/katana/core/src/service/mod.rs (2)
30-34: Generics enhance flexibility of BlockProductionTask.
Ohayo, sensei! The introduction of generics `EF` and `O` in the `BlockProductionTask` struct, with constraints `EF: ExecutorFactory` and `O: PoolOrd<Transaction = ExecutableTxWithHash>`, improves the flexibility and type safety of the code by allowing different executor factories and transaction ordering strategies.
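As an illustration of that pattern, a stripped-down sketch of a task generic over an executor factory and a pool ordering. The traits and fields below are simplified stand-ins, not the real katana definitions:

```rust
// Sketch only: simplified stand-ins for the real katana traits and types.
struct ExecutableTxWithHash; // placeholder transaction type

trait ExecutorFactory {
    // stand-in for constructing a block executor
    fn describe(&self) -> &'static str;
}

trait PoolOrd {
    type Transaction;
}

// The task is generic over the executor factory and the pool ordering,
// with the ordering pinned to the executable transaction type.
struct BlockProductionTask<EF, O>
where
    EF: ExecutorFactory,
    O: PoolOrd<Transaction = ExecutableTxWithHash>,
{
    executor_factory: EF,
    ordering: O,
}
```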
83-83: Verify correct timing of transaction removal.
Ohayo, sensei! At line 83, the code removes mined transactions from the pool using `this.pool.remove_transactions(&outcome.txs);`. Given the PR's goal to reduce visibility gaps by retaining transactions until they are fully mined, please verify that this removal doesn't reintroduce the issue by ensuring transactions are only removed after they are fully committed.
crates/katana/pool/src/pending.rs (3)
14-20: `PendingTransactions` struct is well-defined.
Ohayo, sensei! The struct definition is clear and encapsulates the necessary fields with proper visibility.

22-37: Implementation of the `Stream` trait is correct.
Ohayo, sensei! The `poll_next` method correctly implements the `Stream` trait, efficiently handling pending transactions and new subscriptions.

105-138: `subscription_stream_wakeup` test is effective and well-structured.
Ohayo, sensei! The test accurately verifies that the stream wakes up when new transactions are added, ensuring correct asynchronous behavior.
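A rough analogue of that kind of wakeup test, using futures-util's noop waker to assert Pending/Ready transitions on a channel-backed stream. This mirrors the idea only; it assumes the `tokio`, `tokio-stream`, and `futures-util` crates and is not the actual katana test:

```rust
use std::task::Poll;

use futures_util::task::noop_waker_ref;
use futures_util::StreamExt;
use tokio_stream::wrappers::UnboundedReceiverStream;

fn main() {
    let (tx, rx) = tokio::sync::mpsc::unbounded_channel::<u64>();
    let mut stream = UnboundedReceiverStream::new(rx);
    let mut cx = std::task::Context::from_waker(noop_waker_ref());

    // Nothing sent yet: the stream is pending.
    assert!(matches!(stream.poll_next_unpin(&mut cx), Poll::Pending));

    // Sending a value makes the next poll ready.
    tx.send(42).unwrap();
    assert!(matches!(stream.poll_next_unpin(&mut cx), Poll::Ready(Some(42))));
}
```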
crates/katana/pool/src/pool.rs (2)
185-189: Potential performance issue in the `pending_transactions` method.
Ohayo, sensei! Cloning all transactions in `pending_transactions` with `self.inner.transactions.read().clone()` could be inefficient for large pools. This may lead to high memory usage and slow performance. As previously noted, consider iterating over references or using a more efficient data structure to avoid cloning the entire set.
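One possible direction, sketched here under assumptions about the pool layout: store transactions as `Arc` values so that building the pending snapshot clones reference counts rather than transaction bodies.

```rust
use std::collections::BTreeSet;
use std::sync::Arc;

// Sketch: if the pool keeps `Arc<Tx>` entries, a snapshot for the pending
// stream only bumps reference counts instead of deep-cloning every tx.
#[derive(PartialEq, Eq, PartialOrd, Ord)]
struct Tx {
    priority: u64,
    payload: Vec<u8>,
}

struct PoolInner {
    transactions: BTreeSet<Arc<Tx>>,
}

impl PoolInner {
    fn pending_snapshot(&self) -> Vec<Arc<Tx>> {
        // Cloning an Arc is O(1); the underlying transactions are shared.
        self.transactions.iter().cloned().collect()
    }
}
```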
428-462: Tests for the `remove_transactions` method look good.
Ohayo, sensei! The added tests for the `remove_transactions` method are comprehensive and effectively verify that transactions are correctly removed from the pool.
crates/katana/rpc/rpc/src/starknet/mod.rs (4)
15-15: Ohayo sensei! Importing `TransactionPool` correctly.
The addition of `TransactionPool` to the import statement is appropriate and necessary for the updated transaction handling logic.

26-26: Ohayo sensei! Including `TxWithHash` for enhanced transaction information.
The inclusion of `TxWithHash` in the imports supports improved transaction handling by providing more detailed transaction information.

568-569: Ohayo sensei! Consistent transaction status retrieval from the pool.
The code appropriately checks the transaction pool when a transaction hash is not found elsewhere and returns `TransactionStatus::Received` if it exists in the pool.

434-436: Ohayo sensei! Ensure correct conversion of transactions from the pool.
The code correctly retrieves the transaction from the pool when it's not found elsewhere and converts it using `TxWithHash::from(tx.as_ref())`. To ensure type correctness and prevent potential issues with references, please verify that the usage of `as_ref()` is necessary.

Run the following script to inspect the types and confirm that `tx.as_ref()` aligns with the expected input for `TxWithHash::from`:
✅ Verification successful
Ohayo sensei! Usage of `as_ref()` is correct and necessary.
The `as_ref()` usage is correct because:
- The pool returns `Option<Arc<ExecutableTxWithHash>>`
- `TxWithHash::from` is implemented for both owned and referenced `ExecutableTxWithHash`
- The `ExecutableTxWithHash` struct has `#[derive(AsRef)]`, making `as_ref()` a valid operation
- Using `as_ref()` avoids unnecessary cloning of the `Arc`-wrapped transaction

A tiny illustration of this pattern follows.
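This sketch shows the general mechanism only, with simplified placeholder types rather than the real katana ones: `Arc::as_ref()` yields a plain reference, which a `From<&T>` impl can consume without cloning the underlying data.

```rust
use std::sync::Arc;

// Simplified stand-ins for ExecutableTxWithHash and TxWithHash.
struct ExecutableTx {
    hash: u64,
}

struct TxView {
    hash: u64,
}

impl From<&ExecutableTx> for TxView {
    fn from(tx: &ExecutableTx) -> Self {
        TxView { hash: tx.hash }
    }
}

fn main() {
    let pooled: Arc<ExecutableTx> = Arc::new(ExecutableTx { hash: 7 });
    // `as_ref()` borrows the Arc's contents without cloning them,
    // matching the `From<&ExecutableTx>` impl above.
    let view = TxView::from(pooled.as_ref());
    assert_eq!(view.hash, 7);
}
```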
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify the types involved in transaction retrieval and conversion.
# Expected: `tx` is of a type that, after applying `as_ref()`, matches the expected input for `TxWithHash::from`.
# Check the return type of `self.inner.pool.get(hash)`
rg -A 2 'fn get\(&self, hash: TxHash\) -> Option\<(?P<type>.+)\>' crates/katana/pool/src/lib.rs
# Check the implementation of `From` trait for `TxWithHash`
rg -A 2 'impl From<&(?P<source_type>.+)> for TxWithHash' crates/katana_primitives/transaction.rs
Length of output: 437
Script:
#!/bin/bash
# Let's fix the regex and search for the correct files
# Check the return type of pool.get()
rg -A 2 'fn get.*TxHash.*Option' crates/katana/pool/src/
# Check From implementation for TxWithHash
fd -e rs transaction.rs
rg -A 2 'impl From.*for TxWithHash'
Length of output: 1647
Script:
#!/bin/bash
# Let's check the actual type definitions and implementations
# Check the Transaction type in pool
rg -B 2 'type Transaction' crates/katana/pool/src/lib.rs
# Check ExecutableTxWithHash definition
rg -A 5 'struct ExecutableTxWithHash' crates/katana/primitives/src/transaction.rs
# Check if there are any other From implementations for TxWithHash
rg 'impl.*From.*TxWithHash' crates/katana/primitives/src/transaction.rs
Length of output: 568
Script:
#!/bin/bash
# Let's check the pool's transaction type and its implementation
# Check the actual transaction type returned by the pool
rg -A 5 'pub struct Pool' crates/katana/pool/src/pool.rs
# Check the implementation of PoolTransaction trait
rg -A 5 'impl PoolTransaction' crates/katana/pool/src/
# Check the full ExecutableTxWithHash struct and its traits
rg -B 5 -A 10 'struct ExecutableTxWithHash' crates/katana/primitives/src/transaction.rs
Length of output: 1439
Actionable comments posted: 1
🧹 Outside diff range and nitpick comments (2)
crates/katana/pool/src/pool.rs (2)
157-158: Implement rejected transactions cache for RPC spec compliance.

Ohayo, sensei! The TODO comment indicates a need for a rejected transactions cache to comply with the RPC spec's `getTransactionStatus` requirements. I can help implement this feature.

Would you like me to help implement the rejected transactions cache? I can:
- Design a time-based LRU cache for rejected transactions (a rough sketch follows below)
- Integrate it with the `getTransactionStatus` RPC endpoint
- Add appropriate tests
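For reference, a minimal sketch of a time-bounded cache for rejected transaction hashes. The shape is hypothetical (hashes as `u64`, reasons as `String`), not an actual katana implementation:

```rust
use std::collections::{HashMap, VecDeque};
use std::time::{Duration, Instant};

// Sketch: keeps recently rejected tx hashes so `getTransactionStatus` can
// report them, evicting entries older than `ttl` and beyond `capacity`.
struct RejectedTxCache {
    ttl: Duration,
    capacity: usize,
    entries: HashMap<u64, (Instant, String)>, // hash -> (inserted_at, reason)
    order: VecDeque<u64>,                     // insertion order for eviction
}

impl RejectedTxCache {
    fn new(ttl: Duration, capacity: usize) -> Self {
        Self { ttl, capacity, entries: HashMap::new(), order: VecDeque::new() }
    }

    fn insert(&mut self, hash: u64, reason: String) {
        self.evict_expired();
        // Drop the oldest entry if the cache is full.
        if self.order.len() >= self.capacity {
            if let Some(old) = self.order.pop_front() {
                self.entries.remove(&old);
            }
        }
        self.order.push_back(hash);
        self.entries.insert(hash, (Instant::now(), reason));
    }

    fn get(&mut self, hash: u64) -> Option<&str> {
        self.evict_expired();
        self.entries.get(&hash).map(|(_, reason)| reason.as_str())
    }

    fn evict_expired(&mut self) {
        let now = Instant::now();
        while let Some(&oldest) = self.order.front() {
            match self.entries.get(&oldest) {
                Some((at, _)) if now.duration_since(*at) > self.ttl => {
                    self.order.pop_front();
                    self.entries.remove(&oldest);
                }
                _ => break,
            }
        }
    }
}
```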
430-462: Enhance test coverage with edge cases.

Ohayo, sensei! While the current tests are good, consider adding these edge cases (a small illustrative test follows the list):
- Removing non-existent transactions
- Removing an empty list of transactions
- Removing transactions while subscribers are active
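A rough illustration of the first two edge cases against a toy pool; the real tests would use the pool's own test utilities, so every type here is a hypothetical stand-in:

```rust
use std::collections::HashSet;

// Toy pool used only to illustrate the edge cases; not the katana pool API.
struct ToyPool {
    txs: HashSet<u64>,
}

impl ToyPool {
    fn remove_transactions(&mut self, hashes: &[u64]) {
        for h in hashes {
            self.txs.remove(h);
        }
    }
}

#[test]
fn remove_nonexistent_and_empty_is_a_noop() {
    let mut pool = ToyPool { txs: HashSet::from([1, 2, 3]) };

    // Removing a hash that was never added should not disturb the pool.
    pool.remove_transactions(&[42]);
    assert_eq!(pool.txs.len(), 3);

    // Removing an empty list should also be a no-op.
    pool.remove_transactions(&[]);
    assert_eq!(pool.txs.len(), 3);
}
```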
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (2)
- crates/katana/pool/src/ordering.rs (3 hunks)
- crates/katana/pool/src/pool.rs (12 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- crates/katana/pool/src/ordering.rs
🔇 Additional comments (3)
crates/katana/pool/src/pool.rs (3)
8-8: LGTM! Well-structured subscriber management implementation.
The addition of async channels and subscription types provides a solid foundation for transaction retention.
Also applies to: 12-13, 37-39, 60-60

93-116: Previous review comment about cloning optimization is still valid.

214-218: Previous review comment about HashSet optimization is still valid.
Codecov Report
Attention: Patch coverage is …

Additional details and impacted files

@@            Coverage Diff             @@
##             main    #2630      +/-   ##
==========================================
+ Coverage   56.90%   57.09%    +0.18%
==========================================
  Files         397      399        +2
  Lines       49461    49661      +200
==========================================
+ Hits        28148    28355      +207
+ Misses      21313    21306        -7

☔ View full report in Codecov by Sentry.
every block interval, the node would take transactions from the pool - removing them directly from the pool. this creates a small window (depending on the machine) where the transactions appear nonexistent. this is due to how the tx flows from the pool to the executor. this applies to both instant and interval block production modes.
for instant mining, the window is between the tx being picked up from the pool and the tx being committed to the db. for interval mining, it is between the tx being picked up from the pool and the tx being inserted into the pending block.
when a tx is queried through the rpc, the node will first check if it exists in the db, else look for it in the pending block (if interval mode). this pr adds a new (last) step, which is to try finding the tx in the pool if it doesn't exist anywhere else.
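a rough sketch of that lookup order (db → pending block → pool); every type and helper here is hypothetical, standing in for the real katana rpc internals:

```rust
// Sketch of the lookup order described above; all names are placeholders.
enum TxSource {
    Db,
    PendingBlock,
    Pool,
}

struct Node; // stand-in for the RPC handler state

impl Node {
    fn find_in_db(&self, _hash: u64) -> Option<TxSource> { None }
    fn find_in_pending_block(&self, _hash: u64) -> Option<TxSource> { None }
    fn find_in_pool(&self, _hash: u64) -> Option<TxSource> { Some(TxSource::Pool) }

    fn get_transaction(&self, hash: u64) -> Option<TxSource> {
        // 1. committed transactions live in the db
        self.find_in_db(hash)
            // 2. otherwise the pending block (interval mining mode)
            .or_else(|| self.find_in_pending_block(hash))
            // 3. new in this PR: finally, fall back to the pool
            .or_else(|| self.find_in_pool(hash))
    }
}

fn main() {
    let node = Node;
    assert!(matches!(node.get_transaction(0xabc), Some(TxSource::Pool)));
}
```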
Summary by CodeRabbit

New Features
- Introduced `PendingTransactions` and `Subscription` structures for improved handling of pending transactions.

Bug Fixes

Documentation
- Dependency updates include `tokio` for enhanced asynchronous capabilities.

Refactor