Fix spelling issues #1389

Open · wants to merge 3 commits into base: nightly
8 changes: 4 additions & 4 deletions adapters/celestia/README.md
```diff
@@ -19,7 +19,7 @@ All of Jupiter boils down to two trait implementations: [`DaVerifier`](https://g
 
 ### The DaVerifier Trait
 
-The DaVerifier trait is the simpler of the two core traits. Its job is to take a list of BlobTransactions from a DA layer block
+The DaVerifier trait is the simplest of the two core traits. Its job is to take a list of BlobTransactions from a DA layer block
 and verify that the list is _complete_ and _correct_. Once deployed in a rollup, the data verified by this trait
 will be passed to the state transition function, so non-determinism should be strictly avoided.
@@ -51,7 +51,7 @@ splitted into [`Compact Shares`](https://github.com/celestiaorg/celestia-app/blo
 and included in the data square under the [`PAY_FOR_BLOB_NAMESPACE`](https://github.com/celestiaorg/celestia-app/blob/main/specs/src/specs/namespace.md).
 
 Second, each submitted blob is split into the [`Sparse Shares`](https://github.com/celestiaorg/celestia-app/blob/main/specs/src/specs/shares.md#share-format)
-and also included in the data square, each blob under it's own namespace.
+and also included in the data square, each blob under its own namespace.
 
 The layout and structure of the `ExtendedDataSquare` is explained in [data square layout spec](https://github.com/celestiaorg/celestia-app/blob/main/specs/src/specs/data_square_layout.md#data-square-layout)
 and in the [data structures spec](https://github.com/celestiaorg/celestia-app/blob/main/specs/src/specs/data_structures.md#arranging-available-data-into-shares).
@@ -83,10 +83,10 @@ all of the data from a special reserved namespace on Celestia which contains the
 with the current block. The transactions are serialized using `protobuf` and encoded into data square in
 [compact share format](https://github.com/celestiaorg/celestia-app/blob/main/specs/src/specs/shares.md#transaction-shares).
 
-In order to prove that, we use a proofs called `EtxProof` which consist of the merkle proofs for all the shares contaniing transaction
+In order to prove that, we use a proofs called `EtxProof` which consist of the merkle proofs for all the shares containing transaction
 as well the offset to the beginning of the cosmos transaction in first of those shares.
 
-To venify them, we first iterate over rollup's blobs re-created from _completeness_ verification. We associate each blob
+To verify them, we first iterate over rollup's blobs re-created from _completeness_ verification. We associate each blob
 with its `EtxProof`. Then we verify that the etx proof holds the contiguous range of shares and verify the merkle proofs
 of it's shares with corresponding row_roots from `DataAvailabilityHeader`.
 If that process succeeds, we can extract the cosmos transaction data from the given proof. We need to check if the
```
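The completeness-and-correctness check that the `DaVerifier` passage above describes can be sketched as a toy trait. Everything here is an illustrative assumption, not the SDK's actual API: the `BlobTransaction` fields, the `u64` commitment, and `MockVerifier` are stand-ins, and the real trait carries associated types and real cryptographic proofs.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Illustrative stand-in for a DA-layer blob transaction; the real type
// carries more structure (namespaces, share proofs, etc.).
#[derive(Hash)]
pub struct BlobTransaction {
    pub sender: String,
    pub data: Vec<u8>,
}

// Toy analogue of the DaVerifier shape described above: given the blob
// list and a block-level commitment, confirm the list is complete and
// correctly ordered. The check must be a pure function of its inputs,
// since non-determinism would poison the rollup's state transition.
pub trait DaVerifier {
    fn verify_relevant_tx_list(
        &self,
        blobs: &[BlobTransaction],
        commitment: u64,
    ) -> Result<(), String>;
}

// Recompute a naive, order-sensitive commitment over every blob.
pub fn commit(blobs: &[BlobTransaction]) -> u64 {
    let mut h = DefaultHasher::new();
    for b in blobs {
        b.hash(&mut h);
    }
    h.finish()
}

pub struct MockVerifier;

impl DaVerifier for MockVerifier {
    fn verify_relevant_tx_list(
        &self,
        blobs: &[BlobTransaction],
        commitment: u64,
    ) -> Result<(), String> {
        if commit(blobs) == commitment {
            Ok(())
        } else {
            Err("blob list is incomplete, reordered, or tampered with".into())
        }
    }
}

fn main() {
    let blobs = vec![BlobTransaction { sender: "sov1aaa".into(), data: b"tx-1".to_vec() }];
    let commitment = commit(&blobs);
    assert!(MockVerifier.verify_relevant_tx_list(&blobs, commitment).is_ok());
    // Flipping a single commitment bit must make verification fail.
    assert!(MockVerifier.verify_relevant_tx_list(&blobs, commitment ^ 1).is_err());
}
```

Because the verified list feeds the state transition function directly, the real implementation derives its commitment from the DA layer's headers rather than a local hash; the sketch only mirrors the shape of the check.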
2 changes: 1 addition & 1 deletion adapters/solana/README.md
```diff
@@ -216,7 +216,7 @@ pub struct Chunk {
 * `num_chunks`: Number of chunks that constitute the blob
 * `chunk_num`: The position in the sequence of chunks that form blob with `digest`. Used to order the chunks in order to reconstruct the blob
 * `actual_size`: The chunks are equal sized, so the final chunk has padding. `actual_size` is used to enable stripping out padding during reconstruction.
-  * We can do away with padding if we find that it's un-necessary.
+  * We can do away with padding if we find that it's unnecessary.
 * The `blockroot` program contains 3 instructions
   * Initialize - used to initialize the accounts
   * Clear - Used to clear the `ChunkAccumulator` account of any incomplete blobs.
```
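The `actual_size` mechanics in the bullet list above can be sketched as a reconstruction routine. The field names follow the README's description, but the struct layout and the `reconstruct` logic are assumptions for illustration, not the `blockroot` program's actual code.

```rust
// Illustrative chunk layout following the field descriptions above:
// `digest` identifies the blob, `chunk_num` orders the chunks, and
// `actual_size` says how many bytes of the equal-sized (padded) final
// chunk are real data.
#[derive(Clone)]
pub struct Chunk {
    pub digest: [u8; 32],
    pub num_chunks: u32,
    pub chunk_num: u32,
    pub actual_size: u32,
    pub data: Vec<u8>,
}

// Reassemble a blob: order the chunks, concatenate them, and strip the
// final chunk's padding via `actual_size`. Returns None while chunks
// are still missing.
pub fn reconstruct(mut chunks: Vec<Chunk>) -> Option<Vec<u8>> {
    let n = chunks.first()?.num_chunks as usize;
    if chunks.len() != n {
        return None; // blob is incomplete
    }
    chunks.sort_by_key(|c| c.chunk_num);
    let mut blob = Vec::new();
    for c in &chunks {
        if c.chunk_num as usize == n - 1 {
            // Final chunk: keep only the real bytes, drop the padding.
            blob.extend_from_slice(&c.data[..c.actual_size as usize]);
        } else {
            blob.extend_from_slice(&c.data);
        }
    }
    Some(blob)
}

fn main() {
    let digest = [0u8; 32];
    // Two 4-byte chunks, delivered out of order; the last holds 2 real bytes.
    let chunks = vec![
        Chunk { digest, num_chunks: 2, chunk_num: 1, actual_size: 2, data: vec![5, 6, 0, 0] },
        Chunk { digest, num_chunks: 2, chunk_num: 0, actual_size: 4, data: vec![1, 2, 3, 4] },
    ];
    assert_eq!(reconstruct(chunks), Some(vec![1, 2, 3, 4, 5, 6]));
}
```

This also shows why the README's "do away with padding" note is plausible: if chunks carried their exact lengths instead of a fixed size, `actual_size` and the final-chunk special case would disappear.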
2 changes: 1 addition & 1 deletion examples/demo-rollup/README.md
```diff
@@ -286,7 +286,7 @@ Most queries for ledger information accept an optional `QueryMode` argument. The
 There are several ways to uniquely identify items in the Ledger DB.
 
 - By _number_. Each family of structs (`slots`, `blocks`, `transactions`, and `events`) is numbered in order starting from `1`. So, for example, the
-  first transaction to appear on the DA layer will be numered `1` and might emit events `1`-`5`. Or, slot `17` might contain batches `41` - `44`.
+  first transaction to appear on the DA layer will be numbered `1` and might emit events `1`-`5`. Or, slot `17` might contain batches `41` - `44`.
 - By _hash_. (`slots`, `blocks`, and `transactions` only)
 - By _containing item_id and offset_.
 - (`Events` only) By _transaction_id and key_.
```
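The four addressing schemes in the list above can be summarized as one sum type. This enum is purely hypothetical — the demo-rollup's real query types are not part of this excerpt — and only mirrors the bullet points.

```rust
// Hypothetical identifier type mirroring the bullet list above.
#[derive(Debug, PartialEq)]
pub enum ItemId {
    // 1-based position within its family (slots, blocks, txs, events).
    Number(u64),
    // Content hash; per the text, valid for slots, blocks, and txs only.
    Hash([u8; 32]),
    // Containing item's number plus an offset inside it,
    // e.g. a batch within slot 17.
    Offset { parent: u64, offset: u64 },
    // Events only: owning transaction's number plus the event key.
    TxKey { tx: u64, key: Vec<u8> },
}

// Human-readable rendering, handy when logging which lookup path a
// query took.
pub fn describe(id: &ItemId) -> String {
    match id {
        ItemId::Number(n) => format!("number {n}"),
        ItemId::Hash(_) => "by hash".to_string(),
        ItemId::Offset { parent, offset } => format!("item {offset} of parent {parent}"),
        ItemId::TxKey { tx, .. } => format!("event key in tx {tx}"),
    }
}

fn main() {
    // Slot 17's third batch, addressed by containing item + offset.
    let id = ItemId::Offset { parent: 17, offset: 3 };
    assert_eq!(describe(&id), "item 3 of parent 17");
    // The first transaction on the DA layer is simply number 1.
    assert_eq!(describe(&ItemId::Number(1)), "number 1");
}
```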