Commit

Fix some typo in the documentation (solana-labs#34058)
Co-authored-by: Andrew Fitzgerald <[email protected]>
hugo-syn and apfitzge authored Nov 14, 2023
1 parent aa991b6 commit 71dcf77
Showing 5 changed files with 6 additions and 6 deletions.
4 changes: 2 additions & 2 deletions docs/src/cli/deploy-a-program.md
@@ -173,8 +173,8 @@ solana program show --buffers
To specify a different authority:

```bash
-solana program show --programs --buffer-authority <AURTHORITY_ADRESS>
-solana program show --buffers --buffer-authority <AURTHORITY_ADRESS>
+solana program show --programs --buffer-authority <AUTHORITY_ADDRESS>
+solana program show --buffers --buffer-authority <AUTHORITY_ADDRESS>
```

To close a single account:
@@ -20,7 +20,7 @@ Validator votes are messages that have a critical function for consensus and con

Each vote transaction should maintain a `wallclock` in its data. The merge strategy for Votes will keep the last N set of votes as configured by the local client. For push/pull the vector is traversed recursively and each Transaction is treated as an individual CrdsValue with its own local wallclock and signature.

-Gossip is designed for efficient propagation of state. Messages that are sent through gossip-push are batched and propagated with a minimum spanning tree to the rest of the network. Any partial failures in the tree are actively repaired with the gossip-pull protocol while minimizing the amount of data transfered between any nodes.
+Gossip is designed for efficient propagation of state. Messages that are sent through gossip-push are batched and propagated with a minimum spanning tree to the rest of the network. Any partial failures in the tree are actively repaired with the gossip-pull protocol while minimizing the amount of data transferred between any nodes.
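
To make the vote merge strategy described in this excerpt concrete, here is a minimal Rust sketch. The type and field names are hypothetical, not the actual gossip crate API: each vote is treated as its own value carrying a wallclock and signature, and only the most recent N votes are retained.

```rust
// Illustrative sketch only; these are not the real solana-gossip types.
struct VoteValue {
    wallclock: u64,       // sender-reported wallclock carried in the vote's data
    signature: [u8; 64],  // signature over the vote transaction
    transaction: Vec<u8>, // the serialized vote transaction itself
}

/// Merge newly received votes into the locally kept set, retaining only the
/// `max_votes` most recent entries by wallclock (the "last N" window).
fn merge_votes(local: &mut Vec<VoteValue>, incoming: Vec<VoteValue>, max_votes: usize) {
    local.extend(incoming);
    local.sort_by(|a, b| b.wallclock.cmp(&a.wallclock)); // newest first
    local.truncate(max_votes);
}
```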

## How this design solves the Challenges

2 changes: 1 addition & 1 deletion docs/src/proposals/accounts-db-replication.md
@@ -72,7 +72,7 @@ slot for which it has not completed accounts db replication. The `ReplicaAccount
the `ReplicaAccountMeta`, Hash and the AccountData. The `ReplicaAccountMeta` contains info about
the existing `AccountMeta` in addition to the account data length in bytes.

-The `ReplicaAccountsServer`: this service is reponsible for serving the `ReplicaAccountsRequest`
+The `ReplicaAccountsServer`: this service is responsible for serving the `ReplicaAccountsRequest`
and sends `ReplicaAccountsResponse` to the requestor. The response contains the count of the
ReplAccountInfo and the vector of ReplAccountInfo. This service runs both in the validator
and the replica relaying replication information. The server can stream the account information
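
A rough Rust sketch of the request/response shapes described in this excerpt; the field names and layouts are assumptions for illustration, not the actual implementation:

```rust
// Hypothetical message shapes for the accounts-db replication service.
struct ReplicaAccountMeta {
    pubkey: [u8; 32],   // assumed fields mirroring the existing AccountMeta
    lamports: u64,
    owner: [u8; 32],
    executable: bool,
    rent_epoch: u64,
    data_len: u64,      // account data length in bytes
}

struct ReplAccountInfo {
    meta: ReplicaAccountMeta,
    hash: [u8; 32],
    data: Vec<u8>,
}

struct ReplicaAccountsRequest {
    slot: u64, // slot whose accounts are being replicated
}

struct ReplicaAccountsResponse {
    count: u64,                     // number of accounts returned
    accounts: Vec<ReplAccountInfo>, // one entry per account
}
```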
2 changes: 1 addition & 1 deletion docs/src/proposals/handle-duplicate-block.md
@@ -18,7 +18,7 @@ potential forks that the cluster has to resolve.
## Protocol
1. When WindowStage detects a duplicate slot proof `P`, it checks the new `gossip_root` to see if `<= 1/3` of the nodes have rooted a slot `S >= P`. If so, it pushes a proof to `gossip_duplicate_slots` to gossip. WindowStage then signals ReplayStage about this duplicate slot `S`. These proofs can be purged from gossip once the validator sees > 2/3 of people gossiping roots `R > S`.

-2. When ReplayStage receives the signal for a duplicate slot `S` from `1)` above, the validator monitors gossip and replay waiting for`>= DUPLICATE_THRESHOLD` votes for the same hash which implies the same version of the slot. If this conditon is met for some version with hash `H` of slot `S`, this is then known as the `duplicate_confirmed` version of the slot.
+2. When ReplayStage receives the signal for a duplicate slot `S` from `1)` above, the validator monitors gossip and replay waiting for`>= DUPLICATE_THRESHOLD` votes for the same hash which implies the same version of the slot. If this condition is met for some version with hash `H` of slot `S`, this is then known as the `duplicate_confirmed` version of the slot.

Before a duplicate slot `S` is `duplicate_confirmed`, it's first excluded from the vote candidate set in the fork choice rules. In addition, ReplayStage also resets PoH to the *latest* ancestor of the *earliest* `non-duplicate/confirmed_duplicate_slot`, so that block generation can start happening on the earliest known *safe* block.
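
A simplified sketch of the `duplicate_confirmed` check in step 2 above, assuming a hypothetical tally of observed vote stake per version (hash) of the slot; the threshold value here is illustrative, not the real constant:

```rust
use std::collections::HashMap;

const DUPLICATE_THRESHOLD: f64 = 0.52; // illustrative value only

type Hash = [u8; 32];

/// Returns the hash of the version of the slot that is duplicate-confirmed,
/// i.e. the version whose observed vote stake meets DUPLICATE_THRESHOLD.
fn duplicate_confirmed_version(
    votes_by_hash: &HashMap<Hash, u64>, // stake observed voting for each version
    total_stake: u64,
) -> Option<Hash> {
    votes_by_hash
        .iter()
        .find(|(_, stake)| **stake as f64 / total_stake as f64 >= DUPLICATE_THRESHOLD)
        .map(|(hash, _)| *hash)
}
```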

2 changes: 1 addition & 1 deletion docs/src/proposals/timely-vote-credits.md
@@ -10,7 +10,7 @@ vote credits earned by validator votes.
Vote credits are the accounting method used to determine what percentage of
inflation rewards a validator earns on behalf of its stakers. Currently, when
a slot that a validator has previously voted on is "rooted", it earns 1 vote
-credit. A "rooted" slot is one which has received full committment by the
+credit. A "rooted" slot is one which has received full commitment by the
validator (i.e. has been finalized).
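
As a toy illustration of the accounting rule above (a sketch under assumed inputs, not the actual rewards code): each slot the validator previously voted on that later becomes rooted earns exactly one credit, regardless of how late the vote landed.

```rust
/// Toy model of the current rule: one credit for each previously-voted slot
/// that is later rooted.
fn vote_credits(voted_slots: &[u64], rooted_slots: &[u64]) -> u64 {
    voted_slots
        .iter()
        .copied()
        .filter(|slot| rooted_slots.contains(slot))
        .count() as u64
}
```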

One problem with this simple accounting method is that it awards one credit
