Run with `npx ts-node index.ts`.
How can block sizes of a Merkle tree's hashed leaf nodes be optimised as a function of the unreliability of the channel in order to maximise speed in data verification?
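One rough way to frame the trade-off, as a sketch under assumptions rather than the project's actual model: the total bytes moved are the initial transfer plus retransmissions, where each corrupted block must be re-sent together with a Merkle proof of roughly `ceil(log2(numBlocks))` SHA-256 hashes. All names and parameters below are illustrative.

```ts
const HASH_SIZE = 32; // SHA-256 digest size in bytes

// Expected bytes transferred to verify a file, assuming each block arrives
// corrupted independently with probability pBlockCorrupt and is re-requested
// (block + proof) until it verifies.
function expectedBytes(
  fileSize: number,     // total file size in bytes
  blockSize: number,    // partition size in bytes
  pBlockCorrupt: number // probability a transferred block arrives corrupted
): number {
  const numBlocks = Math.ceil(fileSize / blockSize);
  // One sibling hash per tree level accompanies each re-sent block.
  const proofBytes = Math.ceil(Math.log2(numBlocks)) * HASH_SIZE;
  // Retries are geometric: on average 1 / (1 - p) attempts until success.
  const expectedAttempts = 1 / (1 - pBlockCorrupt);
  const repairBytes =
    numBlocks * pBlockCorrupt * expectedAttempts * (blockSize + proofBytes);
  return fileSize + repairBytes;
}

// Example: sweep block sizes for a 1 MiB file on a 1% lossy channel.
for (const b of [256, 1024, 4096, 16384]) {
  console.log(b, Math.round(expectedBytes(1 << 20, b, 0.01)));
}
```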
- Trying to find a shortcut by simulating hashes based on reliability: fundamentally wrong and doesn't actually use the Merkle tree.
Misguided idea of what was being computed: shifted from measuring the time taken to hash and construct the Merkle tree to the simulated time taken to transfer proofs and blocks, based on their sizes.
- When trying to find the corrupted leaf, simply verifying the proof against the original root won't work, because the simulated network unreliability affects more than just the current leaf being verified: the proof hashes themselves may arrive corrupted.
Solved by re-requesting the same block and proof whenever the transferred block does not verify against the sent proof; see the retry sketch below.
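A minimal sketch of that retry loop, assuming Node's `crypto` module and an unreliable `fetchBlockAndProof` channel; the shapes and names here are illustrative, not the project's actual API.

```ts
import { createHash } from "crypto";

const sha256 = (data: Buffer): Buffer =>
  createHash("sha256").update(data).digest();

// Hypothetical shape of what the simulated channel returns; either the block
// or any of the proof hashes may have been corrupted in transit.
interface BlockWithProof {
  block: Buffer;
  proof: Buffer[]; // sibling hashes, leaf-to-root
  index: number;   // leaf index, used to order each concatenation
}

function verifyProof(root: Buffer, { block, proof, index }: BlockWithProof): boolean {
  let hash = sha256(block);
  let i = index;
  for (const sibling of proof) {
    hash = i % 2 === 0
      ? sha256(Buffer.concat([hash, sibling]))
      : sha256(Buffer.concat([sibling, hash]));
    i = Math.floor(i / 2);
  }
  return hash.equals(root);
}

// Re-request the same block + proof until the pair verifies against the
// trusted root.
async function fetchVerified(
  root: Buffer,
  index: number,
  fetchBlockAndProof: (index: number) => Promise<BlockWithProof>
): Promise<Buffer> {
  for (;;) {
    const candidate = await fetchBlockAndProof(index);
    if (verifyProof(root, candidate)) return candidate.block;
  }
}
```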
- More partitions or a smaller block size always leads to exponentially slower net speed.
Probably because the partitions were being hashed before transfer: when simulating the data-transfer speed, hash sizes are fixed, so more partitions just means more fixed-size hashes on the wire and always results in a longer duration. The solution is to not map the initial partitions through SHA-256 before sending them; see the sketch below.
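A sketch of that fix, with an assumed shape for the bug; the variable names below are illustrative.

```ts
import { createHash } from "crypto";

const sha256 = (b: Buffer): Buffer => createHash("sha256").update(b).digest();

function splitIntoBlocks(file: Buffer, blockSize: number): Buffer[] {
  const blocks: Buffer[] = [];
  for (let off = 0; off < file.length; off += blockSize) {
    blocks.push(file.subarray(off, off + blockSize));
  }
  return blocks;
}

const file = Buffer.alloc(1 << 20); // 1 MiB dummy file
const partitions = splitIntoBlocks(file, 4096);

// The leaves exist only to build the tree; each is a fixed 32 bytes.
const leaves = partitions.map((p) => sha256(p));

// Bug (assumed shape): simulating the transfer of `leaves` means every unit
// on the wire is 32 bytes regardless of blockSize, so more partitions can
// only mean more bytes and a longer simulated duration.
// Fix: put the raw `partitions` on the wire and keep hashes tree-internal.
console.log("bytes if sending leaves:    ", leaves.length * 32);
console.log("bytes if sending partitions:", partitions.reduce((n, p) => n + p.length, 0));
```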
- The size of the leaves was incorrect; one example of leaf vs proof size was 14.4 vs 44887. The issue was still using the leaves of the corrupted Merkle tree instead of just sending the partitions, since the leaves are standardized at the SHA-256 hash size.
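For intuition on the leaf/proof imbalance (illustrative numbers, not the figures above): every SHA-256 leaf is a fixed 32 bytes, while a proof carries about `ceil(log2(n))` sibling hashes of 32 bytes each.

```ts
// Approximate Merkle proof size in bytes for n SHA-256 leaves.
const proofBytes = (n: number): number => Math.ceil(Math.log2(n)) * 32;

console.log(proofBytes(1 << 15)); // 480 bytes of proof for 32768 leaves,
                                  // vs a fixed 32 bytes per leaf
```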
- Scaling file size
- Recovering corrupted file blocks with one Merkle proof per corrupted block; one possible shape is sketched below
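One possible shape for that recovery step, reusing `sha256`, `BlockWithProof`, and `fetchVerified` from the retry sketch above; all names are assumptions, not the project's API.

```ts
// Repair a file in place: re-fetch (with one proof each) only the blocks
// whose hashes no longer match the trusted leaf hashes.
async function repairFile(
  root: Buffer,
  blocks: Buffer[],
  trustedLeaves: Buffer[], // leaf hashes already verified against `root`
  fetchBlockAndProof: (index: number) => Promise<BlockWithProof>
): Promise<void> {
  for (let i = 0; i < blocks.length; i++) {
    if (!sha256(blocks[i]).equals(trustedLeaves[i])) {
      blocks[i] = await fetchVerified(root, i, fetchBlockAndProof);
    }
  }
}
```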