# Accepting a Contribution

The contributor will download the latest challenge file and run the contribution as per the instructions in the repo's README document. They can do this without supervision or permission, as the challenge file is available (read-only) in its S3 bucket. The contributor is required to provide a response file. Because it stores compressed points, this file (about 51 GB) is roughly half the size of the challenge file. The usual method for making it available is to ask the contributor to upload it via SFTP to our ppot-sftp EC2 instance.

To prepare for the upload, make sure the server is running. It's a very light, cheap server, but it's best to shut it down when it's not active. Log in and check that it has enough free disk space, deleting old response or challenge files if required.
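A quick check might look like this (a sketch; the uploads path is the one used below, and the file name is illustrative):

```bash
# Check free space on the volume holding the uploads folder
df -h /home/ppot/uploads
# If space is short, remove an old challenge or response file (name illustrative)
rm /home/ppot/uploads/response_0084_old
```

Forward these instructions to the contributor: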

```
sftp [email protected]
```

The password is `****`.

Once connected, at the `sftp>` prompt:

```
cd uploads
put <your response file>
```

Then `quit` to exit sftp.

Once the upload is completed, the response file will be in `/home/ppot/uploads/`. The file should be named in the format `response_nnnn_name`. Rename it at this point if required.
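For example, if the contributor uploaded a generically named file (all names here are illustrative):

```bash
cd /home/ppot/uploads/
# Rename to the response_nnnn_name convention (number and contributor name are examples)
mv response.bin response_0086_nebra
```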

Copy the file to the S3 bucket. The command will be something like:

```
aws s3 cp response_nnnn_name s3://pse-trusted-setup-ppot/
```

# Verifying the Contribution

The verification step takes the response file produced by the contributor and the challenge file from which it was made, and generates a new challenge file. It first checks that the contribution is descended from the challenge. The hashes are reported in the output, so it's a good idea to capture and save it. The software used to perform the verification is the same software contributors use to compute their contribution: https://github.com/kobigurk/phase2-bn254

## Server preparations

The verification can be run on the ppot-sftp machine, but it takes a long time (over 60 hours). It's more cost-effective to spin up a more powerful machine: the verification takes about 9 hours on a c5.4xlarge instance. The machine can be instantiated from the ppot verifier AMI, which has all the necessary binaries.

The instance needs enough storage for two challenge files plus one response file (the challenges are roughly 100 GB each and the response about 51 GB), so allow at least 260 GB.
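A launch might look like this (a sketch only; the AMI ID, key name, and device name are placeholders to be read from the AWS console):

```bash
# Launch a verifier instance from the "ppot verifier" AMI (IDs are placeholders)
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type c5.4xlarge \
  --key-name my-key \
  --block-device-mappings 'DeviceName=/dev/sda1,Ebs={VolumeSize=300,VolumeType=gp3}' \
  --count 1
```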

## Verify command

Once the challenge and response files are in place, the verify command can be started. Run it from the `phase2-bn254/powersoftau` folder.

Here's a sample command to run the verify:

```
$ cargo run --release --bin verify_transform_constrained challenge response new_challenge 28 2097152 >> verify_0086.log
```

This command assumes that symbolic links have been created: `challenge` pointing to the actual challenge file and `response` to the actual response file.
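For instance, the links might be set up like this (paths and names are illustrative):

```bash
cd phase2-bn254/powersoftau
ln -s /data/challenge_0085 challenge          # the challenge the contribution was based on
ln -s /data/response_0086_nebra response      # the contributor's response file
```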

Sample output:

```
Will verify and decompress a contribution to accumulator for 2^28 powers of tau
Calculating previous challenge hash...
Hash of the `challenge` file for verification:
        3448f144 c1ad5de7 ed29cf23 63d944b8
        fd3240e4 05419c30 92e45d5b 8204dc44
        baca83ad 709394c7 5af7c91f a7a68422
        579c9788 82d9ed39 473390dc ab7e9606
`response` was based on the hash:
        3448f144 c1ad5de7 ed29cf23 63d944b8
        fd3240e4 05419c30 92e45d5b 8204dc44
        baca83ad 709394c7 5af7c91f a7a68422
        579c9788 82d9ed39 473390dc ab7e9606
Hash of the response file for verification:
        9dd89930 afaaa291 78878d45 9794fe35
        1c4f70bc f76f5d46 9e1f809d 4563a615
        0b168bd0 d7087e7a e200b57d a481a898
        832ab707 60fdd586 3eb22ece 736e530a
Verifying a contribution to contain proper powers and correspond to the public key...
Verification succeeded!
Verification succeeded! Writing to new challenge file...
Here's the BLAKE2b hash of the decompressed participant's response as new_challenge file:
        3ee2b349 a7381bbc ceefc4dd b3b2360f
        52d61cda d9829665 f0cc078b af8622bf
        32149804 dda4fae9 32b770f0 3c07a8a4
        8fc1a4dc ea18fe35 82bfbd95 bd380e3f
Done! new challenge file contains the new challenge file. The other files
were left alone.
```

Check that the hash of the challenge file matches the one generated by the prior contribution.

The process will generate `new_challenge`. This should be renamed to the form `challenge_nnnn`, where `nnnn` is the next contribution number in sequence. Upload the file to the S3 bucket, just as for the response file.
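For example (the contribution number is illustrative):

```bash
mv new_challenge challenge_0087               # next number in the sequence
aws s3 cp challenge_0087 s3://pse-trusted-setup-ppot/
```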

# Update the records

The repo's README.md file has a table recording all contributions. Update the table with the new contribution details.

The repo has a folder for each contribution, containing a record of the contribution and any relevant attestation files and logs. Add a folder for the new contribution (or copy a prior folder), and add the details for the new contribution.

# Public key history

The public key for a Groth16 Phase 1 contribution is a data structure containing a number of points, corresponding to the multiple sections being computed:

- Tau.g1
- Tau.g2
- AlphaTau.g1
- BetaTau.g1
- BetaTau.g2

The contributor computes their public key while they have possession of their secret, then includes it with their contribution. One of the elements in the public key is derived from the hash of the previous contribution. This enables a cryptographic check that the contribution is derived from the prior contribution. Thus, the entire chain of contributions can be verified from the initial state.

The history of public keys is a defense against interference in the chain of contributions, which would invalidate the ceremony, or at least those contributions following the interference. We must be careful to maintain the history.

The challenge and response file formats include the contributor's public key, but not the entire history. The ptau format, used by snarkjs, does include the whole chain of public keys. Snarkjs also has a command that will verify the history and report the hash of each contribution.

To preserve the history, we save the latest contribution in ptau format in addition to the challenge and response files, and update it whenever a new contribution is received.

Snarkjs has an additional capability that allows the ptau file to contain only the public key history along with some metadata, without the actual point data for the powers of tau in each section. The point data would be redundant, as it is included in the challenge file, so omitting it saves space and keeps the history file compact.

## Adding the Public Key

1. Obtain the prior ptau file.
2. Download the response file from the new contribution.
3. Install snarkjs.
4. Run the import command (example below).
5. Upload the new ptau file to S3 storage (`s3://pse-trusted-setup-ppot/ptau/`).

This is an example command:

```
$ snarkjs powersoftau import response pot28_0085_nopoints.ptau /home/ppot/uploads/response_0086_nebra pot28_0086_nopoints.ptau --nopoints --name=nebra -v > import_0086.log
```

The `powersoftau import response` command is usually used to convert a response file to a ptau file. With the `--nopoints` option, it simply verifies and imports the pubkey and appends it to the history.

The command will report the hash of the new response, for confirmation.

The ppot-sftp machine has the binaries to run this command. The history of ptau files is also maintained on this machine (see the `/home/ubuntu/ptau/` folder).

## Verifying the pubkey history

The command `snarkjs powersoftau verify` will confirm that the public key chain is valid, and report the hashes for the entire history. It takes the name of the ptau file as an argument.

A bug in snarkjs throws an error when this command is run on a nopoints file. This fork contains a fix, so use it to run the verify command. It is installed on ppot-sftp, so the command can be run like this:

```
cd ~/ptau/
~/snarkjs/build/cli.cjs powersoftau verify pot28_0086_nopoints.ptau
```

# Data Storage and Torrent Sharing

The S3 bucket (pse-trusted-setup-ppot) serves as the primary source for the PPoT data files. The latest files should always be kept in the Standard storage class and readily downloadable by the public. Older files should be moved to a Glacier storage class to keep costs down.
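One way to move an object to Glacier is an in-place copy with a new storage class (a sketch; the object key is illustrative, and a bucket lifecycle rule can achieve the same thing automatically):

```bash
# Copy the object onto itself, changing only the storage class
aws s3 cp s3://pse-trusted-setup-ppot/response_0080_name \
          s3://pse-trusted-setup-ppot/response_0080_name \
          --storage-class GLACIER
```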

To provide a defense against data loss, files are shared using peer-to-peer torrents. See here. Once a new contribution has been received and verified, the newly created files become the most important data files to share. Torrent files need to be created for them, and the sharing network should be notified.

The ppot-torrent-server EC2 instance has the transmission torrent client installed and running. It is used to create torrent files and share them. The server has limited disk space, so only a few files can be shared at a time; it is often necessary to remove an older file in order to free up space.

The main folder for serving data is `/mnt/media/ptau/downloads/`. To create a torrent, download the file to this folder and run the `crt_torrent.sh` script; the arguments are the file name and a comment. This takes a long time, so it's usually best to run it in a screen session.
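An invocation might look like this (the file name and comment are illustrative, and the script is assumed to be reachable from the downloads folder):

```bash
cd /mnt/media/ptau/downloads/
screen -S mktorrent                                   # long-running, so use screen
./crt_torrent.sh response_0086_nebra "PPoT contribution 0086 response"
```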

Once a torrent file is created, upload it to the S3 bucket: `s3://pse-trusted-setup-ppot/torrents/`
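For example (the torrent file name is illustrative):

```bash
aws s3 cp response_0086_nebra.torrent s3://pse-trusted-setup-ppot/torrents/
```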

Add it to the transmission daemon so that the file will be seeded. The command is of this form:

```
transmission-remote -n 'transmission:transmission' -a /mnt/media/ptau/downloads/file.torrent
```
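To confirm the daemon has picked the file up and is seeding it, list the active torrents:

```bash
# -l lists all torrents known to the daemon, with status and progress
transmission-remote -n 'transmission:transmission' -l
```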

A new contribution will require 3 new torrents:

- The new response file
- The new challenge file
- The new ptau file with pubkey history. Add it to the `/history/` folder, and create the torrent from the entire folder.