#711 + #713: remove num chunks from the public input; relax the constraints for dummy chunk proofs to avoid spending proof-generation time on dummy chunks #712

Merged on Aug 4, 2023 (37 commits)
Changes from 30 commits
Commits (37)
f3cc71f
partially address review comments of #670
zhenfeizhang Aug 1, 2023
1f173b8
disable disable_proof_agg by default
zhenfeizhang Aug 1, 2023
2aeb3e0
fix soundness issues with data hash
zhenfeizhang Aug 1, 2023
ac9c4ee
add fixed cells for chunk_is_valid
zhenfeizhang Jul 28, 2023
38d14c1
fix typo
zhenfeizhang Jul 28, 2023
08f21d0
well... need morning coffee
zhenfeizhang Jul 28, 2023
9c84798
typo in readme
zhenfeizhang Jul 28, 2023
845ee0c
fix soundness issue in is_smaller_than
zhenfeizhang Jul 28, 2023
311b6dc
[feat] unify u8 u16 lookup table (#694)
lightsing Aug 2, 2023
d6eb833
opt min_num_rows_block in poseidon_circuit (#700)
noel2004 Aug 2, 2023
5658bc5
speedup ci using verify_at_rows (#703)
lispc Aug 2, 2023
3097221
update super circuit row estimation API (#695)
lispc Aug 2, 2023
8319ade
remove num_of_chunks in aggregation circuit's instance column (#704)
lispc Aug 2, 2023
29eb9c3
fix: lost tx_type when doing type conversion (#710)
kunxian-xia Aug 2, 2023
818dc78
fix condition (#708)
lightsing Aug 2, 2023
8a0a908
gates for zero checks
zhenfeizhang Aug 2, 2023
cb0f622
statement 1 is correct
zhenfeizhang Aug 2, 2023
539d63c
reenable statements 3,6,7
zhenfeizhang Aug 2, 2023
8182446
reenable statement 4
zhenfeizhang Aug 2, 2023
6e69d1e
everything seems to work again
zhenfeizhang Aug 2, 2023
dd9d01f
update aggregation test accordingly
zhenfeizhang Aug 2, 2023
ec95160
update spec
zhenfeizhang Aug 2, 2023
4154a6b
minor clean up
zhenfeizhang Aug 2, 2023
407be9b
Merge remote-tracking branch 'origin/develop' into 711-remove-num_chu…
silathdiir Aug 3, 2023
70dd136
Fix fmt.
silathdiir Aug 3, 2023
60f05f2
Make `ChunkHash` fields public.
silathdiir Aug 3, 2023
00ba5ff
fix decompose function
zhenfeizhang Aug 4, 2023
5325f45
update figures
zhenfeizhang Aug 4, 2023
4f6eac8
clean up
zhenfeizhang Aug 4, 2023
f58aa57
Merge branch 'develop' into 711-remove-num_chunks-from-public-input
silathdiir Aug 4, 2023
0446a28
Merge branch 'develop' into 711-remove-num_chunks-from-public-input
kunxian-xia Aug 4, 2023
246b471
address comments
zhenfeizhang Aug 4, 2023
e491a92
Merge branch '711-remove-num_chunks-from-public-input' of github.com:…
zhenfeizhang Aug 4, 2023
2401449
add is_final checks
zhenfeizhang Aug 4, 2023
9da7fc3
update readme
zhenfeizhang Aug 4, 2023
80861c1
constraint hash input length
zhenfeizhang Aug 4, 2023
aa79cc8
fix clippy
zhenfeizhang Aug 4, 2023
2 changes: 1 addition & 1 deletion aggregator/Cargo.toml
@@ -25,7 +25,7 @@ snark-verifier-sdk = { git = "https://github.com/scroll-tech/snark-verifier", br


[features]
default = [ ]
default = [ ]
print-trace = [ "ark-std/print-trace" ]
# This feature is useful for unit tests where we check the SAT of pi aggregation circuit
disable_proof_aggregation = []
21 changes: 11 additions & 10 deletions aggregator/README.md
@@ -127,7 +127,7 @@ We want to aggregate `k` snarks, each from a valid chunk. We generate `(n-k)` em

In the above example, we have `k = 2` valid chunks, and `2` empty chunks.

> Interlude: we just need to generate 1 empty snark, and the rest `n-k-1` will be identical for the same batch. We cannot pre-compute it though, as the witness `c_k.post_state_root` and `c_k.withdraw_root` are batch dependent.
The padded snarks are identical to the last valid snark, so the aggregator does not need to generate snarks for padded chunks.
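
As a concrete illustration, here is a minimal sketch of how a padding chunk can be built from the last valid chunk. It mirrors the consistency asserts in `aggregator/src/batch.rs` further down in this diff; the exact `ChunkHash` definition (field types, helper names) is an assumption, not the crate's real API.

```rust
// Hypothetical sketch only: field names follow the asserts in batch.rs,
// but the real `ChunkHash` type may differ.
#[derive(Clone)]
pub struct ChunkHash {
    pub chain_id: u64,
    pub prev_state_root: [u8; 32],
    pub post_state_root: [u8; 32],
    pub withdraw_root: [u8; 32],
    pub data_hash: [u8; 32],
    pub is_padding: bool,
}

/// A padding chunk copies every hash field of the last valid chunk, so its
/// snark is identical to the last valid snark and needs no fresh proof.
pub fn padding_chunk_from(last_valid: &ChunkHash) -> ChunkHash {
    ChunkHash {
        is_padding: true,
        ..last_valid.clone()
    }
}

/// Pad the list of valid chunks up to `max_agg_snarks` entries.
pub fn pad_chunks(mut chunks: Vec<ChunkHash>, max_agg_snarks: usize) -> Vec<ChunkHash> {
    let last = chunks.last().expect("at least one valid chunk").clone();
    while chunks.len() < max_agg_snarks {
        chunks.push(padding_chunk_from(&last));
    }
    chunks
}
```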

### Configuration

@@ -140,7 +140,6 @@ There will be three configurations for Aggregation circuit.
The public input of the aggregation circuit consists of
- 12 elements from accumulator
- 32 elements of `batch_pi_hash`
- 1 element of `k`
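
For concreteness, a hedged sketch of the resulting instance layout: 12 accumulator elements followed by the 32 hash bytes and nothing else (the `k` element listed above is the one removed by #704). The constant names and the byte-per-cell encoding of `batch_pi_hash` are assumptions, not the crate's actual API.

```rust
use halo2_proofs::halo2curves::bn256::Fr;

const ACC_LEN: usize = 12; // accumulator limbs (assumed name)
const DIGEST_LEN: usize = 32; // keccak digest bytes

// Hypothetical instance builder: with #704 merged there is no trailing
// `k` element after the batch_pi_hash bytes.
fn build_instances(acc: [Fr; ACC_LEN], batch_pi_hash: [u8; DIGEST_LEN]) -> Vec<Fr> {
    let mut instances = Vec::with_capacity(ACC_LEN + DIGEST_LEN);
    instances.extend_from_slice(&acc);
    instances.extend(batch_pi_hash.iter().map(|&b| Fr::from(b as u64)));
    instances
}
```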

### Statements
For snarks $s_1,\dots,s_k,\dots, s_n$ the aggregation circuit argues the following statements.
@@ -162,9 +161,9 @@ for i in 1 ... __n__

This is done by computing the RLCs of chunk[i]'s data_hash for `i=0..k`, and then checking that each RLC matches the one from the keccak table (a plain-arithmetic sketch of this RLC check follows the statement list).

4. chunks are continuous: they are linked via the state roots. __Static__.
4. chunks are continuous when they are not padded: they are linked via the state roots.

for i in 1 ... __n-1__
for i in 1 ... __k-1__
```
c_i.post_state_root == c_{i+1}.prev_state_root
```
@@ -175,17 +174,19 @@ for i in 1 ... __n__
batch.chain_id == chunk[i].chain_id
```

6. The last `(n-k)` chunk[i]'s prev_state_root == post_state_root when chunk[i] is padded
6. The last `(n-k)` chunks `chunk[i]` are padding
```
for i in 1 ... n:
is_padding = (i > k) // k is a public input
if is_padding:
chunk_i.prev_state_root == chunk_i.post_state_root
chunk_i.withdraw_root == chunk_{i-1}.withdraw_root
chunk_i.data_hash == [0u8; 32]
chunk_{i-1}.chain_id == chunk_i.chain_id
chunk_{i-1}.prev_state_root == chunk_i.prev_state_root
chunk_{i-1}.post_state_root == chunk_i.post_state_root
chunk_{i-1}.withdraw_root == chunk_i.withdraw_root
chunk_{i-1}.data_hash == chunk_i.data_hash
```
This is done by comparing the `data_rlc` of `chunk_{i-1}` and `chunk_{i}`.
7. chunk[i]'s data_hash length is `0` when chunk[i] is padded

8. batch data hash is correct w.r.t. its RLCs
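
Statements 3, 6, and 8 all compare byte strings through random linear combinations. A plain-arithmetic sketch of that RLC follows (not circuit code; it mirrors the comment on `RlcConfig::rlc` in `aggregator/src/aggregation/rlc/gates.rs`, and the constant challenge is only there to make the sketch runnable).

```rust
use halo2_proofs::{arithmetic::Field, halo2curves::bn256::Fr};

/// inputs[0] + challenge * inputs[1] + ... + challenge^k * inputs[k]
fn rlc(inputs: &[Fr], challenge: Fr) -> Fr {
    inputs.iter().rev().fold(Fr::zero(), |acc, &x| acc * challenge + x)
}

fn main() {
    // In the circuit the challenge is a phase-2 challenge; a fixed value is
    // used here only for illustration.
    let challenge = Fr::from(0xabcdu64);
    let data_hash: Vec<Fr> = (0u64..32).map(Fr::from).collect();

    // Equal byte strings always have equal RLCs; different strings collide
    // only with negligible probability over a random challenge. The keccak
    // table lookup relies on this to tie the circuit's copy of the bytes to
    // the hashed input.
    let mut tampered = data_hash.clone();
    tampered[7] += Fr::one();
    assert_eq!(rlc(&data_hash, challenge), rlc(&data_hash, challenge));
    assert_ne!(rlc(&data_hash, challenge), rlc(&tampered, challenge));
}
```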

### Handling dynamic inputs

Binary file modified aggregator/figures/architecture.jpg
Binary file modified aggregator/figures/hashes.jpg
19 changes: 7 additions & 12 deletions aggregator/src/aggregation/circuit.rs
@@ -249,7 +249,7 @@ impl Circuit<Fr> for AggregationCircuit {

let timer = start_timer!(|| "load aux table");

let (hash_digest_cells, num_valid_snarks) = {
let hash_digest_cells = {
config
.keccak_circuit_config
.load_aux_tables(&mut layouter)?;
@@ -269,16 +269,11 @@ end_timer!(timer);
end_timer!(timer);

let timer = start_timer!(|| ("assign hash cells").to_string());
let (hash_digest_cells, num_valid_snarks) = assign_batch_hashes(
&config,
&mut layouter,
challenges,
&preimages,
self.batch_hash.number_of_valid_chunks,
)
.map_err(|_e| Error::ConstraintSystemFailure)?;
let hash_digest_cells =
assign_batch_hashes(&config, &mut layouter, challenges, &preimages)
.map_err(|_e| Error::ConstraintSystemFailure)?;
end_timer!(timer);
(hash_digest_cells, num_valid_snarks)
hash_digest_cells
};
// digests
let (batch_pi_hash_digest, chunk_pi_hash_digests, _potential_batch_data_hash_digest) =
Expand Down Expand Up @@ -331,6 +326,8 @@ impl Circuit<Fr> for AggregationCircuit {
);

region.constrain_equal(
// in the keccak table, the input and output data have different
// endianness
chunk_pi_hash_digests[i][j * 8 + k].cell(),
snark_inputs[i * DIGEST_LEN + (3 - j) * 8 + k].cell(),
)?;
@@ -371,8 +368,6 @@ }
}
}

log::trace!("number of valid snarks: {:?}", num_valid_snarks.value());

end_timer!(witness_time);
Ok(())
}
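
The `(3 - j) * 8 + k` index in the equality constraint above reorders each 32-byte digest as four 8-byte words before comparing it with the snark inputs. Below is a standalone sketch of that mapping; `DIGEST_LEN = 32` and the loop ranges `j in 0..4`, `k in 0..8` are read off the constraint rather than taken from the crate.

```rust
const DIGEST_LEN: usize = 32;

// Reverse the order of the four 8-byte words of a keccak digest; bytes
// inside each word keep their order. This mirrors the index arithmetic in
// the constrain_equal call above.
fn reverse_words(digest: [u8; DIGEST_LEN]) -> [u8; DIGEST_LEN] {
    let mut out = [0u8; DIGEST_LEN];
    for j in 0..4 {
        for k in 0..8 {
            out[j * 8 + k] = digest[(3 - j) * 8 + k];
        }
    }
    out
}

fn main() {
    let digest: [u8; DIGEST_LEN] = core::array::from_fn(|i| i as u8);
    let out = reverse_words(digest);
    assert_eq!(&out[0..8], &digest[24..32]); // word 0 <-> word 3
    assert_eq!(&out[8..16], &digest[16..24]); // word 1 <-> word 2
}
```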
2 changes: 1 addition & 1 deletion aggregator/src/aggregation/rlc/config.rs
@@ -64,7 +64,7 @@ impl RlcConfig {
let q2 = meta.query_selector(enable_challenge);
let cs2 = q2 * (a - challenge_expr.keccak_input());

vec![cs1 + cs2]
vec![cs1, cs2]
});
Self {
#[cfg(test)]
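
Presumably the soundness motivation for this change: returning `vec![cs1 + cs2]` only constrains the sum of the two expressions to zero, so a cheating assignment can make the terms cancel, while `vec![cs1, cs2]` forces each expression to vanish on its own. A plain field-arithmetic sketch of the failure mode (the values are arbitrary, not taken from the circuit):

```rust
use halo2_proofs::{arithmetic::Field, halo2curves::bn256::Fr};

fn main() {
    // A cheating assignment can make cs1 = -cs2 with both nonzero.
    let cs1 = Fr::from(7u64);
    let cs2 = -cs1;

    // A single summed constraint is satisfied...
    assert_eq!(cs1 + cs2, Fr::zero());
    // ...even though neither expression vanishes on its own, which is what
    // two separate constraints require.
    assert_ne!(cs1, Fr::zero());
    assert_ne!(cs2, Fr::zero());
}
```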
141 changes: 124 additions & 17 deletions aggregator/src/aggregation/rlc/gates.rs
@@ -1,4 +1,5 @@
use halo2_proofs::{
arithmetic::Field,
circuit::{AssignedCell, Cell, Region, RegionIndex, Value},
halo2curves::bn256::Fr,
plonk::Error,
@@ -10,24 +11,25 @@ use crate::{constants::LOG_DEGREE, util::assert_equal};
use super::RlcConfig;

impl RlcConfig {
/// initialize the chip with fixed 0 and 1 cells
/// initialize the chip with fixed cells
pub(crate) fn init(&self, region: &mut Region<Fr>) -> Result<(), Error> {
region.assign_fixed(|| "const zero", self.fixed, 0, || Value::known(Fr::zero()))?;
region.assign_fixed(|| "const one", self.fixed, 1, || Value::known(Fr::one()))?;
region.assign_fixed(|| "const two", self.fixed, 2, || Value::known(Fr::from(2)))?;
region.assign_fixed(|| "const four", self.fixed, 3, || Value::known(Fr::from(4)))?;
region.assign_fixed(
|| "const eight",
self.fixed,
4,
|| Value::known(Fr::from(8)),
)?;
region.assign_fixed(|| "const nine", self.fixed, 4, || Value::known(Fr::from(9)))?;
region.assign_fixed(
|| "const thirty two",
self.fixed,
5,
|| Value::known(Fr::from(32)),
)?;
region.assign_fixed(
|| "const two to thirty two",
self.fixed,
6,
|| Value::known(Fr::from(1 << 32)),
)?;
Ok(())
}

@@ -68,7 +70,7 @@ impl RlcConfig {
}

#[inline]
pub(crate) fn eight_cell(&self, region_index: RegionIndex) -> Cell {
pub(crate) fn nine_cell(&self, region_index: RegionIndex) -> Cell {
Cell {
region_index,
row_offset: 4,
@@ -85,6 +87,15 @@ impl RlcConfig {
}
}

#[inline]
pub(crate) fn two_to_thirty_two_cell(&self, region_index: RegionIndex) -> Cell {
Cell {
region_index,
row_offset: 6,
column: self.fixed.into(),
}
}

pub(crate) fn load_private(
&self,
region: &mut Region<Fr>,
@@ -288,6 +299,21 @@ impl RlcConfig {
self.mul_add(region, b, &cond_not, &tmp, offset)
}

// if cond = 1, enforce a==b
// the caller needs to ensure cond is binary
pub(crate) fn conditional_enforce_equal(
&self,
region: &mut Region<Fr>,
a: &AssignedCell<Fr, Fr>,
b: &AssignedCell<Fr, Fr>,
cond: &AssignedCell<Fr, Fr>,
offset: &mut usize,
) -> Result<(), Error> {
let diff = self.sub(region, a, b, offset)?;
let res = self.mul(region, &diff, cond, offset)?;
self.enforce_zero(region, &res)
}

// Returns inputs[0] + challenge * inputs[1] + ... + challenge^k * inputs[k]
#[allow(dead_code)]
pub(crate) fn rlc(
@@ -304,7 +330,8 @@ impl RlcConfig {
Ok(acc)
}

// Returns inputs[0] + challenge * inputs[1] + ... + challenge^k * inputs[k]
// Returns challenge^k * inputs[0] * flag[0] + ... + challenge * inputs[k-1] * flag[k-1] +
// inputs[k] * flag[k]
pub(crate) fn rlc_with_flag(
&self,
region: &mut Region<Fr>,
@@ -337,11 +364,14 @@ impl RlcConfig {
Ok(())
}

// decompose a field element into 254 bits of boolean cells
// decompose a field element into log_size bits of boolean cells
// require the input to be less than 2^log_size
// require log_size < 254
pub(crate) fn decomposition(
&self,
region: &mut Region<Fr>,
input: &AssignedCell<Fr, Fr>,
log_size: usize,
offset: &mut usize,
) -> Result<Vec<AssignedCell<Fr, Fr>>, Error> {
let mut input_element = Fr::default();
@@ -351,6 +381,7 @@ impl RlcConfig {
.to_bytes()
.iter()
.flat_map(byte_to_bits_le)
.take(log_size)
.collect::<Vec<_>>();
// sanity check
{
@@ -364,7 +395,6 @@ impl RlcConfig {

let bit_cells = bits
.iter()
.take(254) // hard coded for BN curve
.map(|&bit| {
let cell = self.load_private(region, &Fr::from(bit as u64), offset)?;
self.enforce_binary(region, &cell, offset)?;
@@ -399,7 +429,7 @@ impl RlcConfig {
}

// return a boolean if a is smaller than b
// requires that both a and b are smallish
// requires that both a and b are less than 32 bits
pub(crate) fn is_smaller_than(
&self,
region: &mut Region<Fr>,
@@ -408,11 +438,88 @@ impl RlcConfig {
offset: &mut usize,
) -> Result<AssignedCell<Fr, Fr>, Error> {
// when a and b are both small (as in our use case)
// if a < b, (a-b) will under flow and the highest bit of (a-b) be one
// else, the highest bit of (a-b) be zero
let sub = self.sub(region, a, b, offset)?;
let bits = self.decomposition(region, &sub, offset)?;
Ok(bits[253].clone())
// if a >= b, c = 2^32 + (a-b) will be >= 2^32
// we bit-decompose c and check the 33rd bit (bit index 32)
let two_to_thirty_two = self.load_private(region, &Fr::from(1 << 32), offset)?;
let two_to_thirty_two_cell =
self.two_to_thirty_two_cell(two_to_thirty_two.cell().region_index);
region.constrain_equal(two_to_thirty_two_cell, two_to_thirty_two.cell())?;

let ca = self.add(region, &two_to_thirty_two, a, offset)?;
let c = self.sub(region, &ca, b, offset)?;
let bits = self.decomposition(region, &c, 33, offset)?;
let res = self.not(region, &bits[32], offset)?;
Ok(res)
}
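
A plain-integer sketch of the comparison trick used in `is_smaller_than` above: add `2^32` before subtracting so the value never wraps, then read bit 32. This assumes both operands fit in 32 bits, as the updated comment requires; it is not circuit code.

```rust
// For a, b < 2^32: c = 2^32 + a - b never underflows, and bit 32 of c is 1
// exactly when a >= b, so `a < b` is the negation of that bit.
fn is_smaller_than(a: u64, b: u64) -> bool {
    assert!(a < (1 << 32) && b < (1 << 32));
    let c = (1u64 << 32) + a - b;
    (c >> 32) & 1 == 0
}

fn main() {
    assert!(is_smaller_than(3, 5));
    assert!(!is_smaller_than(5, 3));
    assert!(!is_smaller_than(4, 4)); // equality counts as "not smaller"
}
```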

// return a boolean if a ?= 0
pub(crate) fn is_zero(
&self,
region: &mut Region<Fr>,
a: &AssignedCell<Fr, Fr>,
offset: &mut usize,
) -> Result<AssignedCell<Fr, Fr>, Error> {
// constraints
// - res + a * a_inv = 1
// - res * a = 0
// for some witness a_inv where
// a_inv = 0 if a = 0
// a_inv = 1/a if a != 0
let mut a_tmp = Fr::default();
a.value().map(|&v| a_tmp = v);
let res = a_tmp == Fr::zero();
let res_cell = self.load_private(region, &Fr::from(res as u64), offset)?;
let a_inv = a_tmp.invert().unwrap_or(Fr::zero());
let a_inv_cell = self.load_private(region, &a_inv, offset)?;
{
// - res + a * a_inv = 1
self.selector.enable(region, *offset)?;
a.copy_advice(|| "a", region, self.phase_2_column, *offset)?;
a_inv_cell.copy_advice(|| "b", region, self.phase_2_column, *offset + 1)?;
res_cell.copy_advice(|| "c", region, self.phase_2_column, *offset + 2)?;
let d = region.assign_advice(
|| "d",
self.phase_2_column,
*offset + 3,
|| Value::known(Fr::one()),
)?;
region.constrain_equal(d.cell(), self.one_cell(d.cell().region_index))?;
*offset += 4;
}
{
// - res * a = 0
self.selector.enable(region, *offset)?;
a.copy_advice(|| "a", region, self.phase_2_column, *offset)?;
res_cell.copy_advice(|| "b", region, self.phase_2_column, *offset + 1)?;
let c = region.assign_advice(
|| "c",
self.phase_2_column,
*offset + 2,
|| Value::known(Fr::zero()),
)?;
let d = region.assign_advice(
|| "d",
self.phase_2_column,
*offset + 3,
|| Value::known(Fr::zero()),
)?;
region.constrain_equal(c.cell(), self.zero_cell(c.cell().region_index))?;
region.constrain_equal(d.cell(), self.zero_cell(d.cell().region_index))?;
*offset += 4;
}
Ok(res_cell)
}
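
An out-of-circuit check of the two `is_zero` constraints above (`res + a * a_inv = 1` and `res * a = 0`), showing that together they pin `res` to the is-zero indicator; the witness choice mirrors the comments in the function, and the sketch is plain Rust rather than region assignments.

```rust
use halo2_proofs::{arithmetic::Field, halo2curves::bn256::Fr};

// Witness generation as described in the comments: a_inv = 1/a (or 0 when
// a = 0) and res = 1 iff a = 0.
fn is_zero_witness(a: Fr) -> (Fr, Fr) {
    let a_inv = a.invert().unwrap_or(Fr::zero());
    let res = if a == Fr::zero() { Fr::one() } else { Fr::zero() };
    (res, a_inv)
}

fn constraints_hold(a: Fr, res: Fr, a_inv: Fr) -> bool {
    res + a * a_inv == Fr::one() && res * a == Fr::zero()
}

fn main() {
    for a in [Fr::zero(), Fr::one(), Fr::from(42u64)] {
        let (res, a_inv) = is_zero_witness(a);
        assert!(constraints_hold(a, res, a_inv));
    }
    // Claiming res = 1 for a nonzero `a` breaks the second constraint
    // (res * a = a != 0), whatever a_inv the prover picks.
    assert!(!constraints_hold(Fr::from(42u64), Fr::one(), Fr::zero()));
}
```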

// return a boolean if a ?= b
pub(crate) fn is_equal(
&self,
region: &mut Region<Fr>,
a: &AssignedCell<Fr, Fr>,
b: &AssignedCell<Fr, Fr>,
offset: &mut usize,
) -> Result<AssignedCell<Fr, Fr>, Error> {
let diff = self.sub(region, a, b, offset)?;
self.is_zero(region, &diff, offset)
}
}
#[inline]
24 changes: 16 additions & 8 deletions aggregator/src/batch.rs
@@ -62,23 +62,31 @@ impl BatchHash {
// ========================
// todo: return errors instead
for i in 0..MAX_AGG_SNARKS - 1 {
assert_eq!(
chunks_with_padding[i].post_state_root,
chunks_with_padding[i + 1].prev_state_root,
);
assert_eq!(
chunks_with_padding[i].chain_id,
chunks_with_padding[i + 1].chain_id,
);
if chunks_with_padding[i + 1].is_padding {
assert_eq!(chunks_with_padding[i + 1].data_hash, keccak256([]).into());
assert_eq!(
chunks_with_padding[i + 1].data_hash,
chunks_with_padding[i].data_hash
);
assert_eq!(
chunks_with_padding[i + 1].prev_state_root,
chunks_with_padding[i + 1].post_state_root
chunks_with_padding[i].prev_state_root
);
assert_eq!(
chunks_with_padding[i].withdraw_root,
chunks_with_padding[i + 1].withdraw_root
chunks_with_padding[i + 1].post_state_root,
chunks_with_padding[i].post_state_root
);
assert_eq!(
chunks_with_padding[i + 1].withdraw_root,
chunks_with_padding[i].withdraw_root
);
} else {
assert_eq!(
chunks_with_padding[i].post_state_root,
chunks_with_padding[i + 1].prev_state_root,
);
}
}