Hi,

I am working on proof checking for Marabou proofs, and I wanted to raise a case where Marabou concludes that a query is UNSAT (during preprocessing), but running a PGD adversarial attack seems to generate satisfying assignments.
The results come from experiments by my colleague Marco Casadio, who ran Marabou as a Vehicle back-end; each Marabou property corresponds to classification robustness within a hyper-rectangle around a correctly classified input from the training data. Here are the network (in ONNX format), the Vehicle-generated Marabou properties, and the satisfying assignments (serialised to CSV from numpy; each line is one input point): pgd_ce.zip.
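For reference, here is a minimal sketch of how the serialised points can be replayed through the ONNX network with numpy and onnxruntime to confirm they are genuine counterexamples. The file names, hyper-rectangle bounds, and label below are placeholders rather than the actual values inside pgd_ce.zip, and the network is assumed to take a batched input:

```python
import numpy as np
import onnxruntime as ort

def check_points(csv_path: str, onnx_path: str,
                 lo: np.ndarray, hi: np.ndarray, true_label: int) -> None:
    """For each PGD point, report whether it lies inside the
    hyper-rectangle [lo, hi] and whether the network misclassifies it."""
    # One input point per CSV line, as in the serialised numpy dump.
    points = np.loadtxt(csv_path, delimiter=",", ndmin=2).astype(np.float32)
    sess = ort.InferenceSession(onnx_path)
    input_name = sess.get_inputs()[0].name
    for i, x in enumerate(points):
        in_box = bool(np.all(x >= lo) and np.all(x <= hi))
        # Assumes a leading batch dimension; adjust the reshape if the
        # actual ONNX graph expects a different input shape.
        logits = np.asarray(sess.run(None, {input_name: x[None, :]})[0]).ravel()
        predicted = int(np.argmax(logits))
        print(f"point {i}: in_box={in_box}, predicted={predicted}, "
              f"misclassified={predicted != true_label}")
```

Any point that is both inside the box and misclassified is a concrete witness that the corresponding query should be SAT.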
Here is an example query that I ran and its output; Marabou concludes UNSAT during preprocessing, so no proof is generated:

Output:

Let me know if you need more info; the full code for the PGD attack can be found here.
Hi there, I suspect this is because the default numerical error tolerance is too loose for this network. If you built Marabou from source, could you try setting some of the numerical error tolerance values to smaller values and recompiling?
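One quick way to test that hypothesis from the Python side is to look at how large the margins of the PGD violations are: if the strongest wrong class beats the correct one by only a hair, a loose comparison tolerance in the preprocessor could plausibly absorb the violation. A small sketch, assuming `logits` stacks the network outputs for all PGD points, one row per point:

```python
import numpy as np

def violation_margins(logits: np.ndarray, true_label: int) -> np.ndarray:
    """Per-point margin by which the best wrong class beats the true class.

    Positive means misclassified; positive-but-tiny values (say, below
    ~1e-6) would be consistent with the violations falling inside the
    solver's default numerical tolerances."""
    others = np.delete(logits, true_label, axis=1)  # drop the true-class column
    return others.max(axis=1) - logits[:, true_label]
```

If the smallest margin is comfortably above any plausible tolerance, numerical slack alone would not explain the UNSAT.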