
Satisfying assignments for UNSAT query (with PGD attack) #820

Open
rdesmartin opened this issue Jul 3, 2024 · 1 comment

Hi,

I am working on proof checking for Marabou proofs, and I wanted to raise a case where Marabou concludes that a query is UNSAT (during preprocessing), but running a PGD adversarial attack seems to generate satisfying assignments.

The results come from experiments by my colleague Marco Casadio running Marabou as a Vehicle back-end; each Marabou property corresponds to classification robustness within a hyper-rectangle around a correctly classified input from the training data. Here are the network (in ONNX format), the Vehicle-generated Marabou properties, and the satisfying assignments (serialised to CSV from numpy; each line is an input point): pgd_ce.zip.

Here is an example query that I ran, together with the output; Marabou concludes UNSAT during preprocessing, so no proof is generated:

$ ./Marabou ./adv_mid_bs32_0.999_e8_adv.onnx ./property\!177-query1.txt --prove-unsat     

Output:

Proof production is not yet supported with DEEPSOI search, turning search off.
Network: ./adv_mid_bs32_0.999_e8_adv.onnx
Property: ./property!177-query1.txt

Engine::processInputQuery: Input query (before preprocessing): 387 equations, 792 variables
unsat

Let me know if you need more info; the full code for the PGD attack can be found here.

wu-haoze (Collaborator) commented Jul 4, 2024

Hi there, I suspect this is because the default numerical error tolerance is too loose for this network. If you built Marabou from source, could you try setting some of the numerical error tolerances to smaller values and recompiling?

Set the following to 1e-9:

const double GlobalConfiguration::PREPROCESSOR_ALMOST_FIXED_THRESHOLD = 0.00001;

Set the following to 1e-7:

const double GlobalConfiguration::CONSTRAINT_COMPARISON_TOLERANCE = 0.00001;
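For concreteness, here is a sketch of what the two edited lines would look like; the constants are declared in GlobalConfiguration (the exact file path below is an assumption and may differ between versions):

// In GlobalConfiguration.cpp (path assumed: src/configuration/GlobalConfiguration.cpp)
// Tightened numerical tolerances suggested above
const double GlobalConfiguration::PREPROCESSOR_ALMOST_FIXED_THRESHOLD = 1e-9;
const double GlobalConfiguration::CONSTRAINT_COMPARISON_TOLERANCE = 1e-7;

After editing, rebuild Marabou the same way you built it originally so the new tolerance values take effect, then re-run the query.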
