This is the source code of the ALVIE tool, accompanying the paper "Bridging the Gap: Automated Analysis of Sancus" by M. Busi, R. Focardi, and F. Luccio, to appear at IEEE CSF 2024.
This project was inspired by Bognar et al., "Mind the Gap: Studying the Insecurity of Provably Secure Embedded Trusted Execution Architectures" (IEEE S&P'22); it tries to reproduce their results automatically and to discover new attacks/anomalies in the Sancus processor.
The project is structured as follows:
- `alvie` contains the source code of the project, in particular:
  - `alvie/code` contains the OCaml sources
- `spec-lib` includes the specifications we used for learning:
  - `example` contains the specs for the example of Sec. 3
- `results` includes the models generated by the learning, following the naming from Bognar et al. and our paper (PDF renderings are in the subfolder `pdf`)
- `counterexamples` includes the witness graphs (PDF renderings are in the subfolder `pdf`)
- `Dockerfile` is the Docker file for the project
- `*.sh` files are useful to reproduce our results (see below)
- To improve performance, for each attack in Table 1 we provide a specification that uses a strict subset of the capabilities of the advanced attacker A+ (Figure 9).
- The enclave specification is kept equal to that in the paper, except for the secret comparison instruction, which is performed between `?` (`s` in the paper) and `r4` (`#0` in the paper). This is equivalent for our purposes.
Simply `cd` to the folder where this README is, then run the following commands (requires Docker):
$ docker pull matteobusi/alvie_csf24
$ docker run --rm -it matteobusi/alvie_csf24
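On ARM machines you may need to request the amd64 variant of the image explicitly (see the note on `--platform` below), e.g.:
$ docker pull --platform linux/amd64 matteobusi/alvie_csf24
$ docker run --platform linux/amd64 --rm -it matteobusi/alvie_csf24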
Once everything is ready, a prompt should be waiting for your commands.
If you wish to build the Docker image yourself, you can run the following command:
$ docker build --platform linux/amd64 -t alvie .
The argument `--platform linux/amd64` is needed to build the image on ARM machines, since the mCRL2 model checker is not available for ARM yet.
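Assuming the build succeeds, you can then start a container from the locally built image (the `alvie` tag set by `-t` above) in place of the prebuilt one:
$ docker run --rm -it alvie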
Once the container is ready, you can run the following command to check that the attacks reported in the paper are synthesized correctly and actually break the security of the relevant Sancus versions (the full test requires a few minutes):
$ dune exec tt_attack
There are a few options to reproduce the experiments we did.
The first option is to reproduce the example of Sec. 3, which takes a few seconds on our server. To do so, `cd` to the folder where this `README.md` is.
Afterwards, remove existing results:
$ rm -Rf results/example counterexamples/example
and then run
$ ./learn_example.sh
to learn the models corresponding to the example in the paper (you can find the models in `results/example`).
Once done, run
$ ./check_example.sh
to produce the witness graph for the example.
You can find it in the `counterexamples/example` directory.
Another option is to run a single experiment. This task is longer than the one above and takes between one hour and a couple of days on our server, depending on the experiment.
Assume that you want to run experiments for `attack-name`, which was fixed in `patch-commit` in the `sancus-core-gap` repo by Bognar et al. (see the table below for names and commits). Simply `cd` to the folder where this `README` is and then run
$ rm -Rf results/*attack-name*.dot counterexamples/*attack-name*.dot
to delete the existing models and witness graphs, then launch the learning with
$ ./learn_one.sh patch-commit attack-name
Once the learning is done (results are in the `results` folder), just run
$ ./check_one.sh attack-name
to produce the witness graph for `attack-name`. You can find it in the `counterexamples` directory.
| Version | Commit |
| --- | --- |
| Original commit | `ef753b6` |
| Patch B2 | `3170d5d` |
| Patch B3 | `6475709` |
| Patch B6 | `d54f031` |
| Patch B4 | `3636536` |
| Patch B1 | `e8cf011` |
| Patch B7(1) | `264f135` |
| Final commit | `bf89c0b` |
| Patch B8 | no fix |
| Patch B9 | no fix |
In our experience, one of the fastest single experiments to run is B6, i.e.:
$ rm -Rf results/*-b6-*.dot counterexamples/*-b6-*.dot
$ ./learn_one.sh d54f031 b6
$ ./check_one.sh b6
If you want to test one of the new attacks (i.e., B8 or B9), use the following command for learning (the command for the checking phase is identical!):
$ ./learn_one_nospecial.sh attack-name
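For instance, assuming B8 follows the same file-naming pattern as B6 above, the full sequence for B8 would be:
$ rm -Rf results/*-b8-*.dot counterexamples/*-b8-*.dot
$ ./learn_one_nospecial.sh b8
$ ./check_one.sh b8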
If you have many cores, running all the experiments takes roughly as long as running a single one.
Simply `cd` to the folder where this `README` is, delete the existing models and witness graphs using:
$ rm -Rf results/*.dot counterexamples/*.dot
and then run:
$ ./learn_all.sh
to learn all the models in the paper.
Once the learning is done (results are in the `results` folder), just run
$ ./check_all.sh
to produce the witness graphs.
You can find them in the `counterexamples` directory.
All the above scripts will produce files in the Graphviz format.
To view them, you can either use a visualizer (e.g., `KGraphViewer` or `xdot`) or export them directly to PDF using:
$ dot -Tpdf file.dot -o file.pdf
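To convert all the generated graphs in one go, a simple shell loop works (paths follow the repository layout described above):
$ for f in results/*.dot counterexamples/*.dot; do dot -Tpdf "$f" -o "${f%.dot}.pdf"; done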