
Add a test about a constant time equal function #15

Open
wants to merge 1 commit into master
Conversation

dinosaure

Hi,

I tried to use your framework to assert that a function is constant-time. The goal of this PR is not to add something new to this repository, but to get some explanation; maybe I misunderstand something. Here we have a "constant-time" memcmp function (available here), and I tried to run your framework on it.
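For reference, the function follows the usual branch-free pattern, roughly like the sketch below (the real code is in the linked repository; this is only an illustration of the idea):

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative sketch of a branch-free comparison: every byte is visited
 * and differences are OR-accumulated, so there is no early exit. */
int ct_memcmp(const void *a, const void *b, size_t len)
{
    const volatile uint8_t *pa = (const volatile uint8_t *)a;
    const volatile uint8_t *pb = (const volatile uint8_t *)b;
    uint8_t diff = 0;

    for (size_t i = 0; i < len; i++)
        diff |= pa[i] ^ pb[i];

    /* 0 when the buffers are equal, 1 otherwise, regardless of where
     * (or whether) they differ. */
    return (diff != 0);
}
```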

However, with dudect_simple_O0, I reached the point where the program tells me:

Probably not constant time
The max_t value keeps increasing, even though this function should be constant-time.

The situation differs when we increase the number of samples with MEASUREMENTS_PER_CHUNK. In this situation, even memcmp never reaches the "Probably not constant time" verdict. So I cannot really reproduce locally what you describe in your paper; I am currently testing this on a bare-metal server with an Intel Xeon CPU D-1531 @ 2.20GHz.

Maybe I am doing something wrong, or I misinterpret the results 😕. Thanks!

@oreparaz (Owner) commented May 6, 2021

Hi @dinosaure,

it's been aeons since I last looked at these kinds of issues. I suggest several paths to debug this kind of problem:

  • Open the binary with objdump / IDA and try to make sense of it. Check carefully whether the compiler / linker is rearranging the order of execution of different functions: an optimizing compiler can rearrange the execution of different blocks without affecting the input/output behavior, but with very significant changes to the measured execution time. In particular, I'd suggest staring at the disassembly and making sure the chunk between RDTSC instructions corresponds to the function you want to measure. Maybe we're not measuring what we want to measure 🙂.
  • Try to progressively simplify the function that should be constant time but is reported not to be; measure every step of the simplification and see if you get different results. The simplified function may no longer compute the correct value, but that doesn't matter for this purpose. Cross-check this with the disassembler dump. Similarly, add some blatantly non-constant-time chunks to a function that is expected to leak but where you can't detect the leakage.
  • Check that there is no compile-time / link-time optimization magic messing up the measurements. In hindsight, distributing dudect as a single header file doesn't help here. Separating the function under test from the measurement code into a different translation unit will probably make it harder for the compiler to optimize across that boundary (make sure to disable LTO); a minimal sketch of this split follows after this list.
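Something along these lines, as a rough sketch (the file and symbol names are made up for illustration, and this assumes a GCC/Clang-style toolchain):

```c
/* dut.h -- shared declaration between the two translation units.
 * Build the function under test and the measurement harness separately,
 * without LTO, for example:
 *   cc -O0 -c dut.c        (contains the ct_memcmp definition)
 *   cc -O0 -c measure.c    (includes dudect.h and calls ct_memcmp)
 *   cc dut.o measure.o -o measure
 * The measurement unit then only sees this prototype, so the compiler
 * cannot inline the function or reorder it around the timing code. */
#ifndef DUT_H
#define DUT_H

#include <stddef.h>

int ct_memcmp(const void *a, const void *b, size_t len);

#endif /* DUT_H */
```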

> The situation differs when we increase the number of samples with MEASUREMENTS_PER_CHUNK.

My gut feeling about the different behavior when varying MEASUREMENTS_PER_CHUNK is that the compiler emits different code depending on whether that constant is small or big (unrolling / ...). Maybe dump the binary and check if there are significant differences?

> In this situation, even memcmp never reaches the "Probably not constant time" verdict.

I'd suggest exaggerating the intended leakage: can you make the two classes more dissimilar? For example, increase the buffer length that memcmp checks, so that when the inputs to memcmp are two identical buffers, it takes a long time to compare them; a rough sketch follows below.
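Something like this, as a sketch (the helper below is hypothetical and not dudect's actual prepare_inputs hook; adapt it to whatever input-preparation code you already have):

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define BUF_LEN 4096   /* long buffers exaggerate an early-exit leak */

/* Hypothetical helper: prepare one measurement's pair of buffers so the
 * two classes are as dissimilar as possible.
 * Class 0: a and b identical       -> a short-circuiting memcmp scans it all.
 * Class 1: a and b differ at byte 0 -> it can return almost immediately. */
static void prepare_pair(uint8_t cls, uint8_t *a, uint8_t *b)
{
    for (size_t j = 0; j < BUF_LEN; j++)
        a[j] = (uint8_t)rand();
    memcpy(b, a, BUF_LEN);
    if (cls == 1)
        b[0] ^= 0xffu;   /* guaranteed mismatch right at the start */
}
```

If your implementation is really constant-time, dudect should still report no evidence of leakage even with this extreme split, while a plain libc memcmp should get flagged quickly.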

Hope you can have some quality time with your compiler, CPU and disassembler. Good luck! 🍀
