
[capture] CW310 / Husky power measurements, level toggles during capture #186

Open
johannheyszl opened this issue Oct 13, 2023 · 5 comments
Labels: bug (Something isn't working), Priority:P2

johannheyszl (Collaborator) commented Oct 13, 2023

When capturing traces on a CW310 / Husky setup, we get traces on two different levels, seemingly at random. Tested using: `./capture.py --cfg-file capture_aes_cw310.yaml capture aes-random --num-traces 100 --plot-traces 100`. The trace distributions fall on these two levels:
[image: trace distributions on two distinct levels]

cc @vogelpi @vrozic @nasahlpa

@johannheyszl johannheyszl added the bug Something isn't working label Oct 13, 2023
@johannheyszl johannheyszl self-assigned this Oct 13, 2023
jpcrypt (Contributor) commented Oct 13, 2023

If you're using an externally generated sampling clock (i.e. from the CW310, not a Husky-generated clock), you could be running into this problem (which we've since fixed): newaetech/chipwhisperer#448

johannheyszl (Collaborator, Author)

Thanks. The description there says, "Once the clocks are set, their relative phase stays constant, but any changes to scope.clock can affect the phase." However, we see toggles between traces within a long capture set where scope.clock.clkgen_src = 'extclk' is set only once at the beginning. Does the problem still apply?
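
For reference, the clock configuration happens once before the capture loop, roughly like this (a minimal sketch of ChipWhisperer Python API usage; the adc_mul value, trace count, and trigger step are illustrative placeholders, not our exact capture.py code):

```python
import chipwhisperer as cw

scope = cw.scope()

# One-time clock setup, done once before the capture loop;
# scope.clock is never touched again during the run.
scope.clock.clkgen_src = 'extclk'  # sample on the externally supplied (CW310) clock
scope.clock.adc_mul = 2            # ADC samples per target clock cycle (example value)

traces = []
for _ in range(100):
    scope.arm()
    # ... trigger one AES encryption on the target here ...
    if scope.capture():            # True indicates a timeout
        continue
    traces.append(scope.get_last_trace())
```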

jpcrypt (Contributor) commented Oct 13, 2023

No, it wouldn't. I had a closer look at your capture.py script, and the only way I can see for the issue I highlighted to arise is across multiple capture.py runs (not within the same run). Sorry for the misdirect!

johannheyszl (Collaborator, Author)

The issue is not present when using a Waverunner scope (same node on the CW310, 1 GS/s):
[image: traces captured with the Waverunner, single level]

johannheyszl (Collaborator, Author) commented Oct 18, 2023

I found that changing the binary running on OT can make this issue go away, although there is no reasonable explanation for it. In this commit I added a function that is never called in the test, yet it makes the issue disappear (both binaries were freshly compiled for comparison in the same environment). The issue is reproducible on different setups using the same bitstreams and binaries.

We will not investigate this further since it is not expected to be an issue post-silicon.
Inspecting captured traces is recommended to detect the issue.
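
One way to automate that inspection is to check whether the per-trace means cluster on two separated levels, e.g. (a sketch, assuming the captured traces are available as a 2-D NumPy array; detect_level_split and the rel_gap threshold are hypothetical names/values, not part of the capture tooling):

```python
import numpy as np

def detect_level_split(traces, rel_gap=0.5):
    """Flag a capture whose per-trace means cluster on two distinct levels.

    traces:  2-D array of shape (num_traces, num_samples).
    rel_gap: hypothetical tuning threshold; a gap between adjacent sorted
             means larger than rel_gap * std(means) is treated as a split.
    """
    means = np.sort(traces.mean(axis=1))
    if len(means) < 2:
        return False
    # Two clusters show up as one dominant gap in the sorted means.
    return np.diff(means).max() > rel_gap * means.std()

# Example: warn right after capture if the two-level behavior is present.
if detect_level_split(np.asarray(traces)):
    print("Warning: traces split across two levels; inspect the capture.")
```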
