Given the .csv files under the data/ directory, do the following in a cron job around 2am:

- wait for the file to have had no modifications (mtime) for at least 10 minutes
- kill the job which has that file open (the Python script we have): make that Python script receive a HUP signal and exit gracefully
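The wait-and-shutdown part could be sketched like this. It assumes the recording script writes its PID to a pidfile; that name and the poll interval are assumptions, and the PID could equally be found via `fuser`/`lsof` on the open .csv file:

```python
import os
import signal
import time
from pathlib import Path


def wait_until_quiet(path: Path, quiet: float = 600.0, poll: float = 30.0) -> None:
    """Block until `path` has gone `quiet` seconds without an mtime change."""
    while True:
        age = time.time() - path.stat().st_mtime
        if age >= quiet:
            return
        time.sleep(min(poll, quiet - age))


def stop_recorder(pidfile: Path) -> None:
    """Send SIGHUP to the recording script so it can exit gracefully.

    Assumes the script stores its PID in `pidfile` (hypothetical layout).
    """
    pid = int(pidfile.read_text().strip())
    os.kill(pid, signal.SIGHUP)
```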
- do basic analytics on the data:
  - number of presses and releases of each pin (D1-D5 + -ACK)
  - number of reconnects and floods. Detect noise blocks: sequences of more than 5 rapid events (less than 1 sec between consecutive events) spanning more than a single pin. Simple implementation: for an event with time t, look at the next event and so on until t+1, count how many events you get, and then analyze. Then classify those noise blocks into two types:
    - reconnect: if the duration is less than a second -- it was a "disconnect" event. Count these separately and display only the number
    - flood: if the duration is over a second. Display up to the 10 longest ones as beginning/ending times
  - detection should operate on "pin", not on "action", so we could test on old files (from when we believe the recording was trouble-free)
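The detection and classification above could be sketched as follows. Representing events as time-sorted `(time, pin)` tuples is an assumption; the real CSV layout may differ:

```python
from dataclasses import dataclass


@dataclass
class NoiseBlock:
    start: float
    end: float
    n_events: int

    @property
    def duration(self) -> float:
        return self.end - self.start


def find_noise_blocks(events, max_gap=1.0, min_events=6):
    """Chain consecutive events whose spacing is below `max_gap` seconds.

    A chain of `min_events` or more (i.e. over 5) touching more than one
    pin is a noise block.  `events` is a time-sorted list of (time, pin).
    """
    blocks = []
    i = 0
    while i < len(events):
        j = i
        while j + 1 < len(events) and events[j + 1][0] - events[j][0] < max_gap:
            j += 1
        chain = events[i:j + 1]
        if len(chain) >= min_events and len({pin for _, pin in chain}) > 1:
            blocks.append(NoiseBlock(chain[0][0], chain[-1][0], len(chain)))
        i = j + 1
    return blocks


def classify_blocks(blocks):
    """Return (number of reconnects, up to the 10 longest floods)."""
    reconnects = sum(1 for b in blocks if b.duration < 1.0)
    floods = sorted((b for b in blocks if b.duration >= 1.0),
                    key=lambda b: b.duration, reverse=True)[:10]
    return reconnects, floods
```

The greedy chaining matches the "look at the next event and so on" outline: a gap of a second or more always terminates the current chain.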
Looking at the data ATM. I think it would also be valuable to create, right away, a function to extract the actual events of interest -- e.g. trigger pulses and responses. Having those "noise blocks" around is an annoying obstacle, and likely we would need to:
- first identify those noise blocks and mark the affected recordings as such
- go through the events from the beginning and, for every event which switched a pin to state 1 (ignoring events inside a "noise block"), figure out the matching 0 (which might itself be in a "noise block"):
  - if a matching "0" is found -- compute/record "response_time"
  - if there is no matching "0" -- e.g. the next event for that pin is again a "1" -- record "response_time": null
- in the summary outlined above, do not just show the number of 0s and 1s -- show the number of responses figured out, the mean/max of the non-null values, and the number of null values.
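The rise/fall matching might look roughly like this. Events as time-sorted `(time, pin, state)` tuples and noise blocks as `(start, end)` intervals are assumed representations:

```python
def in_noise(t, noise_blocks):
    """True if time t falls inside any (start, end) noise interval."""
    return any(start <= t <= end for start, end in noise_blocks)


def extract_responses(events, noise_blocks=()):
    """Match each switch to 1 (outside noise blocks) with its 0.

    The matching 0 may fall inside a noise block.  Returns a list of
    (rise_time, pin, response_time) tuples; response_time is None when
    the next event for the pin was another 1, or there was no event.
    """
    responses = []
    pending = {}  # pin -> time of last unmatched rise
    for t, pin, state in events:
        if state == 1:
            if pin in pending:                 # a 1 followed by another 1
                responses.append((pending.pop(pin), pin, None))
            if not in_noise(t, noise_blocks):  # rises inside noise: ignore
                pending[pin] = t
        elif pin in pending:                   # state == 0 matching a rise
            rise = pending.pop(pin)
            responses.append((rise, pin, t - rise))
    for pin, rise in pending.items():          # rises with no later event
        responses.append((rise, pin, None))
    return responses


def summarize_responses(responses):
    """Counts plus mean/max of the non-null response times."""
    times = [rt for _, _, rt in responses if rt is not None]
    return {
        "n_responses": len(times),
        "n_null": sum(1 for _, _, rt in responses if rt is None),
        "mean_response": sum(times) / len(times) if times else None,
        "max_response": max(times) if times else None,
    }
```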
The same logic should also be exposed as a helper utility which would be given input .csv file(s) and, optionally, a range of datetimes for which to extract those events. This way we would be able to extract events for a desired scanning session. (Not yet sure if we should do some treatment of trigger pulses there to ensure alignment etc.)
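Such a helper might be shaped like this; the column names `time`, `pin`, and `state` and the ISO-8601 timestamps are guesses at the CSV layout, not the actual format:

```python
import csv
from datetime import datetime


def load_events(paths, start=None, end=None):
    """Read (time, pin, state) events from one or more CSV files.

    Optionally keep only events within [start, end] (datetime bounds,
    inclusive), e.g. to isolate a single scanning session.
    """
    events = []
    for path in paths:
        with open(path, newline="") as fh:
            for row in csv.DictReader(fh):
                t = datetime.fromisoformat(row["time"])
                if start is not None and t < start:
                    continue
                if end is not None and t > end:
                    continue
                events.append((t, row["pin"], int(row["state"])))
    events.sort(key=lambda e: e[0])
    return events
```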