# Acquiring data (Kinect v2)

Start `kinect2-nidaq`.
**Important note:** ensure that your depth cameras are parallel to the bucket floor and that the buckets sit flat in the cage shells. Angles introduced by the environment can increase the potential for extraction errors and anomalies down the line.
First, make sure you can stream frames correctly: tick the `PREVIEW MODE` box and click `START SESSION`. You should see RGB and depth frames on the `PREVIEW` tab.
Note: on the bottom right of the `PREVIEW` tab, there are two sliders setting the minimum and maximum distances to capture. Make sure these distances match your real-life setup. For example, if the camera is ~70 cm (27 in) from the bucket floor, then `Max distance (mm)` should be ~705 and `Min distance (mm)` should be ~450, the latter being roughly the distance from the camera to the top of the bucket.
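As a sanity check, the slider values can be derived from the physical geometry. A minimal sketch (the function name and the ~25 cm bucket height are assumptions for illustration, not values from the official setup):

```python
def slider_range_mm(camera_to_floor_cm, bucket_height_cm):
    """Return (min_mm, max_mm) for the depth capture sliders.

    Max distance: camera to bucket floor.
    Min distance: camera to the top of the bucket, i.e. the floor
    distance minus the bucket height.
    """
    max_mm = camera_to_floor_cm * 10
    min_mm = (camera_to_floor_cm - bucket_height_cm) * 10
    return min_mm, max_mm

# e.g. a camera ~70 cm above the floor of a ~25 cm-tall bucket
# gives roughly the (450, 700) mm range described above
```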
Then, you have a couple of options for acquiring data:
- Record frames for a pre-specified amount of time
- Record until you click the stop button
For both methods, it is highly recommended to record the Subject Name and Session Name in the appropriate fields. Common values here might be Subject Name `MOUSEID7` and Session Name `Odor exposure`. Finally, choose the directory you want to store your data in. If you leave `COMPRESS DATA` unticked, a new folder with a timestamp will be created containing the following files:
- `depth_ts.txt` --> timestamps from the camera (column 1) and timestamps from the National Instruments board (column 2; zeros if the board is off or not used)
- `depth.dat` --> little-endian `uint16` depth video
- `metadata.json` --> metadata from the session
If you leave the `COMPRESS DATA` box ticked, these files will instead be stored in a gzipped tarball with a timestamp in the same directory.
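Since `depth.dat` is raw little-endian `uint16` pixels with no header, it can be loaded in Python with a memory map. A minimal sketch, assuming the standard Kinect v2 depth resolution of 512x424 (the function name is hypothetical; confirm the actual frame size against `metadata.json`):

```python
import numpy as np

def load_depth_frames(filename, width=512, height=424):
    """Memory-map depth.dat as an array of uint16 depth frames.

    The file has no header, so the frame count is simply the total
    number of pixels divided by the pixels per frame.
    """
    pixels = np.memmap(filename, dtype='<u2', mode='r')
    nframes = pixels.size // (width * height)
    return pixels[:nframes * width * height].reshape(nframes, height, width)
```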
To record for 45 minutes, untick the `PREVIEW MODE` box and enter `45` in the `RECORDING TIME (MINUTES)` field. To record depth video, make sure `DEPTH STREAM` is ticked; to record RGB video, tick `COLOR STREAM` (color video is not used for any downstream analysis in MoSeq2). Finally, click `START SESSION` and you should see a countdown bar in the bottom left of the window.
To record until you click `STOP SESSION`, make sure the `RECORDING TIME (MINUTES)` field is empty, then tick the `RECORD UNTIL USER CLICKS STOP` box. Finally, click `START SESSION` and frames will be streamed to disk until you click `STOP SESSION`.
If you record any data using the National Instruments board, you can use a Python function like the following to read it in:
```python
import numpy as np

def load_nidaq_data(filename, nch=2, dtype='<f8'):
    """Load interleaved NIDAQ data into a dict of 1-D arrays.

    nch counts every channel in the file, including the trailing
    timestamp channel (so nch=2 means one data channel plus
    timestamps).
    """
    with open(filename, "rb") as file_read:
        dat = np.fromfile(file_read, dtype)
    nidaq_dict = {}
    # channels 0..nch-2 are the recorded data channels
    for i in range(nch - 1):
        nidaq_dict['ch{:02d}'.format(i)] = dat[i::nch]
    # the last slot in each sample group is the NIDAQ timestamp
    nidaq_dict['tstep'] = dat[nch - 1::nch]
    return nidaq_dict
```
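The interleaved layout can be verified with a synthetic file. This sketch writes one data channel plus timestamps and de-interleaves them the same way as the function above (the filename `example_nidaq.dat` is hypothetical):

```python
import numpy as np

nch = 2  # one data channel + the timestamp channel
# two samples on ch00, each followed by its timestamp,
# written as little-endian 64-bit floats
np.array([0.5, 1.0, 0.7, 2.0], dtype='<f8').tofile('example_nidaq.dat')

dat = np.fromfile('example_nidaq.dat', dtype='<f8')
ch00 = dat[0::nch]         # data channel samples: [0.5, 0.7]
tstep = dat[nch - 1::nch]  # NIDAQ timestamps: [1.0, 2.0]
```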
Here `nch` is the total number of channels in the file, including the timestamp channel. The file format is 64-bit little-endian floating point (hence the default `dtype='<f8'`), and each channel's sample is written in series followed by the timestamp from the National Instruments board, e.g. `ch1--time1 | ch2--time1 | ch3--time1 | timestamp1 | ch1--time2 | ch2--time2 | ch3--time2 | timestamp2`. The timestamps come from the National Instruments board clock and are also copied to the second column of `depth_ts.txt` (the first column contains timestamps from the camera; the second contains the nearest NIDAQ timestamp). Use the second column to align data from the National Instruments stream to the Kinect stream.
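One way to do that alignment is to find, for each depth frame's NIDAQ timestamp (column 2 of `depth_ts.txt`), the nearest sample in the NIDAQ stream's `tstep` array. A sketch using a binary search, not the official pipeline (the function name is hypothetical):

```python
import numpy as np

def align_nidaq_to_depth(nidaq_ts, depth_nidaq_ts):
    """Return, for each depth-frame NIDAQ timestamp, the index of
    the closest sample in the sorted NIDAQ timestamp array."""
    idx = np.searchsorted(nidaq_ts, depth_nidaq_ts)
    idx = np.clip(idx, 1, len(nidaq_ts) - 1)
    # step back one index wherever the left neighbor is closer
    left_closer = (depth_nidaq_ts - nidaq_ts[idx - 1]) < (nidaq_ts[idx] - depth_nidaq_ts)
    return idx - left_closer.astype(int)
```

The returned indices can then be used to pull the NIDAQ channel values that were sampled nearest to each Kinect frame.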