
Add monocular settings doc for the app #717

Merged · 15 commits · Nov 12, 2024
16 changes: 9 additions & 7 deletions README.md
@@ -1,18 +1,20 @@
# Recommended Defaults

- When importing images, use markdown-style imports

```markdown
![alt text](/path/to/image.png)
```

- When trying to create a gallery view, consider using the `PhotoGrid` component
- Avoid using custom HTML and CSS as much as possible.


# Capitalization

- We use title case for all titles, headings, menu sections, etc., as defined by the Associated Press Stylebook. There is a converter available [here](https://titlecaseconverter.com/) to double-check correctness.
- Fixed product and feature names are capitalized, e.g.:
  - Reference Image Mapper
  - Neon Companion
  - Pupil Invisible Glasses
  - Video Renderer
  - Heatmap Visualization
1 change: 1 addition & 0 deletions alpha-lab/imu-transformations/pl-imu-transformations
Submodule pl-imu-transformations added at 542043
8 changes: 4 additions & 4 deletions alpha-lab/neon-with-capture/index.md
@@ -1,6 +1,6 @@
# Use Neon with Pupil Capture

It's possible to make recordings using Neon in Pupil Capture under macOS and Linux. You can use the Neon module as if it were a Pupil Core headset by plugging it into your computer and running Pupil Capture from [source](https://github.com/pupil-labs/pupil/tree/master).

Note that in this case gaze estimation will **NOT** be done using **NeonNet**, but using Pupil Core's gaze estimation pipeline. This means you will have to perform a calibration, and the signal will be less robust than NeonNet's.

@@ -11,10 +11,10 @@ To get started with Neon in Capture, follow these steps:
1. Connect **Neon** to your computer.
2. Open **Pupil Capture**.
3. Under Video Source, click **"Activate Device"** and choose **Neon** to activate the scene and eye cameras.
4. In the eye camera's window, you can adjust the absolute exposure time and gain. Keep in mind that the Neon eye cameras do not have individual controls, so changes will affect both cameras.
   Additionally, you may want to switch to ROI mode in the general settings to define a specific area for pupil detection. For more guidance, check out this [video](https://drive.google.com/file/d/1tr1KQ7QFmFUZQjN9aYtSzpMcaybRnuqi/view?usp=sharing).
5. Select either `Neon 3D` or `2D` as the **Gaze Mapping** option in the **Calibration** tab.

::: warning
Recordings made with the Neon Companion app (rather than Pupil Capture) are **NOT** compatible with Pupil Player.
:::
7 changes: 4 additions & 3 deletions invisible/data-collection/offset-correction/index.md
@@ -1,12 +1,13 @@
# Using Offset Correction to Improve Gaze Accuracy
For some wearers, you may find a constant offset in their gaze estimates. This can happen due to parallax error, especially in setups where wearers gaze at something that is less than 1 meter away from them.

To compensate for those offsets, you can use the Offset Correction feature in the Pupil Invisible Companion app. See the video below to learn how it works! The video demonstrates the feature within the Neon Companion app, but the approach is very similar.

<Youtube src="7weK8UPLOzo" />

::: warning
Note that the amount of offset introduced by parallax error is highly dependent on the distance between the wearer and the object they are looking at. The closer the object, the larger the offset.

The offset correction is only valid for the specific distance at which it was recorded. If the wearer changes their distance to the object, you will need to record a new offset correction.
:::
4 changes: 4 additions & 0 deletions neon/.vitepress/config.mts
@@ -106,6 +106,10 @@ let theme_config_additions = {
text: "Measuring the IED",
link: "/data-collection/measuring-ied/",
},
{
text: "Gaze Mode",
link: "/data-collection/gaze-mode/",
},
{
text: "Scene Camera Exposure",
link: "/data-collection/scene-camera-exposure/",
1 change: 1 addition & 0 deletions neon/data-collection/data-format/index.md
@@ -34,6 +34,7 @@ This file contains meta-information on the recording.
| **start_time** | Timestamp of when the recording was started. Given as UTC timestamp in nanoseconds. |
| **template_data** | Data regarding the selected template for the recording as well as the response values. |
| **wearer_id** | Unique identifier of the wearer selected for this recording. |
| **gaze_mode** | Indicates whether the binocular or a monocular (right/left) pipeline was used to infer gaze. |
| **wearer_name** | Name of the wearer selected for this recording. |
| **workspace_id** | The ID of the Pupil Cloud workspace this recording has been assigned to. |
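
As a minimal sketch of checking this field programmatically (assuming the recording folder contains an `info.json` file with the keys above; treat the path and keys as assumptions to verify against your own export):

```python
import json
from pathlib import Path

# Hypothetical path to a downloaded/exported Neon recording
recording_dir = Path("path/to/recording")

# info.json holds the meta-information described in the table above
with open(recording_dir / "info.json") as f:
    info = json.load(f)

# e.g. prints which gaze pipeline was used for this recording
print(info.get("gaze_mode"))
```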

2 changes: 1 addition & 1 deletion neon/data-collection/first-recording/index.md
@@ -27,7 +27,7 @@ Next, install the Neon Companion app on your device:

</a>

![Neon Companion app](/ne-companion_app_logo-bg.png)

</div>

25 changes: 25 additions & 0 deletions neon/data-collection/gaze-mode/index.md
@@ -0,0 +1,25 @@
# Gaze Mode
You can configure Neon to generate binocular or monocular gaze data by changing the `Gaze Mode` in the Neon Companion app
settings.

In `Binocular` mode, gaze data is generated using images from both the left and right eyes. This is the default setting
and is recommended for most users.

Some specialist applications, like ophthalmic testing, require gaze data to be generated from just one eye. This can
be achieved by switching to a `Monocular` gaze mode. `Monocular Left` generates gaze data using only images of the left
eye, while `Monocular Right` uses only images of the right eye.

## Changing Gaze Modes
You can switch between gaze modes in the Neon Companion app settings.

:::info
After selecting a new gaze mode, be sure to unplug and re-plug the Neon device.
:::
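
If you want to sanity-check the signal after re-plugging, here is a minimal sketch using the real-time API (assuming the `pupil-labs-realtime-api` Python package is installed and the Companion device is reachable on the local network):

```python
from pupil_labs.realtime_api.simple import discover_one_device

# Find the Neon Companion device on the local network
device = discover_one_device()

# Blocks until a gaze sample arrives from the currently configured pipeline
gaze = device.receive_gaze_datum()
print(gaze.x, gaze.y, gaze.timestamp_unix_seconds)

device.close()
```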

## Considerations When Switching to Monocular Gaze

- If a monocular gaze mode is selected, no binocular gaze signal will be generated. This means all downstream data, including fixations and enrichment data, will be based on monocular gaze data.

- [Eye State](/data-collection/data-streams/#_3d-eye-states) and [Pupillometry](/data-collection/data-streams/#pupil-diameters) are unaffected by the gaze mode configuration and will always be generated using images from **both** eyes.

- If a monocular gaze mode is selected, Pupil Cloud will **not** re-process a recording to obtain a 200 Hz signal. Instead, Pupil Cloud will use the real-time signal, which may be lower than 200 Hz depending on which Companion device was used, and which gaze rate was selected in the Neon Companion app settings.
5 changes: 3 additions & 2 deletions neon/data-collection/index.md
@@ -1,10 +1,11 @@
# Data Collection with Neon

Data collection is a key step for any application of Neon! In this section you can learn everything about it, starting with how to make [your first recording](/data-collection/first-recording/)!

You can find an overview of what data is contained in Neon recordings including all the [data streams](/data-collection/data-streams/) from the various sensors, as well as any additional data like [events](/data-collection/events/), [wearers](/data-collection/wearers/), and [templates](/data-collection/templates/).

You can find introductions on how to use the Neon Companion app, e.g. on [Offset Correction](/data-collection/offset-correction/), and how-to guides on common tasks during data collection, e.g. [Time Synchronization](/data-collection/time-synchronization/).

Documentation on useful software and integrations for data collection is also available, see e.g. [Monitor App](/data-collection/monitor-app/) or [Lab Streaming Layer](/data-collection/lab-streaming-layer/).

Finally, you can find a list of [troubleshooting](/data-collection/troubleshooting/) tips and tricks for common issues during data collection.
16 changes: 8 additions & 8 deletions neon/data-collection/lab-streaming-layer/index.md
@@ -2,15 +2,15 @@

[Lab Streaming Layer](https://labstreaminglayer.org/) (LSL) is an open-source framework that connects, manages, and synchronizes data streams from multiple sources, such as EEG, GSR, and motion capture systems. Check out the [LSL documentation](https://labstreaminglayer.readthedocs.io/info/intro.html) for a full overview of supported devices.

The Neon Companion app has built-in support for LSL, streaming Neon’s real-time generated data over the LSL network. This allows you to easily synchronize Neon with other LSL-supported devices.

## **Usage**

LSL streaming can be initiated in the Companion app by enabling the "Stream over LSL" setting.

When enabled, data will be streamed over the LSL network and, subsequently, to any connected LSL inlet that is listening (such as the LSL LabRecorder app or another third-party system with inlet functionality). As with the [Real-Time API](https://docs.pupil-labs.com/neon/real-time-api/tutorials/), the Companion app does not need to be actively recording, but streaming LSL data while simultaneously making a recording is supported.

Note that you'll need to ensure the Neon Companion app is connected to the same network as the other devices streaming via LSL.
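
As a minimal sketch of receiving Neon's data from the LSL network in Python (assuming the `pylsl` package is installed and that the gaze outlet advertises the stream type `Gaze`; inspect the streams on your network to confirm actual names and types):

```python
from pylsl import StreamInlet, resolve_byprop

# Look for a gaze-type stream on the LSL network (stream type is an assumption)
streams = resolve_byprop("type", "Gaze", timeout=10.0)
if not streams:
    raise RuntimeError("No gaze stream found - check that all devices share a network")

inlet = StreamInlet(streams[0])
print(f"Connected to: {inlet.info().name()}")

while True:
    # Each sample is a list of channel values plus an LSL timestamp
    sample, timestamp = inlet.pull_sample(timeout=5.0)
    if sample is not None:
        print(timestamp, sample)
```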

## **LSL Outlets**

@@ -40,11 +40,11 @@ If your devices are on the same network but you have trouble connecting, it is l

- UDP broadcasts to port `16571` and/or
- UDP multicast to port `16571` at
  - `FF02:113D:6FDD:2C17:A643:FFE2:1BD1:3CD2`
  - `FF05:113D:6FDD:2C17:A643:FFE2:1BD1:3CD2`
  - `FF08:113D:6FDD:2C17:A643:FFE2:1BD1:3CD2`
  - `FF0E:113D:6FDD:2C17:A643:FFE2:1BD1:3CD2`
  - `224.0.0.1`, `224.0.0.183`, `239.255.172.215`
- TCP and UDP connections to the ports `16572`-`16604`

More troubleshooting tips can be found in the [Network Troubleshooting](https://labstreaminglayer.readthedocs.io/info/network-connectivity.html) page in LSL’s documentation.
13 changes: 7 additions & 6 deletions neon/data-collection/measuring-ied/index.md
@@ -1,15 +1,16 @@
# Measuring the Inter-Eye-Distance (IED)

The wearer's IED can be set in the Neon Companion app for applications requiring precise pupillometry or eye-state measurements. This does not require prior calibration of the device. However, the accuracy of Neon’s 3D eye-state and pupil-size measurements can be enhanced by correctly setting the IED for each wearer.

To add the wearer's IED, input the value into their ‘Wearer Profile’ in the Neon Companion app before starting a recording. The default IED is set to 63 mm, which is the average for adults.

Here's a simple way to measure IED:

1. Ask the person to look at a distant object (to avoid convergence)
2. Hold a ruler in front of their eyes and measure from the center of one pupil to the center of the other
3 changes: 2 additions & 1 deletion neon/data-collection/offset-correction/index.md
@@ -1,5 +1,6 @@
# Using Offset Correction to Improve Gaze Accuracy

For some wearers, you may find a constant offset in their gaze estimates. To compensate for those, you can use the Offset Correction feature in the Neon Companion app. See the video below to learn how it works!

<Youtube src="7weK8UPLOzo" />

17 changes: 10 additions & 7 deletions neon/data-collection/psychopy/index.md
@@ -8,11 +8,12 @@ We have created a dedicated plugin for PsychoPy that enables Neon to be used in
- [Coder](https://psychopy.org/coder/index.html) – Gives users the option to generate experiments or do other things programmatically, [using PsychoPy like any other Python package](https://psychopy.org/api/).

## Using PsychoPy with Neon

When using PsychoPy with Neon, you can save eyetracking data in PsychoPy's hdf5 format by enabling the "Save hdf5 file" option within the experiment settings. We also recommend recording in the Neon Companion app for the duration of the experiment for data redundancy. PsychoPy’s standard "Eyetracker Record" component can be used to start and stop recordings on the Companion Device accordingly. If desired, custom timestamped events can be triggered from PsychoPy and saved in the Neon recording (see the sketch after the list below).

- For experiments that only require pupillometry/eye state, make sure the "Compute Eye State" setting is enabled in the Neon Companion app. For experiments that do not require screen-based gaze coordinates, this is all that is required.

- To use Neon for screen-based work in PsychoPy, the screen needs to be robustly located within the scene camera’s field of view, and Neon’s gaze data subsequently transformed from scene camera-based coordinates to screen-based coordinates. The plugin for PsychoPy achieves this with the use of AprilTag Markers and the [real-time-screen-gaze](https://github.com/pupil-labs/real-time-screen-gaze) Python package (installed automatically with the plugin).
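
As a minimal sketch of starting a Companion recording and sending a custom timestamped event from PsychoPy code (assuming the `pupil-labs-realtime-api` Python package is installed and the Companion device is on the same network; the event name is hypothetical):

```python
from pupil_labs.realtime_api.simple import discover_one_device

# Find the Neon Companion device on the local network
device = discover_one_device()

# Start a recording on the Companion device for data redundancy
device.recording_start()

# ... run your PsychoPy trial, then mark a moment of interest ...
device.send_event("stimulus_onset")  # hypothetical event name

device.recording_stop_and_save()
device.close()
```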

## Builder

@@ -38,7 +39,7 @@ Three new Builder components will be available in the components list under the
- April Tag Markers: for screen-based work, you will need to render AprilTag markers on your display. These components make it easy to do so. We recommend at least four markers, but more markers will improve gaze mapping.

- **April Tag Frame**: this component is recommended for most users. Using it in your Builder experiment will display an array of AprilTag markers around the edge of the screen. You can configure the number of markers to display along the horizontal and vertical edges of the screen, the size and contrast of the markers, and (optionally) the marker IDs. A minimum of four markers (2 horizontally by 2 vertically) is recommended, but more markers will provide more robust detection and accurate mapping. Marker IDs are automatically chosen but can be manually specified if needed.
  ![AprilTag Frame](./apriltag-frame.png)

- **April Tag**: this component will add a single AprilTag marker to your display. It is intended for use when the April Tag Frame component cannot be used (e.g., you need to display stimuli on the edges of the display where the April Tag Frame component would place markers in the way). Using this component will give you control over the size and position of each marker. You will need to ensure that a unique marker ID is assigned to each AprilTag marker.

@@ -51,11 +52,12 @@ Three new Builder components will be available in the components list under the
[PsychoPy saves eyetracking data in its own format](https://psychopy.org/hardware/eyeTracking.html#what-about-the-data). Screen gaze data will be saved as `MonocularEyeSampleEvent` records (even when using the binocular gaze mode). Eye state data, if enabled, will appear in `BinocularEyeSampleEvent` records.

For eye state data in `BinocularEyeSampleEvent` records:

- For eye state records
  - `[left|right]_gaze_[x|y|z]` will be the optical axis vectors
  - `[left|right]_eye_cam_[x|y|z]` will be eye positions
  - `[left|right]_pupil_measure1` will be pupil diameters in mm
  - `[left|right]_pupil_measure1_type` will be `77`
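
As a minimal sketch of reading these records back from the hdf5 file (the file name and table path below are assumptions based on ioHub's typical layout; inspect your own file, e.g. with `f.visit(print)`, to confirm):

```python
import h5py

# Hypothetical file name - PsychoPy names the hdf5 file from your session settings
with h5py.File("session_data.hdf5", "r") as f:
    # Assumed ioHub table path for binocular eye samples
    samples = f["data_collection/events/eyetracker/BinocularEyeSampleEvent"][:]

# Fields of the structured array correspond to the columns listed above
left_pupil_mm = samples["left_pupil_measure1"]
print(left_pupil_mm.mean())
```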

### Example Builder Experiment

@@ -66,6 +68,7 @@ Check out our simple but complete [gaze contingent demo designed in PsychoPy Bui
To use Neon with PsychoPy Coder, you'll need to configure ioHub, add AprilTag markers to the screen, and register the screen surface with the eyetracker. The example below shows how to collect realtime gaze position and pupil diameter in PsychoPy Coder.

### Example Coder Experiment

```python
from psychopy import visual, event
from psychopy.core import getTime
# …
```