[HOW-TO] Synchronize gain adjustment between two Raspberry Pi cameras #1116
Ah, I see that I've answered this on the forum! Might it be easier to keep the discussion here, going forward? |
Before we continue/not continue the discussion here, could you provide a link to the issue where it was previously discussed? Because it depends on the details discussed there. My main goal is to achieve intensity-equivalent images, and gain synchronization is only one possible solution. |
I was referring to the reply that I posted here: https://forums.raspberrypi.com/viewtopic.php?t=376829#p2254996 |
I think we can continue the discussion here. My Global Shutter Raspberry Pi cameras are externally triggered. Any ideas on how I could solve this problem? |
Just to understand, can you say a little more about what you're doing? I think I understood that:
|
According to the Raspberry Pi camera documentation (https://www.raspberrypi.com/documentation/accessories/camera.html), I connected the camera's GND and XTR pins to a Raspberry Pi Pico and ran the MicroPython code on the Pico controller.
Afterward, I ran the main() function and successfully achieved synchronized image capture. To verify this, I ran the check_sync() function and got an output like:
When I uncommented the line "AnalogueGain": 8.0 in the start_camera(index) function and ran check_sync() again, I got an output like:
The difference is three orders of magnitude. The difference between the cameras' timestamps is now measured in milliseconds, not microseconds, so I conclude that I am breaking synchronization. The same break happens for different exposure times, but that case is unsurprising: the shutter value is explicitly defined in the MicroPython code.
So answers to your questions:
It is fine for me if it doesn't break synchronization and does not lead to dropped frames. I'll check it.
|
Thanks for all the information. I probably need to pass this on to someone who has actually used the external trigger mechanism, but unfortunately he's on holiday so it would be into next week before he could get back to you. But just to comment on a few other things:
|
First of all, I want to thank you for this discussion and your help. I'm confident that we’ll find a solution through this dialogue.
In the check_sync() function, I start the two cameras in separate threads, retrieve the timestamps from the request metadata (using the capture_timestamp() function), and print the difference for every twentieth frame.
How should I display the obtained array? The same way as before, using cv2.imshow? Thank you. |
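The check_sync() routine mentioned above isn't shown in the thread. A minimal stand-alone sketch of the same idea, with a hypothetical capture_timestamp() stand-in replacing the real picamera2 capture calls (which read SensorTimestamp from the request metadata on hardware), might look like this:

```python
import threading
import queue

def capture_timestamp(cam_index, frame_no):
    # Hypothetical stand-in for reading SensorTimestamp (nanoseconds)
    # from picamera2 request metadata; here we just fabricate values.
    return frame_no * 33_333_333 + cam_index

def camera_thread(cam_index, out_queue, n_frames):
    # Each camera runs in its own thread and pushes one timestamp per frame.
    for frame_no in range(n_frames):
        out_queue.put(capture_timestamp(cam_index, frame_no))

def check_sync(n_frames=100, report_every=20):
    """Collect per-frame timestamps from two cameras and return the
    difference (in microseconds) for every `report_every`-th frame."""
    q0, q1 = queue.Queue(), queue.Queue()
    t0 = threading.Thread(target=camera_thread, args=(0, q0, n_frames))
    t1 = threading.Thread(target=camera_thread, args=(1, q1, n_frames))
    t0.start(); t1.start()
    t0.join(); t1.join()
    diffs_us = []
    for frame_no in range(n_frames):
        delta_ns = abs(q0.get() - q1.get())
        if frame_no % report_every == 0:
            diffs_us.append(delta_ns / 1000)
    return diffs_us

print(check_sync())
```

With well-synchronized triggering, each reported difference should stay in the microsecond range rather than jumping to milliseconds.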
The x-axis represents the intensity of the pixel at [i, j] from the first camera, and the y-axis represents the intensity of the same pixel from the second camera. Blue dots show data with both cameras set to a 15 ms exposure and a fixed analog gain of 15. Red dots represent data with both cameras set to the same 15 ms exposure but with a fixed analog gain of 5. I would like to:
imx296.json file has the following algorithms:
Since AWB and contrast are already off, what else can I disable to achieve a linear grayscale intensity response? |
Hi again, a few things to comment on here.

Firstly, your YUV420 modifications looked OK to me. It's more efficient because the hardware does the conversion for you, rather than doing it slowly in software. OpenCV should understand and display the single-channel greyscale image directly.

As regards exposure comparisons, it might be worth looking at some raw images, which you can capture in DNG files. This is exactly what comes from the sensor. You should find that, after subtracting the black level, it is exactly linear in both exposure time and analogue gain (until pixels start to saturate).

I'm assuming your graphs are referring to the processed output images. By far the biggest non-linearity here is controlled by the rpi.contrast algorithm, so disabling that is the first thing. Other algorithms may have an effect, and you could try disabling those too - maybe rpi.alsc, rpi.ccm, rpi.sharpen (it might only be rpi.black_level that's essential, but obviously if removing any causes things to go horribly wrong then you'll need to put them back). The "x." trick should work for all of them.

I still don't understand why changing the analogue gain should cause the sync to change. Perhaps I could ask @njhollinghurst to comment on that? |
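The "x." trick amounts to renaming an algorithm entry in the tuning file so libcamera no longer recognizes it. A small sketch of that renaming, operating on an illustrative fragment of an imx296.json-style structure (the exact contents shown here are made up, only the version-2 "algorithms" list layout is assumed):

```python
def disable_algorithms(tuning, names):
    """Rename the given algorithms to 'x.<name>' so libcamera skips them.
    Assumes the version-2 tuning layout, where "algorithms" is a list of
    single-key dicts."""
    for algo in tuning["algorithms"]:
        for key in list(algo):
            if key in names:
                algo["x." + key] = algo.pop(key)
    return tuning

# Illustrative fragment only; real files contain many more entries.
tuning = {
    "version": 2.0,
    "algorithms": [
        {"rpi.black_level": {"black_level": 3840}},
        {"rpi.contrast": {"ce_enable": 1}},
        {"rpi.ccm": {}},
    ],
}

disable_algorithms(tuning, {"rpi.contrast", "rpi.ccm"})
print([list(a)[0] for a in tuning["algorithms"]])
# ['rpi.black_level', 'x.rpi.contrast', 'x.rpi.ccm']
```

In a real script you could load the file with Picamera2.load_tuning_file("imx296.json"), apply the renaming, and pass the result via Picamera2(tuning=tuning), rather than editing the installed JSON in place.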
Working with raw images was one of the solutions I considered. Still, it's not the easiest, because handling raw signals requires manually implementing some useful image preprocessing algorithms (like rpi.denoise and rpi.agc). Aside from the other algorithms, I still need to denoise and enhance the weak raw signal. How can I demosaic the raw signal into a grayscale image and then apply denoising and enhancement in real time, without saving the images to disk? Any advice or suggested libraries? Yes, please ask @njhollinghurst to join this discussion. This is an important issue to resolve, as it could help others who want to synchronize two cameras with an external trigger board. |
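For grayscale output specifically, a full demosaic isn't strictly necessary: averaging each 2x2 Bayer cell (one red, two green, one blue sample) into a single gray pixel halves the resolution but is fast and library-free. A minimal sketch on a plain list-of-lists image; in practice you would vectorize the same idea with numpy on the raw array captured in memory (no disk writes needed):

```python
def bayer_to_gray(raw):
    """Collapse each 2x2 Bayer cell into one gray pixel by averaging.
    `raw` is a list of equal-length rows; height and width must be even."""
    h, w = len(raw), len(raw[0])
    gray = []
    for y in range(0, h, 2):
        row = []
        for x in range(0, w, 2):
            cell = (raw[y][x] + raw[y][x + 1] +
                    raw[y + 1][x] + raw[y + 1][x + 1])
            row.append(cell / 4)
        gray.append(row)
    return gray

# Toy 4x4 "raw" frame; values are arbitrary sensor counts.
raw = [
    [4, 2, 8, 2],
    [2, 0, 2, 4],
    [1, 1, 3, 3],
    [1, 1, 3, 3],
]
print(bayer_to_gray(raw))  # [[2.0, 4.0], [1.0, 3.0]]
```

This is only a sketch of the binning idea; proper denoising and enhancement would still need to be applied afterwards (e.g. with OpenCV), and the black level should be subtracted first.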
I only really suggested looking at some raw images to check that exposure and gain cause exactly linear changes in pixel level. But it should be possible to emulate this in the processed output images by disabling other stages (most obviously rpi.contrast). It might be worth experimenting with just a single camera, where the image doesn't change, to confirm that this really works. |
With regard to the timestamps... Do you have an independent way to check if the cameras are synchronized? For example by filming a mobile phone stopwatch application. The timestamps actually record when the start of the frame was received by the Raspberry Pi. With a rolling-shutter camera, it's closely related to the exposure time. But a global-shutter camera has the ability to retain the frame internally for several milliseconds (I don't know why it might do this, but it's theoretically possible) so there is room for doubt. |
It seems that adding 'x.' to 'rpi.contrast' makes the response linear enough. Thank you. |
In the early stages of my work, I used a 'running lights' setup: 10 LEDs arranged in a line. Each LED emits light for a certain period before turning off, while the next LED in line begins to emit. I can control how long each LED stays on, from 100 microseconds to 10 seconds. I used this setup to check camera synchronization, and everything seemed to work well. Afterward, I relied solely on the frame metadata timestamps.

When I don't specify 'AnalogueGain' in the camera controls, the timestamp difference between frames from the first and second camera is minimal (up to 20 microseconds). However, explicitly setting the same or different 'AnalogueGain' values increases the timestamp difference by three orders of magnitude. Similarly, setting different exposures causes the synchronization to break down within 5-10 seconds: initially the timestamp difference is small, but after a short time it increases dramatically.

Sync.mp4 |
I'm guessing that one of 3 things is going wrong:
Is it possible to repeat the LED experiment when either one or both cameras are having their analogue gains frequently set -- do the images go out of sync, or only the timestamps? Is the error an integer number of frame intervals? How does the error evolve over time? Don't try to change the shutter duration using the API -- it should be fixed and should match the duration of the trigger pulse. |
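The question of whether the error is an integer number of frame intervals can be checked numerically from the recorded timestamp differences. A small helper (the 30 fps frame interval and the tolerance are illustrative values, not from this thread):

```python
def frames_of_error(delta_us, frame_interval_us, tol_us=500):
    """Return the nearest whole number of frame intervals if the
    timestamp error is within `tol_us` of one, otherwise None."""
    n = round(delta_us / frame_interval_us)
    if abs(delta_us - n * frame_interval_us) <= tol_us:
        return n
    return None

# A 33,366 us error at 30 fps (33,333 us/frame) is one whole frame of
# offset; 15,000 us is not close to any whole number of frames.
print(frames_of_error(33_366, 33_333))  # 1
print(frames_of_error(15_000, 33_333))  # None
```

If the error always comes out as a whole number of frames, that points at dropped or delayed frames rather than a drifting trigger.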
It's possible to repeat the LED experiments, but I'm not sure what you meant by the controls being 'frequently set'. In line 33, I configured both cameras and started them once. After that, in an infinite loop, I capture and release requests. It seems the controls are only set once, right? With line 25 commented out, synchronization works. When I uncomment that line, synchronization fails.
|
I have two externally triggered Raspberry Pi global shutter cameras connected to a Raspberry Pi 5, with each camera running in its own thread. They capture nearly identical but slightly shifted fields of view, and I can apply an affine transformation to spatially align them. However, the luminous flux between the two cameras differs by up to 10%. Both cameras have a fixed exposure, but due to the shifted fields of view and the difference in light flux, each camera pre-processes its image differently and selects a different analog gain.
My goal is to find a fast way to make the output image arrays as pixel-wise equivalent as possible in terms of pixel brightness.
I've plotted the full range relationship between pixel intensities from both cameras, to create a lookup table. But this is only valid when both cameras have the same fixed analog gain.
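The lookup-table idea can be sketched as follows: for each intensity reported by camera 2, record the mean intensity camera 1 reports at the same pixel positions, then remap camera 2's output through that table. This is a pure-Python illustration on flat pixel lists; in practice you would build it from the aligned image arrays with numpy:

```python
def build_lut(cam1_pixels, cam2_pixels, levels=256):
    """Map each camera-2 intensity to the mean camera-1 intensity seen
    at the same pixel positions. Unseen levels fall back to identity."""
    sums = [0] * levels
    counts = [0] * levels
    for p1, p2 in zip(cam1_pixels, cam2_pixels):
        sums[p2] += p1
        counts[p2] += 1
    return [round(sums[v] / counts[v]) if counts[v] else v
            for v in range(levels)]

# Toy example: camera 2 reads consistently one level below camera 1.
cam1 = [10, 10, 20, 20, 30]
cam2 = [9, 9, 19, 19, 29]
lut = build_lut(cam1, cam2)
corrected = [lut[v] for v in cam2]
print(corrected)  # [10, 10, 20, 20, 30]
```

As noted above, a table like this is only valid for one fixed pair of analog gains; changing either gain requires rebuilding or rescaling it.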
Is there a way for the first camera to automatically select the optimal gain (with AeEnable=True) while locking the second camera to that same gain value? In other words, the first camera would adjust its gain, and the second camera would then match its gain to the first camera.
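One possible shape for this master/follower arrangement (not confirmed in this thread as the supported method): let the first camera run auto exposure, read AnalogueGain and ExposureTime from its request metadata each frame, and push the same values to the second camera with set_controls(). The control names are the real libcamera ones; whether per-frame copying keeps up under external triggering is an assumption. The control-dict construction can be sketched as:

```python
def follower_controls(master_metadata):
    """Build a controls dict that locks the second camera to the
    gain/exposure the first camera's AE just chose."""
    return {
        "AeEnable": False,  # the follower must not run its own AE
        "AnalogueGain": master_metadata["AnalogueGain"],
        "ExposureTime": master_metadata["ExposureTime"],
    }

# On hardware: md = master_request.get_metadata(); here, illustrative values.
md = {"AnalogueGain": 5.0, "ExposureTime": 15_000}
print(follower_controls(md))
```

On hardware the result would be passed to the second camera's set_controls(); note that the applied gain may be quantized to the sensor's supported steps, so the two cameras can still differ slightly.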
I appreciate your help in advance.