
[HOW-TO] Synchronize gain adjustment between two Raspberry Pi cameras #1116

Raccoon987 opened this issue Sep 17, 2024 · 17 comments

@Raccoon987

Raccoon987 commented Sep 17, 2024

I have two externally triggered Raspberry Pi global shutter cameras connected to a Raspberry Pi 5, with each camera running in its own thread. They capture nearly identical but slightly shifted fields of view, and I can apply an affine transformation to spatially align them. However, the luminous flux between the two cameras differs by up to 10%. Both cameras have a fixed exposure, but due to the shifted fields of view and the difference in light flux, each camera pre-processes its image differently and selects a different analog gain.

My goal is to find a fast way to make the output image arrays as pixel-wise equivalent as possible in terms of pixel brightness.

I've plotted the full-range relationship between pixel intensities from both cameras in order to create a lookup table. But this is only valid when both cameras have the same fixed analog gain.

Is there a way for the first camera to automatically select the optimal gain (with AeEnable=True) while locking the second camera to that same gain value? In other words, the first camera would adjust its gain, and the second camera would then match its gain to the first camera.

I appreciate your help in advance.

@davidplowman
Collaborator

Ah, I see that I've answered this on the forum! Might it be easier to keep the discussion here, going forward?

@Raccoon987
Author

Raccoon987 commented Sep 23, 2024

Before we decide whether to continue the discussion here, could you provide a link to where it was previously discussed? It depends on the details covered there. My main goal is to achieve intensity-equivalent images, and gain synchronization is only one possible solution.

@davidplowman
Collaborator

I was referring to the reply that I posted here: https://forums.raspberrypi.com/viewtopic.php?t=376829#p2254996

@Raccoon987
Author

Raccoon987 commented Sep 23, 2024

I think we can continue the discussion here.

My Global Shutter Raspberry Pi cameras are externally triggered by a Raspberry Pi Pico, as outlined in the GS camera manual. This allows me to capture images simultaneously, with equal exposure set for both cameras. However, explicitly setting the analog gain disrupts synchronization, as does setting different exposures for each camera. I'm not sure why this happens—perhaps you could explain it to me.
I only need monochrome images, so I've disabled AWB (AwbEnable = False) and set {"rpi.awb": {"bayes": 0}} in the imx296.json file.
The best approach for producing equal images would be to equalize the light flux before it reaches the camera lenses. Unfortunately, for various reasons, I can’t physically make the light fluxes equal. My next option is to reduce the sensitivity of one of the camera sensors to balance the light. I've found that the camera ISO can only be adjusted by controlling the exposure and analog gain. This is the last stage at which linearly reducing the light or the sensor sensitivity, in proportion to the roughly 20% difference in light flux, could yield equal images.
Beyond this point, the accumulated and converted light signal is processed by image preprocessing algorithms controlled by the imx296.json file.
Since the cameras receive different light fluxes, they independently calculate the gain values.
After several experiments, I noticed that the difference between the resulting images seems to depend on pixel intensity. The ratio between the corresponding bright pixels from both cameras is not the same as the ratio between mid-range or dark pixels — there's a nonlinear relationship.
I plotted pixel intensity from the first camera against the second camera for the full range (0 to 255), and this relationship was nonlinear even when gain and exposure were fixed. Without locking the gain, there’s an additional uncontrolled variation in intensity, as each camera selects its own gain.
When I set AeEnable = False, I get synchronized image capture, with the analog gain fixed at 1 for both cameras — but the images are too dark.
I don’t want to completely disable the gain adjustment algorithm because it’s useful.
I realize this issue extends beyond the original topic title — sorry for that.

Any ideas on how I could solve this problem?

@davidplowman
Collaborator

Just to understand, can you say a little more about what you're doing? I think I understood that:

  • You're setting up both cameras and starting them. But they will sit and do nothing until the external trigger pulse. Is that correct?
  • Then you're capturing the first frame that comes out of each camera?
  • The exposure time will, I believe, be determined by the pulse length. Though I think we'd recommend setting the exposure time on both cameras to a fixed value. Does this all describe what you're doing?
  • I don't really understand the analogue gain issue. As far as I know, you should be able to set the analogue gain explicitly for both sensors. In what way is this not working?
  • After that I was finding it a bit harder to follow. The pixel levels coming out of the camera are basically linear in exposure time and analogue gain. The final processed images will not be linear, however, because of the gamma transform that gets applied.
  • I wasn't entirely sure why you still wanted to let the analogue gain values vary. You can feed gain values from one camera to the other, though you need to have the camera running so that you know what gain value to apply.
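Just as a rough sketch (untested, and glossing over the trigger setup), feeding the gain across might look something like this, with camera 0 left on auto and camera 1 having AGC disabled:

from picamera2 import Picamera2

# Camera 0 keeps AGC enabled and picks its own gain; camera 1 has AGC disabled
# and simply copies whatever gain camera 0 reports in its metadata.
cam0 = Picamera2(0)
cam1 = Picamera2(1)
cam0.start(cam0.create_preview_configuration())
cam1.start(cam1.create_preview_configuration(controls={"AeEnable": False}))

while True:
    md = cam0.capture_metadata()                # blocks until camera 0's next frame
    cam1.set_controls({"AnalogueGain": md["AnalogueGain"]})
    # Note: a control set on a running camera only takes effect a few frames later.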

@Raccoon987
Author

Raccoon987 commented Sep 24, 2024

According to the Raspberry Pi camera documentation (https://www.raspberrypi.com/documentation/accessories/camera.html), I connected the camera's GND and XTR pins to the Raspberry Pi Pico and ran the MicroPython code below on the Pico controller.

sudo su
echo 1 > /sys/module/imx296/parameters/trigger_mode
exit
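# The three shell commands above run on the Raspberry Pi to enable external trigger mode
# on the imx296 driver; the MicroPython below runs on the Pico and generates the trigger
# pulses (the pulse width sets the exposure time).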
from machine import Pin, PWM
from time import sleep

pwm = PWM(Pin(28))
framerate = 60
shutter = 2000  # In microseconds
frame_length = 1000000 / framerate
pwm.freq(framerate)
pwm.duty_u16(int((1 - (shutter - 14) / frame_length) * 65535))

Afterward, I ran the main() function and successfully achieved synchronized image capture. To verify this, I ran the check_sync() function and got an output like:

16000
18000
10000
22000
... 

When I uncommented the line "AnalogueGain": 8.0 in the start_camera(index) function and ran check_sync() again, I got output like:

16000000
12000000
11000000
...

The difference is three orders of magnitude: the gap between the cameras' timestamps is now measured in milliseconds rather than microseconds, so I conclude that this breaks synchronization. The same break happens with different exposure times, but that is to be expected: the shutter value is explicitly defined in the MicroPython code.

from picamera2 import Picamera2
import threading
import time
import cv2
import numpy as np
import copy
import pprint

def capture_and_process(picam, result_list, meta_list, index):
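    # Grab one request from this camera, convert the 'main' RGB stream to greyscale,
    # and store the array and metadata for the caller.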
    request = picam.capture_request()
    metadata = request.get_metadata()
    array = request.make_array(name="main") 
    array = cv2.cvtColor(array, cv2.COLOR_RGB2GRAY)
    result_list[index] = array
    meta_list[index] = metadata
    request.release()

def capture_timestamp(picam, result_list, index):
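    # Record this frame's SensorTimestamp (in nanoseconds) so the two cameras can be compared.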
    request = picam.capture_request()
    metadata = request.get_metadata()
    ts = int(metadata["SensorTimestamp"])
    result_list[index] = ts
    request.release()

def start_camera(index):
    picam = Picamera2(index)
    print("Camera sensor modes: ", picam.sensor_modes)
    config = picam.create_preview_configuration(
        controls={"FrameDurationLimits": (16667, 16667),
                  "FrameRate": 60,
                  "ExposureTime": 2000,
                  "Saturation": 0,
                  "AwbEnable": False, 
                  #"AnalogueGain": 8.0,
                  })
    print(f"camera {index} main config: ", config["main"])
    picam.start(config)
    time.sleep(0.5)
    return picam

def check_sync():
    picams = [start_camera(i) for i in range(2)]
    results = [None] * len(picams)  
    try:
        c = 0
        while True:
            threads = [threading.Thread(target=capture_timestamp, args=(picam, results, index)) for index, picam in
                       enumerate(picams)]

            for thread in threads:
                thread.start()

            for thread in threads:
                thread.join()  
            c += 1
            if c % 20 == 0:
                print("timestamp delta between two cameras: ", results[0],  results[1], abs(results[0] - results[1]))    
    except KeyboardInterrupt:
        # Ctrl + C to properly stop cameras
        print("Stopping cameras...")
    finally:
        [c.stop() for c in picams]
        print("Cameras stopped.")  

def main():
    picams = [start_camera(i) for i in range(2)]
    results = [None] * len(picams)
    metadata = [{}] * len(picams)
    
    try:
        while True:
            threads = [threading.Thread(target=capture_and_process, args=(picam, results, metadata, index)) for index, picam in
                       enumerate(picams)]

            for thread in threads:
                thread.start()
            for thread in threads:
                thread.join()

            cv2.imshow('Master/Bottom', np.flip(results[0], axis=1))
            cv2.imshow('Slave/Top', results[1])
           
            if cv2.waitKey(1) == ord('q'):
                break
    except KeyboardInterrupt:
        pass
    finally:
        [c.stop() for c in picams]
        print("Cameras stopped.")  

So answers to your questions:

  1. yes
  2. yes
  3. yes
  4. A construction like this one:
while True:
    cam2.set_controls({'AnalogueGain': cam1.capture_metadata()['AnalogueGain']})

It is fine for me if it doesn't break synchronization and does not lead to dropped frames. I'll check it.

  5. How can I turn off the gamma transform? Based on my experiments, the pixel intensity relationship is almost linear in the low and mid-range intensity regions but becomes nonlinear for bright pixels. In the linear region, the slope changes slightly. I prefer a fully linear response, as I don’t need a 'nice' picture—just one that is simple and predictable.

  6. I want the first camera to automatically adjust its gain, as this adjustment does a good job of keeping the image neither too bright nor too dark. I would then like to link the second camera's gain to that of the first. Otherwise, depending on the environment and on each camera's preprocessing, the first image might end up either brighter or darker than the second. This behavior is unpredictable for me.

@davidplowman
Collaborator

Thanks for all the information. I probably need to pass this on to someone who has actually used the external trigger mechanism, but unfortunately he's on holiday so it would be into next week before he could get back to you.

But just to comment on a few other things:

  1. When you quoted those numbers (16000, 18000 and so on), it wasn't clear to me what they were. I couldn't spot where you were printing them in the code either. Did I miss something or could you clarify?

  2. One problem with setting the camera's analogue gain while it is running, is that it takes several frames for it to take effect. For it to take effect immediately, you would need to stop the camera, set the analogue gain, then restart it. But that's a relatively slow process too, so it depends what kind of frame rate you are hoping to achieve.

  3. You can turn off the gamma transform by finding "rpi.contrast" in the camera tuning file and changing it to "x.rpi.contrast" (which effectively "comments it out"). The tuning file will be called imx296.json, probably under /usr/share/libcamera/ipa/rpi/pisp (Pi 5) or /usr/share/libcamera/ipa/rpi/vc4 (other Pis). Of course, the resulting image will look dark but very contrasty.

  4. To get the greyscale version of an image, it would be more efficient to avoid cv2 and ask for 'YUV420' format instead. Then you could take the top "height" rows of the array directly.

@Raccoon987
Author

Raccoon987 commented Sep 25, 2024

First of all, I want to thank you for this discussion and your help. I'm confident that we’ll find a solution through this dialogue.

  1. I forgot to include the capture_timestamp() function in my code. I’ve now added it to my code snippet. The numbers 16,000, 18,000, etc., represent the difference between the timestamps of frames captured by the first and second cameras. This means that the time shift between frame j of the first camera and frame j of the second camera is only 16 or 18 microseconds, indicating that the cameras capture frames simultaneously. However, values like 16,000,000 or 12,000,000 show that the difference is now measured in milliseconds, which, compared to the exposure time of 2 ms and the frame duration of 16.6 ms, indicates non-simultaneous capture.

In the check_sync() function, I start the two cameras in separate threads, retrieve the timestamps from the request metadata (using the capture_timestamp() function), and print the difference for every twentieth frame.

  2. Yes. With an FPS of 60, I can't use this method to set equal gains.

  3. I'll try this.

  4. Why is the 'YUV420' format more efficient for grayscale images? Is this the correct modification?

w, h = 640, 480

def capture_and_process(picam, result_list, meta_list, index):
    request = picam.capture_request()
    metadata = request.get_metadata()
    array = request.make_array(name="main") 
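    # For YUV420 the array has h * 3 / 2 rows; the top two-thirds (the first h rows) are the Y (luma) plane.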
    y_h = array.shape[0] * 2 // 3
    array = array[:y_h, :]
    result_list[index] = array
    meta_list[index] = metadata
    request.release()

def start_camera(index):
    picam = Picamera2(index)
    print("Camera sensor modes: ", picam.sensor_modes)
    config = picam.create_preview_configuration(
        main={
            "size": (w, h),  
            "format": "YUV420",  
        },
        controls={"FrameDurationLimits": (16667, 16667),
                  "FrameRate": 60,
                  "ExposureTime": 2000,
                  "Saturation": 0,
                  "AwbEnable": False, 
                  #"AnalogueGain": 8.0,
                  })
    print(f"camera {index} main config: ", config["main"])
    picam.start(config)
    time.sleep(0.5)
    return picam 

How do I display the resulting array? The same as before, using cv2.imshow?

Thank you.

@Raccoon987
Author

Raccoon987 commented Sep 30, 2024

[newplot: per-pixel intensity of camera 1 (x-axis) vs camera 2 (y-axis) at fixed exposure and two fixed gains]

The x-axis represents the intensity of the pixel at [i, j] from the first camera, and the y-axis represents the intensity of the same pixel from the second camera. Blue dots show data with both cameras set to a 15 ms exposure and a fixed analog gain of 15. Red dots represent data with both cameras set to the same 15 ms exposure but with a fixed analog gain of 5.
All points lie above a dashed diagonal line because the luminous flux between the two cameras differs by up to 10% or more. The relationship is nonlinear, but I can easily equalize the image intensity using this curve.
However, explicitly setting the analog gain disrupts synchronization. When the gain isn’t fixed, each camera independently chooses its gain, causing the curve to shift—sometimes below the dashed diagonal if the 'weaker' first camera has a much higher gain than the second.
For each frame, we get two sets of metadata. For each pair of [i, j] pixels, their intensities fall on a curve like the one shown in the image, but the curve's position and shape depend on the camera parameters stored in the metadata.

I would like to:

  1. For a known gain difference between the cameras and other information from the frame metadata, be able to reproduce the full curve. Or get a function like: camera1_intensity = F(camera1_gain, camera2_gain, camera1_metadata, camera2_metadata)(camera2_intensity)
    Afterward, I can create a lookup table for each of the 256 intensity values (a rough sketch of building such a table is shown after this list).
  2. (Optional) Flatten this curve to achieve a linear relationship.
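
As a rough sketch of item 1 (my own illustration, not tested), a per-intensity lookup table could be built from a pair of spatially aligned greyscale frames like this:

import numpy as np
import cv2

def build_lut(img1, img2):
    # For each grey level present in the camera-2 image, record the median of the
    # co-located camera-1 pixels, then interpolate over the missing levels.
    levels, values = [], []
    for v in range(256):
        mask = (img2 == v)
        if mask.any():
            levels.append(v)
            values.append(np.median(img1[mask]))
    lut = np.interp(np.arange(256), levels, values)
    return np.clip(lut, 0, 255).astype(np.uint8)

# Map camera 2's intensities onto camera 1's scale:
# img2_matched = cv2.LUT(img2, build_lut(img1, img2))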

The imx296.json file has the following algorithms:

"rpi.black_level"
"rpi.lux"
"rpi.dpc"
"rpi.noise"
"rpi.geq"
"rpi.denoise"
"rpi.awb"         Turn off by setting "AwbEnable": False in camera controls or "rpi.awb": {"bayes": 0} in .json file
"rpi.agc"
"rpi.alsc"
"rpi.contrast"    turn off by setting "x.rpi.contrast"
"rpi.ccm"
"rpi.sharpen"
"rpi.hdr" 

Since AWB and contrast are already off, what else can I disable to achieve a linear grayscale intensity response?
Also, how can I predict the curve's position based on the frame metadata and the gain difference between the cameras?

@davidplowman
Collaborator

Hi again, a few things to comment on here.

Firstly, your YUV420 modifications looked OK to me. It's more efficient because the hardware does the conversion for you, rather than doing it slowly in software. OpenCV should understand and display the single channel greyscale image directly.

As regards exposure comparisons, it might be worth looking at some raw images, which you can capture in DNG files. This is exactly what comes from the sensor. You should find that, after subtracting the black level, this is exactly linear in both exposure time and analogue gain (until pixels start to saturate).

I'm assuming your graphs are referring to the processed output images. By far the biggest non-linearity here is controlled by the rpi.contrast algorithm, so disabling that is the first thing. Other algorithms may have an effect, and you could try disabling those too - maybe rpi.alsc, rpi.ccm, rpi.sharpen (it might only be rpi.black_level that's essential, but obviously if removing any causes it to go horribly wrong then you'll need to put those back). The "x." trick should work for all of them.
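
If editing the file on disk is inconvenient, I believe the same rename can be done in Python before opening the cameras. A sketch, assuming the version-2 tuning file layout with an "algorithms" list:

from picamera2 import Picamera2

tuning = Picamera2.load_tuning_file("imx296.json")    # searches the standard libcamera tuning directories
for stage in ("rpi.contrast", "rpi.ccm", "rpi.sharpen"):
    for algo in tuning["algorithms"]:
        if stage in algo:
            algo["x." + stage] = algo.pop(stage)       # same effect as the "x." trick in the file

picam = Picamera2(0, tuning=tuning)                    # repeat with index 1 for the second camera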

I still don't understand why changing the analogue gain should cause the sync to change. Perhaps I could ask @njhollinghurst to comment on that?

@Raccoon987
Author

Working with raw images was one of the solutions I considered. Still, it's not the easiest, because handling raw data requires manually implementing some of the useful image preprocessing algorithms (like rpi.denoise and rpi.agc). Apart from the other algorithms, I still need to denoise and enhance the weak raw signal. How can I demosaic the raw signal into a grayscale image and then apply denoising and enhancement in real time, without saving the images to disk? Any advice or third-party libraries?

Yes, please ask @njhollinghurst to join this discussion. This is an important issue to resolve, as it could help others who want to synchronize two cameras with an external trigger board.

@davidplowman
Collaborator

I only really suggested looking at some raw images to check that exposure and gain cause exactly linear changes in pixel level. But it should be possible to emulate this in the processed output images by disabling other stages (most obviously rpi.contrast). It might be worth experimenting with just a single camera, where the image doesn't change, to confirm that this really works.
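
A minimal sketch of grabbing such a DNG with Picamera2 (assuming a configuration that includes a raw stream):

from picamera2 import Picamera2

picam = Picamera2(0)
# Request a raw (Bayer) stream alongside the main stream so a DNG can be saved.
config = picam.create_still_configuration(raw={})
picam.start(config)

request = picam.capture_request()
request.save_dng("camera0.dng")      # raw sensor data plus metadata, for offline linearity checks
request.release()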

@njhollinghurst
Contributor

With regard to the timestamps... Do you have an independent way to check if the cameras are synchronized? For example by filming a mobile phone stopwatch application.

The timestamps actually record when the start of the frame was received by the Raspberry Pi. With a rolling-shutter camera, it's closely related to the exposure time. But a global-shutter camera has the ability to retain the frame internally for several milliseconds (I don't know why it might do this, but it's theoretically possible) so there is room for doubt.

@Raccoon987
Author

[newplot (1): per-pixel intensity of camera 1 vs camera 2 after disabling rpi.contrast; the response is now close to linear]

It seems that adding 'x.' to 'rpi.contrast' makes the response linear enough. Thank you.

@Raccoon987
Author

Raccoon987 commented Oct 1, 2024

In the early stages of my work, I used a 'running lights' setup — 10 LEDs arranged in a line. Each LED emits light for a certain period before turning off, while the next LED in line begins to emit. I can control the duration each LED stays on, ranging from 100 microseconds to 10 seconds. I used this setup to check camera synchronization, and everything seemed to work well. Afterward, I relied solely on the frame metadata timestamps.

When I don't specify 'AnalogueGain' in the camera controls, the timestamp difference between frames from the first and second camera is minimal (up to 20 microseconds). However, explicitly setting the same or different 'AnalogueGain' increases the timestamp difference by three orders of magnitude. Similarly, setting different exposures causes the synchronization to break down within 5-10 seconds. Initially, the timestamp difference is small, but after a short time it increases dramatically.

[LED: photo of the running-lights LED setup]

Sync.mp4

@njhollinghurst
Contributor

I'm guessing that one of 3 things is going wrong:

  • The cameras are capturing images at unexpected times when controls are frequently set
  • The timestamps are reported incorrectly when controls are frequently set
  • One or both of the pipelines is dropping frames, causing timestamps to jump by a whole number of frames

Is it possible to repeat the LED experiment when either one or both cameras are having their analogue gains frequently set -- do the images go out of sync, or only the timestamps? Is the error an integer number of frame intervals? How does the error evolve over time?

Don't try to change the shutter duration using the API -- it should be fixed and should match the duration of the trigger pulse.

@Raccoon987
Author

It’s possible to repeat the LED experiments, but I'm not sure why you said 'controls are frequently set'. In line 33, I configure both cameras and start them once. After that, in an infinite loop, I capture and release requests. It seems the controls are only set once, right? With line 25 commented out, synchronization works. When I uncomment that line, synchronization fails.

01:  from picamera2 import Picamera2
02:  import threading
03:  import time
04:  import numpy as np
05:  import pprint
06:  
07:  
08:  
09:  def capture_timestamp(picam, result_list, index):
10:      request = picam.capture_request()
11:      metadata = request.get_metadata()
12:      ts = int(metadata["SensorTimestamp"])
13:      result_list[index] = ts
14:      request.release()
15:  
16:  def start_camera(index):
17:      picam = Picamera2(index)
18:      print("Camera sensor modes: ", picam.sensor_modes)
19:      config = picam.create_preview_configuration(
20:          controls={"FrameDurationLimits": (16667, 16667),
21:                    "FrameRate": 60,
22:                    "ExposureTime": 2000,
23:                    "Saturation": 0,
24:                    "AwbEnable": False, 
25:                    #"AnalogueGain": 8.0,
26:                    })
27:      print(f"camera {index} main config: ", config["main"])
28:      picam.start(config)
29:      time.sleep(0.5)
30:      return picam
31:  
32:  def check_sync():
33:      picams = [start_camera(i) for i in range(2)]
34:      results = [None] * len(picams)  
35:      try:
36:          c = 0
37:          while True:
38:              threads = [threading.Thread(target=capture_timestamp, args=(picam, results, index)) for index, picam in
39:                         enumerate(picams)]
40:  
41:              for thread in threads:
42:                  thread.start()
43:  
44:              for thread in threads:
45:                  thread.join()  
46:              c += 1
47:              if c % 20 == 0:
48:                  print("timestamp delta between two cameras: ", results[0],  results[1], abs(results[0] - results[1]))    
49:      except KeyboardInterrupt:
50:          # Ctrl + C to properly stop cameras
51:          print("Stopping cameras...")
52:      finally:
53:          [c.stop() for c in picams]
54:          print("Cameras stopped.")  
55:
56:
57:  # run check_sync() with line 25 commented out. The cameras independently select their own analog gain.
58:  check_sync()
59:
60:>> 16000
61:>> 18000
62:>> 10000
63:>> 22000
64:>> ...
65:
66:  # run check_sync() with line 25 uncommented. Both cameras have a fixed analog gain of 8.0.
67:  check_sync()
68:
69:>> 12000000
70:>> 11000000
71:>> 16000000
72:>> ...
