Raspberry Pi requires multi-planar buffer type #15
I should also point out that this may only be true on Raspberry Pi 4. That is what I'm currently testing on.
Hi, I'm very interested in getting multi-planar format support going. Unfortunately I don't have a camera device which supports/needs it. What device are you testing this with? I'm pretty sure standard USB webcams will work using the single-planar (aka packed) v4l2 layer even on the RPi 4, no? From your first comment I read that you need multi-planar support for the output device, is that correct? Maybe we can tackle it for input and output at the same time even. Apart from the format change, I can envision some more edits to be required, e.g. a new …
I'm using v4l2-ctl and I still haven't been able to get an h264 stream decoded; I keep running into errors. However, it seems I've taken a few steps back: I tried to install aarch64 Arch Linux (kernel: 4.8.9) and now I don't get any v4l2 devices at all. Anyway,
Yes. If I can help I'll try.
I'm very new to this space and I'm not really even sure what multi-planar is. If I understand correctly, it's for devices that need a discontiguous buffer? (going off https://www.kernel.org/doc/html/latest/userspace-api/media/v4l/planar-apis.html#planar-apis)
Oh I see. So you're not working with actual video I/O devices, but trying to use the hardware memory-to-memory codec on the RPi (which happens to be implemented as a v4l2 module).
Yes, that's about right. Decoding a video frame will usually give you "raw" frames, probably some YUV format instead of the more familiar RGB formats. YUV data captures luminance and chrominance. While YUV pixels can be packed, video codecs usually prefer them to be laid out in memory planes, e.g. one plane for the luminance (Y) and one for the chrominance components (UV).
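To make the planar layout concrete, here is a minimal sketch (plain Rust, independent of any V4L2 API; the NV12 format and the 1920x1080 resolution are just illustrative assumptions) of how the two plane sizes of a semi-planar YUV 4:2:0 frame relate to the frame dimensions:

```rust
/// Plane sizes for an NV12 (YUV 4:2:0, semi-planar) frame.
/// NV12 keeps all Y samples in one plane and interleaved U/V samples in a
/// second plane sampled at half the horizontal and vertical resolution.
fn nv12_plane_sizes(width: usize, height: usize) -> (usize, usize) {
    let y_plane = width * height;      // one luma byte per pixel
    let uv_plane = width * height / 2; // one U byte and one V byte per 2x2 pixel block
    (y_plane, uv_plane)
}

fn main() {
    let (y, uv) = nv12_plane_sizes(1920, 1080);
    // With the single-planar API both planes live in one contiguous buffer;
    // with the multi-planar API the driver may hand them out as separate buffers.
    println!("Y plane: {} bytes, UV plane: {} bytes", y, uv);
}
```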
FYI I'm planning to tackle this tomorrow. Are you available for testing this weekend?
Yeah, I should be free sometime this weekend to test. Sorry, meant to reply sooner. Went out to buy a C7000, gonna try setting up virtual desktops, encode their screens, and decode on the Pis. I got the Pi back to having the v4l2 devices online. I would also note that the Pi seems to fail the v4l2-compliance tests.
Anyway, shoot me any tests you want me to run.
It looks like getting your use case working is going to be a lot more involved than I initially thought. I've started the work in the `next` branch.
Those two sound about right. I found some more interesting information at https://www.raspberrypi.org/forums/viewtopic.php?f=68&t=281296&p=1737303&hilit=v4l2+%2Fdev%2Fvideo10#p1737303 that might be useful. I'm a beginner in Rust, but if there's anything I can do to help I'd love to.
Some good news - I was able to implement MMAP output streaming today (tested using …).
I just pushed some preliminary multi-planar buffer support to the `next` branch.

EDIT: the way to do this for your local project is to clone this repo (…).
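One way to consume such a work-in-progress branch from a local project is a Cargo path or git dependency; a hedged sketch (the `../libv4l-rs` path and the `next` branch name are assumptions, adjust them to your checkout):

```toml
# Cargo.toml of the test project - point the v4l dependency at a local clone.
[dependencies]
v4l = { path = "../libv4l-rs" }

# Or, without a local clone, pull the branch straight from git:
# v4l = { git = "https://github.com/raymanfx/libv4l-rs", branch = "next" }
```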
Cool. I'll give it a test in the next couple hours. |
Looks golden. Source:

```rust
use std::io::Read;

use v4l::prelude::DeviceExt;
use v4l::output::device::Device as OutputDevice;
// use nix::sys::ioctl;

const DECODE_DEVICE: &str = "/dev/video10";

fn main() -> std::io::Result<()> {
    let mut file = std::fs::File::open("/home/alarm/short.h264")?;
    let mut contents = vec![];
    file.read_to_end(&mut contents)?;

    let dev = OutputDevice::with_path(DECODE_DEVICE)
        .expect(&format!("Unable to open decode device {}", DECODE_DEVICE));
    let capabilities = dev.query_caps().expect("could not get capabilities");
    dbg!(capabilities);

    let formats = dev.enum_formats()?;
    println!("Number of formats: {}", formats.len());
    for fmt in formats {
        print!("{}", fmt);
    }
    Ok(())
}
```

Output:

```
[alarm@alarmpi framebuffer_decode]$ ./target/debug/framebuffer_decode
[src/main.rs:21] capabilities = Capabilities {
    driver: "bcm2835-codec",
    card: "bcm2835-codec-decode",
    bus: "platform:bcm2835-codec",
    version: (
        5,
        4,
        75,
    ),
    capabilities: VIDEO_M2M_MPLANE | EXT_PIX_FORMAT | STREAMING,
}
Number of formats: 2
index : 0
type: : 10
flags: : COMPRESSED
description : H.264
fourcc : H264
index : 1
type: : 10
flags: : COMPRESSED
description : Motion-JPEG
fourcc : MJPG
```
Also, I hit up the Raspberry Pi forums about the compliance issue: https://www.raspberrypi.org/forums/viewtopic.php?f=67&t=291227&p=1760878#p1760878 Some interesting input on v4l2-ctl/v4l2-compliance in there.
Sounds good. Now we need to check whether the actual streaming I/O works with the new code. Did you try writing some H.264 or JPEG frames to that device (by creating a `MmapStream`)?
Not sure if I'm doing this correctly, but here's what I get. Source:

```rust
use std::io::Read;

use v4l::{Timestamp, buffer::{Buffer, Flags, Metadata}};
use v4l::io::stream::Output;
use v4l::output::device::Device as OutputDevice;
use v4l::prelude::{DeviceExt, MmapStream};
// use nix::sys::ioctl;

const DECODE_DEVICE: &str = "/dev/video10";
const VIDEO_PATH: &str = "/home/alarm/short.h264";

fn main() -> std::io::Result<()> {
    let mut dev = OutputDevice::with_path(DECODE_DEVICE)
        .expect(&format!("Unable to open decode device {}", DECODE_DEVICE));
    let capabilities = dev.query_caps().expect("could not get capabilities");
    dbg!(capabilities);

    let formats = dev.enum_formats()?;
    println!("Number of formats: {}", formats.len());
    for fmt in formats {
        print!("{}", fmt);
    }

    let mut stream = MmapStream::with_buffers(&mut dev, 1).expect("Failed to create buffer stream");

    let mut file = std::fs::File::open(VIDEO_PATH)?;
    let mut contents = vec![];
    file.read_to_end(&mut contents)?;

    let metadata = Metadata {
        bytesused: contents.len() as u32,
        flags: Flags::empty(),
        field: 0,
        timestamp: Timestamp::new(0, 0),
        sequence: 0,
    };
    let buffer = Buffer {
        planes: vec![contents.as_ref()],
        meta: metadata,
    };
    Output::next(&mut stream, buffer)?;
    Ok(())
}
```

Output:

```
[alarm@alarmpi ~]$ ./framebuffer_decode/target/debug/framebuffer_decode
[src/main.rs:19] capabilities = Capabilities {
    driver: "bcm2835-codec",
    card: "bcm2835-codec-decode",
    bus: "platform:bcm2835-codec",
    version: (
        5,
        4,
        75,
    ),
    capabilities: VIDEO_M2M_MPLANE | EXT_PIX_FORMAT | STREAMING,
}
Number of formats: 2
index : 0
type: : 10
flags: : COMPRESSED
description : H.264
fourcc : H264
index : 1
type: : 10
flags: : COMPRESSED
description : Motion-JPEG
fourcc : MJPG
thread 'main' panicked at 'Failed to create buffer stream: Os { code: 22, kind: InvalidInput, message: "Invalid argument" }', src/main.rs:27:60
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
```
Okay, so this will require some more debugging. Can you add some logs to this function: https://github.com/raymanfx/libv4l-rs/blob/next/src/io/mmap/arena.rs#L54 and see where exactly it fails?

I just looked at some Raspberry Pi 4 bundles today, but the 4GB models are still rather expensive; I don't think I can justify getting one just for this project. Anything less than 4GB is not worth paying for, though.

Apart from that, I'm still wondering how the H.264 decoder works here - you're basically just passing the entire file buffer in one go. From what I read in the link you provided (https://www.raspberrypi.org/forums/viewtopic.php?f=67&t=291227&p=1760878#p1760878), I think you're supposed to pass framed data, so you'd need to split the H.264 bytestream into frames before passing them to the decoder? Then again, they also say it should work with unframed data as well, disregarding a latency increase.
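For reference, splitting an Annex-B H.264 bytestream into frames essentially means cutting it at NAL unit start codes. A rough sketch (plain Rust, no v4l dependency; the function name is made up for illustration, and a real framer would additionally group NAL units into access units):

```rust
/// Split an Annex-B H.264 bytestream at three-byte start codes (00 00 01),
/// returning one slice per NAL unit. Four-byte start codes (00 00 00 01) are
/// caught as well since they embed the three-byte pattern; their leading zero
/// byte simply stays attached to the end of the previous unit.
fn split_nal_units(data: &[u8]) -> Vec<&[u8]> {
    // Collect the positions of all start codes.
    let mut starts = Vec::new();
    let mut i = 0;
    while i + 3 <= data.len() {
        if data[i] == 0 && data[i + 1] == 0 && data[i + 2] == 1 {
            starts.push(i);
            i += 3;
        } else {
            i += 1;
        }
    }
    // Each unit runs from one start code to the next (or to the end of the data).
    let mut units = Vec::new();
    for (n, &start) in starts.iter().enumerate() {
        let end = starts.get(n + 1).copied().unwrap_or(data.len());
        units.push(&data[start..end]);
    }
    units
}
```

Each returned slice could then be queued as its own output buffer instead of writing the whole file in one go.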
I'll add some logs to that function and see where it's failing. Yeah, from what the forum post explains, the decoder should work on unframed data (with performance degradation). But honestly, I don't really know if passing in the file buffer is the right way to do it either. Can I just give you SSH access to my Pi? I leave it on 24/7 anyway. If that works, just email me at ayrton AT sparling DOT us.
Oh .. I see it now. The H.264 decoder device on the Raspberry Pi 4 requires using the M2M (memory-to-memory) interface.

EDIT: not entirely true - while your decoder supports M2M, you don't actually want to use it here, since you're feeding data from an external source. I'll continue with the refactor for now to ease implementing future use cases such as M2M.
Hey @raymanfx, what is the status of this?
Hi @patrickelectric, I prepared the multi-planar API support a while ago: https://github.com/raymanfx/libv4l-rs/tree/mplane but was not happy with the API at the time. Since it is not required for my projects, I did not finish the work. If you are interested in picking it up or collaborating, let me know. |
Hi @raymanfx, it looks like the mplane branch is already compatible with a …
Raspberry Pi only supports the multi-planar API.
Could we get another method (or modify the current `v4l::output::device::Device.enum_formats` method) that allows us to specify the buffer type? We need to allow the buffer type in `libv4l-rs/src/output/device.rs` (line 97 in 2ab400d) to be `v4l::buffer::BufferType::VideoOutputMplane`.
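A hedged sketch of what a buffer-type-aware enumeration method could look like (the trait, struct, and method names below are invented for illustration and are not the actual libv4l-rs API; only the VideoOutput/VideoOutputMplane variant names come from the request above):

```rust
// Hypothetical, library-style sketch only - not the real crate API.
// The idea: let the caller pick the queue (buffer type) explicitly instead of
// hard-coding the single-planar VideoOutput type inside the device.

#[derive(Clone, Copy, Debug)]
enum BufferType {
    VideoCapture,
    VideoCaptureMplane,
    VideoOutput,
    VideoOutputMplane,
}

struct FormatDescription {
    fourcc: [u8; 4],
    description: String,
}

trait FormatEnumeration {
    /// Existing-style method: assumes a fixed buffer type for the device.
    fn enum_formats(&self) -> std::io::Result<Vec<FormatDescription>>;

    /// Proposed variant: the caller specifies which queue to enumerate,
    /// e.g. BufferType::VideoOutputMplane on the Raspberry Pi decoder.
    fn enum_formats_typed(&self, buf_type: BufferType)
        -> std::io::Result<Vec<FormatDescription>>;
}
```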