[RFE] handling of CAN networks #940
I doubt that this has many users, and it would complicate maintenance for us, so I would not be in favor of including something like this by default. However, we do offer a plugin system (https://github.com/containers/netavark/blob/main/plugin-API.md), so you could implement this externally without us.
That's fair - I agree this is quite an embedded niche, but I think podman is starting to get quite a few embedded users, so let's see if someone else shows up :) Ultimately I said "a lot of work" because these are interfaces I'm not familiar with, but I don't think it has to be very complex, so it stands a chance if it can be made simple enough. If it really comes down to a netlink create, moving the interface into the netns, and a mostly-static forwarding rule, it probably won't need much maintenance, and vcan allows easy testing without hardware, so there's also no risk of bitrot if done properly. As said above, I won't have time for this short term anyway; feel free to close, but if someone else who is interested comes by, please speak up!
I have no problem with keeping RFEs open; it makes them more discoverable for other users. If someone writes a plugin, it would at least show me how much code this really is and at what kind of complexity. As far as just moving an interface into a netns goes, this is implemented here as an example: https://github.com/containers/netavark/blob/main/examples/host-device-plugin.rs
I've tried compiling the host-device-plugin that's included in the examples, and while the interface does seem to get moved into the container netns, it is in a down state within the container, so it never gets an IP address. I'm effectively looking for a full replacement for the CNI host-device plugin (https://www.cni.dev/plugins/current/main/host-device/). I need to use SR-IOV VFs for each of my containers, and the fact that I can't enable CNI anymore on Podman 5 is really hurting me.
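In case it helps anyone hitting the same down-state problem, a rough workaround sketch (untested; the container name "mycontainer", the interface name "eth1", and the address are illustrative) is to bring the moved interface up from the host side via the container's network namespace:

```shell
# Find the container's init PID, then enter its network namespace.
pid=$(podman inspect -f '{{.State.Pid}}' mycontainer)

# Bring the moved interface up and give it an address (names are placeholders).
sudo nsenter -t "$pid" -n ip link set eth1 up
sudo nsenter -t "$pid" -n ip addr add 192.168.1.10/24 dev eth1
```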
Hi. This is a bit peculiar, so feel free to convert it to a discussion, but I feel an issue gets more visibility if someone searches for this in the future, and I think it'll take a while to get something out of this (I won't have time to contribute much short term).
Background
On embedded devices with CAN networks, there is currently no way to pass a CAN interface to a container without using network=host.
It'd be great to have a supported way of exposing CAN interfaces to containers.
This can be tested with the vcan module, which allows creating virtual CAN interfaces without hardware.
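Something along these lines (standard SocketCAN usage; vcan0 is an arbitrary name):

```shell
# Load the virtual CAN driver and create a vcan interface.
sudo modprobe vcan
sudo ip link add dev vcan0 type vcan
sudo ip link set up vcan0
```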
Possible solutions
Just move the interface
The simplest way I could work around the issue is to just move the vcan interface into a container's network namespace; that's quite crippling, though, since the host (and every other container) loses access to the interface:
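A rough sketch of that move (untested; "mycontainer" and "vcan0" are illustrative):

```shell
# Move the vcan interface into the container's network namespace;
# it disappears from the host until the namespace goes away.
pid=$(podman inspect -f '{{.State.Pid}}' mycontainer)
sudo ip link set vcan0 netns "$pid"
sudo nsenter -t "$pid" -n ip link set vcan0 up
```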
This has the advantage of being realistically scriptable outside of netavark for the time being.
vxcan tunnels
I think it'd make more sense to use vxcan tunnels, which are basically like veth for CAN: creating one yields a pair of interfaces that communicate with each other across namespaces.
One can then use CAN_GW to forward traffic from vxcan-host to the real CAN interface, as described in this talk:
https://wiki.automotivelinux.org/_media/agl-distro/agl2018-socketcan.pdf
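Putting that together, here is a sketch of what netavark (or a plugin) would roughly have to do per container (untested; interface and container names are illustrative, and cangw comes from can-utils):

```shell
# Create a vxcan pair and move one end into the container's netns.
pid=$(podman inspect -f '{{.State.Pid}}' mycontainer)
sudo ip link add vxcan-host type vxcan peer name vxcan-ctr
sudo ip link set vxcan-ctr netns "$pid"
sudo ip link set vxcan-host up
sudo nsenter -t "$pid" -n ip link set vxcan-ctr up

# Mirror frames between the physical can0 and the tunnel with can-gw
# (one rule per direction; -e echoes sent frames, useful on virtual links).
sudo modprobe can-gw
sudo cangw -A -s can0 -d vxcan-host -e
sudo cangw -A -s vxcan-host -d can0 -e
```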
Ideally we could create a podman network type with a CAN interface as parent and have netavark create the vxcan interfaces and handle the can_gw configuration when containers are created; but that'll take a bit of work.
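To illustrate what that could look like (purely hypothetical CLI; no such driver exists today, the syntax just mirrors how macvlan networks take a parent option):

```shell
# Hypothetical usage if a "can" netavark driver existed:
podman network create --driver can --opt parent=can0 mycan
podman run --network mycan myimage
```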
tl;dr
There's probably quite a lot of work to do and I don't have time to help much short term, but I'm opening this to track any future work on it, and if the stars align I might contribute later, provided there's agreement on the way forward.