[RFE] handling of CAN networks #940

Open

martinetd opened this issue Mar 6, 2024 · 4 comments

@martinetd

Hi. This is a bit of a peculiar request, so feel free to convert it to a discussion, but I feel an issue would be more visible to anyone searching for this in the future, as I think it'll take a while to get something out of this (I won't have time to contribute much short term).

Background

On embedded devices with CAN networks, there is currently no way to pass a CAN interface to a container without using network=host.
It'd be great to have some way of handling this.

This can be tested with the vcan module that allows creating virtual CAN interfaces without hardware:

# ip link add dev vcan0 type vcan
# ip link set vcan0 up
(in one shell, commands from can-utils package)
# candump vcan0
  vcan0  123   [4]  DE AD BE EF
(in another shell)
# cansend vcan0 123#DEADBEEF 

Possible solutions

Just move the interface

The simplest way I could work around the issue is to just move the CAN interface into the container; that's quite limiting:

  • it cannot be shared between multiple containers (this might actually be for the best?)
  • if the container stops while the interface is in its network namespace, that interface is lost forever? I found no way of recreating a CAN interface for real hardware..

This has the advantage of being reasonably scriptable outside of netavark for the time being:

# ip netns list
netns-55c02473-599c-9671-d5b9-1f0619e1ff8b (id: 0)
# ip link set vcan0 up
# ip link set vcan0 netns netns-55c02473-599c-9671-d5b9-1f0619e1ff8b

vxcan tunnels

I think it'd make more sense to use vxcan tunnels, which are basically like veth for CAN: they create a pair of interfaces that communicate with each other across namespaces.

# ip link add vxcan-container netns netns-55c02473-599c-9671-d5b9-1f0619e1ff8b type vxcan peer name vxcan-host

One can then use CAN_GW to forward traffic from vxcan-host to the real CAN interface, as described in this talk:
https://wiki.automotivelinux.org/_media/agl-distro/agl2018-socketcan.pdf
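
A rough sketch of what the forwarding rules could look like with cangw from can-utils (assuming can0 is the name of the real CAN interface and vxcan-host is the host end of the pair created above; -A adds a gateway rule, -e echoes sent frames):

# cangw -A -s can0 -d vxcan-host -e
# cangw -A -s vxcan-host -d can0 -e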

Ideally we could create a podman network type with a CAN interface as parent and have netavark create the vxcan interfaces and handle the can_gw config when containers are created, but that'll take a bit of work.
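
Purely to illustrate the hoped-for user experience (the can driver name and its parent option are hypothetical here, loosely modeled on how the existing macvlan driver is configured):

# podman network create --driver can --opt parent=can0 can-net
# podman run --network can-net ...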

tl;dr

There's probably quite a lot of work to do and I don't have time to help much short term, but I'm opening this to track any future work on it; if the stars align I might contribute later, provided there's agreement on the way forward.

@Luap99
Member

Luap99 commented Mar 6, 2024

I doubt that this has many users, and it would complicate maintenance for us, so I would not be in favor of including something like this by default.

However, we do offer a plugin system (https://github.com/containers/netavark/blob/main/plugin-API.md), so you could implement this externally without us.
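
For reference, a very rough sketch of what such an out-of-tree plugin could look like as a shell script, assuming the executable/subcommand contract described in plugin-API.md (the interface name can0 is an assumption, error handling is omitted, and the exact JSON netavark expects on stdout is only stubbed out here):

#!/bin/sh
# sketch of a netavark plugin: "create" validates/echoes the network config,
# "setup"/"teardown" receive the netns path as the second argument and JSON on stdin
CAN_IF=can0                      # assumed name of the CAN interface to hand over
case "$1" in
    create)
        cat                      # pass the network definition back unchanged
        ;;
    setup)
        ns=$(basename "$2")      # assumes a named netns under /run/netns
        ip link set "$CAN_IF" netns "$ns"
        ip netns exec "$ns" ip link set "$CAN_IF" up
        echo '{}'                # placeholder; see plugin-API.md for the real status block
        ;;
    teardown)
        ns=$(basename "$2")
        ip netns exec "$ns" ip link set "$CAN_IF" netns 1   # "1" = move back to the host (pid 1 netns)
        ;;
esac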

@martinetd
Author

That's fair - I agree this is quite an embedded niche, but I think podman is starting to get quite a few embedded users so let's see if someone else shows up :)

Ultimately I was saying "a lot of work" because these are interfaces I'm not familiar with, but I don't think it has to be very complex, so I think it stands a chance if it can be made simple enough. If it really boils down to a netlink interface creation, placing it in the namespace, and some mostly-static forwarding rule, it probably won't need much maintenance; and vcan allows easy testing without hardware, so there's also no risk of bitrot if done properly.
One of the big pluses of netavark for me was that it bundles all of the old CNI plugins into a single binary, so I'd rather not slide back down the extra-binaries slope, especially since the example plugins are rather big for some reason (I just did a release build of the main branch and even the noop plugins are around 5MB?!). But I also understand your position.

As said above, I won't have time for this short term anyway; feel free to close, but if someone else interested comes by, please speak up!

@Luap99
Member

Luap99 commented Mar 6, 2024

I have no problem with keeping RFEs open; it makes them more discoverable for other users.
Certainly, if there is enough interest, I would not oppose adding another driver directly here.

If someone writes a plugin, it would at least show me how much code this really is and what kind of complexity is involved. As far as just moving an interface into a netns goes, this is implemented here as an example: https://github.com/containers/netavark/blob/main/examples/host-device-plugin.rs

@Luap99 Luap99 added the RFE label Mar 7, 2024
@rsumner

rsumner commented Aug 2, 2024

I've tried compiling the host-device-plugin that's included in the examples, and while the interface does seem to get moved into the container netns, it is in a down state within the container, so it never gets an IP address. I'm effectively looking for a full replacement for the CNI host-device plugin (https://www.cni.dev/plugins/current/main/host-device/). I need to use SR-IOV VFs for each of my containers, and the fact that I can't enable CNI anymore on Podman 5 is really hurting me.
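
(For what it's worth, a manual stopgap, assuming a rootful container whose netns shows up in ip netns list, with <netns> and <ifname> standing in for the actual netns and moved-interface names; IP assignment would still need to be handled separately:)

# ip netns exec <netns> ip link set <ifname> up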
