document how to edit/set kernel arguments #88
Comments
@jlebon, per our conversation on IRC, here's a unit that seems to work for enabling cgroups v2 on FCOS.
I think this unit can be cleaned up more. An edge case I found was that this unit would be called, and then, before the system could reboot, my unit that sets up my podman pod would start, which caused podman to be configured to use cgroups v1. After the system came back up in cgroups v2 mode, podman would fail to start my containers via systemd. A workaround was adding an explicit ordering dependency between the units.
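For concreteness, a unit along these lines would do the karg change and reboot. This is only a sketch of the general shape; the unit name cgroups-v2.service and the exact commands are my assumptions, not the unit posted above.

```ini
# cgroups-v2.service -- hypothetical sketch, not the exact unit from this thread
[Unit]
Description=Switch to the unified cgroup hierarchy (cgroups v2) and reboot
# "Run at most once": skip the unit entirely once the booted kernel
# no longer carries the old argument
ConditionKernelCommandLine=systemd.unified_cgroup_hierarchy=0

[Service]
Type=oneshot
RemainAfterExit=yes
# Stage the karg change for the next boot
ExecStart=/usr/bin/rpm-ostree kargs --replace systemd.unified_cgroup_hierarchy=1
# Ask for a reboot; other units can still start before it actually happens,
# which is exactly the race described above
ExecStart=/usr/bin/systemctl --no-block reboot

[Install]
WantedBy=multi-user.target
```

The ConditionKernelCommandLine= guard is what gives the unit its run-at-most-once behavior without inventing a separate stamp file.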
I think we should extend FCOS (or potentially Ignition) to have a standard target that (if enabled) reboots, and that other units can be ordered against. In OpenShift we kind of hack this together by having the MCO inject systemd units that perform an OS upgrade+reboot and are ordered before the workloads. Another way to look at this is extending the concept of Ignition as "runs at most once" configuration to the real root. One can do this now, but having users invent their own "run at most once" semantics plus reboot handling makes things more likely to clash.
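As a rough illustration of what that could give users, assuming a hypothetical provision-reboot.target shipped by the OS (no such unit exists today):

```ini
# Hypothetical: the OS would ship provision-reboot.target, which (only if
# enabled) performs the post-configuration reboot. A workload unit that must
# not run until provisioning is complete would then simply order itself
# after it, e.g. via a drop-in:
[Unit]
After=provision-reboot.target
```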
In general, rebooting from a unit in the real root isn't safe, because other units may already be running and can get interrupted partway through whatever they're doing.
Yes, we need to teach people to stop using that pattern. If we want to handle being interrupted during provisioning, then it's required that services be idempotent.
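For example, one way to keep such a provisioning step idempotent is to key everything off the booted kernel command line. A minimal sketch, assuming the goal is the cgroups v2 karg discussed above:

```sh
#!/bin/bash
# Hypothetical idempotent provisioning step: only stage the karg change and
# request a reboot if the booted kernel still carries the old setting.
set -euo pipefail

if grep -q 'systemd.unified_cgroup_hierarchy=0' /proc/cmdline; then
    rpm-ostree kargs --replace systemd.unified_cgroup_hierarchy=1
    systemctl --no-block reboot
fi
# Otherwise do nothing: re-running after the reboot is a harmless no-op.
```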
@cgwalters and @bgilbert, outside of the future work that would be more ideal for setting things like this, are there any improvements you'd make to my current example for kicking FCOS into cgroups v2?
@jdoss First, thanks for publishing that example! But...I can't come up with easy "minor" changes to it that solve the problems you mentioned without really trying to tackle the general space. For example, the unit ordering one...well, we could recommend ordering it before everything else that cares. But...the OpenShift use case wants to apply OS updates before any potentially untrusted containers land, and doing OS updates requires things like networking and time synchronization, and those often aren't up yet that early in boot.
Ouhh, that's a good point. Given that (most) computers do eventually reboot, and there's no way to order units "after all other units", doesn't that imply that this can't be handled purely with unit ordering in the real root?
The idea is that instead of a reboot from a unit in the real root, the kernel argument change (and any reboot it requires) would happen from the initramfs, before the real root is ever switched into.
Cross-referencing: coreos/butane#57
@cgwalters no problem! Also, after rereading my reply, it sounded a bit terse, and that was not my intent. I was pretty sure my first pass at this wasn't going to be perfect, and I know you, @jlebon, and @bgilbert have a lot more understanding of FCOS's inner guts that could make this better. Thanks for taking the time to respond 😄 My initial testing of the original unit worked fine on my qemu FCOS tester VM, but it ran into trouble when I tried it out on EC2. Anyway, I ended up with a revised unit which seems to work for now.
Note that just deleting systemd.unified_cgroup_hierarchy=0 only gets you the hybrid hierarchy; you need systemd.unified_cgroup_hierarchy=1 for a fully unified one:

[core@mycool-fcos ~]$ sudo /usr/bin/rpm-ostree kargs --delete systemd.unified_cgroup_hierarchy=0
Staging deployment... done
Kernel arguments updated.
Run "systemctl reboot" to start a reboot
[core@mycool-fcos ~]$ sudo systemctl reboot
*reboot*
[core@mycool-fcos ~]$ mount | grep cgroup
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,seclabel,mode=755)
cgroup2 on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime,seclabel,nsdelegate)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,xattr,name=systemd)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,cpu,cpuacct)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,hugetlb)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,cpuset)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,memory)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,devices)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,blkio)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,net_cls,net_prio)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,perf_event)
[core@mycool-fcos ~]$ sudo /usr/bin/rpm-ostree kargs --replace systemd.unified_cgroup_hierarchy=1
Staging deployment... done
Kernel arguments updated.
Run "systemctl reboot" to start a reboot
[core@mycool-fcos ~]$ sudo systemctl reboot
*reboot*
[core@mycool-fcos ~]$ mount | grep cgroup
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime,seclabel,nsdelegate)

Then all of my units also needed to be ordered after the karg unit (roughly along the lines of the drop-in sketched below).
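For illustration, such an ordering could be expressed as a drop-in roughly like this; the unit names my-pod.service and cgroups-v2.service are placeholders I'm assuming, not the actual units from this thread:

```ini
# /etc/systemd/system/my-pod.service.d/10-wait-for-cgroups-v2.conf
# Hypothetical drop-in for a workload unit
[Unit]
# Order after the karg unit (once its condition no longer holds it is
# simply skipped, so this stays satisfied on later boots)
After=cgroups-v2.service
Wants=cgroups-v2.service
# Refuse to start while the machine is still booted with the legacy setting,
# so the workload can't race with a pending reboot
ConditionKernelCommandLine=!systemd.unified_cgroup_hierarchy=0
```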
Re. the standard reboot target idea: there's a related upstream systemd ticket where this has been discussed.
Opened #199 for this, which includes feedback from the discussions in that systemd ticket.
We have some items in the works that will make setting/editing kargs easier, but for now let's just get a page up that gives people a starting point for configuring kernel arguments persistently.
I'm thinking it should be at the same place in the navigation as https://docs.fedoraproject.org/en-US/fedora-coreos/sysctl/ but for kargs.
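As a sketch of what that page could start with (the example kernel argument mitigations=off is arbitrary, purely for illustration):

```sh
# Show the kernel arguments of the current deployment
sudo rpm-ostree kargs

# Append a new kernel argument (this stages a new deployment)
sudo rpm-ostree kargs --append mitigations=off

# Remove it again
sudo rpm-ostree kargs --delete mitigations=off

# Changes take effect on the next boot
sudo systemctl reboot
```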