
daemon.start: enable cephfs idmapped mounts support for old MDS #177

Closed · wants to merge 1 commit

Conversation

mihalicyn (Member):

If the Ceph MDS version is old and lacks support for CEPHFS_FEATURE_HAS_OWNER_UIDGID, then idmapped mounts won't work. We have a special fallback mechanism in the kernel cephfs client called "unsafe_idmap". Despite the name, it is safe in practice; the only problem is that it's incompatible with MDS-side UID/GID-based path restrictions, which is a rarely used feature, especially with workloads like LXD.

Let's enable this mechanism by default. We also need to preload the ceph kernel module (LKM).

ToDo: we can remove this entirely after a few years.

Signed-off-by: Alexander Mikhalitsyn <[email protected]>
@mihalicyn requested a review from tomponline on October 4, 2023 at 11:57
@@ -424,6 +424,15 @@ if [ "$(stat -c '%u' /proc)" = 0 ]; then
echo 1 > /proc/sys/kernel/unprivileged_userns_clone || true
fi
fi

# enable cephfs idmapped mounts support for old versions of ceph MDS
modprobe ceph || true
Member:

I thought we agreed the other day that this would be enabled in the Ubuntu kernel by default, so we wouldn't need this change?

This doesn't seem appropriate: always loading the ceph module on LXD start, even if ceph pools aren't being used.

So I don't think I can accept this patch.

Member:

We could potentially have this logic in LXD's Go code as part of the cephfs pool Mount() function:

https://github.com/canonical/lxd/blob/main/lxd/storage/drivers/driver_cephfs.go#L276

But ideally we wouldn't be altering system-wide settings when activating a pool.
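
For illustration, here's a minimal sketch of what that could look like inside the cephfs driver. The helper name is invented for this example, and the `enable_unsafe_idmap` module parameter path is an assumption based on the fallback described in the PR text; this is not LXD's actual implementation:

```go
// Hypothetical sketch: enable the cephfs idmapped-mounts fallback from
// the pool Mount() path rather than from daemon.start. The parameter
// path and helper name are assumptions, not LXD's real code.
package drivers

import (
	"os"
	"os/exec"
)

// enableCephIdmapFallback loads the ceph module and flips the
// enable_unsafe_idmap knob if the running kernel exposes it.
func enableCephIdmapFallback() error {
	// Ensure the ceph module is loaded so its parameters directory exists.
	if err := exec.Command("modprobe", "ceph").Run(); err != nil {
		return err
	}

	// Assumed parameter path for the "unsafe_idmap" fallback.
	param := "/sys/module/ceph/parameters/enable_unsafe_idmap"
	if _, err := os.Stat(param); err != nil {
		// Kernel doesn't expose the knob; nothing to do.
		return nil
	}

	// Note: this is still a system-wide setting, which is exactly the
	// concern about toggling it when activating a single pool.
	return os.WriteFile(param, []byte("Y"), 0)
}
```

This would at least confine the change to the cephfs code path, though it still flips a host-wide knob, as noted above.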

mihalicyn (Member, Author) · Oct 4, 2023:

> I thought we agreed the other day that this would be enabled in the Ubuntu kernel by default, so we wouldn't need this change?

No, it was just discussed. Of course, it's easier and more flexible to just enable it from user space. For one thing, changing the default value on the kernel side would require an Ubuntu-specific SAUCE patch.

> This doesn't seem appropriate: always loading the ceph module on LXD start, even if ceph pools aren't being used.

What's the problem with that? Having the ceph kernel module loaded does not cause any trouble or overhead. At the same time, ceph can be built into the kernel image, in which case it is effectively always loaded anyway.

Member:

We don't want to load unnecessary modules unless they are going to be used.
We've recently made changes to LXD to avoid loading VM-related modules unless /dev/kvm support is detected, for example.
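
As a rough illustration of that pattern (the helper and the module name here are invented for the example, not LXD's real code):

```go
// Hypothetical sketch of the "load only when needed" pattern:
// probe a kernel module only once its consumer is actually present.
package main

import (
	"os"
	"os/exec"
)

// modprobeIfPresent loads kmod only when the indicator path exists,
// mirroring the /dev/kvm gating described above.
func modprobeIfPresent(indicator, kmod string) error {
	if _, err := os.Stat(indicator); err != nil {
		return nil // no consumer detected; skip loading the module
	}
	return exec.Command("modprobe", kmod).Run()
}

func main() {
	// Only load VM-related modules when /dev/kvm is available.
	_ = modprobeIfPresent("/dev/kvm", "vhost_vsock")
}
```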

Member:

@xnox are you OK with having that option enabled in the Ubuntu kernels by default?
If so, then I think that option would be preferable.

xnox (Contributor):

> @xnox are you OK with having that option enabled in the Ubuntu kernels by default? If so, then I think that option would be preferable.

Yes, as long as things work correctly on LXD stable snaps from stable tracks, because as soon as I do it, I expose LXD snap users to it, and I cannot test this correctly myself. If you are asking me to do this, that's great, because then I want to reject this proposal and instead do it in the Ubuntu kernel.

I can do this in the mantic SRU too, and in the HWE kernels that will roll to jammy.

xnox (Contributor) left a comment:

NACK - does 5.0/stable LXD work correctly on a kernel with this option turned on, and does existing cephfs LXD usage work correctly upon reboot into such a kernel? If yes, we will turn this on in the kernel.

The kernel team is awaiting a go-ahead from the LXD team, because turning this on by default will cause LXD to start using it. That's why we want sign-off from the LXD team first.

mihalicyn (Member, Author):

> NACK - does 5.0/stable LXD work correctly on a kernel with this option turned on, and does existing cephfs LXD usage work correctly upon reboot into such a kernel? If yes, we will turn this on in the kernel.

Yes, it will work correctly.

> The kernel team is awaiting a go-ahead from the LXD team, because turning this on by default will cause LXD to start using it. That's why we want sign-off from the LXD team first.

OK, got it. Thanks!

@mihalicyn closed this on Oct 4, 2023