
Need Guidance on S3FS / CVMFS #492

Open
NilsKrattinger opened this issue Aug 5, 2024 · 3 comments

Comments

@NilsKrattinger
Contributor

Hi,

I'm currently working on installing Galaxy from the Helm chart, and I have come across some questions concerning the CVMFS / S3FS choice for the ref-data volume.

The fact that CVMFS is listed in multiple places as the default choice "for the time being", together with this answer on issue #342, makes me wonder whether you plan to drop CVMFS support.

On the other hand, the CSI-S3 project seems to no longer be maintained. Is a switch to the Yandex k8s-csi-s3 alternative planned?

Is there a long-term plan being discussed on this topic? What would be the most future-proof choice?

@nuwang
Member

nuwang commented Aug 6, 2024

Hi @NilsKrattinger. We would recommend continuing to use CVMFS, for several reasons.

  1. In general, CVMFS has been the main reference data source used by the Galaxy community.
  2. We considered alternatives because there was a period in which the CSI driver for CVMFS in particular lacked maintenance. This has since been resolved, and we are seeing regular updates.
  3. CSI-S3 (and alternatives like Yandex's) does not perform as well as CVMFS when dealing with lots of small files, because each file requires a separate request, whereas CVMFS uses a larger block-based format.
  4. The above, plus the fact that CVMFS is generally optimized for read-only data while the S3 alternatives are not, resulted in worse performance for refdata. We observed a dramatic increase in startup times after switching away from CVMFS. Possible future optimizations may remedy this to some degree, but at least for the time being, it's the slower option.

For these reasons, we have switched back to CVMFS as the refdata repository of choice. Ideally, however, the choice should not matter: it should (at least in principle) be trivially possible to switch from one to the other just by enabling/disabling the corresponding option, and the chart will use the new source.
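In practice, that toggle could look like the following values fragment. This is only a sketch; the key names (`cvmfs.enabled`, `s3csi.enabled`) are illustrative assumptions and should be checked against the chart's actual values.yaml:

```yaml
# Illustrative values.yaml fragment -- key names are assumptions,
# not necessarily the chart's real option names.
cvmfs:
  enabled: true    # use CVMFS as the refdata source (recommended)
s3csi:
  enabled: false   # S3-backed alternative; slower for many small files
```

Flipping the two booleans (and redeploying) would then be the only change needed to switch the refdata source.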

@NilsKrattinger
Contributor Author

@nuwang Thanks for sharing the reasons and story behind this, really appreciate it!

As for the loose coupling between refdata and the storage solution, I fully agree that the CSI / storage solution should ideally not matter, as long as it supports ROX (ReadOnlyMany) operations.

Going down that road, would something like a refdata.storageClass attribute in values.yaml be desirable?
To not break compatibility, I think this would have to be overridden by cvmfs.enable and s3fs.enable, or it could default to cvmfs / s3fs when left empty?
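As a sketch of that idea (the attribute name and fallback behaviour are my assumptions, not an existing chart feature), the values could look like:

```yaml
# Hypothetical values.yaml addition:
refdata:
  enabled: true
  storageClass: ""   # empty => fall back to the enabled backend's
                     # default (cvmfs or s3fs)
```

The PVC template could then resolve the class with something like Sprig's `default` function, e.g. `storageClassName: {{ .Values.refdata.storageClass | default "cvmfs" }}`, keeping existing deployments unchanged when the new key is unset.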

@nuwang
Member

nuwang commented Aug 8, 2024

Yes, the storage class can currently be specified, and it will be used when creating the PVC:

{{- if .Values.refdata.enabled }}

However, I just realised that changing the storageClass on a PVC mid-stream is not supported by Kubernetes, so we should probably add a feature to allow the PVC itself to be injected externally. Once done, it should be possible to switch the PVC transparently, with running jobs continuing to use the old PVC and new jobs switching to the new one.
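A sketch of what such an injectable PVC might look like in the chart template follows. The `existingClaim` key and the surrounding field names are hypothetical, not the chart's current API:

```yaml
# Hypothetical: only create the PVC when no external claim is injected;
# otherwise pods reference the user-supplied claim by name.
{{- if and .Values.refdata.enabled (not .Values.refdata.existingClaim) }}
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ .Release.Name }}-refdata
spec:
  accessModes: ["ReadOnlyMany"]   # ROX: shared read-only reference data
  storageClassName: {{ .Values.refdata.storageClass }}
  resources:
    requests:
      storage: {{ .Values.refdata.size }}
{{- end }}
```

Pod specs would then mount `.Values.refdata.existingClaim` when it is set, falling back to the generated claim name otherwise, which is what lets the claim be swapped without mutating the existing PVC.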
