Blobfuse not writing to HNS enabled Storage Account fast enough #1573
Comments
You can disable the local disk caching and validate whether it helps improve performance or not. To try this, just comment out the "path" field in the block-cache config. Also, if you are using very small files, reducing the block size to 8MB may also help.
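As a rough illustration of that suggestion, a block-cache section might look like the sketch below. The option names follow the blobfuse2 sample configs, but the values and the cache path are only placeholders; verify everything against the shipped baseconfig.yaml for your version.

```yaml
# Hypothetical block-cache sketch; option names/values are assumptions,
# check setup/baseconfig.yaml for your blobfuse2 release before using.
components:
  - libfuse
  - block_cache
  - attr_cache
  - azstorage

block_cache:
  block-size-mb: 8        # smaller blocks can help with many small files
  mem-size-mb: 4096       # in-memory cache size
  # path: /blobstorage/cache/archive2   # comment out to disable local disk caching
  prefetch: 12
  parallelism: 32
```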
It will be a large number of small files. A high-resolution image is uploaded, then the system processes it into a lower-resolution preview and a thumbnail, so one file is multiplied out to three. The processing is done asynchronously, as the initially written images are added to a processing queue. I tweaked the block size to 8MB, changed to Premium SSD and also reduced the number of post-processing threads. However, I think I've hit the original issue, which I now understand better: some of the processes are now in an uninterruptible state (the dreaded Linux D state).
Investigations have shown that the issue of the processes being stuck is definitely related to Blobfuse:
There are many different entries like this in the logs. So it has been writing faster, but it seems the cache has filled up, though I'm not sure why items aren't being purged out of the cache. I had tried to reduce the … I could disable the caching, but can I be confident that all the files currently in the cache location have been uploaded to the Storage Account?
Hi @biggles007, you can disable the disk caching by commenting out the disk path and trying to mount blobfuse again.
If possible, please also share the blobfuse2 logs with the log_debug level enabled.
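For reference, turning up the log level is a config change along these lines; the section below is a sketch based on the documented logging options, and the file path is just an example:

```yaml
# Hypothetical logging section; names follow the blobfuse2 sample configs,
# verify against setup/baseconfig.yaml before using.
logging:
  type: base                       # or syslog
  level: log_debug                 # verbose logging for troubleshooting
  file-path: /var/log/blobfuse2.log
```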
I increased the storage allocated to the cache partition from 10GB to 40GB, but had to hard reboot the server in order to get the processes working, as they'd got stuck in the uninterruptible state.
Strangely it never seems to go above 0% on the … When does Blobfuse2 determine the amount of free space to use? Is this only at start-up, or is it constantly checking? If we expand the volume the mount point is on when it fills up, would it pick that up immediately? I'm also wondering whether, once a cache space issue is resolved, those processes stuck in the uninterruptible state should recover.

I won't be able to change the logging level right now; I just need to keep the server stable to allow the business to meet some requirements. Hopefully later next week we may be able to look at getting some more logs.
The logs in your last message are showing two different states:
You can comment out the temp-path from the block-cache config if you do not need disk persistence; that will save all of the disk space. Disk persistence is required only if your application tends to read the same file again and again, in which case disk caching saves you from re-downloading the same file.
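To make that trade-off concrete, the two variants might look roughly like the sketch below. Option names are assumed from the sample configs and the sizes are illustrative, not recommendations.

```yaml
# Variant A (hypothetical): no disk persistence - blocks stay in memory only.
block_cache:
  block-size-mb: 8
  mem-size-mb: 4096
  # path omitted, so nothing is written to the local cache partition

# Variant B (hypothetical): disk persistence for repeated reads of the same files.
# block_cache:
#   block-size-mb: 8
#   mem-size-mb: 4096
#   path: /blobstorage/cache/archive2
#   disk-size-mb: 40960        # cap the on-disk cache (~40GB here)
#   disk-timeout-sec: 120      # evict cached blocks after this idle period
```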
Technically, if we have a balanced configuration we could benefit from short-term caching, as the file should still be in the cache for post-processing. Removing the temp-path is an option, but while things are working I need to keep them stable. We can test more once the main data load has been completed and there is less pressure on the project.

This is the first time I've used Blobfuse2, so I'm learning a lot. I feel there could be more documentation on the various options; right now there isn't anything (unless you can point me to it) that explains each option, what it's for and the values that can be set. There just seem to be some sample config files with basic comments. Having more detail around different scenarios would be helpful, e.g. if no caching is required, which settings you need. As all the sample configs have caching enabled, it reads as "you should cache" unless you find the docs that say to turn off caching when writing from multiple locations. If that makes sense?
There is a baseconfig.yaml file in our "setup" directory that explains most of the config options we allow. Our README also describes scenarios that can help you choose which option or caching model you should use. I do understand there is a lot of scope for improving our documentation, and receiving feedback from customers is important in this regard.
Blobfuse version: 2.3.2
Operating System: RHEL 8.6
The config file is as per below
Server build/app
The server runs a third-party Digital Asset Management system that writes to a shared file location, which has been mounted via Blobfuse2. Assets/images are being archived from a live production system (which isn't cloud based) to the archive server in Azure.
Issue
The cache seems to be filling up with the data being copied in, while blobfuse doesn't seem to be copying data into the Storage Account/Data Lake Storage at a reasonable rate. The blobfuse2 process isn't using any CPU, but I can see a process of
/usr/bin/du -sh /blobstorage/cache/archive2
that appears every now and then with high CPU usage, seemingly calculating the space used. Is there a way of monitoring Blobfuse and the rate at which it is copying files into the Storage Account?