We set up goofys inside our container image to mount an AWS S3 bucket. Recently, as the bucket has grown large, files stop getting updated.
Some small files, such as a .txt file, get their timestamp updated, but when I check the content, it is still the old content.
We already use the allow_other option and do not enable any cache. I suspect this only happens in existing directories, and only with large buckets.
When I created a new directory with a new .txt file in the bucket and then updated the .txt file's content, the file was updated just fine.
I wonder why this only happens to existing files. Are there any options to prevent caching inside the container?
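For context, the mount is set up roughly like this in the container entrypoint (the bucket name and mount point below are placeholders):

```sh
# Rough shape of the mount command run in the container entrypoint;
# bucket name and mount point are placeholders.
goofys \
  -o allow_other \
  my-example-bucket /mnt/s3
# No cache-related flags are passed.
```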