Update Apache LimitRequestBody for >1 GiB v2 chunking too #11295

Open · wants to merge 2 commits into base: master
@@ -65,7 +65,7 @@ for how to configure those values correctly:

Apache
^^^^^^
-* `LimitRequestBody <https://httpd.apache.org/docs/current/en/mod/core.html#limitrequestbody>`_ (In Apache HTTP Server <=2.4.53 this defaulted to unlimited, but now defaults to 1 GiB. The new default limits uploads from non-chunking clients to 1 GiB. If this is a concern in your environment, override the new default by either manually setting it to ``0`` or to a value similar to that used for your local environment's PHP ``upload_max_filesize / post_max_size / memory_limit`` parameters.)
+* `LimitRequestBody <https://httpd.apache.org/docs/current/en/mod/core.html#limitrequestbody>`_ (In Apache HTTP Server <=2.4.53 this defaulted to unlimited, but it now defaults to 1 GiB. The new default limits uploads from non-chunking clients to 1 GiB and also caps the chunk size used by recent versions of the official Nextcloud clients, which may require up to 5 GiB per chunk. This will break uploads >1 GiB under various circumstances. Our recommendation is to set this Apache parameter to 5 GiB, or to ``0`` to restore the old unlimited behavior. Also review your local environment's PHP ``upload_max_filesize / post_max_size / memory_limit`` parameters to make sure they are at least the same size. Alternatively, for chunking-capable clients that support manual configuration, the default maximum chunk size can be reduced to 1 GiB, but this parameter must then be configured manually on each client installation.)
Contributor:

Are you sure about the chunk size of the desktop client? I think by default maxchunksize is set to 1GB, no? So why should it require up to 5GB?

Member Author:

It's 5GB today. It was changed in nextcloud/desktop#5939

Context: the desktop client is more aggressive about using a large chunk size than our web client. The same behavior is essentially the underlying cause of nextcloud/desktop#4278 too: many of the people in that issue are behind Cloudflare, since it specifically covers Cloudflare's 100 MB limit, but the same underlying problem applies to body size limits in Apache and Nginx; only the absolute number varies.

Though today - rather than updating the docs - I'm of the opinion we should really fix the mentioned issue in the desktop client. Then the language in this PR can be more about "optimizing upload performance" rather than "fixing broken uploading".
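
A minimal sketch of the client-side fallback the proposed doc text mentions, for anyone who needs a stopgap before a client-side fix lands. The ``[General]`` section and the ``maxChunkSize`` key are assumptions about the desktop client's ``nextcloud.cfg`` and should be checked against the client documentation for the installed version:

    [General]
    # Assumed key name: cap the upload chunk size at 1 GiB (value in bytes)
    # so each chunk request stays under Apache's new LimitRequestBody default
    maxChunkSize=1073741824

Every client installation needs this override, which is why the proposed wording treats it as an alternative rather than the primary recommendation.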

Contributor:

> Though today - rather than updating the docs - I'm of the opinion we should really fix the mentioned issue in the desktop client. Then the language in this PR can be more about "optimizing upload performance" rather than "fixing broken uploading".

Agreed 💯
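
For reference, a minimal sketch of the server-side settings the proposed wording recommends, assuming a 5 GiB ceiling; the file locations and exact values are illustrative and should be adapted to the local environment:

    # Apache (for example in the Nextcloud vhost configuration)
    # 5 GiB expressed in bytes; 0 would restore the old unlimited behavior
    LimitRequestBody 5368709120

    # PHP (php.ini for the SAPI serving Nextcloud); per the proposed wording,
    # keep these at least as large as the Apache limit
    upload_max_filesize = 5G
    post_max_size = 5G
    memory_limit = 5G

With both limits at 5 GiB, the desktop client's largest chunks fit without any per-client configuration.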

* `SSLRenegBufferSize <https://httpd.apache.org/docs/current/mod/mod_ssl.html#sslrenegbuffersize>`_
* `Timeout <https://httpd.apache.org/docs/current/mod/core.html#timeout>`_
