In the Linux kernel, the following vulnerability has been resolved: mm: avoid overflows in dirty throttling logic
Moderate severity · Unreviewed
Published by the National Vulnerability Database: Jul 30, 2024
Published to the GitHub Advisory Database: Jul 30, 2024
Last updated: Sep 16, 2024

Description
In the Linux kernel, the following vulnerability has been resolved:
mm: avoid overflows in dirty throttling logic
The dirty throttling logic is interspersed with assumptions that dirty
limits in PAGE_SIZE units fit into 32 bits (so that various multiplications
fit into 64 bits). If limits end up being larger, we will hit overflows,
possible divisions by zero, etc. Fix these problems by never allowing
dirty limits that large, as such limits have dubious practical value
anyway. For the dirty_bytes / dirty_background_bytes interfaces we can
simply refuse to set such large limits. For dirty_ratio /
dirty_background_ratio it isn't as simple, because the dirty limit is
computed from the amount of available memory, which can change due to
memory hotplug etc. So when converting dirty limits from ratios to
numbers of pages, we simply don't allow the result to exceed UINT_MAX.
This is a root-only triggerable problem which occurs when the operator
sets dirty limits to >16 TB (at the common 4 KiB page size, UINT_MAX
pages is roughly 2^32 x 2^12 bytes = 16 TiB).
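
As an illustration of the approach described above, here is a minimal
userspace C sketch, not the actual kernel patch: ratio_to_pages and
validate_dirty_bytes are hypothetical names, and a 4 KiB page size is
assumed.

#include <limits.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12 /* assumed 4 KiB pages, the common configuration */

/*
 * Hypothetical helper illustrating the ratio fix: when converting a
 * dirty ratio (percent of available memory) into a number of pages,
 * clamp the result to UINT_MAX so that later arithmetic assuming the
 * limit fits in 32 bits cannot overflow.
 */
static uint64_t ratio_to_pages(uint64_t available_pages, unsigned int ratio)
{
	uint64_t pages = available_pages * ratio / 100;

	if (pages > UINT_MAX) /* cap at UINT_MAX pages (~16 TiB at 4 KiB) */
		pages = UINT_MAX;
	return pages;
}

/*
 * Hypothetical validation for the *_bytes interfaces: refuse any value
 * whose page count would not fit in 32 bits, analogous to the sysctl
 * handler rejecting the write.
 */
static int validate_dirty_bytes(uint64_t bytes)
{
	return (bytes >> PAGE_SHIFT) > UINT_MAX ? -1 : 0;
}

int main(void)
{
	uint64_t avail = 1ULL << 33; /* 2^33 pages of 4 KiB = 32 TiB of memory */

	printf("100%% of 32 TiB clamps to %llu pages\n",
	       (unsigned long long)ratio_to_pages(avail, 100));
	printf("dirty_bytes = 32 TiB is %s\n",
	       validate_dirty_bytes(32ULL << 40) ? "rejected" : "accepted");
	return 0;
}

On a hypothetical 32 TiB machine, the ratio path clamps the computed
limit to UINT_MAX pages, while the bytes path simply rejects the
out-of-range value, mirroring the two strategies the fix describes.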