From b6bbba5ceb295017c9fccda379e4e421571a6cde Mon Sep 17 00:00:00 2001
From: Simon McVittie
Date: Mon, 25 Mar 2024 17:46:40 +0000
Subject: [PATCH] utils: Don't let ssize_t overflow when reading very large
 files

The size to be allocated is tracked as a ssize_t, so if it's larger than
that, doubling it would cause a signed overflow.

Limiting the data we will read into memory to SSIZE_MAX/2 still lets it
occupy 25% of addressable memory (1 GiB on 32-bit, or some very large
amount on 64-bit), which should be adequate: in practice we expect this
function to read a few KiB at most.

We're likely to run out of memory before reaching this limit anyway;
changing the threshold to SSIZE_MAX / 8, compiling as 32-bit and running
`${builddir}/bwrap --args 0 < /dev/zero` is a convenient way to test this
code path.

Fixes: 422c078e "Check for allocation size overflows"
Signed-off-by: Simon McVittie
---
 utils.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/utils.c b/utils.c
index 1b685360..43c8d798 100644
--- a/utils.c
+++ b/utils.c
@@ -19,6 +19,7 @@
 #include "config.h"
 
 #include "utils.h"
+#include <limits.h>
 #include <stdint.h>
 #include <sys/syscall.h>
 #include <sys/socket.h>
@@ -599,7 +600,7 @@ load_file_data (int fd,
     {
       if (data_len == data_read + 1)
         {
-          if (data_len > SIZE_MAX / 2)
+          if (data_len > SSIZE_MAX / 2)
             {
               errno = EFBIG;
               return NULL;
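
For context, below is a minimal, self-contained sketch of the doubling-growth
read loop that this patch guards. It is not the bubblewrap implementation:
the function name read_all, the use of plain malloc/realloc instead of
bubblewrap's xmalloc/xrealloc helpers, and the abbreviated error handling are
illustrative assumptions. It shows why the cap has to be SSIZE_MAX / 2 when
the buffer length is tracked as a ssize_t: any larger value would overflow
when doubled, whereas comparing against the unsigned SIZE_MAX / 2 would let
the signed length wrap around instead of failing cleanly with EFBIG.

/*
 * Simplified sketch (assumed, not bubblewrap's actual code) of a
 * grow-by-doubling read loop with the SSIZE_MAX / 2 guard.
 */
#include <errno.h>
#include <limits.h>   /* SSIZE_MAX */
#include <stdlib.h>
#include <unistd.h>

static char *
read_all (int fd, ssize_t *size_out)
{
  ssize_t data_len = 4080;   /* initial allocation */
  ssize_t data_read = 0;
  char *data = malloc (data_len);

  if (data == NULL)
    return NULL;

  for (;;)
    {
      /* Grow when only the byte reserved for the trailing NUL is left. */
      if (data_len == data_read + 1)
        {
          if (data_len > SSIZE_MAX / 2)
            {
              /* Doubling would overflow ssize_t, so give up cleanly. */
              free (data);
              errno = EFBIG;
              return NULL;
            }

          data_len *= 2;
          char *grown = realloc (data, data_len);
          if (grown == NULL)
            {
              free (data);
              return NULL;
            }
          data = grown;
        }

      ssize_t res = read (fd, data + data_read, data_len - data_read - 1);
      if (res < 0)
        {
          int errsv = errno;
          free (data);
          errno = errsv;
          return NULL;
        }
      if (res == 0)
        break;

      data_read += res;
    }

  data[data_read] = '\0';
  *size_out = data_read;
  return data;
}

As the commit message suggests, lowering the cap (for example to
SSIZE_MAX / 8) and building as 32-bit makes the EFBIG path easy to exercise
from an unbounded input such as /dev/zero without first exhausting memory.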