Y2038 support in .NET 9 Linux arm32

In .NET 9, we addressed the Y2038 problem for Linux arm32. @mattheweshleman noticed the issue early in the release cycle, and we encountered it ourselves when our Linux arm32 builds started failing with OpenSSL-related errors on Ubuntu 24.04, the first Ubuntu version to enable 64-bit time_t by default for Linux arm32. This document describes our approach to Y2038 support.

Summary

Our default posture for new .NET versions is to align with the latest Linux distributions first, and then work backwards to determine a reasonable compatibility story for older distributions. The .NET 9 Linux arm32 build requires glibc 2.35 and uses 64-bit time_t by default; we made this change to align with Ubuntu 24.04. These builds still run on systems where the rest of userspace uses 32-bit time_t, but on such systems times larger than 32 bits cannot be represented. We detect whether OpenSSL uses 32-bit time_t and fail predictably if the time being represented on the .NET side does not fit in 32 bits.

Y2038 in Linux

Y2038 support at the kernel layer includes new versions of syscalls that use 64-bit versions of time_t. Userspace programs that use 32-bit time_t continue to work with this kernel.
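
For illustration, here is a minimal sketch (not from the .NET codebase) of a 32-bit program invoking one of the new 64-bit syscalls directly. __NR_clock_gettime64 is only defined on 32-bit ABIs, and the local struct mirrors the kernel's __kernel_timespec layout:

    #include <stdint.h>
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <time.h>
    #include <unistd.h>

    // Mirrors the kernel's __kernel_timespec: 64-bit seconds even on
    // 32-bit architectures.
    struct kernel_timespec64
    {
        int64_t tv_sec;
        long long tv_nsec;
    };

    int main(void)
    {
    #ifdef __NR_clock_gettime64
        // The Y2038-safe variant of clock_gettime; exists on 32-bit ABIs only.
        struct kernel_timespec64 ts;
        if (syscall(__NR_clock_gettime64, CLOCK_REALTIME, &ts) == 0)
        {
            printf("seconds since epoch: %lld\n", (long long)ts.tv_sec);
        }
    #endif
        return 0;
    }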

The next layer up is glibc, which got Y2038 support in version 2.34. Like the kernel, glibc took a backwards-compatible approach: for every existing API symbol that used 32-bit time, glibc added a new 64-bit equivalent. The glibc headers check the _TIME_BITS define to determine which versions of the time-related symbols are referenced at compile time. For example, a call to glibc's mktime becomes a call to __mktime64 when built with _TIME_BITS=64.
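
As a concrete illustration (a minimal sketch, not part of the .NET build), the following program prints a post-2038 timestamp correctly on arm32 when compiled with -D_TIME_BITS=64 -D_FILE_OFFSET_BITS=64 (glibc only allows _TIME_BITS=64 together with 64-bit file offsets):

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        // With _TIME_BITS=64 on a 32-bit target, time_t is 8 bytes and
        // this mktime call binds to the __mktime64 symbol.
        struct tm tm = { 0 };
        tm.tm_year = 140; // years since 1900, i.e. 2040
        tm.tm_mday = 1;

        time_t t = mktime(&tm);
        printf("sizeof(time_t) = %zu, seconds since epoch: %lld\n",
               sizeof(time_t), (long long)t);
        return 0;
    }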

However, the backwards compatibility ends with glibc. Linux distributions took on the project of rebuilding the remaining distribution-provided userspace libraries and applications with _TIME_BITS=64, introducing an ABI break at the boundary of any library whose public surface references time_t.

Normally, ABI breaks like this are not a problem for the Linux ecosystem because distributions rebuild all packages using the new ABI. However, the Microsoft distribution of .NET is somewhat unique in that it provides one set of binaries that are compatible with a broad range of distros (as long as the architecture and libc flavor match). This is similar to Python's manylinux wheels.

Ubuntu made this change in 24.04, and Debian 13 is planned to be the first Debian release with 64-bit time.

musl gained 64-bit time_t support in version 1.2.0, which Alpine adopted in 3.13.

.NET builds

.NET official builds are produced by building the product on a dedicated set of build images. The images run Azure Linux 3.0 with an up-to-date cross-compilation toolchain, and also include a root filesystem of an old Linux distribution that provides an old version of glibc (or musl libc), which determines the libc compatibility of our builds.

In .NET 8, we support glibc 2.23 and musl 1.2.2 by building the product against root filesystems from Ubuntu 16.04 and Alpine 3.13, respectively.

.NET Y2038 support

.NET 8 already supports 64-bit time on musl-based Linux, and on all architectures other than 32-bit arm. That leaves only the glibc-based Linux arm32 build.

glibc

Because our build already defined _TIME_BITS=64, the main change we had to make was updating our rootfs to an Ubuntu version whose glibc supports _TIME_BITS. By targeting Ubuntu 22.04, .NET is built with 64-bit time_t, allowing DateTime.UtcNow to work correctly past Y2038.
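
A quick way to verify that a build really picked up 64-bit time_t is a compile-time assertion; this is a hypothetical guard for illustration, not the actual check in the .NET build:

    #include <assert.h>
    #include <time.h>

    // Fails the build if _TIME_BITS=64 did not take effect, e.g. when
    // compiling against a glibc older than 2.34 on arm32.
    static_assert(sizeof(time_t) == 8, "expected 64-bit time_t");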

With this change, glibc-based .NET 9 (on Linux arm) is only supported on distributions with at least glibc 2.35 (from Ubuntu 22.04).

OpenSSL

Note that while we now require a glibc with 64-bit time support, the rest of userspace may still be built with 32-bit time. This causes further problems when .NET interacts with other userspace libraries, in particular OpenSSL. An initial analysis identified a couple of spots where .NET was passing time_t values (now 64-bit, with the aforementioned change) into OpenSSL functions. Before Ubuntu 24.04 and Debian 13, this meant we would be passing 64-bit values into functions that expected 32-bit values at two callsites:

  • One was passing a time to OpenSSL's X509_VERIFY_PARAM_set_time for the purpose of validating a certificate chain.
  • Another was calling X509_cmp_time to compare the current time with a time returned from an OCSP response, as part of getting the expiry time.

We didn't want to create a potential security hole by clamping a larger-than-32-bit time to INT_MAX or INT_MIN, as that might (for example) cause OpenSSL to accept, well past Y2038, a certificate chain that was only valid during the Y2038 rollover: internally, OpenSSL would be comparing an ASN1_GENERALIZEDTIME, which does not have the Y2038 problem, against the clamped 32-bit time.

Instead we wanted to detect whether we were running with an OpenSSL that used 32-bit time_t, and fail predictably if working with times that did not fit in 32 bits.

We solved this by calling a simple OpenSSL function, OPENSSL_gmtime, which takes a pointer to a time_t and returns a broken-down time in the form of a tm struct. The call relies on an ABI trick: we pass a pointer to a 64-bit time_t, and a build of OpenSSL that uses 32-bit time_t will, due to little-endianness, interpret the referenced value's low 32 bits as its 32-bit time_t. We then examine the broken-down time to see whether it represents the full 64-bit value (indicating that OpenSSL uses 64-bit time_t) or just the lower 32 bits.

Here is the interesting bit of code:

    // This value will represent a time in year 2038 if 64-bit time is used,
    // or 1901 if the lower 32 bits are interpreted as a 32-bit time_t value.
    time_t timeVal = (time_t)INT_MAX + 1;
    struct tm tmVal = { 0 };

    // Detect whether openssl is using 32-bit or 64-bit time_t.
    // If it uses 32-bit time_t, little-endianness means that the pointer
    // will be interpreted as a pointer to the lower 32 bits of timeVal.
    // tm_year is the number of years since 1900.
    if (!OPENSSL_gmtime(&timeVal, &tmVal) || (tmVal.tm_year != 138 && tmVal.tm_year != 1))
    {
        fprintf(stderr, "Cannot determine the time_t size used by libssl\n");
        abort();
    }

    g_libSslUses32BitTime = (tmVal.tm_year == 1);

At the other callsites, we check whether our value fits in 32 bits. If OpenSSL uses 32-bit time_t and the value does not fit, we fail early; otherwise we call a version of the OpenSSL function that takes a 32-bit time_t. For example:

    if (g_libSslUses32BitTime)
    {
        if (verifyTime > INT_MAX || verifyTime < INT_MIN)
        {
            return 0;
        }

        // Cast to a signature that takes a 32-bit value for the time.
        ((void (*)(X509_VERIFY_PARAM*, int32_t))(void*)(X509_VERIFY_PARAM_set_time))(verifyParams, (int32_t)verifyTime);
        return 1;
    }

This cast ensures that the call uses the ABI that OpenSSL expects for the 32-bit time_t value.

With that, we support running against a 64-bit time_t compatible glibc, along with either a 32-bit or 64-bit time_t OpenSSL, at least until we hit a time that does not fit into 32 bits.
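
The X509_cmp_time callsite mentioned earlier is guarded the same way. The following is a hedged sketch of what such a wrapper could look like, not the verbatim .NET code; the name CompareOcspTime is hypothetical. Since X509_cmp_time takes a pointer to time_t, the 32-bit variant is called through a signature taking a pointer to a 32-bit value:

    #include <limits.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <time.h>
    #include <openssl/x509.h>

    // Flag computed by the OPENSSL_gmtime probe shown above.
    extern bool g_libSslUses32BitTime;

    // Compares an ASN1 time against a 64-bit Unix time, accounting for a
    // libssl built with 32-bit time_t. Returns -1 or 1 like X509_cmp_time,
    // or 0 on failure (X509_cmp_time itself returns 0 on error).
    static int CompareOcspTime(const ASN1_TIME* asn1Time, int64_t unixTime)
    {
        if (g_libSslUses32BitTime)
        {
            if (unixTime > INT_MAX || unixTime < INT_MIN)
            {
                // Fail predictably instead of truncating the time.
                return 0;
            }

            int32_t time32 = (int32_t)unixTime;

            // Cast to a signature that takes a pointer to a 32-bit time.
            return ((int (*)(const ASN1_TIME*, int32_t*))(void*)(X509_cmp_time))(asn1Time, &time32);
        }

        time_t t = (time_t)unixTime;
        return X509_cmp_time(asn1Time, &t);
    }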