Year 2038 bug

hey guys, I fixed 32-bit time_t
There, I, a random fucking pleb, just fixed it. When can we get a patch going? Seriously, this is fucking retarded; my ARM PC/phone won't be functional in 20 years.

ARM is the most popular architecture
the most popular variant of ARM is 32-bit

32-bit ARM accounts for billions of devices

most of these billions of devices run Linux or UNIX

it is a fucking problem that we can't push under the rug

show me bub

Apple's HFS+ has this problem too. The maximum date you can represent is some time in 2038.

Yes, thanks for that. We are all aware that every piece of software using UNIX timestamps is fucked in 2038: Apple's HFS+, some pieces of Windows software, and most Linux and UNIX kernels.

Every piece of software that stores date as a UNIX timestamp will have this problem, yes, which does include most non-UNIX software

in 2016 :^)

guess you'll just have to upgrade to the dingleberry pi 12 goy

You're welcome autist.

billions is a lot, user. I doubt we'll be able to upgrade billions of devices in just 20 years. You also forget that millions of new 32-bit ARM boards are still being produced every day; I suspect it won't be until around 2020 that shipments of ARMv8 outpace ARMv7.

RPi3 is already 64-bit

Look at how long ARM has been around pumping out this quantity of chips.
Now pull your head out of your ass.

You'd think they would have learnt from the millennium bug.

ARM continues pumping out more 32-bit chips than 64-bit chips, though, and there's no telling when 64-bit shipments will outpace 32-bit. I guarantee you half, if not more than half, of ARM chips will still be 32-bit by 2038.

Embedded guy here. Some of the products we support are 20 years old, the ones we make today are 32-bit, and I've tested that they do fail spectacularly and permanently close to 2038. I'm absolutely sure a lot of these will be in use in 2038. Absolutely no one is asking us to be 2038-safe, so management does not care.

This is what terrifies me about this.
IDGAF if I'm running desktops with the CMOS batteries pulled out, but man, I don't have enough skill points in embedded systems to fix all my shit.

We don't use CMOS batteries, we get time from the network. Still doomed.

Wait, do newer motherboards not have CMOS batteries at all?

Of course they don't care. Why spend money today to find a solution to a problem 20 years in the future when you can half-ass it in 2037 and sell it to panicked consumers at 10x the price?

They do, but the batteries have such a high failure rate that many vendors design products without them.

The customers only care about heavily marketed bugs that have been given names. We got hundreds of calls about heartbleed for a product that has no web UI or encrypted connections and a total of zero calls about the (imo, far worse) glibc resolver bug that wasn't marketed or given a name because Google found it.

So, if one of these devices that can only get time from the network is run without a network connection, does it just not have an internal timepiece at all?

I don't think modern UEFI-based motherboards have CMOS batteries, although I suppose they do have to keep track of time somehow when powered off

obviously the clock is run off the mains, that's my guess anyhow, and when power is cut off the clock must be reset

Where I'm going with this is "what hardware can I just turn the clocks off to make it never tick to 2038"?

Never thought of that, hell, why can't we just stop the clock in Linux altogether?

It's fucking incredible how Linux has no overflow protection on time_t. Regardless of whether Linus Torvalds gives a fuck about the 2038 problem, overflow protection should just be a basic fucking thing

If we stop it entirely then we cant time anything or generate pseudo-random sequences.
We need it to just reset every time we cycle the power.


Or just have it loop indefinitely before it reaches 2038 like the OP suggests

seriously though, overflow protection is just something that should've been done in the beginning, regardless of whether or not people believed most devices would be 32-bit in 2038

I'd agree with that.
Also, has anyone even addressed the issue of the "y2k bug" in that insane pile of COBOL written by the mad-spider-god himself?
When does that hit its next limit, and is this 2038 thing going to affect that black magic spaghetti web?

So you're saying we have 22 years left to prep for when the botnet goes haywire.

You don't need a 64-bit CPU to have a 64-bit time_t, you silly. OpenBSD has already patched most (if not all) of its stuff to use a 64-bit time_t on all archs; AFAIK Linux keeps track of time in 64 bits as well, and only some userspace stuff uses a 32-bit time_t on some archs. For the vast majority of software, just changing the time_t typedef and recompiling is all you need. We already have the fix, we just haven't applied it to a lot of stuff for various reasons. Your solution, besides being stupid, still doesn't work without recompiling everything, so it's kind of pointless.

Yeah, we don't keep time internally when powered off. The box boots, gets NTP servers via DHCP options, and updates. Then it's ready to start the next stage. This is pretty normal for embedded.

This isn't a solution at all; many devices need to know the real time. And almost every solution in this thread is retarded: you just need to extend the size of time_t, but you can't do that for various reasons in a lot of cases, and that's why there's a problem.

OpenBSD got away with it since nobody uses BSD; it's going to be a much harder time with Linux, I'm afraid


and lots of software breaks monumentally anyway because of the weird cyclic behaviour of time it didn't expect.

The thing is some systems CANNOT be updated. A good example are embedded devices.

how about two integers nigga

If your program keeps time by measuring the difference between two wall-clock timestamps instead of counting ticks independent of the time_t value, you've done something wrong

Please never write anything critical, user.

That breaks everything that relies on the time, which is exactly what the 2038 overflow itself would do.

i don't understand why so many things need "correct" time to work today. like, if it's the wrong time, many websites just won't fucking render.

why is a clock so important? why the fuck isn't it as easy to fix as it should be? and why an arbitrary number like 2038? what would even fucking happen? i just don't get it

Why isn't it? Consider that even DHCP needs to know if a lease has expired, and on a lot of servers this means storing a timestamp and checking it whenever it's unsure. Most DHCP servers persist leases across reboots by storing them on the filesystem, so there are no retarded games to be played with the runtime clock as suggested here. You just have to make sure you have an appropriately large time_t.
Kill yourself.
Depends on the software. Usually the failures will be much more dramatic and common than with Y2K, as timing is much more heavily relied on.
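The lease bookkeeping described above is just a stored timestamp plus a comparison, and widening that one field is the whole fix. A hypothetical sketch (struct and function names invented for illustration):

```c
#include <stdint.h>

/* Storing expiry as a signed 64-bit epoch second keeps this comparison
 * valid long past 2038; with a wrapped 32-bit time_t, "now" goes
 * negative and every lease suddenly looks expired. */
typedef struct {
    int64_t expires_at;  /* seconds since 1970-01-01 UTC */
} dhcp_lease;

int lease_expired(const dhcp_lease *l, int64_t now) {
    return now >= l->expires_at;
}
```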

That's because of HTTPS certificates. They have an expiry date and a creation date, for security reasons. If the current date is not between those dates something is wrong. There are a lot of things like that.

Time is kept as a count of the seconds since 1970-01-01. It's traditionally stored in a signed 32-bit variable. The highest number it can hold is 2,147,483,647; the lowest is -2,147,483,648. 2,147,483,647 seconds after 1970-01-01 is somewhere in 2038. Because of the internal representation of numbers, adding 1 to a signed 32-bit variable with a value of 2,147,483,647 makes it wrap around to -2,147,483,648, which corresponds to somewhere in 1901.

A straightforward way to fix this is storing time in a 64-bit variable, but that breaks compatibility. If a program is compiled for a system that keeps time in a 32-bit variable, the compiled program only allocates 4 bytes of space for the time. To store more than that, it needs to be recompiled with a 64-bit time variable. So if you change, say, Linux to use a 64-bit time variable, it can no longer run old programs.

There are ways to work around that, and there are other reasons why programs have trouble with 64-bit time, but this is an approximate explanation.

Not user, but thanks for the explanation. What I'm wondering is why they can't program a function that checks if it is 2038, sort of like checking if it's a leap year. If it is, then just set the storage back to 0 and continue from -2 billion so it lasts a while.

That's something program developers would have to do, you can't implement it at a level where it happens automatically.

The website rendering problem has to do with certificate mismatch (i.e. the system date falling outside the website certificate's validity period)
at a human level it should be obvious why timestamps are vital to the organization and operation of an OS, even things like temporary files only work based on human time

At lower levels, a program/kernel does not need to know the exact date; rather, it reads the time into a time_t to drive low-level counter routines and RNGs
32-bit computers typically use 32-bit math. This is less computer science and more basic fucking mathematics: a signed 32-bit number can store a maximum value of 2,147,483,647. The time_t type originally comes from Bell Labs' original version of UNIX, which counts seconds after 1970. In 2038, 2,147,483,647 seconds will have passed since 1970, causing an integer overflow.
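To check the arithmetic: 2,147,483,647 seconds is about 68 years, which puts the overflow at 03:14:07 UTC on 19 January 2038. A small sketch that derives the year with gmtime (helper name invented; assumes time_t can hold INT32_MAX, which it can everywhere relevant):

```c
#include <time.h>

/* Year in which a signed 32-bit time_t overflows. */
int overflow_year(void) {
    time_t limit = 2147483647;      /* INT32_MAX seconds after the epoch */
    struct tm *tm = gmtime(&limit); /* convert to broken-down UTC */
    return tm ? tm->tm_year + 1900 : -1;  /* tm_year counts from 1900 */
}
```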

Now since UNIX is perhaps the most influential OS of all time, pretty much every OS that was programmed in some C and takes paradigms from UNIX uses time_t at least once somewhere. In other words: pretty much every OS ever, including Windows

Look at the filename of the picture attached to this post: that number is no random number, it's a UNIX timestamp. Same with all *chan board image filenames; you can even look up the UNIX timestamp to see the exact time and date the image was uploaded

On UNIX, Cygwin, and Linux, the following command should give you the current UNIX timestamp/epoch
date +%s

Introduce new 64-bit time API, adapt old programs. 2038 will come and kill programs which haven't been adapted, but not those which have. It's already the same deal with programs which use 32-bit integers to keep track of file sizes in bytes.

This doesn't mean anything. 32-bit processors can deal with 64-bit numbers.
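In code, the claim is just that int64_t works in source regardless of register width; on 32-bit ARM the compiler lowers a 64-bit add to an add / add-with-carry instruction pair:

```c
#include <stdint.h>

/* Compiles and runs identically on 32-bit and 64-bit targets;
 * the hardware never needs a 64-bit register for this. */
int64_t add_seconds(int64_t t, int64_t delta) {
    return t + delta;
}
```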

Are you just spouting random words that sound vaguely technical now? Why don't I just override your hard disk through a proxy and fill it up with furry porn?

I have a feeling the idea of arbitrary precision arithmetic would blow the minds of most people in this thread

32-bit kernels do typically use a 32-bit time_t, and 64-bit kernels a 64-bit time_t. I'm not sure why, but I think it's because having to create a new incompatible ABI anyway meant changing it was less of a problem.

Arbitrary precision arithmetic for unix time sounds like a bad idea.

This is the moment I have to tell you, learn more about architectures, you will be surprised.

Literally the only problem is the computer running out of memory, which would take longer than a hundred trillion lifetimes of the universe.

64 bit has been standard for years, who uses this 32-bit garbage?

Your fault for using shitty hardware.

They're slower and more complex to deal with, aren't they? Can I use them in all languages?

More importantly, what's the point of using them if 64-bit time also takes something like a hundred trillion lifetimes of the universe to stop working? What advantage do they have for unix time over 64-bit timestamps?


tech pleb here

will PC vidya and emulators still work on windows or wine?

If you just use a 64-bit type, then it will use as many registers as required to store the type.
You could even modify the implementation of time_t on embedded systems to use 64-bit arithmetic, given that the CPU can do it, and as far as I know most can.
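One common shape for that on hardware whose counter really is stuck at 32 bits: keep the high word in software and bump it whenever the hardware word is seen to wrap. A hypothetical sketch (names invented; real code must also guard against an interrupt reading mid-update):

```c
#include <stdint.h>

typedef struct {
    uint32_t last_hw;  /* last raw 32-bit hardware counter reading */
    uint32_t wraps;    /* how many times it has rolled over */
} wide_clock;

/* Fold a fresh 32-bit reading into a 64-bit count. Must be called
 * at least once per hardware wrap period or a rollover is missed. */
int64_t wide_clock_update(wide_clock *c, uint32_t hw_now) {
    if (hw_now < c->last_hw)   /* reading went backwards: it wrapped */
        c->wraps++;
    c->last_hw = hw_now;
    return ((int64_t)c->wraps << 32) | hw_now;
}
```

This is essentially the "two integers" idea from earlier in the thread, and it needs no recompile of anything except the timekeeping layer itself.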

That would break compatibility with a lot of legacy software however

WINE re-implements the Win32 APIs, so I imagine it is immune to the 2038 bug; however, a lot of Windows programs themselves may be affected if they use a plain 32-bit time_t instead of the Windows API

Sorry fam, my main x86_64 PC uses Windows 10 while my ARM computer uses Fedora
And you all call me the dumb one? At least Allwinner tries to be GPL-friendly/FOSS


x86/64 works perfectly fine with your inferior distros.