Reddit's photo albums broke due to Integer overflow of Signed Int32
328 points by Rebles 2 years ago | 139 comments
- madrox 2 years agoA long time ago we discovered Twitter used a round robin of three servers for assigning IDs to tweets. We inferred the round robin was done by doing mod 3 of a signed int32, and because that space doesn't divide neatly by three, one of the three servers saw less load than the others, and we could map ID assignment volume according to how often it overflowed and hence estimate total tweet volume for a given period.
Some of the details escape me (this was a decade ago) but it was a fun combination of statistical inference and CS knowledge that I don't get to use often. Whenever integer overflow comes up in a systems engineering context I get a little tickled.
- liquidgecka 2 years agoI am pretty sure that snowflake didn't use a mod of a signed int32. It used a service discovery pool as part of finagle (and prior to that dns, iirc). The server used a very simple method internally to convert time into an integer (52 bits because of javascript). In fact it was completely open source: https://blog.twitter.com/engineering/en_us/a/2010/announcing...
The integer generation was pretty simple: there was a fixed id for each server, and unless I am mistaken we had 5 servers per datacenter. Each id was basically <time><offset><id>, where time was a millisecond timer, offset was the number of ids generated in that same millisecond by the same server, and id was the machine's unique identifier (a sketch of this layout follows below). When we first talked about this process I thought that offset was going to roll continuously, every id incrementing it by one. This was changed to resetting it every millisecond specifically so that it would obscure tweet volumes.
At the time I remember reading a LOT of articles estimating tweet volume and most of them were way, way off. I don't know that we ever really put effort into correcting them though. =)
* - Does not account for changes in the system post 2012.
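A minimal Python sketch of the <time><offset><id> layout described above. The field widths here (7-bit offset, 4-bit machine id) are assumptions chosen to keep the result near JavaScript's safe-integer range, not Twitter's actual constants:

    import time

    OFFSET_BITS = 7  # assumed width of the per-millisecond sequence
    ID_BITS = 4      # assumed width of the fixed machine identifier

    class SnowflakeSketch:
        def __init__(self, machine_id):
            self.machine_id = machine_id
            self.last_ms = -1
            self.offset = 0

        def next_id(self):
            now_ms = int(time.time() * 1000)
            if now_ms == self.last_ms:
                self.offset += 1  # same millisecond: bump the sequence
            else:
                self.offset = 0   # new millisecond: reset (this is what obscures volume)
                self.last_ms = now_ms
            return (now_ms << (OFFSET_BITS + ID_BITS)) | (self.offset << ID_BITS) | self.machine_id

    gen = SnowflakeSketch(machine_id=3)
    print(gen.next_id(), gen.next_id())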
- madrox 2 years agoAwesome to hear from someone on the other side of the API with knowledge of this. This is ringing bells for me. Yeah, the id parameter was how we knew how many servers there were, and we saw more assignment in some servers than others that neatly mapped to an int32 max failing to divide by the number of IDs we saw. I thought I recalled Twitter confirming that was how round robin happened, but I could be totally misremembering. We never got a contact to talk to Twitter about it. FWIW, we did eventually see this fixed. I imagine it was pretty easy to spot one server seeing less load than others.
The offset was actually how we calculated volume, because millisecond collisions become a variant of the German tank problem[1]. A few times when y'all made tweet volumes public it mapped pretty closely with our estimates.
This was around 2011, so your knowledge should be relevant.
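A sketch of the estimator alluded to above, under the assumption that sampled per-millisecond offsets play the role of observed serial numbers; this is the textbook German tank formula, not necessarily their actual pipeline:

    # Given k observed serials with maximum m, the classic estimate of the
    # population size is N_hat = m + m/k - 1.
    def german_tank_estimate(samples):
        k = len(samples)
        m = max(samples)
        return m + m / k - 1

    # e.g. offsets seen within one millisecond across sampled tweets
    print(german_tank_estimate([2, 7, 11, 4]))  # ~12.75 ids in that millisecond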
- liquidgecka 2 years agoI think what you're describing was actually a problem that I introduced in our bind configuration back when we used DNS for service discovery. It's not exactly what you're describing, but I can totally see how it would appear that way externally.
So initially we used 2 bind servers with authoritative zones serving using round robin. This worked fairly well as the load on each server was high enough to keep the round robin flowing, and the connections from ruby would "stick" for 30 seconds anyway. We had a very large pool of front ends so it's not like it mattered. Eventually we had to put a local bind cache on each server, but this introduced another super annoying bug. For some reason, the interaction between the authoritative server and the caching server would end up causing the served record set to be ordered the same. Normally the authoritative server would serve A, B, C for the first query, then B, C, A for the second, etc. With the cache in the middle, for some reason it would reset all queries to A, B, C after the refresh happened. So effectively every 30 seconds it would reset to A, B, C and start round robin again. Since the front ends would connect and stay sticky via connection reuse for a while, this meant that A got like 50% of the load, B got 33%, and C got 17%. I am guessing you latched on to this by seeing that one of the servers was horribly underutilized in comparison and reproduced the math accidentally. =)
The fix for this was the massively disgusting, but super effective, "make every single machine a DNS zone secondary". Rather than having a simple record cache we just fetched the whole zone if it changed. This actually made DNS quite a bit faster as every machine had a local copy of the zone. Once that happened distribution returned to normal (20/20/20/20/20) for all of our internal services which used DNS for discovery, including snowflake.
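A toy simulation of the bug described above, with purely illustrative parameters: if the rotation resets to A, B, C at every refresh window and only a handful of new sticky connections arrive per window, the earlier records soak up the load. With one to three connections per window, the split is exactly the reported 50/33/17:

    import random
    from collections import Counter

    servers = ["A", "B", "C"]
    loads = Counter()
    random.seed(1)
    for window in range(100_000):
        # Assumption: 1-3 new sticky connections per 30s refresh window, each
        # handed the next record of a rotation that restarts at A every window.
        for i in range(random.randint(1, 3)):
            loads[servers[i % 3]] += 1
    total = sum(loads.values())
    print({s: round(loads[s] / total, 2) for s in servers})  # ~{'A': 0.5, 'B': 0.33, 'C': 0.17}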
- liquidgecka 2 years agoAlso, a totally unrelated Twitter story (because I am nostalgic lately and you reminded me of it).
Early in my time there we had a major, unexpected load spike in the middle of the day. People were tweeting like crazy, breaking records for volume, etc. The system was groaning under strain, but it mostly held. Turns out Michael Jackson had died. We sustained 452 (or maybe 457, it was roughly 450-something) tweets a second. This quickly became 1 "MJ" worth of tweets. We informally used this to define load the entire time I was there. Within a few months we got to a point where we were never BELOW 1 MJ; within a year I think we had peaked at double digit MJs and sustained several even in the lowest periods. Before I left we had hit 1 MJ in photo tweets, etc.
Around the time I left we did something like 300 MJs, and I was only there 3 years.
- lathiat 2 years agoSomeone previously created a tweet linking to itself by predicting the likely ID range: https://oisinmoran.com/quinetweet
- manigandham 2 years agoA simpler scheme I've used for adtech (billions of requests per day) is to simply reserve a chunk of numbers for each server from a central source (sketched below). Easy to implement, very fast since each node can just increment in process, and a 64-bit integer space is effectively infinite.
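A minimal sketch of that chunk-reservation scheme; fetch_next_block() here is a hypothetical stand-in for the central source, which in practice would be an atomic counter in a database or similar:

    import itertools
    import threading

    BLOCK_SIZE = 10_000

    # Stand-in for the central source: an atomic counter advanced by
    # BLOCK_SIZE for every reservation.
    _central = itertools.count(0, BLOCK_SIZE)

    def fetch_next_block():
        start = next(_central)
        return start, start + BLOCK_SIZE

    class BlockAllocator:
        """Hands out ids from a locally reserved range; only touches the
        central source when the current block is exhausted."""
        def __init__(self):
            self._lock = threading.Lock()
            self._next, self._end = fetch_next_block()

        def next_id(self):
            with self._lock:
                if self._next == self._end:
                    self._next, self._end = fetch_next_block()
                nid = self._next
                self._next += 1
                return nid

    alloc = BlockAllocator()
    print([alloc.next_id() for _ in range(3)])  # [0, 1, 2]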
- hyperdimension 2 years agoThat's how Active Directory Domain Controllers handle the distributed creation of SIDs too.
- duskwuff 2 years agoTwitter didn't always use Snowflake -- that was introduced in November 2010. There was another, much simpler algorithm used before that which generated much smaller IDs (e.g. under 3e10).
- liquidgecka 2 years agoYea, "auto increment" in mysql. It SUCKED in so many ways, the primary of which was that it required a single mysql database to be the primary for all tweet writes. Every tweet just incremented a counter by 1. As we scaled up, that server got overloaded rapidly and became a huge single point of failure. Snowflake was introduced to solve several problems at once. =)
- pwdisswordfish0 2 years ago> 52 bits because of javascript
But IEEE 754 doubles have a significand that supports a 53-bit range. What am I missing?
- Dylan16807 2 years agoYou're not missing anything, people just forget the implied bit sometimes.
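A quick demonstration of that 53-bit boundary; Python floats are the same IEEE 754 doubles as JavaScript numbers:

    # 52 stored significand bits + 1 implied bit = integers exact up to 2**53.
    print(float(2**53) == float(2**53 + 1))      # True: 2**53 + 1 rounds away
    print(float(2**53 - 1) == float(2**53 - 2))  # False: still exact below 2**53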
- spullara 2 years agonegative numbers not used?
- vecter 2 years agoWouldn't that unevenness only affect 2^31 - 2 and 2^31 - 1, so a negligible fraction of the integers? Was that tiny discrepancy enough to make your calculations?
In other words, what do you mean that it was done by doing mod 3 of a signed int32? If it was a monotonically increasing or random int32, I don't see how that unevenness would manifest in a meaningful way.
- madrox 2 years agoIn another subthread, we realized my memory was wrong and we were measuring millisecond collisions. The serving ID imbalance was a side-effect. Also, it might've been an int16 I was thinking of but turns out the whole thing was shadows on cave walls.
- mashygpig 2 years agoMaybe a dumb question, but I don't follow what you mean by "the space doesn't divide neatly by three" and also how that connects with overflowing ints. Asking because I'm genuinely curious and would like to know more about this. Sounds really neat!
- _3u10 2 years agoIf they were incrementing and modding wouldn’t that server see an extra 1/2 billionth more traffic?
I don’t get how mod 3 affects anything if you’re just incrementing…
- manigandham 2 years agoIf it’s round robin then it should be an even load, how does the modulo change that exactly?
Also what number are they using to modulo and where is that happening? Because at that point don’t they already have an incrementing ID before generating another one?
- andreareina 2 years agoTake a 3-bit counter:
0->A 1->B 2->C 3->A 4->B 5->C 6->A 7->B
A and B get hit three times while C only twice, so it will see 66% utilization compared to A and B.
EDITED s/once/twice/ thanks CyberDildonics
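The same arithmetic for any counter width, as a quick sketch: at 3 bits the split is 3/3/2, while at 31 bits the imbalance is a single extra id per server, which is why (as vecter notes above) the skew is negligible at int32 scale on its own:

    def load_split(bits, servers=3):
        # How many counter values in [0, 2**bits) land on each server
        # under counter % servers, computed arithmetically.
        base, extra = divmod(2**bits, servers)
        return [base + (1 if s < extra else 0) for s in range(servers)]

    print(load_split(3))   # [3, 3, 2] -> C underutilized
    print(load_split(31))  # [715827883, 715827883, 715827882]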
- CyberDildonics 2 years ago> while C only once
You listed C twice
- manigandham 2 years agoThat doesn't change anything... it's still round robin. You just stopped at an arbitrary number of 8 integers instead of 9.
- pedrovhb 2 years agoThat's really neat, I'd love to hear more about it. Was this something you were actively trying to find out, or was it poking around until something caught your eye?
- madrox 2 years agoA little of both. I was working in social media analytics, and we were collecting everything we could to understand how to communicate the value of this new medium to businesses who could use twitter for marketing. This was still in an era where privacy wasn't at the front of anyone's minds, so there were zero retention policies. Hard to believe that was only 10 years ago.
Eventually, we learned to treat Twitter as a lead generation tool for off-platform activity and apply old school funnel mechanics to it. The next problem became how to build a follower count. Sadly, that problem is what I think led to extremism on the platform. Hence: https://madrox.substack.com/p/yet-another-quitting-twitter
- kypro 2 years agoNot OP, but at an ecommerce company I worked for we did similar things to track how well our competitors were doing relative to us, so it could be something like that.
Also collecting data like this can be useful if you want to beat markets.
- ipqk 2 years agoReminds me of the self-quoting tweet: https://news.ycombinator.com/item?id=25244872
- boosteri 2 years agoNostalgic flashback to Premier Manager games, where players' stats decreased as they aged. When they went /below 0/ they flipped around to 127. So a good strategy was to scout out really bad players from lower leagues about to hit age 30+, offer them very long contracts to prevent them retiring, and give time for most of their stats to flip around, turning them into superstars.
- johnfarrelldev 2 years agoI always found the funniest occurrence of this to be in the Civ games, though it seems its originally being a bug is disputed.
- vikingerik 2 years agoThe bug never existed at all in Civ 1. It was an urban legend all along. Similar behavior was intentional in Civ 5 as a joke, which convinced everyone that it really did happen in Civ 1 when it never did.
- RedShift1 2 years agoMandela effect
- Rebles 2 years agoTwo days ago, Reddit ids finally incremented past 2,147,483,647, the maximum value of a signed int32. It seems one of Reddit's subsystems, the one that serves its photo albums, broke due to the integer overflow.
- cowsup 2 years agoThe strange thing is that photo albums are relatively new. Imgur was the go-to host for Reddit, and then they made their own uploader a looong time later. The "albums" functionality only came out in July of 2020, according to a Google search.
Seems this was less likely a "someone else will deal with it" problem, and more of a development / QA testing problem.
- Gigachad 2 years agoFor some reason most stuff still defaults to i32 and a lot of people use them for new code. At this point I'd not be against linters warning against using 32 bit ints unless you have a good reason.
- bushbaba 2 years agoMy alma mater's vending machines used an unsigned integer for reading the school debit card balance. However, the school debit card used a signed integer to allow for small negative balances.
Well, students uncovered the flaw and got their cards to negative 1 dollar, which the vending machine read as a very large balance.
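The effect in a few lines of Python: the bytes of a two's-complement -1, reinterpreted as an unsigned 32-bit integer, are the maximum value:

    import struct

    # Pack -1 as a signed 32-bit int, then read the same bytes back unsigned.
    raw = struct.pack("<i", -1)
    print(struct.unpack("<I", raw)[0])  # 4294967295: a very large balance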
- thegeomaster 2 years agoThey're still a good choice for most perf critical code due to the reduced memory use.
- kuroguro 2 years agoWhat I don't get is why you'd want a signed int for an id.
- curioussavage 2 years agoI'm pretty sure the table in question stores image metadata for all user-uploaded images, as well as images scraped from posted links, which go back way before images in posts.
- jimmytucson 2 years agoA new feature but it wasn’t built on a new codebase. Reddit is a monolith and a lot of things users think of as different “entities” live in the same set of tables.
- fdgsdfogijq 2 years agoSomeone probably joked they would never reach that scale when they wrote that code
- kristopolous 2 years agoThinking "if that ever gets anywhere close to a problem we'll have vast resources and plenty of time to fix it" and then, I'm guessing, that person left a few months later and nobody owned that part of the code because it worked.
Then 10 or so years went by...
Whenever I write code like that which may break in, say, 5 years, I'll sign it in the comments and put my personal email and phone number inviting future people to call me and I'll fix it for free (cause I take responsibility for my code pretty seriously). Nobody has ever taken me up on it though...
- monsieurbanana 2 years agoTo people reading this: please never ever do that, unless it was agreed upon with your client/employer beforehand (and compensated accordingly).
If you know the code may break in 5 years, why don't you fix it now? Because your employer doesn't want to pay you to do that (probably for good reason, surely there's higher priorities).
Then why would you do it for free years later?
- jacobr1 2 years agoAnd they weren't even wrong! The challenge is addressing the tech-debt, not the choice to incur it.
- silisili 2 years agoOh god, bad memories. You reminded me of a guy I worked with who wrote absolutely terrible, unsafe code and when called out said they are 'caviar problems.'
When I asked what in the hell a caviar problem was, he said it meant by the time we had to worry about those things, we'd all be so rich we'd be eating caviar.
- shock-value 2 years agoGood practice would be leaving a comment with a TODO, an explanation of the problem, and some basic instructions as to a possible fix.
Leaving your email and phone number is absurd and should not be promoted as good practice. No sane company would take you up on the offer anyway.
- lmm 2 years agoThat's maybe a reasonable thing to do if you're an independent business selling the code, but it's a scab move if you're doing that as an employee. Never work for free.
- CyberDildonics 2 years agoWhy would you write something that might break in 5 years? Why not just use a 64 bit integer in any situation like this and be done with it?
- geobmx540 2 years agoProbably sufficient to write a test that will break in 5 years - X [days|months], with notes in the test about the problem and how to fix it
- nonethewiser 2 years agoHm. I wonder why no one ever called you.
- sph 2 years agoSo you migrate to int64, and one day someone will wonder why the hell we ever thought no one would reach 2^64 rows in a database table. Or that 2^128 IP addresses would be enough for everyone.
- WhitneyLand 2 years ago“32-bit IPv4 addresses to 128-bit IPv6 addresses means going from 4.3 billion to 340,282,366,920,938,463,463,374,607,431,768,211,456”
Yes, 128 bits is only 4x the bit length, but the address space is exponentially bigger.
Some predict if we do run out, it might take ~100 years.
- sokoloff 2 years agoIn my experience, you first do an emergency re-seed to use the negative half of the int32 space.
Been through it twice in my career…
- marcosdumay 2 years agoThose numbers are much, much larger than 2^32.
No process that touches the real world gets into 2^64. No process at all gets into 2^128, at least not while we live around a single star.
- _gabe_ 2 years agoI'm just curious, I know it's a long running joke about how we're so stupid to think that we would never run out of unique digits with 2^32 possible values, but is this also the case with 64 bit values? Every new bit doubles the amount of information, so if 32 bits lasted reddit 10 years, presumably 33 bits would last them 20 years, 34 would last 40 and so on. Eventually, 64 bits would last them 10×2^32 years, which seems like a safe bet.
So am I being naive when I use 64 bit values for unique IDs? Or is it actually plausible that 64 bits is plenty of information for centuries to come?
Edit: Also, technically reddit was using signed int32s. So they really only had 2^16 unique digits. If they used unsigned int32s, then that would have bought them a lot of time.
- jxf 2 years ago> Edit: Also, technically reddit was using signed int32s. So they really only had 2^16 unique digits. If they used unsigned int32s, then that would have bought them a lot of time.
Small correction: Signed int32s means you have 31 bits for the integer value (2^31 values), not 16 bits (2^16 values). There are 32 bits; 1 bit is used up for the sign, and 31 remain for the number.
- _gabe_ 2 years agoYep, I made the same mistake I was trying to call out here haha. Thanks for the correction!
- KMag 2 years agoAs others have pointed out, there are 2^31 non-negative values an Int32 can take on, and linear growth is probably a bad assumption.
I should also probably point out some ambiguity in your analysis. To be clear, under the faulty assumption of linear consumption, if Int32s get exhausted in 10 years, switching to UInt32s will last 20 years of which 10 are already spent. So, under the faulty assumption of linearity, switching to UInt32s buys you an extra 10 years.
64-bit identifiers will last quite a while. Another defensible choice would be to switch to 128-bit unguessable identifiers (such as assigning them via AES-256 in counter mode) in order to provide defence in depth. If something goes wrong with your authentication, but the attacker is likely to spend millennia hammering your REST API with invalid IDs before exploiting an actual asset, then the consequences aren't so bad.
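A sketch of that second idea, assuming Python's cryptography package: encrypting a unique 128-bit counter block with AES yields ids that are still unique (a block cipher is a permutation) but unguessable without the key. "AES-256" refers to the key size; the block, and hence the id, is 128 bits:

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(32)  # AES-256 key; in practice a stored secret

    def unguessable_id(counter: int) -> str:
        # Encrypt the raw counter block: unique counter -> unique, opaque id.
        enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
        return enc.update(counter.to_bytes(16, "big")).hex()

    print(unguessable_id(1))
    print(unguessable_id(2))  # unrelated-looking 32-hex-digit ids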
- rezonant 2 years ago> Edit: Also, technically reddit was using signed int32s. So they really only had 2^16 unique digits. If they used unsigned int32s, then that would have bought them a lot of time.
Signed integers have 2^31 positive values (well, just about). It doesn't take 16 bits to store the negative sign :-)
- HideousKojima 2 years agoNot for most implementations, a signed 32 bit int goes from ~-2 billion to ~+2 billion, an unsigned from 0 to ~4 billion
- lilyball 2 years agoIf 32 bits lasted 10 years, 33 bits would not last 20, as engagement is not a constant. If we look at the last 10 years, presumably there were a lot more posts in the second 5 years than in the first 5 years. I don't know what Reddit's actual user/engagement metrics show though. Certainly if they got to a stable point where they had approximately the same activity going forward as they have going back then it would, but that's not usually how social sites work.
That said, it would still take quite a while to get to 64 bits.
- bagels 2 years agoThis is just basic engineering: What rate do you anticipate creating these things? What is the upper bound rate you will create these things? Consider if every person were a customer, how many of these things would/could they create or cause to become created? How do those assumptions change over time?
- IIAOPSW 2 years ago"we assigned everyone 9 digit phone numbers, but that ran out after 50 years of population growth led to over a billion people. Now we assign everyone an 18 digit phone number. Personally, I'm worried we might run out in another 50 years."
- ThunderSizzle 2 years agoWe should've dropped phone numbers and just make it like email - dial person-name@verizon
Infinite options at that point.
- hbn 2 years agoI mean as long as you can still set your email-number to whatever you want and update it if you want to drop the old one.
I feel like there's also implications for spam, like you want to target X demographic, all you have to do is try numbers with common name combinations from various cultures. Like in Nathan For You when he made a targeted billboard for a psychic that said "Maria Garcia, I had a dream about you" so all the Maria Garcias who saw the sign would become potential customers.
- loteck 2 years ago> presumably
Isn't the answer to your question buried in your assumptions? For example, you seem to assume the rate of consumption of unique IDs is static.
- _gabe_ 2 years agoIt would have to increase A LOT to significantly change the outcome. For example, according to this website[0] (no idea how accurate it is) Facebook users upload more than 300 million photos a day. At that rate, 2^32/300,000,000 = 14.3. So 32 bits would give Facebook 14 days worth of unique ids for their photos. Whereas 2^64/300,000,000 = 61,489,147,000 days, which is around 168 million years worth of ids.
All I'm saying is the jump from 2^32 to 2^64 is astronomical. I don't see using 64 bit integers for uids in my hobby code as something to be concerned about. In production code for a company I would use something more significant, but even then I feel like 64 bits will last a very long time.
[0]: https://bernardmarr.com/how-much-data-do-we-create-every-day...
- brrrrrm 2 years agoif everyone on earth shared a memory space and every person had 4 gigabytes of byte addressable storage, we'd be at 65 bits.
- chrismcb 2 years agoI think it is a horrible assumption to think that Reddit won't grow and will take exactly the same length of time to double the number of unique ids required. But still 64 bits should be plenty for a while. And presumably if it comes closer to using that many they'll have enough time to switch to 128
- aidenn0 2 years ago2^63 (assuming signed 64 bit ints) is enough for 10 billion people to post 900 million posts each. Short of faster-than-light travel or the singularity happening, it seems pretty safe.
- masklinn 2 years agoDoubling the number of bits does not double the number of values, but squares it.
2^31 seconds is 68 years, a middling human lifetime.
2^63 seconds is 290 billion years.
- masklinn 2 years ago(Just so we’re clear, that’s “billion” with a “b”, so 20 times the age of the universe)
- eCa 2 years ago> have enough time to switch to 128
It appears that the original developer thought that for 32 bits :)
I say that as someone who inherited a system that allowed for 2^32 session references stored in the db. Let's just say that sessions were created a lot faster than anticipated for the amount of traffic we had.
So one fine Sunday morning in a November a long time ago we ran out.
- davidjfelix 2 years agoA classic case of "ids aren't numbers even if you choose to make them numeric"
- knodi123 2 years agoids being sortable has a lot of advantages over random guids.
- marcosdumay 2 years agoThem being dense is advantageous too. Numbers are a very convenient format for encoding IDs, but that doesn't mean that IDs are numbers.
- davidjfelix 2 years agoThis is really a false dichotomy; you don't have to use guid/uuid. I'm saying even if you use sortable auto increment numbers, stop storing them like numbers.
- function_seven 2 years agoDoesn't numeric storage save space and make indexing faster? (This is a naive question, I'm not asserting it.)
Yeah, numbers get you fun stuff like overflows and wrap-arounds, etc. But sorting is faster (sorting "99" before "100" is more complex than 99 before 100) and space requirement is lower (6 bytes can store a unique ID for 281 trillion objects, but 6 characters only permits 1 million if you're storing them as strings).
Or is there some datatype that combines the space- and sort-efficiency of a numeric type, but doesn't bring the baggage of assuming they represent quantities?
- Dylan16807 2 years agoNote that reddit is currently generating base36 ids starting with z.
You want everything to treat that as text? Sure. That text has been 6 characters long for ages, and it's about to hit 7. I personally expect to see more things break when that happens.
The problem of overflow isn't special to numbers.
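For reference, a small base36 helper (per the comment above, reddit ids are lowercase base36). 36^6 = 2,176,782,336, so the 7-character rollover happens at 36^6, and the signed-int32 boundary falls at the 6-character id 'zik0zk':

    DIGITS = "0123456789abcdefghijklmnopqrstuvwxyz"

    def to_base36(n: int) -> str:
        out = ""
        while True:
            n, r = divmod(n, 36)
            out = DIGITS[r] + out
            if n == 0:
                return out

    print(to_base36(36**6 - 1))  # 'zzzzzz': last 6-character id
    print(to_base36(36**6))      # '1000000': first 7-character id
    print(to_base36(2**31))      # 'zik0zk': first id past the signed-int32 max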
- iamdual 2 years agoThere are timestamps for sorting.
- NaturalPhallacy 2 years agorandom guids aren't walkable, which was the reason we used them on some public services at a cordwainer in Beaverton, OR you've probably heard of.
- scrame 2 years agoHa! Slashdot had a similar problem in the early 2000s because they did a difficult migration for user/post ids, but left the indexes at 32(?).
So, everything worked great until it didn't, and they spent a lot of time future-proofing it.
- dvh 2 years ago-2,147,483,648 photos should be enough for anybody
- Thaxll 2 years agoThe famous AUTO_INCREMENT that you thought you would never reach...
- btown 2 years agoFun fact: if you do a lot of INSERT... ON CONFLICT calls in Postgres from automated systems that are updating much more often than you insert, your autoincrement primary key can increment far far faster than your data volume (since it doesn't de-increment on a conflict) and overflow an int, grinding things to a halt. One of the more maddening outages I've had to deal with!
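A sketch of how you might observe this, assuming psycopg2 and a throwaway database; each conflicting INSERT still evaluates the serial column's nextval(), so the sequence races far ahead of the row count:

    import psycopg2  # assumes a local scratch database exists

    conn = psycopg2.connect("dbname=scratch")
    cur = conn.cursor()
    cur.execute("CREATE TABLE t (k text PRIMARY KEY, id serial, hits int DEFAULT 1)")
    for _ in range(1000):
        # Same key every time: 1 row, 999 conflicts, ~1000 sequence values burned.
        cur.execute("""INSERT INTO t (k) VALUES ('hot-key')
                       ON CONFLICT (k) DO UPDATE SET hits = t.hits + 1""")
    cur.execute("SELECT last_value FROM t_id_seq")
    print(cur.fetchone()[0])  # ~1000, for a table holding a single row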
- hu3 2 years agoSimilar for MySQL.
If you open a transaction, INSERT with AUTO_INCREMENT, then rollback the transaction, no data is saved, except the auto generated id is used and the next INSERT uses id+1.
- bcrosby95 2 years agoBut MySQL's equivalent, insert...on duplicate, does not cause extra auto increments. IMHO postgres' is a much larger problem.
That said, in my experience locking selects in MySQL when the row doesn't exist are a big problem, oftentimes leading to deadlocks. So insert... on duplicate key is a life saver. I don't have as much experience using postgres, so I'm not sure how much of a problem that is there; it's possible the "on conflict" isn't as useful.
- ryanianian 2 years agoSame with sequences in Oracle. They don't participate in transactions. Looking at the gaps between "consecutive" IDs shows surprisingly interesting behavior.
- BooneJS 2 years agoHappened a few years ago to YouTube[0]. I don’t know why counters that start at zero and only increment are stored as signed integers.
0: https://arstechnica.com/information-technology/2014/12/gangn...
- akoster 2 years agoReminds me of a similar Chess.com iPad app issue from a few years back https://news.ycombinator.com/item?id=14539770
- _iyhy 2 years agoIt's 2022; why are we still using int32 for anything?
- darylteo 2 years agoIf I'm understanding the shitty change log correctly, was the solution to add an extra bit?
- maverwa 2 years agoI guess that's a joke. Adding a single extra bit would usually be more complex than going to 64-bit or going to 32-bit unsigned.
Well, I say that, but actually, "adding an extra bit" is basically what going from signed to unsigned would do. So maybe they just added an extra (32nd) bit?
- dmtroyer 2 years agounique identifiers are so passé.
- mhh__ 2 years agoFor a supposed tech company reddit really are bad.