The Maximum Transmission Unit – the MTU – of an Ethernet frame is 1500 bytes: the largest payload a single frame will carry.
1500 bytes is a bit out there as numbers go, or at least it seems that way if you touch computers for a living. It’s not a power of two or anywhere close; it’s suspiciously base-ten-round, and computers don’t care all that much about base ten. So how did we get here?
Well, today I learned why: add the Ethernet header – 36 bytes – to that 1500-byte MTU and you get 1536 bytes, which is 12288 bits, which takes 2^12 microseconds to transmit at 3 Mb/second. The Xerox Alto, the computer for which Ethernet was invented, had an internal data path that ran at 3 MHz, so you could just write the bits into the Alto’s memory at the precise speed at which they arrived, saving the then-very-expensive cost of extra silicon for an interface or any buffering hardware.
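If you want to check that arithmetic yourself, a few lines of Python will do it, taking the figures above – 36 bytes of header, a 3 Mb/second wire, a 3 MHz data path – at face value:

```python
# Sanity-check of the 1500-byte story, using the figures quoted above.
MTU_BYTES = 1500             # maximum payload
HEADER_BYTES = 36            # Ethernet framing overhead, per the story
WIRE_RATE_BPS = 3_000_000    # experimental Ethernet: 3 Mb/second
ALTO_CLOCK_HZ = 3_000_000    # Alto internal data path: 3 MHz

frame_bits = (MTU_BYTES + HEADER_BYTES) * 8
print(frame_bits)                      # 12288 bits

tx_time_us = frame_bits / WIRE_RATE_BPS * 1_000_000
print(tx_time_us)                      # 4096.0 microseconds, i.e. 2**12

# One Alto cycle is a third of a microsecond, so transmitting the frame takes
# exactly one memory-write opportunity per bit arriving off the wire.
cycles_per_frame = tx_time_us * ALTO_CLOCK_HZ / 1_000_000
print(cycles_per_frame == frame_bits)  # True
```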
Now, “we need to pick just the right magic number here so we can take data straight off the wire and blow it directly into the memory of this specific machine over there” is, to modern sensibilities, lunacy. It’s obviously, dangerously insane; there are far too many computers, and too many bad people with computers, in the world for that. But back when the idea of network security didn’t exist – because computers barely existed, networks mostly didn’t exist, and unvetted, unsanctioned access to those networks definitely didn’t exist – I bet it seemed like a very reasonable tradeoff.
It really is amazing how many of the things we sort of ambiently accept as standards today, if we even realize we’re making that decision at all, are what they are only because some now-esoteric property of the now-esoteric hardware on which the tech was first invented let the inventors save a few bucks.