
evanelias | 15 days ago

They should have addressed it much earlier, but it makes way more sense in historical context: when MySQL added utf8 support in early 2003, the utf8 standard at that time permitted up to 6 bytes per char. That had excessive storage implications, and emoji weren't in widespread use at all back then. 3 bytes were sufficient to store the majority of chars actually in use, so that's what they went with.
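To make the 3-byte cutoff concrete, here's a small sketch (in Python, just for illustration) of how many UTF-8 bytes different characters need. Everything in the Basic Multilingual Plane (U+0000..U+FFFF) fits in 3 bytes, which is what MySQL's original "utf8" (now called utf8mb3) stored; emoji live outside the BMP and need 4:

```python
# UTF-8 byte lengths for a few sample characters. Anything up to
# U+FFFF encodes in at most 3 bytes; code points beyond that
# (including emoji) need 4, which utf8mb3 cannot store.
samples = [
    "\u0041",      # 'A'  - Latin letter, 1 byte
    "\u00e9",      # 'é'  - accented Latin, 2 bytes
    "\u6f22",      # '漢' - CJK ideograph, 3 bytes
    "\U0001F600",  # '😀' - emoji, 4 bytes, rejected by utf8mb3
]
for ch in samples:
    print(f"U+{ord(ch):04X} -> {len(ch.encode('utf-8'))} bytes")
```

Running it shows 1, 2, 3, and 4 bytes respectively, which is exactly why emoji were the symptom that finally forced utf8mb4.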

And once they made that choice, there was no easy fix that was also backwards-compatible. MySQL avoids breaking binary data compatibility across upgrades: aside from a few special cases like fractional time support, an upgrade doesn't require rebuilding any of your tables.


nofriend | 15 days ago

Your explanation makes it sound like an incredibly stupid decision. I imagine what you're getting at is that 3 bytes were (and are) sufficient for the Basic Multilingual Plane, which is incidentally also what can be represented in a single utf-16 code unit. So they imposed the same limitation utf-16 had onto utf-8. This would have seemed logical in a world where utf-16 was the default and utf-8 was some annoying exception they had to get out of the way.
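The correspondence here is exact, and easy to check (Python sketch, purely illustrative): the set of characters that fit in 3 UTF-8 bytes is precisely the set that fit in a single UTF-16 code unit, i.e. the BMP. Anything beyond needs 4 UTF-8 bytes, or a surrogate pair in UTF-16:

```python
# U+FFFF is the last BMP code point; U+10000 is the first beyond it.
bmp_char = "\uFFFF"
astral_char = "\U00010000"

def utf16_code_units(ch):
    # Each UTF-16 code unit is 2 bytes in the big-endian encoding.
    return len(ch.encode("utf-16-be")) // 2

print(len(bmp_char.encode("utf-8")), utf16_code_units(bmp_char))
# -> 3 UTF-8 bytes, 1 UTF-16 code unit

print(len(astral_char.encode("utf-8")), utf16_code_units(astral_char))
# -> 4 UTF-8 bytes, 2 UTF-16 code units (a surrogate pair)
```

So utf8mb3 and "UCS-2 without surrogates" draw the same line through Unicode, which supports the reading that the limit mirrored the utf-16-centric world of the time.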

evanelias | 15 days ago

OK, but that makes perfect sense given utf-16 was actually quite widespread in 2003! For example, Windows APIs, MS SQL Server, JavaScript (off the top of my head)... these all still primarily use utf-16 even today. And MySQL also supports utf-16 among many other charsets.

utf-8 wasn't a clear winner at the time, especially given its 6-byte-max representation back then. Memory and storage were a lot more limited.

And yes, while 6 bytes was only the maximum, not the typical size, a bunch of critical paths (e.g. sorting logic) in old MySQL required allocating a worst-case buffer size, so supporting the full 6-byte range would have been prohibitively expensive.
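A rough back-of-the-envelope sketch of that worst-case sizing (the numbers and helper are illustrative, not MySQL's actual internals): a fixed-width buffer for an N-char string must reserve N times the charset's maximum bytes per char, regardless of what's actually stored.

```python
# Worst-case buffer size for a fixed-width string buffer:
# char_length * max_bytes_per_char, independent of actual content.
def worst_case_bytes(char_length, max_bytes_per_char):
    return char_length * max_bytes_per_char

# For a hypothetical 255-char column under different per-char maxima:
for max_bytes in (3, 4, 6):
    print(max_bytes, "->", worst_case_bytes(255, max_bytes), "bytes")
```

With a 6-byte maximum, every such buffer doubles relative to the 3-byte limit, which is the cost concern being described above.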