
What programmers need to know about encodings and charsets (2011)

70 points | neiesc | 5 years ago | kunststube.net

22 comments

[+] at_a_remove|5 years ago|reply
I was looking for the catch. Here it is: "It's really simple: Know what encoding a certain piece of text, that is, a certain byte sequence, is in, then interpret it with that encoding."

That's like "knowing" the truth. How?

I have received some very interesting files that made Python yack unicode errors, again and again. Why? Not only did I not "know" what encoding it was in -- the encodings changed at different points in the stream of bytes. I call this "slamming bytes together" because somewhere along the line, someone's program did exactly that.

Everything is simple -- until it isn't.
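A minimal sketch of the "slamming bytes together" failure mode described above, with hypothetical data: the same word encoded twice with different codecs, then concatenated.

```python
# Hypothetical illustration: bytes "slammed together" from two encodings.
# "café" encoded as UTF-8, then "café" encoded as Latin-1, concatenated.
utf8_part = "café".encode("utf-8")      # b'caf\xc3\xa9'
latin1_part = "café".encode("latin-1")  # b'caf\xe9'
blob = utf8_part + latin1_part

# Decoding the whole blob with either encoding fails or mangles the text.
try:
    blob.decode("utf-8")
except UnicodeDecodeError as e:
    print("utf-8 decode fails:", e)

# Latin-1 never raises, but silently mis-renders the UTF-8 half.
print(blob.decode("latin-1"))  # 'cafÃ©café'
```

No single encoding can decode the blob correctly, which is why such files keep raising errors no matter what you "know" about them.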

[+] banthar|5 years ago|reply
There is nothing you can do with a text file in an unknown encoding but treat it as an array of bytes.

If you start guessing the encoding, at best it won't work in some cases, at worst you are introducing security vulnerabilities. You can try, but there is just no way to do it right.

http://michaelthelin.se/security/2014/06/08/web-security-cro...

[+] BiteCode_dev|5 years ago|reply
In a sense, it is simple: simple to understand.

Not simple to solve.

It's like beating Usain Bolt in a race: simple, just run faster!

Fortunately, Python is well equipped for that. If you open a file that you know might contain mixed-encoded text, you can use try/except to inform the user, or open it in binary mode and just store the bytes.

But my favorite way of doing it is:

    open('file', errors=strategy)
Strategy can be:

- "ignore": undecodable text is skipped

- "replace": undecodable text is replaced with the replacement character "�" (U+FFFD)

- "surrogateescape": undecodable bytes are decoded to a special surrogate representation which makes no human sense, but can be re-encoded back to the original bytes.
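The three strategies above can be sketched with `bytes.decode()`, which takes the same `errors=` parameter as `open()`:

```python
# A minimal sketch of the error strategies above.
data = b"caf\xe9"  # 'café' in Latin-1; \xe9 is invalid UTF-8 here

print(data.decode("utf-8", errors="ignore"))   # 'caf'  -- bad byte skipped
print(data.decode("utf-8", errors="replace"))  # 'caf\ufffd' -- U+FFFD inserted

# surrogateescape round-trips unknown bytes losslessly.
text = data.decode("utf-8", errors="surrogateescape")  # 'caf\udce9'
assert text.encode("utf-8", errors="surrogateescape") == data
```

surrogateescape is the only one of the three that is lossless: the original byte sequence can always be recovered by re-encoding with the same handler.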

It's kind of ironic, because people bashed Python 3 for separating bytes and text, forcing them to deal with encoding correctly. After all, this problem of "slamming bytes together" comes from languages that treat text as a byte array, allowing this stupid mistake in the first place.

[+] jbandela1|5 years ago|reply
Note: This post is basically a TLDR of https://www.theregister.com/2013/10/04/verity_stob_unicode/ by Verity Stob.

One of the reasons there is a lot of confusion about encodings vs Unicode is that Unicode was initially an encoding. It was thought that 65K characters were enough to represent all the characters in actual use across the world's languages, and thus you just needed to change from an 8-bit char to a 16-bit char and all would be well (apart from the issue of endianness). Thus Unicode initially specified what each symbol would look like encoded in 16 bits (see http://unicode.org/history/unicode88.pdf, particularly section 2). Windows NT, Java, and ICU all embraced this.

Then it turned out that you needed a lot more characters than 65K, and instead of each character being 16 bits, you would need 32-bit characters (or else weird 3-byte data types). Whereas people could justify going from 8 bits to 16 bits as the cost of not having to worry about charsets, most developers balked at 32 bits for every character. In addition, you now had a bunch of early adopters (Java and Windows NT) that had already embraced 16-bit characters. So encodings such as UTF-16 (surrogate pairs of 16-bit code units for code points beyond the first 65K) were hacked on.
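The surrogate-pair mechanics described above are easy to verify: a code point beyond U+FFFF takes two 16-bit units in UTF-16.

```python
# U+1F600 (a grinning-face emoji) lies outside the original 65K range.
ch = "\U0001F600"

encoded = ch.encode("utf-16-be")
print(len(encoded))    # 4 bytes: two 16-bit units
print(encoded.hex())   # 'd83dde00' -- high surrogate D83D, low surrogate DE00

# The same character happens to be 4 bytes in UTF-8 and UTF-32 as well.
print(len(ch.encode("utf-8")))     # 4
print(len(ch.encode("utf-32-be"))) # 4
```

So UTF-16 ends up being variable-width after all, which was exactly the property the original 16-bit design was supposed to avoid.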

I think, if it had been understood better at the start that you have a lot more characters than will fit in 16 bits, then something like UTF-8 would likely have been chosen as the canonical encoding and we could have avoided a lot of these issues. Alas, such is the benefit of 20/20 hindsight.

[+] ExtremisAndy|5 years ago|reply
I love C++ so much, and it has brought me such joy as a hobbyist programmer, but good grief, this one aspect of it (dealing with encodings & charsets) is so depressing I just want to cry sometimes.
[+] nunez|5 years ago|reply
F to pay respects to everyone who got wrecked by the BOM (byte-order mark) and CRLF vs LF.
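The BOM pitfall mentioned above can be seen in a few lines: a UTF-8 file written with a BOM (common on Windows) starts with three extra bytes, and plain `utf-8` decoding keeps them as a stray U+FEFF.

```python
# Simulate a UTF-8 file written with a byte-order mark.
raw = "\ufeffhello".encode("utf-8")
print(raw[:3].hex())  # 'efbbbf' -- the UTF-8 BOM

# Plain utf-8 keeps the BOM as U+FEFF; utf-8-sig strips it.
print(repr(raw.decode("utf-8")))  # '\ufeffhello'
print(raw.decode("utf-8-sig"))    # 'hello'
```

The invisible leading U+FEFF is a classic source of mysterious string-comparison and parsing failures.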