chrisseaton|4 years ago

> UTF-8 is just a way to encode, it doesn't decide what goes into Unicode.

If UTF-8 had not had sequences longer than 3 bytes, you would not have been able to use it to express Unicode characters as high as emoji, which would certainly have hampered their adoption; that is what the person you're replying to means.

a1369209993|4 years ago

While your conclusion is largely correct, it doesn't follow from your premises: UTF-16 is also just a way to encode, but its brain-damaged surrogate-pair mechanism very much did get baked into Unicode (namely, the high and low surrogate code points U+D800 through U+DFFF).
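Both points above can be seen concretely with a quick Python sketch (an illustration I'm adding, not from the thread): an emoji code point above U+FFFF needs a 4-byte UTF-8 sequence, while UTF-16 has to split it into a surrogate pair drawn from the very code-point range Unicode reserves for that purpose.

```python
# Illustrative example: U+1F600 (grinning face emoji) sits above the
# Basic Multilingual Plane (> U+FFFF), so it exercises both mechanisms.
ch = "\U0001F600"

# UTF-8: code points above U+FFFF require a 4-byte sequence.
utf8 = ch.encode("utf-8")
print(len(utf8), utf8.hex())  # 4 bytes: f0 9f 98 80

# UTF-16: the same code point becomes a surrogate pair,
# high surrogate 0xD83D followed by low surrogate 0xDE00.
utf16 = ch.encode("utf-16-be")
print(utf16.hex())  # d83dde00

# Those surrogate values come from the ranges baked into Unicode itself:
# U+D800-U+DBFF (high) and U+DC00-U+DFFF (low) are reserved code points
# that are not valid characters on their own.
```

The reserved ranges are why lone surrogates are rejected by strict UTF-8 and UTF-16 decoders: they exist only as an artifact of the UTF-16 design, yet they occupy code-point space in Unicode proper.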