No, I don't have any. That's because it was not so much a bug as a decision about a tradeoff. They compressed by unifying similar-looking glyphs. Sure, those glyphs didn't represent the same character, but they did look similar. It is the kind of error a human could also make, except humans also know that sums are supposed to match, so they take that into account when reading. They also have an internal confidence estimate, and when they are unsure they read again or ask. These are all things these printers can't do without also doing supervised OCR.
The tested scans did look kind of crappy, so if you care about unaltered glyphs, maybe don't apply lossy compression to a low-resolution scan. This issue can happen with any printer if your resolution is too low, the glyphs are ambiguous, and you use too aggressive a lossy compression. It also happens with other approaches like vectorization or OCR.
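The failure mode described above can be sketched in a few lines. This is a toy illustration, not the actual JBIG2 algorithm or Xerox's implementation: glyph bitmaps whose pixel difference falls under a threshold are "unified" and rendered from one shared prototype, so an aggressive threshold silently turns one character into another. The bitmap shapes and names here are made up for the example.

```python
# Toy sketch of lossy symbol matching (hypothetical, simplified).
# Glyphs are tiny 5x3 bitmaps; two distinct digits that differ in one pixel.
GLYPHS = {
    "6": ["111", "100", "111", "101", "111"],
    "8": ["111", "101", "111", "101", "111"],
}

def hamming(a, b):
    """Count differing pixels between two equal-sized bitmaps."""
    return sum(ca != cb for ra, rb in zip(a, b) for ca, cb in zip(ra, rb))

def compress(glyph_stream, threshold):
    """Replace each glyph with the first stored prototype within `threshold`
    differing pixels; otherwise store the glyph as a new prototype."""
    prototypes = []  # list of (label, bitmap) pairs already stored
    out = []
    for label, bitmap in glyph_stream:
        for proto_label, proto in prototypes:
            if hamming(bitmap, proto) <= threshold:
                out.append(proto_label)  # substituted: may be the WRONG character
                break
        else:
            prototypes.append((label, bitmap))
            out.append(label)
    return out

stream = [("6", GLYPHS["6"]), ("8", GLYPHS["8"])]
print(compress(stream, threshold=0))  # lossless matching: ['6', '8']
print(compress(stream, threshold=2))  # aggressive matching: ['6', '6']
```

With a strict threshold both glyphs survive; with a looser one the "8" is rendered from the "6" prototype, which is exactly the kind of digit swap the scans exhibited. Low resolution makes real glyph bitmaps closer together, shrinking the safe threshold.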
Yes, and by default the tradeoff should favor correct information. That's actually what Xerox claimed: they said the report was false and that the behavior was correctly documented, and that it would only happen if you explicitly selected that mode. Watch the CCC talk by the person who figured this out. Turns out they were wrong.
panick21_|2 months ago