
Jyaif | 1 month ago

It's important to note that the approach described focuses on giving fast results, not the best results.

Simply trying every character, comparing its entire bitmap against the target, and keeping the character that minimizes the distance gives better results, at the cost of more CPU.
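A minimal sketch of that brute-force approach in Python. The 2x2 glyph bitmaps here are made-up stand-ins for real font data; a real implementation would rasterize actual glyphs at cell resolution:

```python
# Brute force: try every character, compare its full bitmap against the
# target cell, and keep the one with the smallest squared distance.
# NOTE: these tiny 2x2 "bitmaps" are illustrative, not real font data.
GLYPHS = {
    " ": (0, 0, 0, 0),
    ".": (0, 0, 1, 0),
    "'": (1, 0, 0, 0),
    "#": (1, 1, 1, 1),
}

def best_char(cell):
    """Return the character whose bitmap minimizes squared distance to `cell`."""
    return min(
        GLYPHS,
        key=lambda ch: sum((g - c) ** 2 for g, c in zip(GLYPHS[ch], cell)),
    )

print(best_char((1, 1, 0.9, 1)))  # a bright cell matches "#"
```

The cost is one full bitmap comparison per candidate character per cell, which is exactly the CPU trade-off mentioned above.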

This is a well-known problem because early computers with monitors could only display characters.

At some point we were able to define custom character bitmaps, but not enough custom characters to cover the entire screen, so the problem became more complex: which new characters do you create to reproduce an image optimally?

And separately we could choose the foreground/background color of individual characters, which opened up more possibilities.

alexharri | 1 month ago

Yeah, this is good to point out. The primary constraint I was working around was "this needs to run at a smooth 60FPS on mobile devices" which limits the type and amount of work one can do on each frame.

I'd probably arrive at a very different solution if coming at this from a "you've got infinite compute resources, maximize quality" angle.

brap | 1 month ago

You said “best results”, but I imagine that the theoretical “best” may not necessarily be the most aesthetically pleasing in practice.

For example, limiting output to a small set of characters gives it a more uniform look which may be nicer. Then also there’s the “retro” effect of using certain characters over others.

Dylan16807 | 1 month ago

> limiting output to a small set of characters gives it a more uniform look which may be nicer

And in the extreme that could totally change things. Maybe you want to reject ASCII and instead use the Unicode block that has every 2x3 and 2x4 braille pattern.
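The Braille Patterns block (U+2800–U+28FF) is handy here because it contains a glyph for every possible 2x4 dot combination, so a binary cell maps directly to a codepoint. A quick sketch, using the dot-to-bit layout defined by Unicode (bits 0–2 are the left column's top three rows, bits 3–5 the right column's, bit 6 bottom-left, bit 7 bottom-right):

```python
# Map a 2x4 binary cell onto the Unicode Braille Patterns block.
def braille_char(cell):
    """cell[y][x] is 0 or 1, for rows y in 0..3 and columns x in 0..1."""
    bits = 0
    for y in range(3):
        bits |= cell[y][0] << y        # dots 1-3 (left column)
        bits |= cell[y][1] << (y + 3)  # dots 4-6 (right column)
    bits |= cell[3][0] << 6            # dot 7 (bottom-left)
    bits |= cell[3][1] << 7            # dot 8 (bottom-right)
    return chr(0x2800 + bits)

print(braille_char([[1, 0], [0, 1], [1, 0], [0, 1]]))
```

No search is needed at all: the cell's thresholded pixels are the character.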

spuz | 1 month ago

Thinking more about the "best results": couldn't this be done by transforming the ASCII glyphs into bitmaps, and then using some kind of matrix multiplication or dot-product calculation to find the ASCII character with the highest similarity to the underlying pixel grid? This would presumably lend itself to SIMD or GPU acceleration. I'm not that familiar with this type of image processing, so I'm sure someone with more experience can clarify.
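A rough sketch of what that might look like: flatten each glyph bitmap into a vector, normalize, and score against the cell's pixel vector with a dot product (cosine similarity). Stacking the glyph vectors into a matrix turns the whole search into one matrix-vector product, which is what maps well to SIMD/GPU. The glyph data below is made up:

```python
# Dot-product similarity search over (fake, illustrative) glyph bitmaps.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = sum(x * x for x in v) ** 0.5
    return [x / n for x in v] if n else list(v)

GLYPHS = {" ": (0, 0, 0, 0), "/": (0, 1, 1, 0), "#": (1, 1, 1, 1)}

def most_similar(cell):
    cell = normalize(cell)
    return max(GLYPHS, key=lambda ch: dot(normalize(GLYPHS[ch]), cell))
```

One caveat: cosine similarity compares shape while ignoring overall brightness, so it isn't quite the same objective as minimizing Euclidean distance on raw pixel values.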

mark-r | 1 month ago

> This is a well known problem because early computers with monitors used to only be able to display characters.

It's not just monitors. My first exposure to ASCII art was posters printed on a Teletype in the mid-1970s. The files had attributions to RTTY operators, which made me believe they were done by hand. Of course, a Teletype had no concept of pixels.

finghin | 1 month ago

In practice, isn't a large HashMap best for lookup, built from compile-time or static constants describing the character space?

spuz | 1 month ago

In the appendix, he talks about reducing the lookup space by quantising the sampled points to just 8 possible values. That allowed him to build a lookup table about 2MB in size, which was apparently incredibly fast.
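A toy version of that quantised-lookup idea, assuming four sampled values per cell rather than whatever the article actually samples: quantise each value to 8 levels (3 bits), pack them into an index, and precompute the best character for every index so rendering becomes a single table lookup. Glyph bitmaps are made-up stand-ins:

```python
# Precomputed lookup table over quantised sample points (illustrative glyphs).
GLYPHS = {" ": (0, 0, 0, 0), ".": (0, 0, 1, 0), "#": (1, 1, 1, 1)}

def best_char(cell):
    return min(GLYPHS, key=lambda ch: sum((g - c) ** 2 for g, c in zip(GLYPHS[ch], cell)))

def quantize(v):
    return min(int(v * 8), 7)  # maps 0.0..1.0 to levels 0..7

# Precompute: 8^4 = 4096 entries here; more sample points per cell
# grow the table quickly, which is how you end up in ~2MB territory.
TABLE = {}
for i in range(8 ** 4):
    levels = [(i >> (3 * k)) & 7 for k in range(4)]
    TABLE[i] = best_char([lv / 7 for lv in levels])

def lookup(cell):
    index = 0
    for k, v in enumerate(cell):
        index |= quantize(v) << (3 * k)
    return TABLE[index]
```

At runtime the per-character search disappears entirely; it's paid once, up front.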

Sharlin | 1 month ago

And a (the?) solution is to use an algorithm like k-means clustering to find the tileset of size k that represents a given image most faithfully. Of course, that's only for a single frame at a time.
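A rough sketch of that k-means idea: treat every cell of the image as a vector of pixel values, cluster the cells into k groups, and use each cluster centroid as a custom tile. This is a simplified, illustrative version (tiny 4-pixel cells, Lloyd's algorithm with random init):

```python
# k-means over image cells to derive a custom tileset of size k.
import random

def kmeans_tiles(cells, k, iters=20, seed=0):
    rng = random.Random(seed)
    centroids = [list(c) for c in rng.sample(cells, k)]
    for _ in range(iters):
        # Assign each cell to its nearest centroid.
        groups = [[] for _ in range(k)]
        for cell in cells:
            best = min(
                range(k),
                key=lambda i: sum((a - b) ** 2 for a, b in zip(centroids[i], cell)),
            )
            groups[best].append(cell)
        # Move each centroid to the mean of its group.
        for i, g in enumerate(groups):
            if g:
                centroids[i] = [sum(col) / len(g) for col in zip(*g)]
    return centroids
```

The resulting centroids are exactly the "which new characters do you create" answer for one frame; for animation you'd need to handle tile churn between frames.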