(no title)
Jyaif | 1 month ago
Simply trying every character, comparing its entire bitmap against the target region, and keeping the character that minimizes the distance gives better results, at the cost of more CPU.
This is a well-known problem because early computers with monitors could only display characters.
At some point we were able to define custom character bitmaps, but not enough custom characters to cover the entire screen, so the problem became more complex: which new characters do you create to reproduce an image optimally?
And separately we could choose the foreground/background color of individual characters, which opened up more possibilities.
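The brute-force search described in the first paragraph can be sketched roughly like this. This is a toy illustration, not anyone's actual implementation: `best_glyph`, the cell/bitmap representation (rows of grayscale values in [0, 1]), and the glyph set are all my own assumptions.

```python
def best_glyph(cell, glyphs):
    """Pick the character whose bitmap is closest to the target cell.

    cell:   list of rows of grayscale pixel values in [0, 1]
    glyphs: dict mapping a character to a bitmap of the same shape
            (toy data here; a real renderer would rasterize a font)
    """
    def sq_dist(a, b):
        # Sum of squared per-pixel differences between two bitmaps.
        return sum((pa - pb) ** 2
                   for row_a, row_b in zip(a, b)
                   for pa, pb in zip(row_a, row_b))

    # Try every character and keep the one minimizing the distance.
    return min(glyphs, key=lambda ch: sq_dist(cell, glyphs[ch]))


# Toy 2x2 "font": space is empty, '#' is full.
glyphs = {" ": [[0, 0], [0, 0]], "#": [[1, 1], [1, 1]]}
```

Per-cell this is O(characters × pixels), which is why it costs noticeably more CPU than a simple brightness-to-ramp lookup.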
alexharri | 1 month ago
I'd probably arrive at a very different solution if coming at this from a "you've got infinite compute resources, maximize quality" angle.
brap | 1 month ago
For example, limiting output to a small set of characters gives it a more uniform look which may be nicer. Then also there’s the “retro” effect of using certain characters over others.
Dylan16807 | 1 month ago
And in the extreme that could totally change things. Maybe you want to reject ASCII and instead use the Unicode block that has every 2x3 and 2x4 braille pattern.
mark-r | 1 month ago
It's not just monitors. My first exposure to ASCII art was posters printed on a Teletype in the mid-1970s. The files had attributions to RTTY operators, which made me believe they were done by hand. Of course, a Teletype had no concept of pixels.