Amazing post! I hadn’t thought this through much, but since you are normalizing the vectors and calculating the Euclidean distance, you will get the same ranking using a simple matmul, because Euclidean distance over normalized vectors is a monotonic function of the cosine similarity.
Since you are just interested in the ranking, not the actual distance, you could also consider skipping the sqrt. This gives the same ranking, but will be a little faster.

Squared Euclidean distance of normalized vectors is an affine transform of their cosine similarity (the cosine of the angle between them).
This is a trick I reach for all the time: it’s cheaper to compare squared distances than to complete the Euclidean calculation. For example, to decide when to stop iterating a lerp: x*x + y*y <= epsilon*epsilon (comparing the squared distance against the squared threshold).
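As a sanity check of the matmul and sqrt-skipping claims above, here is a small self-contained sketch. The vectors are made up for illustration, not the article's actual shape vectors:

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

# Made-up candidate "shape vectors" and a query, all normalized.
candidates = [normalize(v) for v in ([1, 0, 2], [0, 3, 1], [2, 2, 0], [1, 1, 1])]
query = normalize([1, 2, 1])

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Ranking by squared distance (no sqrt) equals ranking by true distance,
# and is the reverse of ranking by dot product, since for unit vectors
# ||a - b||^2 = 2 - 2 * (a . b).
by_sq = sorted(range(4), key=lambda i: sq_dist(candidates[i], query))
by_true = sorted(range(4), key=lambda i: math.sqrt(sq_dist(candidates[i], query)))
by_dot = sorted(range(4), key=lambda i: -dot(candidates[i], query))
```

The identity in the comment is why one matmul (all the dot products at once) suffices when only the ranking matters.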
With every example I thought "yeah, this is cool, but I can see there's space for improvement" — and lo! did the author satisfy my curiosity and improve his technique further.
Bravo, beautiful article! The rest of this blog is at this same level of depth, worth a sub: https://alexharri.com/blog
> I don’t believe I’ve ever seen shape utilized in generated ASCII art, and I think that’s because it’s not really obvious how to consider shape when building an ASCII renderer.
Not to take away from this truly amazing write-up (wow), but there's at least one generator that uses shape: https://meatfighter.com/ascii-silhouettify/
See particularly the image right above where it says "Note how the algorithm selects the largest characters that fit within the outlines of each colored region."
There's also a description at the bottom of how its algorithm works, if anyone wants to compare.
In the "Image to Terminal character" space this is also a known solution: map characters to their shape, then pick the one with the lowest diff to the corresponding chunk of the image. If you consider that you also have a foreground and a background colour per cell, you can get a pretty close image in the terminal :D

https://hpjansson.org/chafa/

My go version: https://github.com/BigJk/imeji
I do enjoy these kinds of write-ups, especially when it's about something that might seem so simple on the surface, but to get it looking great you really have to go in deep.
Lucas Pope did a really nice write-up on how he developed his dithering system for Return of the Obra Dinn: https://forums.tigsource.com/index.php?topic=40832.msg136374... Recommended if you also enjoyed this blog post.
> I don’t believe I’ve ever seen shape utilized in generated ASCII art, and I think that’s because it’s not really obvious how to consider shape when building an ASCII renderer.
Acerola worked a bit on this in 2024 [1], using edge detection to layer correctly oriented |/-\ over the usual brightness-only pass. I think either technique has cases where one looks better than the other.

[1] https://www.youtube.com/watch?v=gg40RWiaHRY
I can imagine there's room for "style" here, too. Just like how traditional 2D computer art varies from having thick borders and sharp delineations between colour regions, through https://en.wikipedia.org/wiki/Chiaroscuro style that achieves soft edges despite high contrast, etc.
It reminds me of how chafa uses an 8x8 bitmap for each glyph: https://github.com/hpjansson/chafa/blob/master/chafa/interna...

There are a lot of nitty-gritty concerns I haven't dug into: how to make it fast, how to handle colorspaces, or, like the author mentions, how to exaggerate contrast for certain scenes. But I think 99% of the time, it will be hard to beat chafa. Such a good library.

EDIT - a gallery of (Unicode-heavy) examples, in case you haven't seen chafa yet: https://hpjansson.org/chafa/gallery/
Aha! The 8x8 bitmap approach is the one I used back in college. I was using a fixed font, so I just converted each character to a 64-bit integer and then used popcnt to compare with an 8x8 tile from the image. I wonder whether this approach results in meaningfully different image results from the original post? e.g. focusing on directionality rather than bitmap match might result in more legible large shapes, but fine noise may not be reproduced as faithfully.
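A rough sketch of that popcount matching, with invented 8x8 glyph bitmaps packed into 64-bit integers (a real implementation would rasterize the actual font):

```python
# Invented glyph bitmaps packed as 64-bit integers, one bit per pixel.
GLYPHS = {
    " ": 0x0000000000000000,
    ".": 0x0000000000181800,  # small dot near the baseline
    "|": 0x1818181818181818,  # vertical bar
    "#": 0xFFFFFFFFFFFFFFFF,  # solid block
}

def best_char(tile_bits):
    # XOR marks the differing pixels; counting set bits gives the distance.
    # bin(...).count("1") stands in for a hardware popcnt instruction.
    return min(GLYPHS, key=lambda ch: bin(GLYPHS[ch] ^ tile_bits).count("1"))
```

With a fixed font the glyph table is computed once, so the per-tile cost is one XOR and one popcount per candidate character.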
Great work! While I was building ascii-side-of-the-moon [0][1] I briefly considered writing my own ascii renderer to capture differences in shade and shape of the Lunar Maria[2] better. Ended up just using chafa [3] with the hope of coming back to ascii rendering after everything is working end to end.
Are you planning to release this as a library or a tool, or should we just take the relevant MIT-licensed code from your website [4]?

[0] https://aleyan.com/projects/ascii-side-of-the-moon
[1] https://news.ycombinator.com/item?id=46421045
[2] https://en.wikipedia.org/wiki/Lunar_mare
[3] https://github.com/hpjansson/chafa
[4] https://github.com/alexharri/website/tree/master/src
No plans to build a library right now, but who knows. Feel free to grab what you need from the website's code!
If I were to build a library, I'd probably convert the shaders from WebGL 2 to WebGL 1 for better browser compatibility. Would also need to figure out a good API for the library.
One thing that a library would need to deal with is that the shape vector depends on the font family, so the user of the library would need to precompute the shape vectors with the input font family. The sampling circles, internal and external, would likely need to be positioned differently for different font families. It's not obvious to me how a user of the library would go about that. There'd probably need to be some tool for that (I have a script to generate the shape vectors with a hardcoded link to a font in the website repository).
> It may seem odd or arbitrary to use circles instead of just splitting the cell into two rectangles, but using circles will give us more flexibility later on.
I still don’t really understand why the inner part of the rectangle can’t just be split in a 2x3 grid. Did I miss the explanation?
I think this is connected to the overlap and offset that are used later to account for complex or symmetrical letter shapes. If the author had just split the grid, those effects would have been harder to achieve.

A grid can also have unwanted aliasing effects. It all depends on the kinds of images you're working with.
It's important to note that the approach described focuses on giving fast results, not the best results.
Simply trying every character and considering their entire bitmap, and keeping the character that reduces the distance to the target gives better results, at the cost of more CPU.
This is a well known problem because early computers with monitors used to only be able to display characters.
At some point we were able to define custom character bitmaps, but not enough custom characters to cover the entire screen, so the problem became more complex.
Which new character do you create to reproduce an image optimally?
And separately we could choose the foreground/background color of individual characters, which opened up more possibilities.
Yeah, this is good to point out. The primary constraint I was working around was "this needs to run at a smooth 60FPS on mobile devices" which limits the type and amount of work one can do on each frame.
I'd probably arrive at a very different solution if coming at this from a "you've got infinite compute resources, maximize quality" angle.
You said “best results”, but I imagine that the theoretical “best” may not necessarily be the most aesthetically pleasing in practice.
For example, limiting output to a small set of characters gives it a more uniform look which may be nicer. Then also there’s the “retro” effect of using certain characters over others.
Thinking more about the "best results": could this not be done by transforming the ASCII glyphs into bitmaps, and then using some kind of matrix multiplication or dot product calculation to find the ASCII character with the highest similarity to the underlying pixel grid? This would presumably lend itself to SIMD or GPU acceleration. I'm not that familiar with this type of image processing so I'm sure someone with more experience can clarify.
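That is essentially how it can work. A pure-Python sketch with tiny invented 2x2 "bitmaps" instead of real glyphs: normalize the glyph and tile vectors, then a single matrix product scores every tile against every character (cosine similarity), and argmax per row picks the character:

```python
import math

# Invented 2x2 glyph "bitmaps", flattened to length-4 vectors.
GLYPHS = {
    " ": [0, 0, 0, 0],
    "'": [1, 1, 0, 0],  # ink at the top
    ".": [0, 0, 1, 1],  # ink at the bottom
    "#": [1, 1, 1, 1],  # solid
}

def unit(v):
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

chars = list(GLYPHS)
G = [unit(GLYPHS[c]) for c in chars]              # glyph matrix, rows normalized
T = [unit(t) for t in ([0.9, 0.8, 0.1, 0.0],      # bright at the top
                       [0.1, 0.0, 0.2, 0.1])]     # dim, slightly bottom-heavy

# S[i][j] = cosine similarity between tile i and glyph j: one matmul scores all.
S = matmul(T, [list(row) for row in zip(*G)])
picked = [chars[max(range(len(row)), key=row.__getitem__)] for row in S]
```

On a GPU this is one `T @ G.T` over all tiles per frame, which is exactly the kind of work shaders and SIMD units are built for.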
> This is a well known problem because early computers with monitors used to only be able to display characters.
It's not just monitors. My first exposure to ASCII art was posters printed on a Teletype in the mid-1970s. The files had attributions to RTTY operators, which made me believe they were done by hand. Of course a Teletype had no concept of pixels.
And a (the?) solution is using an algorithm like k-means clustering to find the tileset of size k that can represent a given image the most faithfully. Of course that’s only for a single frame at a time.
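A minimal sketch of that k-means idea, over tiny invented 2x2 tiles (real custom-character tiles would be 8x8, and initialization would normally be randomized; here it is deterministic to keep the sketch simple):

```python
def kmeans_tiles(tiles, k, iters=10):
    # Deterministic init for the sketch: first and last tile (supports k <= 2).
    centers = [tiles[0][:], tiles[-1][:]][:k]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for t in tiles:
            # Assign to the nearest center by squared distance (no sqrt needed).
            i = min(range(k), key=lambda i: sum((a - b) ** 2 for a, b in zip(t, centers[i])))
            groups[i].append(t)
        for i, g in enumerate(groups):
            if g:  # move each center to the mean of its group
                centers[i] = [sum(col) / len(g) for col in zip(*g)]
    return centers

# Two obvious clusters: dim tiles and bright tiles.
tiles = [[0.1, 0.0, 0.2, 0.1], [0.0, 0.1, 0.1, 0.0],
         [0.9, 1.0, 0.8, 0.9], [1.0, 0.9, 0.9, 1.0]]
tileset = kmeans_tiles(tiles, k=2)
```

The resulting centers are the k custom "character" bitmaps (after thresholding them to 1-bit) that best cover the frame's tiles.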
Fantastic technique and deep dive. I will say, I was hoping to see an improved implementation of the Cognition cube array as the payoff at the end. The whole thing reminded me of the blogger/designer who, years ago, showed YouTube how to render a better favicon by using subpixel color contrast, and then IIRC they implemented the improvement. Some detail here: https://web.archive.org/web/20110930003551/http://typophile....
This was painful to read. It becomes better and simpler with a basic signals & systems background:
- His breaking up images into grids was a poor-man's convolution. Render each letter. Render the image. Dot product.
- His "contrast" setting didn't really work. It was meant to emulate a sharpen filter: convolve with a kernel appropriate to the letter size. He operated over the wrong dimension (intensity, rather than X-Y).
- Dithering should be done with something like Floyd-Steinberg: You spill over errors to adjacent pixels.
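For reference, the Floyd-Steinberg scheme from the last bullet, sketched on a made-up grayscale grid: quantize each pixel to 0/1, then spill the quantization error onto the not-yet-visited neighbors with the standard 7/16, 3/16, 5/16, 1/16 weights.

```python
def floyd_steinberg(img):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            old = out[y][x]
            new = 1.0 if old >= 0.5 else 0.0  # quantize to 1-bit
            out[y][x] = new
            err = old - new
            # Diffuse the error to right, below-left, below, below-right.
            for dx, dy, wgt in ((1, 0, 7/16), (-1, 1, 3/16), (0, 1, 5/16), (1, 1, 1/16)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < w and 0 <= ny < h:
                    out[ny][nx] += err * wgt
    return out

# A flat 40%-gray patch dithers to a binary pattern with roughly 40% ink.
dithered = floyd_steinberg([[0.4] * 4 for _ in range(4)])
```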
Most of these problems have solutions, and in some cases, optimal ones. They were reinvented, perhaps cleverly, but not as well as those standard solutions.
Bonus:
- Handle the above as a global optimization problem. Possible with 2026-era CPUs (and even more so, GPUs).
- Unicode :)
Perhaps you're right but I won't believe you until you whip up a live-rendering proof of concept.
It's a bit rude to dismiss somebody's cool work as "painful", with some hypothetical "improvements" that probably wouldn't even work.
It's probably much more exciting to implement stuff like this when you can experiment with your own ideas to figure out the solution from scratch, compared to someone who sees it as a trivial exercise in signal processing, which they can't be bothered to implement.
Fantastic article! I wrote an ASCII renderer to show a 3D Claude for my Claude Wrapped[^1], and instead of supersampling I just decided to raymarch the whole thing. SDFs give you a smoother result than even super sampling, but of course your scene has to be represented with distance functions and combinations thereof whereas your method is generally applicable.
Taking into account the shape of different ASCII characters is brilliant, though!

[^1]: https://spader.zone/wrapped/
The resulting ASCII looks dithered, with sequences like e.g. :-:-:-:-:. I'd guess that it's an intentional effect since a flat surface would naturally repeat the same character, right? Where does the dithering come from?
I'm hoping people who harness ASCII for stuff like this consider using Code Page 437, or similar. Extended ASCII sets comprising Foreign Chars are for staid business machines, and sort of familiar but out of place accented chars have a bit of a distracting quality.
437 and so on taps the nostalgia for BBS Art, DOS, TUIs scene NFOs, 8 bit micros.... Everything pre Code Page 1252, in other words. Whilst it was a pragmatic decision for MS, it's also true that marketing needs demanded all text interfaces disappeared because they looked old. Text graphics, doubly so. That design space was now reserved for functional icons. A bit of creativity went from (home) computing right there and then. Stuffing it all into a separate font ensured it died.
But, that stuff is genuinely cool to a lot of people in a way that VIM (for example) has never been and never will be. This is a case of Form Over Function. Foreign chars are not as friendly or fun as hearts, building blocks, smileys, musical notes, etc.
This is amazing all round - in concept, writing, and coding (both the idea and the blog post about it).
I feel confident stating that - unless fed something comprehensive like this post as input, and perhaps not even then - an LLM could not do something novel and complex like this, and will not be able to for some time, if ever. I’d love to read about someone proving me wrong on that.
To develop this approach you need to think through the reasoning of what you want to achieve. I don't think the reasoning in LLMs is nonexistent, but it is certainly somewhat limited. This is disguised by their vast knowledge. When they successfully achieve a result by relying on knowledge, you get an impression of more reasoning than there is.
Everyone now seems familiar with hallucinations: when a model's knowledge is lacking but it has been fine-tuned to give an answer anyway. A simplistic calculation says that if an accurate answer gets you 100%, then giving an answer gets you 50% and being accurate gets you the other 50%. Hallucinations are trying to get partial credit for bullshit. Teaching a model that a wrong answer is worse than no answer is the obvious solution; turning that lesson into training methods is harder.

That's a bit of a digression, but I think it helps explain why I think a model would find writing an article like this difficult.
Models have difficulty understanding what is important. The degree to which they do achieve this is amazing, but they are still trained on data that heavily biases their conclusions toward mainstream thinking. In that respect I'm not even sure it is a fundamental lack in what they could do. It seems that they are implicitly made to think of problems as "it's one of those, I'll do what people do when faced with one of those".
There are even hints in fiction that this is what we were going to do. There is a fairly common sci-fi trope of an AI giving a thorough and reasoned analysis of a problem only to be cut off by a human wanting the simple and obvious answer. If not done carefully RLHF becomes the embodiment of this trope in action.
This gives a result that makes the most people immediately happy, without regard for what is best long term, or indeed what is actually needed. Asimov explored the notion of robots lying so as to not hurt feelings. Much of the point of the robot books was to express the notion that what we want AI to be is more complicated than it appears at first glance.
I'm confident that they can. This isn't a new idea. Something like this would be a walk in the park for Opus 4.5 in the right harness.
Of course it likely still needs a skilled pair of eyes and a steady hand to keep it on track or keep things performant, but it's an iterative process. I've already built my own ASCII rendering engines in the past, and have recently built one with a coding model, and there was no friction.
>ASCII characters are not pixels: a deep dive into ASCII rendering
in general, ascii rendering is when ascii character codes are converted to pixels. if you wish to render other pixels onto a screen using characters, they are not ascii characters, they are roman or latin character glyphs, no ascii involved. that is all.
Only tangentially related, but the title reminds me of a hack you could do on old DOS machines to get access to a 160x100 16-color display mode on a CGA graphics adapter.
The display mode is actually a hacked up 80x25 text mode. So in that specific narrow case, you have a display mode where text characters very much function as pixels.
What a great post. There is an element of ascii rendering in a pet project of mine and I’m definitely going to try and integrate this work. From great constraints comes great creativity.
I'm playing with a related problem in my spare time - braille character-based color graphics. While we have enough precision for sharp edges, the fundamental issues with color are still the same: if we begin with a supersampling pass for assignment, we lack precision, so we may need to do some contrast fixups afterward. I think some contrast enhancement based on your sampling schemes might be useful :) Thank you so much for posting this!
(I've previously tried pre-transforming on the image side to do color contrast enhancement, but without success: I take the Sobel filter of an image, and use it to identify regions where I boost contrast. However, since this is a step preceding "rasterization", the results don't align well with character grids.)
This is an awesome effort. I stared and played with the rotating graphics at the top for a while before reading the rest of the article, trying to figure out why it was so much better than a lot of the efforts I'd seen before, and I kind of figured out what you must be doing, but I'll admit, I wouldn't have ever done it as well or put in as much work as you had - really excellent techniques for determining character!
I am actually really curious how performant this is and whether something like this would be able to contribute beyond just demo displays. It's obviously beautiful and a marvel of work, but it seems like there should be a way to use it for more.
Also, I did find myself wondering about the inevitable Doom engine
What about the explanation presented in the next paragraph?
> Consider how an exponent affects values between 0 and 1. Numbers close to 0 experience a strong pull towards 0, while larger numbers experience less pull. For example 0.1^2 = 0.01, a 90% reduction, while 0.9^2 = 0.81, only a reduction of 10%.
That's exactly the reason why it works; it's even nicely visualized below. If you've dealt with similar problems before you might know this in the back of your head. E.g. you may have had a problem where you wanted to measure distance from 0 but wanted to remove the sign. You may have tried absolute value and squaring, and noticed that the latter has the additional effect described above.
It's a bit like a math undergrad wondering about a proof 'I understand the argument, but how on earth do you come up with this?'. The answer is to keep doing similar problems and at some point you've developed an arsenal of tricks.
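The quoted effect is trivial to check numerically. As a cheap, monotonic contrast curve on [0, 1]:

```python
# Exponentiation on [0, 1] pulls small values toward 0 much harder than
# large values, which is what makes x**k usable as a contrast curve.
def contrast(v, exponent=2.0):
    return v ** exponent

def reduction(v):
    # Fraction of the value lost by applying the curve at v.
    return 1 - contrast(v) / v
```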
I am, however, struck by the (from an outsider's POV) highly niche-specific terminology used in the title.
"ASCII rendering".
Yes, I know what ASCII is. I understand text rendering in sometimes painful detail. This was something else.
Yes, it's a niche and niches have their own terminologies that may or may not make sense in a broader context.
HN guidelines says "Otherwise please use the original title, unless it is misleading or linkbait; don't editorialize."
I'm not sure what is the best course of action here - perhaps nothing. I keep bumping into this issue all the time at HN, though. Basically the titles very often don't include the context/niche.
Was there something wrong with using an actual image of saturn? NASA lets you use their images for stuff if you want https://www.nasa.gov/nasa-brand-center/images-and-media/, and if you're worried that might change down the line, you could just add a little attribution thing for NASA
I'm not sure why it bothers you. But to guess why OP did it: if you look at his request to ChatGPT, he wanted a square image with Saturn at a 45-degree angle for this demonstration. I don't know if NASA has that image, and if it does, how long it would take to dig it up (from a quick search, I couldn't find any), so it's pretty sensible to just use ChatGPT for this demonstration and credit it for the image.
I did something similar to use images in a mosaic, taking the image contents into consideration. This turns out to be super simple as long as you do everything in JPEG space: just use however many coefficients to compare! So, scale the original image to have 8x8 pixels per "image pixel" in the final output, and then scale every candidate to 8x8. Now just compare the DCT coeffs directly!
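A sketch of that comparison, using the naive O(N^4) 2-D DCT-II for clarity (libjpeg obviously computes this much faster) and made-up tiles: ranking by distance over only the low-frequency coefficients already separates "same brightness, flat" from "same brightness, structured".

```python
import math

N = 8

def dct2(tile):
    # Naive 2-D DCT-II of an NxN tile, orthonormal scaling.
    def c(k):
        return math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
    return [[c(u) * c(v) * sum(
                tile[y][x]
                * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                * math.cos((2 * y + 1) * v * math.pi / (2 * N))
                for x in range(N) for y in range(N))
             for v in range(N)] for u in range(N)]

def low_freq_dist(A, B, k=3):
    # Compare only the k x k lowest-frequency coefficients.
    return sum((A[u][v] - B[u][v]) ** 2 for u in range(k) for v in range(k))

flat = [[0.5] * N for _ in range(N)]
flatter = [[0.55] * N for _ in range(N)]                  # slightly brighter, flat
ramp = [[x / (N - 1) for x in range(N)] for _ in range(N)]  # same mean, gradient
```

Both `flat` and `ramp` have the same DC coefficient (mean brightness), but the ramp's energy in the low AC coefficients keeps it from matching a flat query.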
This is a great deep dive. Most ASCII renderers feel "muddy" because they treat intensity as the only variable. Treating characters as structural embeddings (the 6D vector approach) is much closer to how our eyes actually perceive edges. It reminds me of how font hinting works at low resolutions. Truly impressive work on the contrast enhancement pass too.
I dunno. Going to the last example at the bottom of the page and comparing the contrast slider all the way up and all the way down, all these enhancements combined turn it into a blurry mush where it's harder to distinguish the shapes. It's the exact same problem I had with anti-aliasing fonts on older monitors (smaller resolutions) and why I always disabled it wherever I could.
really great! adjacent well-done ASCII using Braille blocks on X this week:
nolen: "unicode braille characters are 2x4 rectangles of dots that can be individually set. That's 8x the pixels you normally get in the terminal! anyway here's a proof of concept terminal SVG renderer using unicode braille", https://x.com/itseieio/status/2011101813647556902
ashfn: "@itseieio You can use 'persistence of vision' to individually address each of the 8 dots with their own color if you want, there's some messy code of an example here", https://x.com/ashfncom/status/2011135962970218736
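For reference, the 2x4 packing nolen describes works like this; the dot-to-bit layout follows the Unicode braille block (U+2800-U+28FF):

```python
# Bit index for each (x, y) dot position in a braille cell (Unicode layout:
# dots 1-6 fill the top three rows column-first, dots 7-8 are the bottom row).
BIT = {(0, 0): 0, (0, 1): 1, (0, 2): 2, (0, 3): 6,
       (1, 0): 3, (1, 1): 4, (1, 2): 5, (1, 3): 7}

def braille(block):
    # block[y][x]: 4 rows x 2 cols of on/off "pixels" -> one character.
    code = 0x2800
    for y in range(4):
        for x in range(2):
            if block[y][x]:
                code |= 1 << BIT[(x, y)]
    return chr(code)

full = braille([[1, 1]] * 4)   # all eight dots on
left = braille([[1, 0]] * 4)   # left column only
```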
I did something very similar to this (searching for similar characters across the grid, including some fuzzy matching for nearby pixels) around 1996. I wonder if I still have the code? It was exceedingly slow: think minutes per frame on the Pentiums of the time.
It reminds me quite a bit of collision engines for 2D physics/games. Could probably find some additional clever optimisations for the lookup/overlap (better than kd-trees) if you dive into those. Not that it matters too much. Very cool.
Author here. There isn't a library around this yet, but the source code for the blog is open source (MIT licensed): https://github.com/alexharri/website
The code for this post is all in PR #15 if you want to take a look.
I was investigating a fun webcam-to-ASCII project, so now I'm tempted to try porting the logic from the blog post into something reusable.
very cool. I may have to look a bit closer at the pipeline I used to create the art at ssh funky.nondeterministic.computer. The graphics could always be improved, however I will note that it needs color for best effect.
This is at the same time super cool and really disappointing, as I've been carrying this idea around in my head for maybe ten years as a cool side project and never got around to implementing it.
However, there might still be room for competition, heh. I always wanted to do this on the _entirety_ of Unicode to try getting the most possible resolution out of the image.
Nice! Now add colors and we can finally play Doom on the command line.
More seriously, using colors (not trivial probably, as it adds another dimension), and some select Unicode characters, this could produce really fancy renderings in consoles!
At least six dimensions, right? For each character, color of background, color of foreground, and each color has at least three components. And choosing how the components are represented isn’t trivial either - RGB probably isn’t a good choice. YCoCg?
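For the curious, YCoCg is cheap to compute. Here is the lifting-style forward transform (this variant leaves the chroma terms unscaled relative to some definitions, so treat it as a sketch):

```python
def rgb_to_ycocg(r, g, b):
    # Lifting formulation: Y = (R + 2G + B) / 4, Co ~ R - B, Cg ~ G - (R + B) / 2.
    co = r - b
    tmp = b + co / 2
    cg = g - tmp
    y = tmp + cg / 2
    return y, co, cg
```

Grays map to zero chroma, which is what makes it a reasonable axis system for weighting luma errors more heavily than chroma errors when picking characters and colors.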
I had been thinking of messing around with a DOM-based ‘console’ in Tauri that could handle a lot more font manipulation for a pseudo-TUI application similar to this. It's definitely possible! It would be even simpler to do in TS.
Wait...wh...why?!?
Of all the things, actual pictures of the planet Saturn are readily available in the public domain. Why poison the internet with fake images of it?
"Please don't pick the most provocative thing in an article or post to complain about in the thread. Find something interesting to respond to instead."
> > The image of Saturn was generated with ChatGPT.
> Wait...wh...why?!?
It has just begun. Wait until nobody bothers using Wikipedia, websites, or even one day forums.
This is going to eat everything.
And when it's immediate to say something like, "I need a high contrast image of Saturn of dimensions X by Y, focus on Saturn, oblique angle" -- that's going to be magic.
We'll look at the internet and Google like we look at going to the library and grabbing an encyclopedia off the shelves.
The use of calculators didn't kill ingenuity, nor did the switch to the internet. Despite teachers protesting both.
Humans will always use the lowest friction thing, and we will never stop reaching for the stars.
Non-ASCII: I tried various subsets of Unicode. There's the geometric shapes area, CJK, dingbats, lots of others.

Different fonts: there are lots of different monospace fonts. I even tried non-monospaced fonts, though still drawn in a grid.

ANSI color style: https://16colo.rs/

My results weren't nearly as good as the ones in this article, but just suggesting more ways of exploration: https://greggman.github.io/doodles/textme10.html

Note: options are buried in the menu. Best to pick a scene other than the default.
AgentMatt|1 month ago
I think there's a small problem with intermediate values in this code snippet:
Replace x by value.alexharri|1 month ago
alexharri|1 month ago
dboon|1 month ago
Taking into account the shape of different ASCII characters is brilliant, though!
[1]: https://spader.zone/wrapped/
alexharri|1 month ago
The resulting ASCII looks dithered, with sequences like e.g. :-:-:-:-:. I'd guess that it's an intentional effect since a flat surface would naturally repeat the same character, right? Where does the dithering come from?
CarVac|1 month ago
It probably has a different looking result, though.
thech6newshound|1 month ago
I'm hoping people who harness ASCII for stuff like this consider using Code Page 437, or similar. Extended ASCII sets comprising Foreign Chars are for staid business machines, and sort of familiar but out of place accented chars have a bit of a distracting quality.
437 and so on taps the nostalgia for BBS Art, DOS, TUIs scene NFOs, 8 bit micros.... Everything pre Code Page 1252, in other words. Whilst it was a pragmatic decision for MS, it's also true that marketing needs demanded all text interfaces disappeared because they looked old. Text graphics, doubly so. That design space was now reserved for functional icons. A bit of creativity went from (home) computing right there and then. Stuffing it all into a separate font ensured it died.
But, that stuff is genuinely cool to a lot of people in a way VIM, (for example) has never been and nor will it ever. This is a case of Form Over Function. Foreign chars are not as friendly or fun as hearts, building blocks, smileys, musical notes, etc.
jrmg|1 month ago
I feel confident stating that - unless fed something comprehensive like this post as input, and perhaps not even then - an LLM could not do something novel and complex like this, and will not be able to for some time, if ever. I’d love to read about someone proving me wrong on that.
Lerc|1 month ago
Everyone now seems familiar with hallucinations: when a model's knowledge is lacking but it has been fine-tuned to give an answer anyway. A simplistic calculation says that if an accurate answer gets you 100%, then giving an answer gets you 50% and being accurate gets you the other 50%. Hallucinations are trying to get partial credit for bullshit. Teaching a model that a wrong answer is worse than no answer is the obvious solution; turning that lesson into training methods is harder.
That's a bit of a digression, but I think it helps explain why I think a model would find writing an article like this difficult.
Models have difficulty in understanding what is important. The degree to which they do achieve this is amazing, but it is still trained on data that heavily biases their conclusions to the mainstream thinking. In that respect I'm not even sure if it is a fundamental lack in what they could do. It seems to be that they are implicitly made to think of problems as "it's one of those, I'll do what people do when faced with one of those"
There are even hints in fiction that this is what we were going to do. There is a fairly common sci-fi trope of an AI giving a thorough and reasoned analysis of a problem only to be cut off by a human wanting the simple and obvious answer. If not done carefully RLHF becomes the embodiment of this trope in action.
This gives a result that makes the most people immediately happy, without regard for what is best long term, or indeed what is actually needed. Asimov explored the notion of robots lying so as to not hurt feelings. Much of the point of the robot books was to express the notion that what we want AI to be is more complicated than it appears at first glance.
soulofmischief|1 month ago
Of course it likely still needs a skilled pair of eyes and a steady hand to keep it on track or keep things performant, but it's an iterative process. I've already built my own ASCII rendering engines in the past, and have recently built one with a coding model, and there was no friction.
fsckboy|1 month ago
In general, ASCII rendering is when ASCII character codes are converted to pixels. If you wish to render other pixels onto a screen using characters, they are not ASCII characters, they are Roman or Latin character glyphs; no ASCII involved. That is all.
LexiMax|1 month ago
The display mode is actually a hacked up 80x25 text mode. So in that specific narrow case, you have a display mode where text characters very much function as pixels.
- https://en.wikipedia.org/wiki/Color_Graphics_Adapter
- https://github.com/drwonky/cgax16demo
joshu|1 month ago
nickdothutton|1 month ago
symisc_devel|1 month ago
GitHub: https://github.com/symisc/ascii_art/blob/master/README.md Docs: https://pixlab.io/art
nowayhaze|1 month ago
unknown|1 month ago
[deleted]
nxobject|1 month ago
(I've previously tried pre-transforming on the image side to do color contrast enhancement, but without success: I take the Sobel filter of an image, and use it to identify regions where I boost contrast. However, since this is a step preceding "rasterization", the results don't align well with character grids.)
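For readers unfamiliar with the pre-pass being described, a rough sketch of the idea (a hypothetical NumPy implementation; `edge_boost` and its `strength` parameter are my naming, not from the comment):

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude via 3x3 Sobel kernels (img: 2D float array in [0, 1])."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img, 1, mode="edge")  # replicate borders so output size matches
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    for dy in range(3):
        for dx in range(3):
            window = pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
            gx += kx[dy, dx] * window
            gy += ky[dy, dx] * window
    return np.hypot(gx, gy)

def edge_boost(img, strength=0.5):
    """Push pixels away from mid-gray in proportion to local edge strength."""
    edges = sobel_magnitude(img)
    edges = edges / max(edges.max(), 1e-9)  # normalize edge map to [0, 1]
    boosted = 0.5 + (img - 0.5) * (1.0 + strength * edges)
    return np.clip(boosted, 0.0, 1.0)
```

As the comment notes, the catch is that this operates per pixel, so the boosted regions have no reason to line up with the character grid used later.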
MPSimmons|1 month ago
I am actually really curious how performant this is and whether something like this would be able to contribute beyond just demo displays. It's obviously beautiful and a marvel of work, but it seems like there should be a way to use it for more.
Also, I did find myself wondering about the inevitable Doom engine
Really nice job!
markshtat|1 month ago
Supports color output, contrast enhancement, custom charsets. MIT licensed.
chrisra|1 month ago
How do you arrive at that? It's presented like it's a natural conclusion, but if I was trying to adjust contrast... I don't see the connection.
c7b|1 month ago
> Consider how an exponent affects values between 0 and 1. Numbers close to 0 experience a strong pull towards 0 while larger numbers experience less pull. For example 0.1^2=0.01, a 90% reduction, while 0.9^2=0.81, only a reduction of 10%.
That's exactly the reason why it works, it's even nicely visualized below. If you've dealt with similar problems before you might know this in the back of your head. Eg you may have had a problem where you wanted to measure distance from 0 but wanted to remove the sign. You may have tried absolute value and squaring, and noticed that the latter has the additional effect described above.
It's a bit like a math undergrad wondering about a proof 'I understand the argument, but how on earth do you come up with this?'. The answer is to keep doing similar problems and at some point you've developed an arsenal of tricks.
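The effect in the quote above is easy to verify in a couple of lines; a minimal sketch (the function name and default exponent are illustrative):

```python
def adjust_contrast(luminance, exponent=2.0):
    """Raise a normalized [0, 1] luminance to a power.

    Exponents > 1 pull small values toward 0 much harder than large
    ones, darkening shadows and increasing apparent contrast;
    exponents < 1 do the opposite.
    """
    return luminance ** exponent

# Small values drop far more, proportionally, than large ones:
print(adjust_contrast(0.1))  # ~0.01, a ~90% reduction
print(adjust_contrast(0.9))  # ~0.81, only a ~10% reduction
```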
unknown|1 month ago
[deleted]
lysace|1 month ago
I am, however, struck by the (from an outsider's POV) highly niche-specific terminology used in the title.
"ASCII rendering".
Yes, I know what ASCII is. I understand text rendering in sometimes painful detail. This was something else.
Yes, it's a niche and niches have their own terminologies that may or may not make sense in a broader context.
HN guidelines says "Otherwise please use the original title, unless it is misleading or linkbait; don't editorialize."
I'm not sure what is the best course of action here - perhaps nothing. I keep bumping into this issue all the time at HN, though. Basically the titles very often don't include the context/niche.
voidUpdate|1 month ago
Was there something wrong with using an actual image of saturn? NASA lets you use their images for stuff if you want https://www.nasa.gov/nasa-brand-center/images-and-media/, and if you're worried that might change down the line, you could just add a little attribution thing for NASA
gpt5|1 month ago
sandos|1 month ago
A similar technique could probably be used here.
TeamCommet1|1 month ago
nurettin|1 month ago
Reminds me of this underrated library which uses braille alphabet to draw lines. Behold:
https://github.com/tammoippen/plotille
It's a really nice plotting tool for the terminal. For me it increases the utility of LLMs.
Izkata|1 month ago
aghilmort|1 month ago
nolen: "unicode braille characters are 2x4 rectangles of dots that can be individually set. That's 8x the pixels you normally get in the terminal! anyway here's a proof of concept terminal SVG renderer using unicode braille", https://x.com/itseieio/status/2011101813647556902
ashfn: "@itseieio You can use 'persistence of vision' to individually address each of the 8 dots with their own color if you want, there's some messy code of an example here", https://x.com/ashfncom/status/2011135962970218736
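For anyone wanting to try this, the mapping from a 2x4 pixel block to a braille codepoint is mechanical: the Unicode Braille Patterns block starts at U+2800 and each of the 8 dots toggles one bit. A minimal sketch:

```python
# Bit value for each (row, col) dot position in the 2-wide, 4-tall cell.
# Dot numbering in the Unicode Braille Patterns block:
#   (1) (4)      0x01 0x08
#   (2) (5)      0x02 0x10
#   (3) (6)      0x04 0x20
#   (7) (8)      0x40 0x80
DOT_BITS = [
    [0x01, 0x08],
    [0x02, 0x10],
    [0x04, 0x20],
    [0x40, 0x80],
]

def braille_char(block):
    """block: 4 rows x 2 cols of truthy/falsy pixel values."""
    code = 0x2800  # base of the Braille Patterns block
    for row in range(4):
        for col in range(2):
            if block[row][col]:
                code |= DOT_BITS[row][col]
    return chr(code)

print(braille_char([[1, 1], [1, 1], [1, 1], [1, 1]]))  # all dots on: U+28FF
print(braille_char([[1, 0], [1, 0], [1, 0], [1, 0]]))  # left column only
```

Tiling an image into 2x4 cells and calling this per cell gives the "8x the pixels" effect mentioned in the tweet.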
nomel|1 month ago
[1] https://www.lookuptables.com/text/extended-ascii-table
alexharri|1 month ago
Using only ASCII felt more in the "spirit" of the post and reduced scope (which is always good)
octoberfranklin|1 month ago
Sesse__|1 month ago
eerikkivistik|1 month ago
shiandow|1 month ago
NelsonMinar|1 month ago
BTW, aalib was using character shape back in the 90s. This is very cool but there is prior art!
nathaah3|1 month ago
TuringNYC|1 month ago
ripe|1 month ago
avadodin|1 month ago
adam_patarino|1 month ago
alexharri|1 month ago
The code for this post is all in PR #15 if you want to take a look.
nathell|1 month ago
guerby|1 month ago
https://github.com/cacalabs/libcaca
minimaxir|1 month ago
baud9600|1 month ago
I found myself thinking, “I wonder if some of this could be used to play back video on old 8-bit machines?” But they’re so underpowered…
mackid|1 month ago
https://youtu.be/wM3deQAgMpE?si=h2O1uTQqxFtCRCsh
maxglute|1 month ago
cjlm|1 month ago
Johnny_Bonk|1 month ago
estimator7292|1 month ago
_blk|1 month ago
mark-r|1 month ago
fragmede|1 month ago
zdimension|1 month ago
pcj-github|1 month ago
LowLevelBasket|1 month ago
account42|1 month ago
Thanks for erasing all the content once the page loads, saved me the time I would have spent reading the article.
There really needs to be a name for error handling that is worse than the initial error.
charmpic|1 month ago
jwr|1 month ago
jurf|1 month ago
However, there might still be room for competition, heh. I always wanted to do this on the _entirety_ of Unicode to try getting the most possible resolution out of the image.
steve1977|1 month ago
BarryGuff|1 month ago
blauditore|1 month ago
More seriously, using colors (not trivial probably, as it adds another dimension), and some select Unicode characters, this could produce really fancy renderings in consoles!
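On the color point: most modern terminal emulators accept 24-bit "truecolor" ANSI escape sequences, so per-character color is just a matter of wrapping each glyph. A minimal sketch (assuming a truecolor-capable terminal):

```python
def colored(text, r, g, b, bg=None):
    """Wrap text in 24-bit ANSI color escapes (foreground, optional background)."""
    seq = f"\x1b[38;2;{r};{g};{b}m"          # SGR: set foreground RGB
    if bg is not None:
        seq += f"\x1b[48;2;{bg[0]};{bg[1]};{bg[2]}m"  # SGR: set background RGB
    return f"{seq}{text}\x1b[0m"             # reset attributes afterwards

# A horizontal gradient of full-block characters:
row = "".join(colored("█", i, 64, 255 - i) for i in range(0, 256, 16))
print(row)
```

The extra dimension the comment mentions is real, though: once each cell carries a color as well as a shape, the character-matching step has to trade off brightness, shape, and hue simultaneously.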
krallja|1 month ago
jrmg|1 month ago
chrisra|1 month ago
finghin|1 month ago
maximgeorge|1 month ago
[deleted]
monitron|1 month ago
Wait...wh...why?!? Of all the things, actual pictures of the planet Saturn are readily available in the public domain. Why poison the internet with fake images of it?
dang|1 month ago
"Eschew flamebait. Avoid generic tangents."
https://news.ycombinator.com/newsguidelines.html
pjc50|1 month ago
Are we sure the planets are real?
userbinator|1 month ago
echelon|1 month ago
> Wait...wh...why?!?
It has just begun. Wait until nobody bothers using Wikipedia, websites, or even one day forums.
This is going to eat everything.
And when it's immediate to say something like, "I need a high contrast image of Saturn of dimensions X by Y, focus on Saturn, oblique angle" -- that's going to be magic.
We'll look at the internet and Google like we look at going to the library and grabbing an encyclopedia off the shelves.
The use of calculators didn't kill ingenuity, nor did the switch to the internet. Despite teachers protesting both.
Humans will always use the lowest friction thing, and we will never stop reaching for the stars.
MORPHOICES|1 month ago
[deleted]
Ursi_Casper|1 month ago
[deleted]
chikna|1 month ago
vikas-sharma|1 month ago
[deleted]
AI-love|1 month ago
[deleted]