Haiku OS, in my opinion, solves this better by basing everything on the default font size (in pixels). E.g. it defaults to 12px; I used 20px for a 3840x2160 monitor. Some GUI widgets scale based on this. All text (when using be_default_font) scales based on this. Spacing/layout depends on this. The key difference (compared to a global x1.5 scaling factor) is that the developers of each app decide how to use this information, so different parts of the GUI are scaled disproportionately. Sloppy apps ignore this, but the devs are quickly notified. So you end up with larger text, but GUI widgets can grow disproportionately, so you can fine-tune what is 125%, 150%, etc. E.g. a ScrollBar can be 125%, the toolbar 150%, text 233%. Haiku has had this since the beginning (even BeOS in the 90s had this). By 2024, almost all non-compliant apps have been fixed and support this.
What Haiku needs is font setting per screen/resolution for multimonitor support. This way you can support mismatched monitors with different factors.
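As a rough illustration of the scheme described above (this is not the actual Haiku/BeOS API, and the widget ratios are invented), everything can be derived from the user's default font size rather than from one global scale factor:

```python
# Hypothetical sketch of Haiku-style scaling: sizes are derived from the
# user's default font size instead of a single global scale factor.
BASE_FONT_SIZE = 12.0  # Haiku's default font size in pixels


def scale_factor(default_font_size: float) -> float:
    """How much bigger the user's font is than the 12px baseline."""
    return default_font_size / BASE_FONT_SIZE


def widget_sizes(default_font_size: float) -> dict:
    f = scale_factor(default_font_size)
    # Each widget picks its own relationship to the font size, so parts of
    # the UI scale disproportionately (the ratios here are made up).
    return {
        "text_px": default_font_size,                     # text follows the font 1:1
        "scrollbar_px": round(14 * (1 + (f - 1) * 0.5)),  # grows half as fast
        "toolbar_px": round(32 * f),                      # grows with the font
    }


print(widget_sizes(20.0))  # a 20px default font, e.g. for a 4K monitor
```

The point of the sketch is that the scrollbar, toolbar, and text all end up at different effective percentages even though only one number (the font size) changed.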
The real issue is that people want to be able to smoothly drag windows across monitors. This requires that each window be able to switch to a different DPI scaling when its "home" monitor changes. Something must also be done to deal with a single window being displayed across monitors with different resolution (which is what happens while dragging) though hardware scaling is probably acceptable there at some minor loss in quality - the "proper" alternative is to render natively at both resolutions, but applications might not natively support that.
This is similar to how Win32/GDI layout guidelines (pre-win 10) worked.
Windows says the reference font dpi is 72, and reference sizes for buttons, lists, labels, etc. are specified at 96 dpi; you're then supposed to compute actual_dpi / reference_dpi and scale things according to that.
Then you declare per-monitor-v2 DPI awareness in the application manifest, catch the WM_DPICHANGED message, recompute your scale factors, and presto: perfect dpi scaling across any monitor at any fractional scale.
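The arithmetic involved is simple; here is a hedged sketch of it (the reference button sizes are illustrative, and real code would recompute this in its WM_DPICHANGED handler):

```python
REFERENCE_DPI = 96  # Win32 reference DPI for widget metrics


def scale(dpi: int) -> float:
    """Scale factor relative to the 96 dpi reference."""
    return dpi / REFERENCE_DPI


def layout(dpi: int, ref_button_w: int = 75, ref_button_h: int = 23) -> tuple:
    """Scale reference-DPI widget sizes to the monitor's actual DPI."""
    s = scale(dpi)
    # int(x + 0.5) rounds half up, matching typical pixel-snapping behavior.
    return int(ref_button_w * s + 0.5), int(ref_button_h * s + 0.5)


# On WM_DPICHANGED the window recomputes its layout for the new monitor:
print(layout(96))   # 100% -> (75, 23)
print(layout(120))  # 125% -> (94, 29)
print(layout(144))  # 150% -> (113, 35)
```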
Say your UI had a one "pixel" wide vertical line. At some point, resolution becomes high enough that you need to draw it two device pixels wide. What do you do at scales in-between?
Do apps start drawing their lines wider if the default font size goes up? When? Is it consistent system wide?
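For illustration: whatever policy you choose, it has to pick a cutoff where a one-pixel hairline becomes two device pixels. A toy rounding rule (not any particular toolkit's behavior):

```python
def line_width_px(logical_width: float, scale: float) -> int:
    # Snap a logical line width to device pixels, never thinner than 1.
    return max(1, int(logical_width * scale + 0.5))


for s in (1.0, 1.25, 1.5, 1.75, 2.0):
    print(s, line_width_px(1.0, s))
# The 1px line stays 1 device pixel up to 1.25x, then jumps to 2 at 1.5x -
# the jump the comment asks about has to happen somewhere.
```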
> So you end up with larger text, but GUI widgets can grow disproportionately, so you can fine-tune what is 125%, 150%, etc. E.g. a ScrollBar can be 125%, the toolbar 150%, text 233%
Who’s the “you” in here? If it’s the end user, I don’t think it’s a better solution for the general population.
Basically, gtk-hint-font-metrics=1 was needed with Gtk4 on non-HiDPI displays to get crisp text. Thanks to the change from merge request 6190, it is now applied automatically when appropriate, depending on which display is used. Mixed setups with multiple displays are common, and Gtk4 takes care of them. The whole topic caused a heated issue before, because it depends on your own vision, taste, and hardware.
Apple avoids trouble and work by always using HiDPI displays. Attach a Mac Mini to a non-HiDPI display and you can see that the font rendering is awkward.
Apple also avoids the issue by always working with integer scale in software. For a fractional scale, the output is downscaled by hardware. They also do not use exact percentage scales (for example, their 175% is actually 177.7778%, because it allocates the pixels better, i.e. it uses an 8-pixel block on the display for every 9 pixels in the framebuffer).
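A quick sanity check of those numbers (the constants come from the comment above):

```python
# Sketch of the scheme described above: render at an integer 2x, then let
# the hardware map 9 framebuffer pixels onto 8 display pixels.
RENDER_SCALE = 2        # software always renders at 2x
FB_TO_DISPLAY = 8 / 9   # hardware downscale ratio

effective_scale = RENDER_SCALE * FB_TO_DISPLAY
print(effective_scale)                   # ~1.7778
print(round(effective_scale * 100, 4))   # 177.7778, i.e. the "175%" setting
```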
> Apple avoids trouble and work by always using HiDPI displays. Attach a Mac Mini to a non-HiDPI display and you can see that the font rendering is awkward.
You may personally find the output awkward, but typographers will disagree. They didn't always have high density displays. They did always have superior type rendering, with output more closely matching the underlying design. Hinting was used, but they didn't clobber the shapes to fit the pixel grid like Microsoft did.
> Attach a Mac Mini to a non-HiDPI display and you can see that the font rendering is awkward.
Ironically for "always expose relevant options through system settings" Apple, you can still access font smoothing via the command line, e.g. "defaults -currentHost write -g AppleFontSmoothing -int 3". You can use 0-3, where 0 disables it (default) and 3 uses "strong" smoothing, with 1 and 2 in between.
Reading it up, "heated issue" is putting it mildly. That was a complete shitshow, with two GNOME devs - one from Red Hat specifically - not accepting the obviously awful font rendering as an issue and continuously insulting the reporters. God, I hate that type of FOSS dev.
And then on the other hand, you finally have a seemingly great solution, despite their sabotage. So, yeah, GNOME?
As a long-time developer against GTK (I started using it back in the 1.x days in the late 90s) this is really awesome to see.
I enjoyed the side-by-side comparisons of the old vs new renderer, and especially the idea of homing in on particular letters ('T' and 'e') and extracting them, that really made the improvement clear.
Cool stuff, and many thanks to the developers who keep pushing the GTK stack forwards.
The "old" renderer feels so new to me that I'd like to see the comparison against GTK+ 3.0 with pango <=1.42.
P.S. Since pango 1.44, some letters at some positions became blurry, and some letters sit closer to the previous letter instead of being in the middle. Actually, the latter issue might be needed to prevent the first one, in theory. In practice, there might be other constraints which force the corruption.
I'm curious - when you were doing research into the mechanics of hinting options, did you stumble onto any relevant discussion around allowing custom pixel geometries to be defined, to enable hinting on modern OLED / WRBG displays? There's a good thread on the topic here[0], with some people referring to it as 'ClearType 2' on the MS side [1]. On the oss side I know FreeType theoretically supports this[2], but I can't quite figure out how relevant the FreeType backend is to this most recent work.
It would be nice if monitors exposed info about subpixel geometry as part of their EDID data. An alternative would be to capture it in a hardware database.
AIUI the trend has been towards GUI frameworks dropping support for subpixel AA altogether, since that simplifies so many things[1], so I'm not holding my breath for the current limitations around unusual subpixel layouts being fully resolved on any platform. Apple made the switch to greyscale-only AA years ago, Microsoft is mid-transition with their newer GUI toolkits ignoring the system Cleartype setting and always using greyscale AA, and the GTK renderings in the OP are greyscale too. They're assuming that people will just get a HiDPI display eventually and then greyscale AA is good enough.
For one, subpixels aren't just lines in some order - they can have completely arbitrary geometries. A triangle, 3 vertical slats, a square split in four with a duplicate of one color, 4 different colors, subpixels that activate differently depending not just on chromaticity but also luminance (i.e., also differs with monitor brightness instead of just color), subpixels shared between other pixels (pentile) and so on.
And then there are screenshots and recordings that are completely messed up by subpixel antialiasing, as the content is viewed on a different subpixel configuration or not at 1:1 physical pixels (how dare they zoom slightly in on a screenshot!).
The only type of antialiasing that works well is greyscale/alpha antialias. Subpixel antialiasing is a horrible hack that never worked well, and it will only get worse from here. The issues with QD-OLED and other new layouts underline that.
The reason we lived with it was because it was a necessary hack for when screens really didn't have anywhere near enough resolution to show decently legible text at practical font sizes, on VGA or Super VGA resolutions.
I was a bit puzzled that the images were so blurry compared to the surrounding text. Then I realized that the 1px border around the before/after images forces them to be scaled down from an otherwise correct width of 600px to 598px.
While it doesn't solve the blurriness completely, removing the border with the inspector helps a lot.
I think the remaining blurriness comes from the images using greyscale hinting rather than subpixel hinting (the hinted pixels are not colored).
> Various font-related flags I found in solving for blurry fonts on wayland
Is there an environment variable to select Wayland instead of XWayland for electron apps like Slack and VScode where fractional scaling with wayland doesn't work out of the box?
Wow, I just tried those environment variables, and it makes a remarkable difference in the smoothness and fullness of every font. I'll probably be leaving this setting on until something breaks when it gets fixed, and I inevitably spend too much time trying to figure out why it's broken after forgetting what I changed.
I don’t have anything expert to add here except this is somehow a shockingly difficult problem.
When I boot into windows, the fonts especially in some applications look horrible and blurry because of my high DPI monitor. Windows has like 10 settings you can try to tweak high dpi fonts and man none of them look good. I think my Linux boot on the same machine has much better font smoothness and of course the MacBook is perfect.
Somehow most windows systems I see on people’s desks now look blurry as shit. It didn’t use to be this way.
I really don't understand why high-DPI monitors cause (rather than solve) this problem. I suspect Windows has some legacy application considerations to trade off against, but man - Windows used to be the place you'd go to give your eyes a break after Linux, and now it's worse!
I realize I am ranting against windows here which is the most cliched thing ever but really come on it’s like right in your face!
The perfection of font rendering in macOS is one of a handful of things that has spoiled me and makes it difficult to switch to Linux.
I guess we all have different issues we care about, but I'm always surprised when I have to point out how awful Windows is with fonts and people just shrug and say they didn't notice. For me it's painfully obvious to the point of distraction.
I don't think it's the high DPI screens themselves that cause this on Windows, rather the fonts have changed.
I'm pretty sure they used to be bit mapped, or had excellent hinting. Now that high dpi is common, maybe they figured that wasn't needed anymore. And indeed, on my 24", 4k monitor at "200%", windows is pretty sharp if I start it that way. If I change it while running, it becomes a shit-show. But when running at 100% on a FHD 14" laptop, sharpness is clearly lacking.
Regarding the Linux situation, yes, it's subjectively better on that same laptop. But it depends a lot on the fonts used. Some are a blurry / rainbowy mess. However, on Linux, I run everything at 100% and just zoom as needed if the text is too small (say on the above 24" screen).
Not sure this is shockingly difficult, especially when for a lot of Windows apps you can already deblur the fonts by clicking a high-DPI compatibility setting on a given exe file.
I wonder if we'll ever abandon resolution-based rendering for screens, instead using a PPI/DPI vector-based system?
Since the 80s I've been wishing for a 300/600dpi resolution-independent screen. Sure, it's basically like wishing for a magic pony, but I was spoiled by getting a Vectrex[1] for a birthday in the 80s, and I really liked the concept. I know the Vectrex was a different type of rendering to the screens we use today, but I still find it fascinating.
I wish for this too. You can get tiny screens with that kind of pixel density. My ebook reader is 300ppi and my phone is almost 650ppi!
It saddens me when I see people measuring things in pixels. It should all be measured relative to the font or perhaps the viewport size. The font size itself should just be how big the user wants the text which in turn will depend on the user's eyes and viewing distance etc. The size in pixels is irrelevant but is calculated using the monitor's PPI. Instead we get people setting font sizes in pixels then having to do silly tricks like scaling to turn that into the "real" size in pixels. Sigh...
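The relationship the comment argues for is one line of arithmetic: the user picks a physical size (a point is 1/72 inch), and the pixel count falls out of the display's PPI. A minimal sketch:

```python
def pt_to_px(points: float, ppi: float) -> float:
    # A point is 1/72 inch; the pixel size falls out of the monitor's PPI.
    return points * ppi / 72.0


# The same 12pt text needs very different pixel counts on different panels:
print(pt_to_px(12, 96))   # classic desktop monitor -> 16.0 px
print(pt_to_px(12, 218))  # Retina-class panel      -> ~36.3 px
```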
These comparisons should be presented with dynamic switching of identically sized images with identically placed text, instead of placing slightly different images side by side.
The new renderer definitely looks better, but the letters still have something fuzzy about them. They don't feel crisp. Is this due to the font or due to the rendering?
The fuzziness is just an artefact of parts of the font not lining up with the monitor's pixel grid. There are ways to deal with this, but they can distort the font in other subtle ways, so no solution is perfect.
In any case, it makes much less difference (almost none practically speaking) on hi-dpi displays.
One of the reasons web designers have issues with text looking different between Windows and macOS is that Windows' font renderer tries harder to force things to align with the pixel grid, producing a sharper result but slightly distorting some font characteristics. Apple's renderer is more true to the font's design, and can produce a little fuzziness like you see here. It also makes many shapes look a little bolder (at least on standard-ish DPI displays). A couple of old posts on the subject: https://blog.codinghorror.com/whats-wrong-with-apples-font-r..., https://blog.codinghorror.com/font-rendering-respecting-the-.... Differences in sub-pixel rendering also make a difference, so where people have tweaked those options, or just have the colour balance on their screens set a little differently (intentionally or due to design/manufacturing differences), you might see results that differ even further for some users, even on the same OS.
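A toy model of the pixel-grid mismatch described above (the numbers are invented for illustration): a one-pixel-wide vertical stem whose edge falls between pixel columns partially covers two columns, producing grey fringes instead of one solid black line.

```python
def coverage(stem_left: float, stem_width: float, column: int) -> float:
    """Fraction of a 1px-wide pixel column covered by a vertical stem."""
    left = max(stem_left, float(column))
    right = min(stem_left + stem_width, float(column + 1))
    return max(0.0, right - left)


# Unhinted: a 1px stem starting at x=10.4 smears across two columns.
print(round(coverage(10.4, 1.0, 10), 2))  # column 10: 0.6 coverage -> grey
print(round(coverage(10.4, 1.0, 11), 2))  # column 11: 0.4 coverage -> lighter grey

# Grid fitting snaps the stem to x=10.0, giving one fully covered column.
print(coverage(10.0, 1.0, 10))  # 1.0 -> solid, crisp
```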
For some reason, FreeType broke proper grid-fitting and now requires environment variable FREETYPE_PROPERTIES=truetype:interpreter-version=35 to activate it.
It's not quite right to say it broke proper grid-fitting, because that depends on what the fonts were designed and hinted for. The old one matches Windows 98 hinting (and therefore that era's core fonts), and the new one matches ClearType's hinting (and therefore that era's core fonts).
> The idea is that we just place glyphs where the coordinates tell us, and if that is a fractional position somewhere between pixels, so be it, we can render the outline at that offset just fine. This approach works—if your output device has a high-enough resolution (anything above 240 dpi should be ok).
So it just requires 6x more memory, GPU power and HDMI/DP bandwidth and prevents usage of large monitors ...
Honestly I'd love it if Linux just implemented a solution similar to what Apple does, which is rendering everything at 2x and then downscaling it to screen's native resolution. (So my "3008x1692" on a 4K screen is actually rendered at 6016x3384). Modern GPUs are strong enough to do this without breaking a sweat, and the result is very crispy and functional. Fractional scaling could still exist as a fallback for older systems.
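A back-of-the-envelope sketch of the cost of that approach, using the resolutions from the comment above:

```python
# Apple-style approach: render the logical resolution at 2x, then have the
# GPU downscale to the panel's native resolution.
logical = (3008, 1692)   # logical resolution chosen by the user
panel = (3840, 2160)     # the 4K panel's native resolution

render = (logical[0] * 2, logical[1] * 2)
print(render)  # (6016, 3384) - the actual framebuffer being rendered

# Rough cost versus rendering natively, measured in pixels shaded per frame:
overhead = (render[0] * render[1]) / (panel[0] * panel[1])
print(round(overhead, 2))  # ~2.45x more pixels than the native framebuffer
```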
This is what GTK used to do. It's less battery efficient for an inherently less crisp result, though. It also looks less crisp after rescaling on the much more common "slightly higher than normal DPI, closer to 1080p" type monitors (e.g. the 125% in the article) that you don't typically find in Apple setups.
Of course, that's why subpixel rendering is all a bit moot on Apple devices. For a long time now they've just toggled the default font rendering to the equivalent of "none none" in this article and relied on the high-quality screens the devices ship with/most users will plug in to make up for it.
That's what GTK used to do. The result looks much worse than fractional scaling, is much less crisp, uses a lot more battery, and means games run a lot slower.
Qt6 has had fractional scale early on and still supports subpixel AA. Qt5 near the end of its life got fractional DPI scaling support on X11, but not on Wayland. KDE generally seems to handle scaling better nowadays compared to GNOME.
WhyNotHugo|2 years ago
Relative units like this are usually considered best practice, because of the exact reasons that you've listed.
ho_schi|2 years ago
Background:
https://gitlab.gnome.org/GNOME/gtk/-/merge_requests/6190
boomskats|2 years ago
This is great work btw.
[0]: https://github.com/snowie2000/mactype/issues/932
[1]: https://github.com/microsoft/PowerToys/issues/25595
[2]: https://freetype.org/freetype2/docs/reference/ft2-lcd_render...
jsheard|2 years ago
[1] https://faultlore.com/blah/text-hates-you/#anti-aliasing-is-...
wrasee|2 years ago
For convenience (second is hinted)
+ https://blog.gtk.org/files/2024/03/Screenshot-from-2024-03-0...
+ https://blog.gtk.org/files/2024/03/hinting-125.png
You can really spot the difference.
WhereIsTheTruth|2 years ago
Nobody does it properly on Linux, despite FreeType's recommendations... a shame...
https://freetype.org/freetype2/docs/hinting/text-rendering-g...
It's even worse for light text on dark backgrounds... the text becomes hard to read...
GTK is not alone:
Chromium/Electron tries, but uses the wrong value (1.2 instead of 1.8) and doesn't do gamma correction on grayscale text:
https://chromium-review.googlesource.com/c/chromium/src/+/53...
Firefox, just like Chromium, uses Skia, so it has the proper default values, but it too ignores them for grayscale text:
https://bugzilla.mozilla.org/show_bug.cgi?id=1882758
A trick that I use to make things a little bit better:
In your .profile:
username923409|2 years ago
Thanks for the tip, though.
[1] https://en.wikipedia.org/wiki/Vectrex
azornathogron|2 years ago
But I don't think that's relevant here anyway, since the article refers to the auto-hinter which as far as I know was never patent-encumbered.
devit|2 years ago
It seems that they:
- Fail to properly position glyphs horizontally (they must obviously be aligned to pixels horizontally and not just vertically)
- Fail to use TrueType bytecode instead of the autohinter
- Fail to support subpixel antialiasing
These are standard features that have been there for 20 years and are critical and essential in any non-toy software that renders text.
How come GTK+ is so terrible?
EDIT: they do vertical-only rather than horizontal-only. Same problem; it needs to do both.
audidude|2 years ago
If you read the article carefully, it mentions it aligns vertically to the _device_ pixel grid.