item 16453503

Open Source Color Management Is Broken

135 points | helmchenlord | 8 years ago | lieberbiber.de

124 comments


jzl|8 years ago

The complaints in this article appear to be regarding OS-controlled LUT adjustments, i.e. via special X11 features. I've used color calibrated monitors on Linux in color-sensitive industries with hundreds or even thousands of artist seats (including film, animation, VFX, real-time, etc.) and the monitor has always been calibrated via the physical menus/buttons on the bezel and not a software/OS monitor-specific LUT adjustment.

I'm not judging which is better or worse, as it would indeed be nice if the calibration could be controlled by the OS. I'm just saying that adjusting the monitor hardware directly is what's being done in the pro Linux content creation world. Also, I imagine that certain brightness and color gamut controls could only happen from the bezel controls. DreamColors can switch between sRGB and P3, for example.

appleflaxen|8 years ago

i don't know color management, but it sounds like the situation is analogous to complaining that VLC on Linux doesn't have volume controls (hypothetically; of course it does in reality) when the solution is to reach over and twist your hardware volume knob.

it's good to know that you've found the controls sufficient on the systems you have adjusted!

mark-r|8 years ago

That would seem to be the only sane way, as anything else requires all the pieces end-to-end to be working properly in perfect harmony.

cratermoon|8 years ago

The Linux community will attempt to solve the problem by creating an entirely new color management system, which will get at best 80% done with a handful of outstanding hard bugs, at least one of which will be bad enough to break the entire thing except on one specific distro/GUI/hardware combination.

borplk|8 years ago

It will just be a sub-component of systemd. Named colord.

userbinator|8 years ago

...and it'll be difficult to turn off for those not in the professions named in the other comments here that require accurate colour: people who use monitors with smaller gamuts and would much prefer to have the "raw", "unmanaged" behaviour.

newnewpdro|8 years ago

It's unclear to me why there's no color palette/correction table exposed in libdrm/KMS at the kernel level. It should just be part of the video mode, maybe with a bit to indicate when the driver/hardware doesn't support changing it. But it should always be readable, returning a linear identity table when the support isn't there; that won't be worse than what we have now.

Then the layers up the stack like Wayland/X/GNOME/KDE are just messengers to/from the bottom @ drm.

We also need floating point frame buffers to be first-class citizens at the KMS level. I don't want to be forced into OpenGL/Vulkan just to be able to have the hardware apply gamma correction on a software-rendered frame buffer, and if I have the hardware do color correction, it kind of needs the HDR of floats - not uchars, which I don't think libdrm supports with dumb buffers of today. If not floats, at least more bits than 8 per color component.

Programs like Plymouth or other embedded-style applications running directly on libdrm should be able to have color-corrected output without needing bespoke software implementations and their own calibration tables. I should be able to tell the kernel the correction table, maybe compile it in, or pass it as another payload on the kernel command line.

Hell, there are fairly well-known simple algorithms for generating approximate color tables from a single gamma value. If I only want to make things look "better" in my Linux kiosk, and don't care about absolute color correctness, let me stick a drm.gamma=.55 field on the kernel command line to generate the correction table in lieu of a full calibrated table.

braderhart|8 years ago

Have you created it? Because if not, that might explain why what you want doesn't exist in such detail.

mark-r|8 years ago

If you write the calibration info to the video LUT, why does software also need to know about it? Isn't your monitor displaying perfectly calibrated sRGB at that point?

theatrus2|8 years ago

This is assuming all the content is sRGB, which is not a safe assumption especially if you care about color management to begin with.

newnewpdro|8 years ago

I know just from writing graphics hacks that you get significantly better results if you do the correction in the HDR color space before you produce the 24-bit, 8-bits-per-component pixels shipped to the GPU.

If by "video LUT" you mean something at the CRTC or even after it on some external device, if the software producing the visuals has already reduced the pixels down to 8-bits-per-component before they hit the LUT, then you've lost accuracy particularly in the small values.

This is why it's desirable to do one of the following:

1. inform the software of the LUT and let it perform the transform before it packs the pixels for display

2. change the entire system to have more bits per color component all the way down to the framebuffer, then the per-component LUTs at the CRTC can profitably contain > 256 entries.

I'm not an expert in this field at all, just play with graphics hacks. But this is what I've come to understand is the nature of the issue.

edit:

To clarify, the reality implied by the need for correction is that some areas of the 0-255 range of values are more significant than others. When you do a naive linear conversion from whatever precision the application is operating in down to the 24-bit RGB frame buffer, you've lost the increased accuracy in the regions that happen to actually be more significant on a given display. So you'd much rather do the conversion before throwing away the extra precision, assuming the application was working in greater than 24-bit pixels.

andmarios|8 years ago

Not necessarily. After calibration your monitor displays as accurately as it can sRGB (or any other profile you calibrated for). It still may miss or shift parts of sRGB though.

Thus your software has to know what your display is capable of. It can use this information to show you which parts of the photograph you edit are not shown accurately.

seba_dos1|8 years ago

> Except that they only set it for the primary output. Turns out if you have multiple displays, the profile for the first one is put into the _ICC_PROFILE X11 atom, but the profile for the second one in the _ICC_PROFILE_1 X11 atom, for the third display in the _ICC_PROFILE_2 X11 atom, and so on. It’s just that nobody seems to do this.

Sounds like an easy thing to fix. I'd suggest the author try making some patches - don't know about GNOME, but KDE is pretty friendly and easy to contribute to.

pjmlp|8 years ago

Not everyone is a developer, and this type of comment is why everyone who cares about UI/UX design professionally eventually goes back to Windows or macOS.

hughsient|8 years ago

Not an easy thing. The ICC-in-X specification specifies the index as the Xinerama screen number, which has no meaning with XRANDR-on-XOrg, and even less meaning on Wayland. There's nothing in the protocol to tie the ID to a monitor, or even a predictable hotplug order. This is why the device-id in colord exists.

Source: colord author.

jhasse|8 years ago

GNOME isn't.

unhammer|8 years ago

I've been using the ColorHug 1, and found it worked quite nicely. The included Fedora CD (yes, this was some time ago) let me calibrate just fine. Once I wanted to do it straight from XFCE, I ran into a few challenges, but it wasn't insurmountable: https://askubuntu.com/questions/427821/can-i-run-gcm-calibra... Basically, you have to run xiccd in the background, since XFCE doesn't have built-in colord support to set the X11 atom[1]. I've never used multiple monitors though, so I don't know if that's possible with xiccd+XFCE instead of GNOME.

[1] https://bugzilla.xfce.org/show_bug.cgi?id=8559

cornholio|8 years ago

I think niche-market software, used by a limited number of highly specialized professionals, is somewhat incompatible with the open source economic model. When a piece of software is used by very many users, and there is a strong overlap with coders or companies capable of coding (say, an operating system or a web server), open source shines: there is adequate development investment by the power users, in their regular course of using and adapting the software, that can be redistributed to regular users for free in an open, functional package.

At the other end of the spectrum, when the target audience is comprised of a small number of professionals who don't code (for example, advanced graphics or music editors or an engineering toolbox), open source struggles to keep up with proprietary software because the economic model is less adequate. Each professional would gladly pay, say, $200 to cover the development costs of a fantastic product they could use forever, but there is a prisoner's dilemma: your personal $200 donation does not make others pay and does not directly improve your experience. Because the userbase is small and not software-oriented, occasional contributions from outside are rare, so the project is largely driven by the core authors, who lack the resources to compete with proprietary software that can charge $200 per seat. And once the proprietary software becomes entrenched, there is a strong tendency toward monopolistic behavior (Adobe) because of the large moat and no opportunity to fork, so people will be asked to pay $1000 per seat every year by the market leader simply because it can.

A solution I'm brainstorming could be a hybrid commercial & open source license with a limited, 5-year period where the software, provided with full source, is commercial and not free to copy (for these markets DRM is not necessary; license terms are enough to dissuade most professionals from making or using a rogue build with license key verification disabled).

After the 5-year period, the software reverts to an open source hybrid, and anyone can fork it as open source or publish a commercial derivative with the same time-limited protection. The company developing the software gets a chance to recover its initial investment, and must continue to invest in it to warrant the price of the latest non-free release, or somebody else might release another free or cheap derivative starting from a 5-year-old release. So the market leader could periodically change, and people would only pay to use the most advanced and innovative branch, ensuring that development investment is paid for and then redistributed to everybody else.

gugagore|8 years ago

Is there any way to display something in Linux without any color management? Can you access the buffer that gets sent directly to the monitor?

I've wanted to know whether two colors I'm displaying actually get distinguished by the monitor, or if the LUT maps them to the same output value.

Bitcoin_McPonzi|8 years ago

Many of my (Hollywood) clients need to do color grading and other accurate color work.

What platforms do they all use? Not Macintosh, despite its reputation for being the platform for "graphics professionals" (it was missing 10 bit/channel color until very recently). And not Linux, despite its use in render farms.

They use Windows 10 and HP DreamColor monitors. That's the only platform that works and works well for people who need to care about color.

ancientworldnow|8 years ago

I'm a colorist. Many of us do use windows, many use OSX, many more use Linux. Every major color critical application supports many types of LUTs and color management.

Further, HP DreamColors have tons of problems and aren't considered solid for color-critical work (but are fine for semi-color-accurate stuff like intermediate comps, etc.). Color-accurate work is done over SDI with dedicated LUT boxes handling the color transforms, with the cheapest monitors being $7500 Flanders Scientific Inc 25" OLED panels.

danceparty|8 years ago

I don't agree: the entire visual effects industry, including their color departments, runs on Linux. Baselight and Resolve are the two most common color correction programs in the industry; Baselight runs exclusively on Linux, and the big color companies (Company 3, EFILM, Technicolor) all run Resolve on Linux. Coloring is done either on projectors or broadcast monitors (something like a Sony PVM-A250 on the low end at ~$6,000).

AboutTheWhisles|8 years ago

Every large visual effects studio runs on linux, with hundreds of linux workstations at each one. Color sensitive work like lighting and compositing has been done for well over a decade on linux. Artist workstations are calibrated and every major computer graphics application has support for look up tables.

giovannicarruba|8 years ago

After reading this Linux rant and seeing how Apple is systematically marginalising its Pro customers, I'm actually inclined to believe you. Windows might be a pain for John Doe sometimes, but Microsoft also makes sure an insane number of obscure professional features (like color management) keep working.

52-6F-62|8 years ago

Anecdotal, but:

I currently work for a media conglomerate where colour tends to matter a lot to both the print and digital channels. I'm not sure which monitors they use, as I tend to work rather separately from that group, but they all work on Macs; that's been the case in publishing since the late '80s/early '90s, and there seems to be no move to stray.

gigogkggi|8 years ago

This all sounds interesting, but I have zero clue what the author is talking about. Can somebody explain what is being discussed, what color calibration is, and so on? I'm not a designer, but a systems programmer. I do understand RGB and HSV colors, but that's it.

GuiA|8 years ago

If you do any professional color work, you want a color calibrated display. This ensures that the colors you see on your screen will be the same as the ones on your designer colleague’s screen, and the same as the ones that come out of the printer’s factory, for instance.

Higher end displays are already pretty decently calibrated out of the factory, but if you want to be exact you will need to buy an external piece of hardware that will measure your display’s colors and tell you how off they might be.

The author bought a piece of color calibrating hardware that was meant to be open in design and work with Linux, as presumably he wants to support these efforts.

But he encountered a bevy of problems, ranging from packages not updated in a while to things that just plain don’t work as documented, and got frustrated.

Understandable, since on macOS or Windows with proprietary hardware, this would have been a 5 minute process. The author is sad and frustrated that the open source alternatives aren’t there.

detaro|8 years ago

If you display the same RGB value on different screens (or other outputs like prints) it'll be a different actual color being displayed. Color calibration measures the actual color created and creates correction information that software can use to make the color conform to a standard, so you have better control over the output.

gsich|8 years ago

The author cries about not being able to boot from the USB stick and about his ColorHug2 possibly being broken.