A single wavelength can't reproduce all visible colors. These pixels are variable wavelength, but can only produce one at a time, so you'd still need at least 2 of these pixels to reproduce any visible color.
The fundamental problem is that color space is 2D[1] (color + brightness is 3D, hence 3 subpixels on traditional displays), but monochromatic light has only 1 dimension to vary for color.
[1]: https://en.wikipedia.org/wiki/Chromaticity
Ha, yeah, in particular these monochromatic pixels can't simply be white. Notably, ctrl-f'ing for "white" gives zero results on this page.
Relatedly, the page talks a lot about pixel density, but this confused me: if you swap each R, G, or B LED with an adjustable LED, you naively get a one-time 3x boost in pixel area density, which is a one-time sqrt(3)=1.73x boost in linear resolution. So I think density is really a red herring.
But they also mention mass transfer ("positioning of the red, green and blue chips to form a full-colour pixel") which plausibly is a much bigger effect: If you replace a process that needs to delicately interweave 3 distinct parts with one that lays down a grid of identical (but individually controllable) parts, you potentially get a much bigger manufacturing efficiency improvement that could go way beyond 3x. I think that's probably the better sales pitch.
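If you take the "no subpixels" claim out of the article, this technology still seems useful for higher DPI and easier manufacture.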
This is definitely a problem; if the control circuitry is up for it, you could PWM the pixel color, basically dithering in time instead of space to achieve white or arbitrary non-spectral colors.
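A minimal sketch of the idea, assuming the emitter can retune fast enough that the eye integrates the result (set_pixel is a hypothetical driver call, not anything from the article):

    import time

    def show_mix(set_pixel, wl_a_nm, wl_b_nm, duty_a, frame_hz=60, slices=8):
        """Approximate the average of two spectral colors by time-slicing
        one frame between wavelengths A and B; the eye integrates it."""
        slice_s = 1.0 / (frame_hz * slices)
        for s in range(slices):
            frac = s / slices
            set_pixel(wl_a_nm if frac < duty_a else wl_b_nm)
            time.sleep(slice_s)

    # e.g. splitting time between a red and a cyan-ish wavelength reads
    # as a desaturated, roughly white-ish mix:
    # show_mix(set_pixel, 630, 495, 0.5)

Whether this works in practice depends entirely on how fast the wavelength can actually be switched, which the article doesn't say.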
I'm assuming that in most cases they'll just make these act as RGB displays, either by sequentially tuning the wavelength of each pixel to red, green, blue in a loop, or by assigning each pixel to be red, green, or blue and just having them act as subpixels.
This seems like a non-problem: cut the display resolution in half on one axis and reserve two 'subpixels' for each pixel. Then you have a full-color display with only one physical pixel type, and each pixel needs one less subpixel than RGB. These displays could even produce some saturated colors, at specific wavelengths, that can't be represented on regular RGB displays.
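As a sketch of how a driver might split a target chromaticity across a pair of tunable subpixels: any point inside the CIE diagram lies on a chord between two points on the spectral locus, so even a brute-force search works. The xy values below are coarse approximations for illustration only, and the returned position is purely geometric (real drive levels would need the luminance-weighted center-of-gravity rule):

    # Coarse approximation of the CIE 1931 spectral locus: nm -> (x, y)
    LOCUS = {
        460: (0.144, 0.030), 480: (0.091, 0.133), 490: (0.045, 0.295),
        500: (0.008, 0.538), 520: (0.074, 0.834), 540: (0.230, 0.754),
        560: (0.373, 0.624), 580: (0.512, 0.487), 600: (0.627, 0.372),
        620: (0.691, 0.308),
    }

    def pick_pair(target):
        """Find two wavelengths whose mixing chord passes nearest target."""
        tx, ty = target
        best = None
        for wa, (ax, ay) in LOCUS.items():
            for wb, (bx, by) in LOCUS.items():
                if wa >= wb:
                    continue
                dx, dy = bx - ax, by - ay
                # project target onto the chord A->B, clamped to the segment
                t = ((tx - ax) * dx + (ty - ay) * dy) / (dx * dx + dy * dy)
                t = max(0.0, min(1.0, t))
                err = (ax + t * dx - tx) ** 2 + (ay + t * dy - ty) ** 2
                if best is None or err < best[0]:
                    best = (err, wa, wb, t)
        return best[1:]  # (wavelength A, wavelength B, position along chord)

    # D65 white sits near (0.3127, 0.3290):
    print(pick_pair((0.3127, 0.3290)))

With a finer locus table the error gets small everywhere inside the gamut, including the purples (a violet + red chord), which no single wavelength can reach.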
Human eyes have three different color receptors, each tuned to its own frequency, so it's already 3D. However, apart from human perception, light, just like sound, can contain any combination of frequencies (when you split the signal with a Fourier transform), and many animals do have more receptors than us.
There are plenty of monochromatic use cases. Right now hardware has a lot of orange.
Dynamic resolution / subpixel rendering. Retina looks really good already, not sure if the effect would be relevant or interesting but it might open up something new
One thing I noticed is that they were talking about demoing 12,000 ppi displays, which is way more resolution than you're going to resolve with your eye. So using 2 pixels is still probably a win.
> These pixels are variable wavelength, but can only produce one at a time
Citation needed. The article doesn't say anything about how the colors are generated, and whether they can only produce one wavelength at a time.
Assuming they are indeed restricted to spectral colors, dithering could be used to increase the number of colors further. However, dithering needs at least 8 colors to cover the entire color space: red, green, blue, cyan, magenta, yellow, white, black. And two of those can't be produced using monochromatic light -- magenta and white. This would be a major problem.
This vaguely reminds me of "CCSTN" (Color Coded Super Twisted Nematic) LCD displays, which were used in a few Casio calculators to produce basic colour output without the usual RGB colour filter approach.
https://www.youtube.com/watch?v=quB60FmzHKQ
https://web.archive.org/web/20240302185148/https://www.zephr...
Hm, thinking about this further, this would need dithering to work properly (which probably works fine, but the perceived quality difference would mean pixel density comparisons aren't apples-to-apples)
Presumably, you get to control hue and brightness per-pixel. But that only gives you access to a thin slice of the sRGB gamut (i.e. the parts of HSL where saturation is maxed out); dithering can solve that. Coming up with ideal dithering algorithms could be non-trivial (e.g. maybe you'd want temporal stability).
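For example, an ordered (Bayer) dither is temporally stable by construction: the threshold depends only on pixel coordinates, so static content never flickers frame to frame. A minimal sketch, assuming we're choosing per pixel between two achievable colors:

    # Classic 4x4 Bayer index matrix.
    BAYER4 = [
        [ 0,  8,  2, 10],
        [12,  4, 14,  6],
        [ 3, 11,  1,  9],
        [15,  7, 13,  5],
    ]

    def dither_choice(x, y, frac_a):
        """True -> pixel (x, y) shows color A; a fraction frac_a of the
        pixels in any region ends up showing A on average."""
        threshold = (BAYER4[y % 4][x % 4] + 0.5) / 16.0
        return frac_a > threshold

    # A 50/50 mix comes out as a stable, evenly spread pattern:
    for y in range(4):
        print("".join("A" if dither_choice(x, y, 0.5) else "B" for x in range(4)))

Error diffusion would look smoother but is exactly the kind of thing that shimmers under animation without extra care.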
You really can't think about single-wavelength tunable pixels as anything except the edge of HSL.
I think about it from the CIE "triangle" where wavelength traces the outer edge, or even the Lab (Luminance, a green/red, b yellow/blue) color space, since it's more uniform in perceivable SDR color difference (dE).
https://luminusdevices.zendesk.com/hc/article_attachments/44...
One key realization is that although 1 sub-pixel can't cover the gamut of sRGB (or Rec2020), 2 can, using wavelength and brightness control rather than 3 fixed RGB emitters. Realistically, this allows something like super-resolution, because your blue (and red) visual resolution is much lower than your green (e.g. 10-30 pix/deg rather than ~60 ppd). However, your eye's sensitivity off the XYZ peaks is lower, so perceived brightness would fall.
I guess what I'm saying is that a lot of the assumptions baked into displays have to be questioned and worked out for these kinds of pixels to get their full benefit.
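That's because the points on the outer edge of the CIE diagram are pure wavelengths, and you can get to any point inside by interpolating between two of them.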
> only gives you access to a thin slice of the sRGB gamut (i.e. the parts of HSL where saturation is maxed out)
Note that even if we restrict our attention to the max-saturation curve, these pixels can't produce shades of purple/magenta (unless, as you say, they use temporal dithering or some other trick).
You could use several pixels as sub-pixels or, if the color shift time is fast enough, use temporal dithering.
Even if these could produce just three wavelengths, if you can pulse them fast enough and accurately, the effect would be that color reproduction is accurate (on average over a short time period)
I'm not sure why saturation couldn't be controlled.
I probably missed something in the article, though I do see e.g. desaturated yellow in the photographs, so I'm not sure this is accurate.
If you can't control saturation, I'm not sure dithering would help; I don't see how you'd approximate a less saturated color from a more saturated one.
HSL is extremely misleading; it's a crude approximation built for 1970s computing constraints. An analogy I've used previously: think of there being a "pure" pigment where saturation is at its peak; mixing in dark/light (changing the lightness) changes the purity of the pigment, causing it to lose saturation.
That's not hugely surprising, given that (I believe) LEDs have always shifted spectrum-wise a bit with drive current (well, mostly junction temperature, which can be a function of drive current).
I guess that means they're strictly on/off devices, which seems furthered by this video from someone stopping by their booth:
https://youtu.be/f0c10q2S_PQ?t=107
You can clearly see some pretty shit dithering, so I guess they haven't figured out how to do PWM-based brightness (or worse, PWM isn't possible at all?)
I guess that explains the odd fixation on pixel density that is easily 10x what your average high-dpi cell phone display has (if you consider each color to be its own pixel, i.e. ~250dpi x 3)
It seems like the challenge will be finding applications for something with no brightness control etc. Without that, it's useless even for a HUD display type widget.
In the meantime, if they made 5050-sized LEDs, they would probably print money... which would certainly be a good way to fund further development of brightness control.
I doubt they can. Probably the process only works (or yields) small pieces, otherwise they'd be doing exactly what you suggest.
I also notice that their blues look terrible in the provided images. Which will be a problem. I don't think they get much past 490nm or so? That would also explain why they don't talk at all about phosphors, which seem like a natural complement to this tech... I don't think they can actually pump them. Which is disappointing :(
I understand that one of the big issues with microLED is huge brightness variation between pixels. Due to some kind of uncontrollable (so far) variations in the manufacturing process, some pixels output 1/10 the light (or less) as others. Ultimately the brightness of the whole display is constrained by the least bright pixels because the rest have to be dimmed to match. Judging by their pictures they have not solved this problem.
> I understand that one of the big issues with microLED is huge brightness variation between pixels. Due to some kind of uncontrollable (so far) variations in the manufacturing process, some pixels output 1/10 the light (or less) as others.
My understanding is that this is false. Available MicroLED screens (TVs) are in fact brighter than normal screens.
The issue with MicroLED is instead that they are extremely expensive to produce, as the article points out, due to the required mass transfer. Polychromatic LEDs would simplify this process greatly.
Would be fun if displays came full circle, with variable addressable geometry / glowing goo too.
Not quite a vector display, but something organic that can be addressed with stimuli like reaction-diffusion or Gaussian, FFT, Laplacian, and Gabor filters, Turing patterns, etc.
Get fancy patterns with the lowest amount of data.
https://www.sciencedirect.com/science/article/pii/S092547739...
https://onlinelibrary.wiley.com/doi/10.1111/j.1755-148X.2010...
I didn't realize we even had a discrete LED tunable across the visible spectrum, let alone a Micro-LED array of them. Anybody know where I can buy one? I want to build a hyperspectral imager.
I think a lot of these comments are missing the point: even if you have to reduce their reported density numbers by half, they made a display with dimensions of "around 1.1 cm by 0.55 cm, and around 3K by 1.5K pixels", which is insane! All without having to dice and mass-transfer wafer pieces, since every pixel is the same.
A lot of the article is focused on how this matters for the production side of things, since combining even 10 um wafer pieces from 3 different wafers is exceedingly time consuming, which I think is the more important part. Sure, the fact that each emitter can be tuned to "any colour" might be misleading, but even if you use rapid dithering like plasma displays did, and pin each emitter to one wavelength, you suddenly have a valid path to manufacturing insanely high density microLED displays! Hopefully this becomes viable soon, so I can buy a nice vivid and high contrast display without worrying about burn in.
I'm really curious about the reproducibility. The color is decided by the bandgap and the bandgap is tunable by voltage, but how temperature dependent is it, and how much does production variability impact it?
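If the variability turns out to be manageable, the fix could be as unglamorous as a per-pixel factory calibration table plus a temperature term. A hand-wavy sketch; the interface and every number here are invented, not from the article:

    from bisect import bisect_left

    class PixelCal:
        def __init__(self, cal_points, temp_coeff_nm_per_c=0.05, t_ref_c=25.0):
            # cal_points: (wavelength_nm, volts) pairs measured for THIS
            # pixel at t_ref_c during factory calibration.
            self.points = sorted(cal_points)
            self.k = temp_coeff_nm_per_c  # assumed linear thermal drift
            self.t_ref = t_ref_c

        def volts_for(self, wl_nm, temp_c):
            # Pre-shift the request to cancel thermal drift, then
            # linearly interpolate the calibration curve.
            wl = wl_nm - self.k * (temp_c - self.t_ref)
            wls = [p[0] for p in self.points]
            i = max(1, min(bisect_left(wls, wl), len(wls) - 1))
            (w0, v0), (w1, v1) = self.points[i - 1], self.points[i]
            return v0 + (v1 - v0) * (wl - w0) / (w1 - w0)

    cal = PixelCal([(450, 2.1), (500, 2.4), (550, 2.8), (600, 3.3)])
    print(round(cal.volts_for(550, temp_c=45.0), 3))  # slightly under 2.8

An on-board color sensor (as suggested below) could then refresh the table in the field instead of trusting the factory numbers forever.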
I imagine these displays could have color sensors attached to self-calibrate.
Or the variability is low and all you need is very precise voltages.
I think the first versions will be RGB displays with fixed colors, just no longer needing mass transfer. You could use tens of subpixels per pixel, reducing all worries about color resolution.
Make these into e.g. 1x1cm mini displays and mass transfer those into any desired display size.
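That sounds like it's getting close to being a really good screen for a VR headset.
4K virtual monitors, here we come!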
OLED tech has been very transformative for lots of my old gear (synthesizers and samplers mostly) that originally came with backlit LCD displays. But the OLEDs are offered in static colors, usually blue or amber, sometimes white, red, or green.
It would be very cool to have a display with adjustable color.
The promotional document focuses on wavelength tunability, but I imagine brightness at any one wavelength suffers, because emitting at one wavelength requires an electron to lose the amount of energy in that photon by transitioning from a high to a low energy state. Maximum brightness then corresponds to how many of these transitions are possible in a given amount of time.
Some states are not accessible at a given time (voltage can tune which states are available) but my understanding is the number of states is fixed without rearranging the atoms in the material.
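To put rough numbers on that last point (standard physics, not from the article): each photon carries E = hc/lambda, so a given optical power fixes the required transition rate.

    # Back-of-envelope: emission events per second for a given optical
    # output power at a single wavelength.
    H = 6.626e-34  # Planck constant, J*s
    C = 2.998e8    # speed of light, m/s

    def photons_per_second(power_watts, wavelength_nm):
        photon_energy_j = H * C / (wavelength_nm * 1e-9)  # E = hc / lambda
        return power_watts / photon_energy_j

    # A pixel emitting 1 microwatt of 550 nm light:
    print(f"{photons_per_second(1e-6, 550):.2e}")  # ~2.77e12 per second

So brightness is bounded by how many usable transitions per second the material can sustain at that particular energy, as the parent says.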
These still produce a single [adjustable] wavelength, which means some colors that are displayable on displays of today are not representable using just one of these, and multiples will be required.
Yes, it’d be two subpixels instead of the current three. It’s not clear that that’s worth the added complexity of having to control each subpixel across two dimensions (brightness and wavelength) instead of just one (brightness).
LEDs are somewhat temperature-sensitive devices, and getting repeatable high-granularity bit depth may prove a difficult problem in itself.
There are ways to compensate for perceptual drift like modern LCD drivers, but unless the technology addresses the same burn-in issues with OLED it won't matter how great it looks.
You may want to look at how DMD drivers handled the color-wheel shutter timing to increase perceptual color quality. There are always a few tricks people can try to improve the look at the cost of lower frame rates. =)
Incredible accomplishment, but the question remains what this will look like at the scale of a display on any given consumer device.
Of course, it's only just now been announced, but I'd love to see what a larger scale graphic looks like with a larger array of these to understand if perceived quality is equal or better, if brightness distribution across the spectrum is consistently achieved, how pixels behave with high frame rates and how resilient they are to potential burn-in.
They already have these, but people need to modify GPU designs before it is really relevant. The current AI hype cycle has frozen development in this area for now... so a super fast 1990s graphics pipeline is what people will iterate on for a while.
Nvidia is both a blessing and a curse in many ways for standardization... =3
I can certainly see these being useful in informational displays, such as rendering colored terminal output. The lack of subpixels should make for crisp text and bright colors.
I don't see this taking over the general purpose display industry, however, as it looks like the current design is incapable of making white.
My ultimate hope is that this will allow us to store and display color data as Fourier series.
Right now we only represent colour as combinations of red, green, and blue, when a colour signal itself is really a combination of multiple "spectral" (pure) colour waves, which can be anything in the rainbow.
Individually controllable microLEDs would change this entirely. We could visualize any color at will by combining them.
It's depressing that nowadays we have this technology yet video compression means I haven't seen a smooth gradient in a movie or TV show in years.
The human eye can't distinguish light spectra producing identical tristimulus values. Thus for display purposes [1], color can be perfectly represented by 3 scalars.
[1] lighting is where the exact spectrum matters, cf. color rendering index
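A small illustration of that reduction: integrate the spectrum against the three color matching functions and you're left with just X, Y, Z. The Gaussian lobes below are crude stand-ins for the real CIE 1931 CMFs, purely to show the shape of the computation:

    import math

    def gauss(x, mu, sigma):
        return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

    def cmf(wl):  # fake x-bar, y-bar, z-bar; NOT the real CIE tables
        return (gauss(wl, 600, 40) + 0.35 * gauss(wl, 445, 25),
                gauss(wl, 555, 45),
                1.7 * gauss(wl, 445, 25))

    def to_xyz(spectrum, lo=380, hi=730, step=5):
        """spectrum: callable nm -> power. Returns tristimulus (X, Y, Z)."""
        X = Y = Z = 0.0
        for wl in range(lo, hi + 1, step):
            s, (xb, yb, zb) = spectrum(wl), cmf(wl)
            X += s * xb * step
            Y += s * yb * step
            Z += s * zb * step
        return X, Y, Z

    print(to_xyz(lambda wl: 1.0))  # a flat spectrum's tristimulus values

Any two spectra that integrate to the same triple are metamers: indistinguishable on a display, even though a lighting engineer would care about the difference.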
Color data has three components for the simple reason that the human eye has three different color receptors. You can change the coordinate system of that color space, but three components will remain the most parsimonious representation.
Black levels would be determined more by reflectivity of the display than illumination.
https://www.porotech.com/technology/dpt/
Demo video: https://youtu.be/758Xzi_nK8w