top | item 13688629

Hyperspectral analysis with just an app

74 points | sschueller | 9 years ago | fraunhofer.de | reply

33 comments

[+] kragen|9 years ago|reply
This is very interesting, but the headline is a baldfaced lie.

The headline "hyperspectral analysis with just an app" is unfortunately kind of bullshit. (And Fraunhofer seems to have realized this, as the current headline is just "App reveals constituents".) As highd points out, this will give you nine spectral dimensions instead of the usual three, or eight rather than two after you drop luminance, which will almost certainly be the first principal component. From eight spectral dimensions you can maybe get back to eight wavelength bands.

(And you do get nine spectral dimensions rather than the usual three, because the spectra of the light emitted from the display are not going to exactly match the response spectra of the pixel colors in the camera. But they'll be close, so some of those dimensions will have very little energy, so they'll be very noisy, and JPEG compression is likely to be fatal. And of course they vary from one device to another.)
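The nine-dimension claim above follows from a simple linear model: each of the three display primaries illuminates the scene in turn, and each of the three camera channels integrates the reflected light, for 3 × 3 = 9 numbers per pixel. A minimal sketch in numpy, with invented Gaussian spectra standing in for real display and camera curves:

```python
# Sketch of why screen illumination yields at most nine spectral dimensions.
# All spectra here are made up for illustration, not measured from a device.
import numpy as np

wl = np.linspace(400, 700, 61)  # wavelength grid, nm

def gaussian(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Three display primaries (narrow) and three camera channel responses (broad).
display = np.stack([gaussian(c, 15) for c in (610, 540, 460)])  # R, G, B
camera = np.stack([gaussian(c, 40) for c in (600, 530, 470)])

reflectance = 0.5 + 0.3 * np.sin(wl / 50.0)  # some made-up material spectrum

# Measurement matrix: illuminate with primary i, read out camera channel j.
M = np.einsum('iw,w,jw->ij', display, reflectance, camera)
measurement = M.flatten()  # nine numbers per pixel, and no more
```

However finely you sample the reflectance, everything the phone can learn about it is squeezed through this 9-vector, which is the sense in which the result is multispectral rather than hyperspectral.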

An eight-band image is not a fucking hyperspectral image. It's multispectral, like Landsat. But that's still enough to do quite a bit of material discrimination. Probably not even close to enough to detect pesticide residue, though.

The only way you could get more information this way would be through nonlinear effects, where illuminating the object with twice the light gives you a scattered result significantly different from twice the original scattered result. Those don't contribute significantly to the spectra of ordinary materials except under extremely intense lighting conditions, although I've seen two-photon effects do interesting things with glow-in-the-dark paint and a red laser pointer, and of course there are devices (like green laser pointers!) that take advantage of frequency doubling, by using exotic nonlinear crystals.

[+] AndrewKemendo|9 years ago|reply
Not to give them too much credit, because the article doesn't go into this detail, but it would in theory be possible to use the speaker/mic to utilize ultrasound [1] and to pick up IR, as on certain devices there is no IR filter on the front-facing camera [2]. So you could combine RGB inputs with background IR pickup and ultrasound to get actual hyperspectral data.

[1]https://www.infosecurity-magazine.com/blogs/ultrasonic-cross...

[2] http://kenstechtips.com/index.php/how-to-see-the-invisible-i...

[+] kortex|9 years ago|reply
As a chemist this headline makes me actually angry.
[+] nom|9 years ago|reply
The idea itself is great, but the quality of the result obviously depends heavily on the hardware, namely the spectra of the display and camera.

An LCD has a totally different spectrum (broad) from an LED display (single wavelength). Cameras, on the other hand, always have a broad spectrum, but it varies depending on the color filters, the response curve of the pixels, and the post-processing (e.g. the demosaicing algorithm). Regarding the display's response curve: that also has to be calibrated for, and it's not purely a hardware parameter, because it can be modified in software, for example by the iOS Night Shift feature.

Making this work with a single piece of hardware is hard enough, and I wonder if they'll target the Android market at all. My guess is they're going to release it only for iPhones because the hardware is so homogeneous.

Also, the press release photo made me giggle, the front camera isn't even facing the object they are scanning :D

Edit: Saying LEDs have a single wavelength is probably not correct, but they have a very narrow band depending on the technology used.

[+] smallnamespace|9 years ago|reply
Couldn't you in theory calibrate this for any feasible device using a known light source and a large enough set of color swatches with known response curves? Inferring the device parameters should just be a matter of solving a system of equations given enough constraints, and if you overconstrain the system you also get an estimate of how much noise is in your parameters.

In this case, you need to simultaneously estimate the LED wavelengths as well as the response curve of the camera.

Not saying it'd necessarily be worthwhile to do, but if that actually worked, all you'd need to do is send a light source + a color swatch booklet through the mail to a potential user if the device isn't in the database yet.
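The calibration idea above can be sketched as an ordinary least-squares problem: with the illuminant and swatch reflectances known, each swatch measurement is one linear equation in the unknown camera response. Band count, swatch count, and all spectra below are invented for illustration:

```python
# Sketch of the overconstrained calibration idea: recover one camera
# channel's spectral response from measurements of known swatches under a
# known light source. Noiseless toy data; real data would leave a residual.
import numpy as np

rng = np.random.default_rng(0)
n_bands, n_swatches = 8, 24  # 24 equations, 8 unknowns: overconstrained

light = np.linspace(0.5, 1.0, n_bands)  # known illuminant spectrum
swatches = rng.uniform(0.1, 0.9, size=(n_swatches, n_bands))  # known reflectances
true_response = np.exp(-0.5 * ((np.arange(n_bands) - 3) / 2.0) ** 2)

# Forward model: measurement_k = sum_b light[b] * swatch[k, b] * response[b]
A = swatches * light  # (n_swatches, n_bands) design matrix
measurements = A @ true_response

# Least squares; with more swatches than bands, the residual norm is an
# estimate of how much noise is in the measurements.
estimate, residual, *_ = np.linalg.lstsq(A, measurements, rcond=None)
```

With real swatches the spectra would only be known at some finite band resolution, so this recovers a coarsely sampled response curve rather than a continuous one.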

[+] notnot|9 years ago|reply
So they're using the screen consisting of three monochromatic LED types as a full-spectrum light source? I don't think that works... You still just get three points on the spectrum, same as the Bayer filter on the camera.

I suppose if the three screen LED wavelengths were significantly different from the three camera filter wavelengths then you could:

Illuminate with screen Red to get Rscreen.

Illuminate with screen Green to get Gscreen.

Illuminate with screen Blue to get Bscreen.

Use ambient full-spectrum light to get Rfilter, Gfilter, and Bfilter.

Then you'd have 6 points on the spectrum.

[+] highd|9 years ago|reply
I think you're correct, barring significant nonlinearity in the Bayer mask or object. Technically you can get 9 linearly independent points: each combination of light channels on with each combination of Bayer mask channels. Ideally only 3 of those will be nonzero, but if the Bayer mask is imperfect you'll see some illumination on adjacent channels. Environmental background is subtracted out from all of them since it's unknown, so that doesn't give another point.
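The point about mask imperfection can be made concrete: with an ideal (non-overlapping) Bayer mask, the 3 × 3 illuminant-by-channel coupling matrix is essentially diagonal, so only 3 of the 9 measurements carry independent information; with realistically broad, overlapping filters, the off-diagonal entries become substantial. A sketch with invented spectra:

```python
# Sketch of ideal vs. overlapping Bayer mask response to three narrow
# display primaries. All curves are invented Gaussians, for illustration.
import numpy as np

wl = np.linspace(400, 700, 61)

def band(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

primaries = np.stack([band(c, 10) for c in (610, 540, 460)])  # narrow LEDs

ideal_mask = np.stack([band(c, 12) for c in (610, 540, 460)])  # no overlap
broad_mask = np.stack([band(c, 45) for c in (610, 540, 460)])  # overlapping

# Coupling matrix: entry (i, j) is how strongly primary i excites channel j.
M_ideal = primaries @ ideal_mask.T
M_broad = primaries @ broad_mask.T
```

With the narrow mask, the off-diagonal entries are negligible; with the broad mask, they are a sizable fraction of the diagonal, which is exactly where the extra (noisy) dimensions come from.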

There's also no way you're measuring pesticide residue with that; I doubt that would even be possible with a high-end visible hyperspectral camera. Maybe with a Raman spectrometer.

I've designed a couple versions of cell phone camera-based spectrometers and spectral imagers, so I'm relatively familiar with the design principles.

[+] nom|9 years ago|reply
A Bayer filter doesn't give you single wavelengths; its channels have broad, somewhat overlapping spectra. An LED display, on the other hand, has three distinct narrow bands.

I'm not sure how their inverse algorithm could work, but I have a feeling it should be possible to get more than three points of the spectrum by displaying multiple light patterns.

Regarding full-spectrum ambient light: they can't use it at all because they have to subtract it from the images. You can only recover spectral information from the light you control. At least that's what I'm thinking right now.

[+] danbruc|9 years ago|reply
That was also what I was thinking. But given that neither the LEDs nor the Bayer filters has an actual line spectrum, it might be possible to obtain more information than under the assumption that they are ideal line spectra. Could there also be some nonlinearities? Does the spectrum of an LED change somewhat depending on the input power?
[+] mozumder|9 years ago|reply
This is far from hyperspectral imaging, where you're supposed to have 256 or 1024 bands of colors per pixel.
[+] leecarraher|9 years ago|reply
I actually built a little chameleon throwie concept years ago based on this principle. It used two RGB LEDs as the output light and stepped through the colors, then used the unused LED to measure the light (LEDs have photosensitive resistance). Once it figured out the color of the thing it was on (dumb mixing of resistances), it would output the guessed color using PWM on both RGB LEDs. In short, though, it didn't work very well. Also, like others have said, LEDs have very narrow bandwidths (by design, as it uses less power). If this weren't the case, people probably wouldn't be paying around $2,000 for a light bulb (http://www.shop.spectrecology.com/USB-ISS-UV-VIS-illuminate-...)
[+] msds|9 years ago|reply
I tried to do this exact thing years ago too! Except I only used one RGB led, and thus could never sense the red channel, because the bandgap of green and blue LED dies is too big to really get any signal from red light. Isn't it annoying when your project is up against quantum mechanics?

I also built a swarm of throwies that used a single LED to synchronize their blinking. That project was much cooler, and I wish I had better documentation...

[+] nom|9 years ago|reply
Very interesting idea!

I'm guessing you didn't try to calibrate the response curves of the LEDs (in both directions)? I'm not sure the desired effect is even feasible (assuming the receiving bandwidth is as narrow as the transmitting one), but it should give better results? And if it's not possible at all, you'd surely notice that during calibration?

[+] teilo|9 years ago|reply
Were it not for the source, I would have called this a hoax.
[+] chillingeffect|9 years ago|reply
but but but... how many dimensions do they expect to extrapolate from the three available by illuminating from the screen?

I'm anticipating someone snapping an image and an app saying, "That's either a fresh organic grape, a tractor tire or a leg of lamb."

Will you have to tell it "This is a kale leaf" and let it evaluate the signal levels relative to other kale leaves?

[+] Etheryte|9 years ago|reply
As far as I can tell, their research has nothing to do with directly identifying objects, only identifying properties of objects.
[+] krapht|9 years ago|reply
Hmm, how accurate is it, though? How do you calibrate such a system when the ambient illumination could be anything?
[+] teilo|9 years ago|reply
If I understand the article, that problem is alleviated by illuminating the object with only a single color of light at a time in rapid succession, using the screen of the phone. Presumably, the camera would filter out ambient light from the result by sampling that first.

It is certainly not going to be precise, but it's quite an achievement if they can make it work within the limits of the device.
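The ambient-subtraction step described above is simple to sketch: capture one frame with the screen dark, then subtract it from each screen-lit frame so only the controlled illumination remains. Toy numpy frames stand in for real camera data:

```python
# Sketch of ambient subtraction: one dark frame cancels the unknown room
# light out of each screen-lit frame. Toy 4x4 frames, not real camera data.
import numpy as np

rng = np.random.default_rng(1)
ambient = rng.uniform(0.2, 0.4, size=(4, 4))  # unknown room light

def capture(screen_contribution):
    # The sensor sees the sum of ambient light and screen illumination.
    return ambient + screen_contribution

dark_frame = capture(np.zeros((4, 4)))      # screen off
red_lit = capture(np.full((4, 4), 0.1))     # screen showing pure red

red_only = red_lit - dark_frame             # ambient cancels out exactly here
```

In practice the cancellation is only approximate, since the ambient light, exposure, and sensor noise all drift between frames, which is one reason the low-energy spectral dimensions end up so noisy.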

[+] jaydub|9 years ago|reply
Just to confirm, is this a smart-phone based spectrometer? (Why don't they use that term?)
[+] RandomOpinion|9 years ago|reply
>Just to confirm, is this a smart-phone based spectrometer? (Why don't they use that term?)

Spectrometers generally produce a single spectrum from a light source.

A hyperspectral imager produces a spectrum for each pixel in the image. See https://en.wikipedia.org/wiki/Hyperspectral_imaging
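The distinction drawn above is easiest to see in terms of data shapes: a point spectrometer returns a single spectrum, while a hyperspectral imager returns a full data cube with one spectrum per pixel. Band count and image size here are arbitrary:

```python
# Data-shape illustration of spectrometer vs. hyperspectral imager output.
import numpy as np

n_bands, height, width = 128, 480, 640

point_spectrum = np.zeros(n_bands)                       # one spectrum total
hyperspectral_cube = np.zeros((height, width, n_bands))  # spectrum per pixel
```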

[+] angry_octet|9 years ago|reply
High tech spectral analysis, with an apparent goal of enabling death dealing anti-GMO pseudo science. Great work Fraunhofer, I'm expecting an impressive Scientology e-meter app next.