I would like to see computational photography applied to raw images from DSLRs and MILCs with APS-C and larger sensors. Perhaps Canon, Nikon, Sony, and Fujifilm could have built-in options in their cameras for ‘social media mode’, with a modicum of noise reduction (honestly unnecessary at ISOs lower than about 1600 for modern cameras), but drastically improved HDR and white balance.
Many of these cameras can take bracketed[1] exposures, and the SNR in even a single image from such sensors is immense compared to the tiny sensors in phones. Surely, with that much more data to work with, HDR would be much nicer and without the edge brightening typically seen in phone HDR images.
[1]: https://www.nikonusa.com/en/learn-and-explore/a/tips-and-tec...
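A rough sketch of what merging such a bracket could look like, assuming the frames are already loaded as linear float arrays (the file names, the weighting scheme and the load_linear helper are placeholders of mine, not any camera's actual pipeline):

    import numpy as np

    def merge_brackets(frames, exposure_times):
        """Merge bracketed exposures into one linear HDR radiance estimate.

        frames: list of float arrays in [0, 1], linear (demosaiced and linearized)
        exposure_times: matching list of exposure times in seconds
        """
        num = np.zeros_like(frames[0])
        den = np.zeros_like(frames[0])
        for img, t in zip(frames, exposure_times):
            # Trust mid-tones most; down-weight clipped highlights and deep shadows.
            w = np.clip(1.0 - np.abs(img - 0.5) * 2.0, 0.05, 1.0)
            num += w * (img / t)   # scale each frame back to scene radiance
            den += w
        return num / den           # weighted average radiance estimate

    # Hypothetical usage: three brackets at -2/0/+2 EV around 1/100 s
    # frames = [load_linear("dark.tif"), load_linear("mid.tif"), load_linear("bright.tif")]
    # hdr = merge_brackets(frames, [1/400, 1/100, 1/25])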
They already are. However, most photographers do not appreciate this type of distortion being applied to their images.
At a glance, my Samsung Note 22 Ultra takes better pictures than my Nikon D7500. At a glance.
However, as soon as you want to actually DO anything with the image, like view it in any real detail, or on anything but a tiny screen, reality returns. While the phone is absolutely fantastic for a quick snapshot, it just does not come close to the definition of the older camera with the bigger sensor.
There is no reason to do this. If you want to apply the algorithms an iPhone uses, simply take a burst and post-process later for HDR or whatever. I remember being in middle school and messing around with Hugin and my Minolta bridge camera…
People who value quick shots and edits, and don't care about quality or editing things later, don't mind an iPhone doing all this behind the scenes - but it is irreversible. The sort of error in the article would drive me and other photographers up the wall.
Also, an iPhone has a CPU and ISP that outclass desktops from only a few years ago - camera manufacturers simply don't have the same compute available.
On the other hand, some brands do provide interesting computational photography in their cameras at the very high end. Panasonic mirrorless full frame cameras have a pixel shift mode for super-resolution with no Bayer interpolation, with some ability to fix motion between steps. Phase One has frame averaging and dual exposure in their IQ4 digital backs, for sequential capture into a single frame and super high dynamic range respectively.
I'm not exactly sure what you're asking for here. In my experience (and as seen in the article) the image processing in most digital cameras will already blow an iPhone out of the water.
As far as I know, iPhone and Android aren't doing anything that isn't already done by digital cameras. They ramp up the settings on things like noise reduction and sharpness to balance out their tiny sensors, but it's more or less the same algorithms that the cameras are using.
Good cameras even allow you to tweak the settings and control RAW conversion right on the camera. The author could have botched the noise reduction on his Fujifilm to match the iPhone if he wanted to. [0]
[0] https://www.jmpeltier.com/fujifilm-in-camera-raw-converter/
It's definitely done! And what's better, you can do far more with a proper raw file from a large sensor. The trick is that the workflows aren't as simple as having it happen automatically.
In my workflow I use DxO PhotoLab, which has excellent facilities for really bringing out a single image.
Sometimes I want to go hardcore with a landscape and take multiple shots manually and blend them together using software like Aurora HDR (example: https://www.flickr.com/photos/193526747@N04/52219385902/ ). That image is five exposures stacked together using a bit of computational photography and adjusted for saturation.
If you want something that will get you decent results fast and works with raws, you can also go with Luminar: https://skylum.com/luminar
An iPhone image looks better at first snap, but my Z5's images blow it out of the water once I give them some love in the edit room.
Long ago I did some experiments with super-resolution[1], a technique of aligning images to sub-pixel precision and stacking them; the usable information grows roughly with the square root of the number of exposures.
I was using Hugin to align the images from my Nikon DSLR. I found that you can get to at least double the resolution in both dimensions fairly quickly, but you'll never get to "enhance" like in TV shows.
[1] https://en.wikipedia.org/wiki/Super-resolution_imaging
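The core align-and-stack step, reduced to a toy sketch (integer-only phase-correlation alignment and plain averaging; this is a stand-in for the idea, not Hugin's actual pipeline, which also does sub-pixel work):

    import numpy as np

    def align_shift(ref, img):
        # Estimate the integer (dy, dx) translation of img relative to ref via phase correlation.
        cross = np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)
        corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-9)).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        if dy > ref.shape[0] // 2:
            dy -= ref.shape[0]       # unwrap shifts past the midpoint to negative values
        if dx > ref.shape[1] // 2:
            dx -= ref.shape[1]
        return dy, dx

    def stack(frames):
        ref = frames[0].astype(np.float64)
        acc = ref.copy()
        for img in frames[1:]:
            dy, dx = align_shift(ref, img)
            acc += np.roll(img, (-dy, -dx), axis=(0, 1))   # undo the estimated shift
        # Averaging N aligned frames cuts uncorrelated noise by roughly sqrt(N).
        return acc / len(frames)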
I've got an Olympus M43 camera, which has an image sensor that's both significantly smaller than full frame, and significantly larger than a phone. I've found that I can do absolutely amazing things to the RAW image later. There are these processes now that will use ML to remove noise, and it blows my mind every time. I can shoot at ISO 6400 now without even thinking about it. I used to cringe when I had to hit 1600.
Olympus (OM Systems?) should build this stuff into their cameras. I used to have a bit of inkling to some day "upgrade" to full frame, but not any more.
Most already offer that in their various automatic modes. We don't like it because whatever can be done computationally on a camera, can be done 10x better on a real computer with more hardware and finer controls.
It's going to take ILCs switching to sensors like the Z9, without a physical shutter, doing very high frame rates at full resolution.
I'd be interested in computational photography on ILCs if they allowed tuning it -- with phones, it comes with a bunch of other stylistic choices, and I want control over that stuff.
This would also pair well with Fujifilm's lineup, which already includes camera features focused on in-camera processing.
> I would like to see computational photography applied to raw images from DSLRs and MILCs with APS-C and larger sensors.
Wash your mouth out with soap! I did not spend five thousand dollars on a D850-based macro rig to have it produce results no better in quality than what I can get from my phone.
I love my big, heavy Nikon DSLR, and there's really no comparing the images it takes with the ones from my iPhone. Especially in "tricky" lighting situations they're not even in the same ballpark.
That said, there can be just as much (or more) "computational photography" going on with a digital camera as there is with a modern phone, the difference is that cameras and processing software give control to the user, and phones typically do not.
"Real" cameras do a lot of postprocessing too, but it's generally oriented at producing faithful results. They might remove unambiguous and correctable issues such as vignetting or lens distortion, but they don't cross the line of inventing new details to make the photo look good.
Computational photography techniques on smartphones, on the other hand, were always designed around squishy "user perception" goals to make photos look impressive, details be damned.
Do digital cameras really do anywhere near the same computations? They usually have some low-end, shitty microprocessor at most, while an iPhone, for example, has an insanely powerful CPU. Sure, a good part of this processing happens in the ISP, but surely not everything.
I also love my big heavy Canon DSLR. Can’t quite describe why it gives me so much joy, but I keep it around me at all times. I just love taking photos.
The quality difference is also very obvious compared to my phone even though my camera is easily 8 years old.
I'm surprised the author is unfamiliar with Google Camera and its super-resolution features[1,2], which use genuinely clever algorithms to push digital photography beyond what would be physically possible to get out of a naive set of HDR exposures, both in terms of resolution and dynamic range. It's literal magic.
[1] https://ai.googleblog.com/2018/10/see-better-and-further-wit...
[2] https://petapixel.com/2019/05/28/how-googles-handheld-multi-...
The author spends a whole paragraph talking about this category of techniques:
> Slightly more objectionable, but still mostly reasonable, examples of computational photography are those which try to make more creative use of available information. For example, by stitching together multiple dark images to try to make a brighter one. (Dedicated cameras tend to have better-quality but conceptually similar options like long exposures with physical IS.) However, we are starting to introduce the core sin of modern computational photography: imposing a prior on the image contents. In particular, when we do something like stitch multiple images together, we are making an assumption: the contents of the image have moved only in a predictable way in between frames. If you’re taking a picture of a dark subject that is also moving multiple pixels per frame, the camera can’t just straightforwardly stitch the photos together - it has to either make some assumptions about what the subject is doing, or accept a blurry image.
Their point is that it's not magic; these techniques rely on assumptions about the subject being photographed. As soon as those assumptions no longer hold, you start getting weird outputs.
I have used 3 Pixels and never seen any of the bad post-processing shown by the author. I never thought the iPhone's camera could do bad post-processing like this.
I have seen this in cheap point-and-shoot cameras and cheap Chinese phones, though.
Smartphone cameras have improved a lot in recent years, but they cannot compete with or match a full frame sensor, given their limitations. The size of the sensor and the optics play a major role in the final image quality, and one can only do so much with computational photography or whatever method.
iPhone photography and videography in particular are always overrated by the fanbois and some of the "professionals". While it might look good on "some" pictures with the heavy post-processing, it just doesn't have any detail. It might appeal fine at a 100% view of the picture as-is, but even the slightest post-processing or editing done on the output pictures ruins them a lot.
One has to depend upon what the developer of the application or the manufacturer thinks is the right picture (and who the hell are they to decide what my photo should look like?), and most of the time they are terribly wrong.
Apple is just overrated and, for that matter, so are some of the Androids.
Raw pics from a full frame sensor hold the fort and will continue to hold for a long time to come, unless phones match DSLRs in terms of sensor size and optics size. Until then "computational photography" will make pictures look terrible and dictate how they have to look.
I see a lot of comments where folks talk about RAW. But seriously, how does it matter for any normal user who tends to take a pic with their phone instead of a DSLR? If one is a photographer, it makes sense; otherwise it is an additional workflow to get it in RAW and do the post-processing on a computer... I'm just saying...
Thoughts welcome...
My $0.02 as someone with an expensive full-frame DSLR and the latest iPhone Pro:
There are entire categories of image quality that only Apple seems to bother even trying to improve — and then they leapt past everyone.
A few years ago if you wanted to make a HDR, 4K, 60 fps Dolby Vision wide-gamut video…
That would have cost you. Tens of thousands on cameras, displays, and software. It would have been a serious undertaking involving a lot of “pro” tools and baroque workflows optimised for Hollywood movie production.
With an iPhone I can just hit the record button and it does all of that for me, on the phone!
Did you notice that it also does shake reduction? It’s staggeringly good, about the same as GoPro. Just setting up the stabilisation in DaVinci is half an hour of faffing around.
The iPhone just has it “on”.
I could go on.
A challenge I give people is to take a wide-gamut, 10-bit, HDR still photo and send it to someone else, using any method they prefer.
Outside of the Apple ecosystem this is basically impossible in the general case. Everything everywhere is 8-bit SDR sRGB.
Heck, even professional print shops still request files in sRGB!
So yes, the software in the Apple ecosystem does have a big impact on the end result of photography.
I can take a 14-bit dynamic range picture with my Nikon, but I can’t show it to anyone in that quality because of shitty Windows and Linux software, so what’s the point?
I take pics with my Apple iPhone instead. All the people I want to show pictures to have iDevices, so I can share the full HDR quality that the phone camera is capable of, not some SDR version.
> Raw pics from a full frame sensors hold the fort and will continue to hold for a longtime to come unless the phones match DSLR in terms of sensor size and optics size.
"Full frame" cameras do not have the best image quality, and don't even have the best image quality for their price. (eg used medium format film cameras are cheaper.)
They're just the best cameras people have heard of. If you're doing product photography you might want a Phase One instead.
It doesn't matter much though; lighting and lens quality are what really make a photo even in a controlled environment.
I love my Nikon but for video I need a tripod to eliminate camera shake. My iPhone gives me stable images every time.
This is not a new problem and can sometimes have disastrous results.
About 10 years ago a variation of this made headlines all over, when certain Xerox WorkCentres were found to be substituting numbers during scans, due to a compression algorithm (JBIG2 pattern matching) that was sometimes matching a scanned digit to a different digit than the one actually on the page.
https://www.theregister.com/2013/08/06/xerox_copier_flaw_mea...
https://googleprojectzero.blogspot.com/2021/12/a-deep-dive-i...
*Looks up pigeonhole principle*: https://en.wikipedia.org/wiki/Pigeonhole_principle
> If 5 pigeons occupy 4 holes, then there must be some hole with at least 2 pigeons.
This is so obvious.
> This seemingly obvious statement, a type of counting argument, can be used to demonstrate possibly unexpected results. For example, given that the population of London is greater than the maximum number of hairs that can be present on a human's head, then the pigeonhole principle requires that there must be at least two people in London who have the same number of hairs on their heads.
Oh...
I always shoot in raw (using the Lightroom iPhone app) to make sure that this kind of defect never occurs. Noise is generally preferable to, and more acceptable than, the disaster trail left by denoising et al. At least you can do it yourself in a way that pleases you, instead of having a ruined photograph.
With the Pixel 6 (and most Android phones) you can set it up to do both. So you have the "nice" version generated by the phone and a DNG raw file to work with. I have that set up along with Syncthing to deliver the raw photos to my laptop (pro tip: this is super handy).
The new iPhone and the Pixel 6 both use the same trick where they have a 50 megapixel sensor (probably the same one, and likely a Sony sensor) that produces 12.5 megapixel raw photos with the information of four pixels binned together. So the DNG I get from my phone has already had some processing done to it, but not a lot. Also worth noting that both phones have multiple lenses with different focal lengths and sensors, so it matters a lot which one you use. You'd control this via the camera app, typically with its different modes and zoom levels. I'm not sure if it uses exposures from all sensors to calculate a better raw, but that would not surprise me.
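A toy illustration of that 2x2 binning arithmetic, assuming a single-channel readout as a NumPy array (real quad-Bayer binning happens per colour channel inside the ISP, so this is only the arithmetic, not the actual pipeline):

    import numpy as np

    def bin_2x2(raw):
        """Average each 2x2 block of photosite values into one output pixel."""
        h, w = raw.shape
        assert h % 2 == 0 and w % 2 == 0
        blocks = raw.reshape(h // 2, 2, w // 2, 2)
        # Averaging four photosites trades resolution for roughly half the random noise.
        return blocks.mean(axis=(1, 3))

    # A real readout would be something like 8160x6144 (~50 MP) -> 4080x3072 (~12.5 MP).
    demo = np.random.poisson(lam=20.0, size=(8, 12)).astype(np.float32)
    print(bin_2x2(demo).shape)   # (4, 6)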
In terms of noise, the image quality is actually very good. I've done some night-time photography with both the Pixel 6 and my Fuji X-T30, which is an entry-level mirrorless camera. The Fuji has better dynamic range and it shows in the dark. But the noise levels are actually pretty good for a phone camera. Very usable results with some simple post-processing. Especially compared to my previous Android phone (a Nokia 7 Plus), which was noisy even in daylight. Mostly, doing raw edits is not worth the effort, but it's nice to have the option. The phone does a decent job of processing and mostly gets it right when it comes to tone mapping and noise reduction. When it matters, I prefer the Fuji. But sometimes the phone is all you have, and you just take a quick photo and it's fine.
A high end full frame camera will get you more and better pixels and more detail. Even an older entry-level DSLR will generally run circles around smartphone sensors. And that's just the sensor and camera. The real reason to use these is the wide variety of lenses and the level of control over the optics that they provide. In-phone bokeh is a nice gimmick, but it's a fake effect compared to a nice quality lens. Likewise, you can't really fake the look you get with a good portrait lens (the effect that things in the background seem bigger). Phone lenses have a fixed focal length and generally not much aperture range. There's a reason people pay large amounts of money to own good lenses. They are really nice to use and deliver a great photo without a lot of digital trickery. And they are optimized for different types of photography. There is no one-size-fits-all lens for all photography.
That is what I was thinking. The poster said the photo looked okay at the moment they took the picture; then the phone's processing took over and they got junk.
I wonder how hard it is to take 'RAW' photos without adding an app first.
ProRAW appears to not be as processed, but still processed: "Apple ProRAW combines the information of a standard RAW format along with iPhone image processing" [1]
[1] https://support.apple.com/en-gb/HT211965
There’s some criticism about the event horizon photo of M87 because they had to do a lot of filling in based on a model of how black holes behave. IIRC they ran a hyperparameter search and picked the ones that were most consistent with the actual photons received.
Interferometers usually don't produce an image, but take power and phase measurements in the frequency plane. If enough points are taken, an image can be "reconstructed"; if not, the scientists will do model fitting.
Interesting article. I think the real question, however, is whether imposing complex priors (say, driven by a neural net) makes images better _on average_ even if it has some failure modes. My guess would be that a fairly weak prior trained on a diverse enough dataset would lead to better average image quality (as judged by everyday people in diverse scenarios), and that's why they are used.
I think it could absolutely make images better on average, assuming their prior is at all representative. The question, though, is whether the expected benefit to the photographer in cases where it improves the image (usually somewhat small) outweighs the costs when it screws up (perhaps relatively large).
Now, are most people going to notice that the iPhone wrecked the text on their subject? Probably not. But they probably also wouldn't notice if the model wasn't applied to the image at all. The median consumer probably mostly benefits from (in terms of how much they like the photo) AE, a bit of curve reshaping (using a smoothed histogram CDF algorithm or something), and maybe some extra saturation.
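For reference, the curve reshaping mentioned above could look roughly like this: a minimal sketch of a smoothed-histogram-CDF tone curve on a single luminance channel (the smoothing width and blend factor are arbitrary choices for illustration, not any phone's actual tuning):

    import numpy as np

    def cdf_tone_curve(lum, bins=256, smooth=9, blend=0.5):
        """Remap luminance through a smoothed histogram CDF (mild equalization).

        lum: float array in [0, 1]; blend=0 is identity, blend=1 is full equalization.
        """
        hist, edges = np.histogram(lum, bins=bins, range=(0.0, 1.0))
        kernel = np.ones(smooth) / smooth
        hist = np.convolve(hist, kernel, mode="same")   # smooth the histogram
        cdf = np.cumsum(hist)
        cdf = cdf / cdf[-1]                             # normalized CDF in [0, 1]
        centers = 0.5 * (edges[:-1] + edges[1:])
        equalized = np.interp(lum, centers, cdf)        # push values through the CDF
        return (1.0 - blend) * lum + blend * equalized  # partial strength looks less harsh

    # Toy usage on a synthetic low-contrast image
    img = np.clip(np.random.normal(0.4, 0.05, size=(64, 64)), 0.0, 1.0)
    out = cdf_tone_curve(img)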
And, in a similar vein, I enjoyed his video on The Best Smartphone Camera of 2022, where he applied a scientific ranking system to 21.2 million votes from 600,000 users. I had previously assumed, due to Apple's reputation, that iPhones would take pictures that people like more, but that was not the case.
This looks like an iPhone post-processing problem more than anything else. I have used 3 Google Pixel phones (up to the Pixel 4) and none of them does bad post-processing. In fact, it improves the resolution of whatever you are taking a picture of: https://ai.googleblog.com/2018/10/see-better-and-further-wit...
I never saw any of these phones altering the details like in article.
Google Pixel phones still do "deep fusion"-style processing like an iPhone, just with Google's secret sauce. The photo your phone is showing you is what machine learning thinks the picture should look like, and not the picture you took.
https://www.35mmc.com/10/01/2015/low-light-fun-ilford-hp5-ei...
ISO 3200 on black-and-white film was pushing it pretty hard. Yet these pictures look good, in a noisy kind of way. Let an algorithm loose on them and it'll "fix" things, first and foremost by smoothing the skin. Even older low-end dedicated digital cameras do this, some brands more than others. The pictures in low light feel more like a badly done painting than a good, honest, albeit noisy photo. One possibility is that the noise from a digital sensor is not as uniformly pleasing as that from film, so it must be masked.
A photographer friend had a good way of framing this once for me...
"Phones take amazing snapshots, but dedicated cameras can make better photographs."
The new smartphone cameras are capable of pretty amazing things, and they can extend taking good pictures to a whole new audience. If you need the control that large sensors and specific lenses can bring, though, you will need a dedicated camera.
The iPhone definitely does some extra processing on text. I’m 99% sure that it recognizes letters and fills in “creatively.” I noticed this while taking some photos of text in low-light. Could barely see it with my eyeballs but the phone worked it out.
To be fair, sensors have the luxury of time that our eyes don’t have. See astrophotography for an example of “could barely see it but the sensor worked it out.”
It doesn't do that. Text just has a lot of edges so there's a lot of opportunities for image fusion to register them.
Apple power adapters actually have a bunch of text on them printed in unreadable light gray; you can try shooting them and while they're a bit clearer than plain old eyesight, they're still pretty unreadable.
> For example, by stitching together multiple dark images to try to make a brighter one. (Dedicated cameras tend to have better-quality but conceptually similar options like long exposures with physical IS.) However, we are starting to introduce the core sin of modern computational photography: imposing a prior on the image contents. In particular, when we do something like stitch multiple images together, we are making an assumption: the contents of the image have moved only in a predictable way in between frames. If you’re taking a picture of a dark subject that is also moving multiple pixels per frame, the camera can’t just straightforwardly stitch the photos together - it has to either make some assumptions about what the subject is doing, or accept a blurry image.
This applies to dedicated cameras too though - physical image stabilization can compensate for camera motion but not for subject motion. The difference is that a) physical IS can compensate throughout each exposure, not just between exposure and b) the photographer is not bound to a black box algorithm but can instead use his own a priori knowledge to align the images if needed.
Yes, I meant to imply that dedicated cameras are committing the same "sin" here. I only meant physical IS is better because you don't need to do things like periodically read out the sensor. You are getting a true full-duration exposure that won't produce artifacts like tearing or skips.
>In particular, when we do something like stitch multiple images together, we are making an assumption: the contents of the image have moved only in a predictable way in between frames
From this way of looking at things a normal long exposure also imposes a prior assumption (that nothing is moving). It's just that we're used to the artefact that's generated when this prior isn't true (motion blur).
Haven’t felt like the camera on my iPhone 13 is significantly better than the one on my iPhone 7 at all in terms of basic quality. My shots look about the same.
As someone who upgrades every several years, I've been wondering what people who upgrade every year and rave about the camera being better are even seeing at this point. (I'm talking about stills only.)
I'm noticing the same, looking at GSMArena's comparison shots. We opted for an iPhone 11 back then, and out of curiosity, I keep an eye out for camera improvements, and I don't see that too much is happening. Comparing the low-light shots of the 11 and the 14 pro max, there is some extra detail, but the post-processing is also noticeably heavier.
Also, the camera, lenses, and sensors don't all update every year. Early on in Apple's tick-tock approach to design iteration, camera updates were the "s" models ("tock") in the release cycle.
Now they seem to be just incrementing the number and you have to pay attention to what if any changes they make. This time they did 4x pixels and do pixel binning for regular shots and low light.
Computational photography excels at certain uses. Noise reduction can be miraculous as of recently, exposure bracketing and automatic merging allow you to take good pictures of a scene with a bright sky without obscuring everything, and lens distortion, vignetting and chromatic aberration corrections work really well.
Of course you cannot really compensate for the lens not resolving enough detail, or not focusing close enough; but since almost all photos taken on a phone will be seen on another mobile device, these are the less important bits of the equation. Correct exposure and good colors always look good, regardless of how much you zoom the photo. OP's use case is very limited, and unfortunately they didn't provide enough context about the nature of the photo.
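As an illustration of the kind of correction mentioned above, here is a minimal vignetting fix with a simple radial gain model (the quadratic falloff model and its strength value are assumptions for the example, not any manufacturer's calibration):

    import numpy as np

    def devignette(img, strength=0.35):
        """Brighten image corners with a radial gain map (inverse of a modeled falloff).

        img: float array (H, W) or (H, W, 3) in [0, 1].
        strength: modeled light loss at the extreme corner (0 = no correction).
        """
        h, w = img.shape[:2]
        y, x = np.mgrid[0:h, 0:w]
        cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
        r = np.sqrt((y - cy) ** 2 + (x - cx) ** 2)
        r = r / r.max()                       # 0 at the centre, 1 at the farthest corner
        gain = 1.0 / (1.0 - strength * r ** 2)
        if img.ndim == 3:
            gain = gain[..., None]            # broadcast over colour channels
        return np.clip(img * gain, 0.0, 1.0)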
Yep, smartphone cameras are optically terrible, which is then compensated for with clever tricks. These tricks optimize for popular use cases: people, food, etc.
One aspect that is little discussed is the inflated quality perception of such a photo when seen on the actual device, an iPhone in this case.
iPhones have an incredible screen. OLED, wide gamut color, high PPI. A photo looks radically better on an iPhone compared to opening the same photo on a standard monitor.
Apple also uses non-standard HEIF tags to allow for HDR photo display of photos taken by Apple devices. Last time I checked, you couldn't (easily) take a photo from a dedicated camera (which has more than enough dynamic range to justify HDR) and turn it into a file that would get rendered as HDR on iPhone.
There's an important piece of background to understand why computational methods cannot completely correct chromatic aberration.
A photon can be any color of the rainbow. The reason ink and TVs can get away with using only 3 colors is because our eyes only have 3 types of receptors (cone cells). Each receptor responds to a range of wavelengths. "In-between" wavelengths will trigger multiple receptors. For example, a TV can send a mix of red and green photons and create the same brain signals as yellow photons would. Animals with more types of receptors, such as bees or the mantis shrimp, wouldn't be fooled by a TV with only three base colors.
A camera's sensor performs the same lossy compression as our eyes. Light comes into the camera in a range of wavelengths, and triggers each type of pixel a different amount. Each type of pixel has a sensitivity curve engineered to resemble the sensitivity curve of one of the cone cells in our eyes.
Understanding that natural light isn't just red, green, and blue makes it clear why chromatic aberration can't be fully fixed computationally. A green pixel can't know whether it's receiving green photons that are perfectly aligned, or yellow light that needs to be corrected. (See the sketch after this comment.)
P.S. There are cameras that can "see" a greater range of colors. Search for "spectral cameras" and "infra-red goggles".
PPS This is also why an RGB light strip might look white, but objects illuminated by it might look odd. You might be familiar with the fact that a blue object illuminated by a red light will look black. For the same reason, it's possible for a yellow object to be illuminated by red, green, and blue light and still look black.
PPPS This is also why custom wall paints are a mixture of more than three colors. Two paints may look completely the same, but objects illuminated by the light bounced off the walls look completely different.
PPPPS This is also why high-CRI lightbulbs are a thing. If you get something hot, like the sun or a tungsten filament, it will release photons with a wide range of wavelengths. Neon tubes and LEDs emit a single wavelength, so they must be coated with phosphors that fluoresce — emit light at a different wavelength than they absorbed. Using more kinds of phosphors is more expensive, but makes it more likely that whatever object is illuminated gets all the wavelengths it is able to reflect.
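A toy numerical version of the receptor argument above: two physically different spectra can produce the same three sensor responses (the Gaussian sensitivity curves and primaries below are made-up illustrative numbers, not real sensor data):

    import numpy as np

    wl = np.arange(400, 701, 5)   # wavelengths in nm

    def gauss(center, width):
        return np.exp(-0.5 * ((wl - center) / width) ** 2)

    # Made-up broad sensitivity curves standing in for R, G and B photosites
    sens = np.stack([gauss(600, 40), gauss(550, 40), gauss(450, 40)])

    # Spectrum A: narrowband "yellow" light around 575 nm
    spec_a = gauss(575, 10)

    # Spectrum B: narrow red/green/blue lines, weighted so the 3-channel response matches A's
    primaries = np.stack([gauss(610, 15), gauss(545, 15), gauss(465, 15)]).T   # (n_wavelengths, 3)
    weights = np.linalg.solve(sens @ primaries, sens @ spec_a)
    spec_b = primaries @ weights   # (a weight can come out slightly negative; the algebra still shows the point)

    print(sens @ spec_a)   # channel responses for the yellow spectrum
    print(sens @ spec_b)   # identical responses from a very different spectrum: a metamer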
The limits are because at this point there's no way to tell the tool what you're trying to do. The various photo modes are a step in that direction, but they've been stuck.
Once they find a way to interact with the processing engine then the quality will jump again.
For the vast majority of users, the phone camera is super awesome and just fine.
I haven't finished the article, but it seems like using the flash on the iPhone might have been enough to lower the ISO for the photo. Lower ISO = lower noise. The end results look like a typical noise reduction process. For screen-sized images, the new phones do quite well. But zoom in and it's often a painterly blur.
Mirrorless cameras may have been delayed if it weren't for the competition with phones. DSLRs were only around a few years before camera phones.
I'm not sure what the pipeline looks like, but I thought this type of situation was where ProRaw was supposed to be used?
ProRaw gives you a clean low-noise image with editing flexibility, but it does have the flaws of deconvolution and stacking and AI denoising. Actual raw from a cell phone is insanely noisy and hideously soft from diffraction in the best of cases.
Does anybody know what post-processing is applied to ProRaw images from iOS? I'm guessing true raw (Halide, etc) have none at all, but I recall reading the ProRaw had some applied. I just haven't seen a summary of which steps are applied.
90% of the time, my iPhone photos are fine straight out of the camera as HEIC. But every once in a while, I get something like is described here (or in several other recent similar articles).
They are already demosaiced (this I know for sure and is easy to verify). I also believe that they are the result of stacking several photos to increase the dynamic range and reduce noise (see [0][1]).
[0]: https://ai.googleblog.com/2021/04/hdr-with-bracketing-on-pix...
[1]: https://dl.acm.org/doi/10.1145/3355089.3356508
Even for the "true raw" ones, I don't know if they're truly raw. Do they have distortion and light fall-off correction applied?
iOS postprocessing is garbage and must be changed. It's an embarrassment.
The most infuriating thing is that you can usually see the image before post-processing if you are quick enough, and those look sharp and good, but this trash software can't be turned off.
Are you talking about ProRAW? Or JPEGs straight from the Camera app?
It would be really interesting, though, to see an image signal processing expert weigh in on what the algorithm(s) are actually doing in this case.
There are many more clever methods that one can use with CMOS that fall under computational photography or optics.
One very interesting one is ptychography (in microscopy often Fourier ptychography, since you can use Fourier optics to describe the optical system [0]), which uses a model of the optical system to get a higher-resolution image (IIRC 7-10x the resolution) out of many blurry images, while knowing a bit about the optics in front of your image sensor - it can also work in remote sensing to some degree (better with coherent illumination, though).
Edit: This is not just averaging or maxing pixels; it reconstructs the image using recovered phase information from having low-res pictures with different, known illumination or camera positions.
[0] https://www.youtube.com/watch?v=hece_x37ITg
I think ptychography is the future of phone cameras... You'll see phones with 1000 lenses and 1000 CCDs, across the whole back of the phone.
They'll all be manufactured as a one piece glass moulding and single CCD chip - and the whole thing will be very cheap to make, having moved all the difficulty into software.
You can get around some of the extra processing by using other camera apps, for example "Open Camera" (https://opencamera.org.uk/). It can even shoot RAW photos, so that the least amount of processing is applied to the image. Unfortunately, you can't disable all of the processing, because some of it happens in hardware, or in the camera's kernel module.
Sort of - (I assume for PR/marketing reasons) Apple doesn't let apps get access to actual raw sensor data. It may be possible to skip the steps that are causing the most trouble here.
I accept that there is a place for computational/algorithmic photography but I remain deeply sceptical of its actual benefits (in its current incarnation), moreover my recent bad experiences with it have only strengthened my conviction.
I have previously discussed having taken photos with a smartphone where certain objects within some images have been so modified by the processing algorithm as to be almost unrecognizable, so I won't repeat those various scenarios here. Instead, I'd like to dwell on the implications of algorithmic image processing for a moment.
Let's briefly look at the issues:
1. Despite a recent announcement by Canon about a large increase in dynamic range in imaging, (https://news.ycombinator.com/item?id=34527687), I'm unaware of any current imaging sensor breakthrough that would vastly improve both resolution and dynamic range. Thus, essentially, we have to live with what we're already capable of physically squeezing into our present smartphones.
2. Manufacturers are improving both image sensors and optics but only incrementally. Thus, with current tech and absence of truly significant breakthroughs, we have to live with the limitations as outlined in the article (aberrations, lens flare, sensor insensitivity etc.).
3. Essentially, we're stymied both by the limitations of current tech and physical (smartphone) size. Usually, to overcome such limitations, we'd fall back on the old truism 'there's no substitute for capacity' and just make things bigger as we did with photographic emulsions, past camera lenses, loudspeakers, pipe organs, etc. but that's not possible here.
4. Outside incremental improvements in hardware—the Law of Diminishing (hardware) Returns having arrived—manufacturers have had to resort to computational methods. The trouble is that it seems with the present algorithms that the Law of Diminishing (computational) Returns is also already upon us, so what does this mean? Quo vadis?
5. Clearly, in its current form computational/algorithmic processing has hit a stumbling block or at least a major hiatus. Here, further incremental improvements are likely using current methods and there's little doubt that they'll be applied to recreational photography (smartphones and such), however, unfortunately, we now have a serious (and very obvious) problem with the authenticity of images taken by these cameras.
Simply, when software starts guessing what's within images, then we've not only lost visual authentication but we also have serious downstream issues. It raises questions about whether or not photographic evidence based on computational imaging can be relied upon—or even submitted—as evidence in a court of law. (I'd reckon that, without ancillary corroborating/conjunctive evidence, such images would not pass muster if the Rules of Evidence tests were applied.)
How serious is this? Clearly, it depends on circumstance, but long before 'guessing-what's-in-the-image' became in vogue, simple compression was already 'suspect' in, for example, serious surveillance work, because compression artifacts in an image raised doubts as to what objects actually were. Simply, could objects be identified with 100% certainty, and if not, what confidence figure could be placed on such measurements/identifications?
(Such matters are not hypotheticals or idle speculation; I recall in nuclear safeguards a debate over compression artifacts in remote monitoring equipment. Here, authenticating and identifying objects must meet strict criteria, and a failure to authenticate (fully identify) them means a failure of surveillance, which is a big deal! For example, the failure to distinguish between, say, round cables and pipes with 100% certainty could be a serious problem, as the latter could be used to transport nuclear materials—thus it'd be deemed a failure of surveillance. That's not out of the bounds of possibility in a reprocessing plant.)
Obviously, the need to authenticate what's in an image with 100% certainty isn't a daily occurrence for most of us but as these tiny cameras become more and more important and ubiquitous then we'll start seeing them used in areas where their images must be able to be authenticated.
Post haste, we need rules and standards about how these computational algorithms process images and how they should be applied.
6. What's the future? On the hardware side we need better sensors with higher resolution and more sensitivity, and improved optics (that, say, use metamaterials etc.). Such developments are on their way but don't hold your breath.
Computational/algorithmic processing has the potential to do much, much better, but again don't hold your breath. There's considerable potential to correct focus and aberration problems etc. using both front-end and back-end computational methods ('front-end correcting lenses etc. on-the-fly and back-end as post-image processing) but much work still has to be done. Note: such methods also don't rely on guessing.
What people often forget is that when a lens cannot fully focus or suffers aberrations, etc. information in the incoming light is not lost—it's just jumbled up (remember your quantum information theory).
In the past untangling this mess has been seen as an almost insurmountable problem and it's still a very, very difficult one to resolve. Nevertheless, I'd wager that eventually computational processing of this order will be commonplace, moreover, it'll likely provide some of the most significant advances in imaging we're ever likely to witness.
> 6. What's the future. On the hardware side we need better sensors with higher resolution and more sensitivity and improved optics (that, say, use metamaterials etc.). Such developments are on their way but don't hold your breath.
Two interesting developments here are the pixels in Starvis 2 sensors, which as a first afaik use a 2.5D structure to increase full-well capacity by a lot. And another, non-production sensor by Sony where they developed a self-aligning process and pixels are actually split in two layers, with the top layer only carrying the photodiode and the bottom layer entirely dedicated to the readout transistors. That's promising for lower readout noise and also for increasing full-well capacity.
> the relevant metric is what I call “photographic bandwidth” - the information-theoretic limit on the amount of optical data that can be absorbed by the camera under given photographic conditions (ambient light, exposure time, etc.).
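A back-of-the-envelope sketch of why sensor area dominates that bandwidth, treating each pixel's photon count as Poisson-distributed so per-pixel information scales with the log of the shot-noise-limited SNR (all the numbers are made-up illustrative values, not measurements):

    import math

    def photographic_bandwidth_bits(sensor_area_mm2, pixels, photons_per_mm2):
        """Crude per-exposure information estimate from photon shot noise alone."""
        photons_per_pixel = photons_per_mm2 * sensor_area_mm2 / pixels
        snr = math.sqrt(photons_per_pixel)       # Poisson shot-noise limit
        bits_per_pixel = math.log2(1.0 + snr)    # distinguishable levels per pixel
        return pixels * bits_per_pixel

    # Same scene illumination and pixel count; only the sensor area differs.
    phone = photographic_bandwidth_bits(sensor_area_mm2=30.0, pixels=12e6, photons_per_mm2=1e9)
    full_frame = photographic_bandwidth_bits(sensor_area_mm2=864.0, pixels=12e6, photons_per_mm2=1e9)
    print(full_frame / phone)   # ratio > 1: the bigger sensor records more usable information per shot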
Computational Philosophy: “the use of mechanized computational techniques to instantiate, extend, and amplify philosophical research. Computational philosophy is not philosophy of computers or computational techniques; it is rather philosophy using computers and computational techniques. The idea is simply to apply advances in computer technology and techniques to advance discovery, exploration and argument within any philosophical area.”
The word “simply” is doing a lot of work in that last sentence, I’m sure!
delta_p_delta_x|3 years ago
Many of these cameras are able to take bracketed[1] exposures, and the SNR in even just one image from such sensors is immense compared to the tiny sensors in phones. Surely with this much more data to work with, HDR is much nicer and without the edge brightening typically seen in phone HDR images.
[1]: https://www.nikonusa.com/en/learn-and-explore/a/tips-and-tec...
dusted|3 years ago
At a glance, my samsung note 22 ultra takes better picture than my nikon d7500. At a glance.
However, as soon as you want to actually DO anything to it, like, view it in any real detail, or on anything but a tiny screen, reality returns.. While the phone is absolutely fantastic for a quick snapshot, it just does not come close to the definition of the older camera with the bigger sensor.
buildbot|3 years ago
People value quick shots/edits and don’t care about quality or editing things later don’t mind an iPhone doing all this behind the scene - but it is irreversible. The sort of error in the article would drive myself and other photographers up the wall.
Also, an Iphone has a CPU and ISP that outclass desktops from only a few years ago - camera manufacturers simply don’t have the same compute available.
On the other hand, some brands do provide interesting computational photography in their cameras at the very high end. Panasonic mirrorless full frame cameras have a pixel shift mode for super res/no bayer interpolation, with some ability to fix motion between steps. Phase One has frame averaging and dual exposure in their IQ4 digital backs, for sequential capture into a single frame and super high dynamic range respectively.
jlarocco|3 years ago
As far as I know, iPhone and Android aren't doing anything that isn't already done by digital cameras. They ramp up the settings on things like noise reduction and sharpness to balance out their tiny sensors, but it's more or less the same algorithms that the cameras are using.
Good cameras even allow you to tweak the settings and control RAW conversion right on the camera. The author could have botched the noise reduction on his Fujifilm to match the iPhone if he wanted to. [0]
[0] https://www.jmpeltier.com/fujifilm-in-camera-raw-converter/
cultofmetatron|3 years ago
in my workflow I use either dx0 photolab which has excellent facilities for really bringing out a single image.
sometimes I wanna go hardcore with a landscape and take multiple shots manuallly and blend them together using software like aurora HDR (example: https://www.flickr.com/photos/193526747@N04/52219385902/ ) That image is 5 stacked images combined together using a bit of computational photography and adjusted for saturation.
if you want somethign taht will get you descent results fast and work with raws. you can also go with luminar https://skylum.com/luminar
an iphone image looks better at first snap but my z5's images blow them out of the water once I give them some love in the edit room.
mikewarot|3 years ago
I was using Hugin to align the images from my Nikon DSLR. I found that you can get to at least double the resolution in both dimensions fairly quickly, but you'll never get to "enhance" like in TV shows.
[1] https://en.wikipedia.org/wiki/Super-resolution_imaging
pkulak|3 years ago
Olympus (OM Systems?) should build this stuff into their cameras. I used to have a bit of inkling to some day "upgrade" to full frame, but not any more.
irthomasthomas|3 years ago
NegativeK|3 years ago
I'd be interested in computational photography on ILCs if they allowed tuning it -- with phones, it comes with a bunch of other stylistic choices, and I want control over that stuff.
unknown|3 years ago
[deleted]
haswell|3 years ago
This would also pair well with Fujifilm’s lineup which already includes camera features focused on in-camera processing.
account42|3 years ago
throwanem|3 years ago
Wash your mouth out with soap! I did not spend five thousand dollars on a D850-based macro rig to have it produce results no better in quality than what I can get from my phone.
jlarocco|3 years ago
That said, there can be just as much (or more) "computational photography" going on with a digital camera as there is with a modern phone, the difference is that cameras and processing software give control to the user, and phones typically do not.
wombat_trouble|3 years ago
Computational photography techniques on smartphones, on the other hand, were always designed around squishy "user perception" goals to make photos look impressive, details be damned.
kaba0|3 years ago
softfalcon|3 years ago
The quality difference is also very obvious compared to my phone even though my camera is easily 8 years old.
Llamamoe|3 years ago
It's literal magic.
[1] https://ai.googleblog.com/2018/10/see-better-and-further-wit...
[2] https://petapixel.com/2019/05/28/how-googles-handheld-multi-...
datagram|3 years ago
> Slightly more objectionable, but still mostly reasonable, examples of computational photography are those which try to make more creative use of available information. For example, by stitching together multiple dark images to try to make a brighter one. (Dedicated cameras tend to have better-quality but conceptually similar options like long exposures with physical IS.) However, we are starting to introduce the core sin of modern computational photography: imposing a prior on the image contents. In particular, when we do something like stitch multiple images together, we are making an assumption: the contents of the image have moved only in a predictable way in between frames. If you’re taking a picture of a dark subject that is also moving multiple pixels per frame, the camera can’t just straightforwardly stitch the photos together - it has to either make some assumptions about what the subject is doing, or accept a blurry image.
Their point is that it's not magic; these techniques rely on assumptions about the subject being photographed. As soon as those assumptions no longer hold, you start getting weird outputs.
smusamashah|3 years ago
Seen this in cheap point and shoot cameras and cheap chinese phones though.
indianmouse|3 years ago
Especially the iPhone photography and videography is always overrated by the fanbois and some of the "professionals". While it might look good on "some" pictures with the heavy post processing, it just doesn't have any details. It might just appeal fine for a 100% view of the picture as is and even the slightest post processing or editing done on the output pictures ruins them a lot.
One has to depend upon what the developer of the application or the manufacturer thinks is the right picture (and who the hell are they to decide what my photo should look like?) and most of the time they are terribly wrong.
Apple is just overrated and for that matter, even some of the Android's as well.
Raw pics from a full frame sensors hold the fort and will continue to hold for a longtime to come unless the phones match DSLR in terms of sensor size and optics size. Until then "computational photography" will make the pictures look terrible and dictate how it has to look like.
I see a lot of comments where folks talk about RAW. But seriously, how does it matter for any normal user who tends to click a pic using the phone instead of a DSLR? If one is photographer, it makes sense, else it is additional workflow to get it in RAW and do the post processing on a computer... I'm just saying...
Thoughts welcome...
jiggawatts|3 years ago
There are entire categories of image quality that only Apple seems to bother even trying to improve — and then they leapt past everyone.
A few years ago if you wanted to make a HDR, 4K, 60 fps Dolby Vision wide-gamut video…
That would have cost you. Tens of thousands on cameras, displays, and software. It would have been a serious undertaking involving a lot of “pro” tools and baroque workflows optimised for Hollywood movie production.
With an iPhone I can just hit the record button and it does all of that for me, on the phone!
Did you notice that it also does shake reduction? It’s staggeringly good, about the same as GoPro. Just setting up the stabilisation in DaVinci is half an hour of faffing around.
The iPhone just has it “on”.
I could go on.
A challenge I give people is to take a still photo and send it to someone else that is wide gamut, 10 bit, and HDR, any method they prefer.
Outside of the Apple ecosystem this is basically impossible in the general case. Everything everywhere is 8-bit SDR sRGB.
Heck, even professional print shops still request files in sRGB!
So yes, the software in the Apple ecosystem does have a big impact on the end result of photography.
I can take a 14-bit dynamic range picture with my Nikon, but I can’t show it to anyone in that quality because of shitty Windows and Linux software, so what’s the point?
I take pics with my Apple iPhone instead. All the people I want to show pictures to have iDevices, so I can share the full HDR quality that the phone camera is capable of, not some SDR version.
astrange|3 years ago
"Full frame" cameras do not have the best image quality, and don't even have the best image quality for their price. (eg used medium format film cameras are cheaper.)
They're just the best cameras people have heard of. If you're doing product photography you might want a Phase One instead.
It doesn't matter much though; lighting and lens quality are what really make a photo even in a controlled environment.
KineticLensman|3 years ago
I love my Nikon but for video I need a tripod to eliminate camera shake. My iPhone gives me stable images every time.
matheweis|3 years ago
10 years or so ago a variation of this made headlines all over as certain Xerox Workcentres were transposing numbers during scans, due to a compression algorithm that was sometimes matching a different number than the one actually scanned.
https://www.theregister.com/2013/08/06/xerox_copier_flaw_mea...
antegamisou|3 years ago
https://googleprojectzero.blogspot.com/2021/12/a-deep-dive-i...
neilpanchal|3 years ago
*Looks up pigeonhole principle*: https://en.wikipedia.org/wiki/Pigeonhole_principle
> If 5 pigeons occupy 4 holes, then there must be some hole with at least 2 pigeons.
This is so obvious.
> This seemingly obvious statement, a type of counting argument, can be used to demonstrate possibly unexpected results. For example, given that the population of London is greater than the maximum number of hairs that can be present on a human's head, then the pigeonhole principle requires that there must be at least two people in London who have the same number of hairs on their heads.
Oh...
kridsdale1|3 years ago
unknown|3 years ago
[deleted]
kelsolaar|3 years ago
jillesvangurp|3 years ago
The new iphone and the pixel6 both use the same trick where they have a 50 megapixel sensor (probably the same and likely a Sony sensor) that produces 12.5 megapixel raw photos with four pixels combined information. So the dng I get from my phone has already had some processing done to it but not a lot. Also worth noting that both phones have multiple lenses with different focal lengths and sensors. So, it matters a lot which one you use. You'd control this via the camera app typically with its different modes and zoom levels. I'm not sure if it uses exposures from all sensors to calculate a better raw but that would not surprise me.
In terms of noise, the image quality is actually very good. I've done some night time photography with both the pixel6 and my Fuji XT-30, which is an entry level mirror less camera. The Fuji has better dynamic range and it shows in the dark. But the noise levels are actually pretty good for a phone camera. Very usable results with some simple post processing. Especially compared to my previous Android phone (a Nokia 7 plus) which was noisy even in day light. Mostly doing raw edits is not worth doing that but it's nice to have the option. The phone does a decent job of processing and mostly gets it right when it comes to tone mapping and noise reduction. When it matters, I prefer the Fuji. But sometimes the phone is all you have and you just take a quick photo and it's fine.
A high end full frame camera will get you more and better pixels and more detail. Even an older entry level dslr will generally run circles around smart phone sensors. And that's just the sensor and camera. The real reason to use these is the wide variety of lenses and level of control over the optics that those provide. In phone bokeh is a nice gimmick. But it's a fake effect compared to a nice quality lens. Likewise you can't really fake the look you get with a good portrait lens (the effect that things in the background seem bigger). Phone lenses have a fixed focal range and generally not that much aperture range. There's a reason people pay large amounts of money to own good lenses. They are really nice to use and deliver a great photo without a lot of digital trickery. And they are optimized for different types of photography. There is no one size fits all lens for all photography.
ecpottinger|3 years ago
I wonder how hard to is to take 'RAW' photos without adding an app first.
astrostl|3 years ago
1: https://support.apple.com/en-gb/HT211965
andreareina|3 years ago
poulpy123|3 years ago
epicycles33|3 years ago
wyager|3 years ago
Now, are most people going to notice that the iPhone wrecked the text on their subject? Probably not. But they probably also wouldn't notice if the model wasn't applied to the image at all. The median consumer probably mostly benefits from (in terms of how much they like the photo) AE, a bit of curve reshaping (using a smoothed histogram CDF algorithm or something), and maybe some extra saturation.
worewood|3 years ago
[1] https://youtu.be/88kd9tVwkH8
WaffleIronMaker|3 years ago
[1] https://youtu.be/LQdjmGimh04
smusamashah|3 years ago
I never saw any of these phones altering the details like in article.
shiftpgdn|3 years ago
MarkusWandel|3 years ago
https://www.35mmc.com/10/01/2015/low-light-fun-ilford-hp5-ei...
3200ISO on black-and-white film was pushing it pretty hard. Yet these pictures look good, in a noisy kind of way. Let an algorithm loose on them and it'll "fix" things, first and foremost by smoothing the skin. Even older low-end dedicated digital cameras do this, some brands more than others. The pictures in low light feel more like a badly done painting than a good, honest, albeit noisy photo. One possibility is that the noise from a digital sensor is not as uniformly pleasing as that from film, so it must be masked.
macshome|3 years ago
"Phones take amazing snapshots, but dedicated cameras can make better photographs."
The new smartphone cameras are capable of pretty amazing things and they can extend taking good pictures to a whole new audience. If you need the control though that large sensors and specific lenses can bring then you will need a dedicated camera.
bee_rider|3 years ago
pancrufty|3 years ago
astrange|3 years ago
Apple power adapters actually have a bunch of text on them printed in unreadable light gray; you can try shooting them and while they're a bit clearer than plain old eyesight, they're still pretty unreadable.
account42|3 years ago
This applies to dedicated cameras too though - physical image stabilization can compensate for camera motion but not for subject motion. The difference is that a) physical IS can compensate throughout each exposure, not just between exposure and b) the photographer is not bound to a black box algorithm but can instead use his own a priori knowledge to align the images if needed.
wyager|3 years ago
foldr|3 years ago
From this way of looking at things a normal long exposure also imposes a prior assumption (that nothing is moving). It's just that we're used to the artefact that's generated when this prior isn't true (motion blur).
whywhywhywhy|3 years ago
As someone who upgrades every several years I’ve been wondering how people who upgrade every year and rave about the camera being better are even seeing at this point.
(Stills only I’m talking about)
npteljes|3 years ago
Terretta|3 years ago
Also, the camera, lenses, and sensors don't all update every year. Early on in Apple's tick-tock approach to design iteration, camera updates were the "s" models ("tock") in release cycle.
Now they seem to be just incrementing the number and you have to pay attention to what if any changes they make. This time they did 4x pixels and do pixel binning for regular shots and low light.
substation13|3 years ago
AstixAndBelix|3 years ago
Of course you cannot really compensate for the lens not resolving enough detail, or not focusing close enough; but since the almost totality of photos taken on a phone will be seen on another mobile device these are the less important bits of the equation. Correct exposure and good colors always look good, regardless of how much you zoom the photo. OP's use case is very limited, and unfortunately didn't provide enough context about the nature of the photo.
fleddr|3 years ago
One aspect that is little discussed is the inflated quality perception of such a photo when seen on the actual device, an iPhone in this case.
iPhones have an incredible screen. OLED, wide gamut color, high PPI. A photo looks radically better on an iPhone compared to opening the same photo on a standard monitor.
wyager|3 years ago
pvillano|3 years ago
A photon can be any color of the rainbow. The reason ink and TVs can get away with using only 3 colors is because our eyes only have 3 types of receptors (cone cells). Each receptor responds to a range of wavelengths. "In-between" wavelengths will trigger multiple receptors. For example, a TV can send a mix of red and green photons and create the same brain signals as yellow photons would. Animals with more types of receptors, such as bees or the mantis shrimp, wouldn't be fooled by a TV with only three base colors.
A camera's sensor performs the same lossy compression as our eyes. Light comes into the camera in a range of wavelengths, and triggers each type of pixel a different amount. Each type of pixel has a sensitivity curve engineered to resemble the sensitivity curve of one of the cone cells in our eyes.
Understanding that natural light isn't just red, green, and blue makes it clear why chromatic aberration can't be fixed computionally. A green pixel can't know when it's receiving green photons that are perfectly aligned, or yellow light that needs to be destorted.
P.S. There cameras that can "see" a greater range of colors. Search for "spectral cameras" and "infra-red goggles"
PPS This is also why a RGB light strip might look white, but objects illuminated by it might look odd. You might be familiar with the fact that a blue object illuminated by a red light will look black. For the same reason, it's possible for a yellow object to be eliminated by red, green, and blue light and still look black.
PPPS This is also why custom wall paints are a mixture of more than three colors. Two paints may look completely the same, but objects illuminated by the light bounced off the walls look completely different.
PPPPS This is also why high-CRI lightbulbs are a thing. If you get something hot, like the sun or a tungsten filament, it will release photons with a wide range of wavelengths. Neon tubes and LEDs emit only a narrow band of wavelengths, so they must be coated with phosphors that fluoresce, i.e. emit light at a different wavelength than they absorbed. Using more kinds of phosphors is more expensive, but it makes it more likely that whatever object is illuminated receives all the wavelengths it is able to reflect.
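A tiny numerical illustration of the metamerism point above (the Gaussian cone and primary curves are made up for the example, not real CIE data): a narrow-band yellow and a three-primary mix with completely different spectra can produce identical cone responses.

    import numpy as np

    wavelengths = np.arange(380, 781)  # visible range, nm

    def gaussian(center, width):
        return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

    # Toy stand-ins for the L, M and S cone sensitivity curves
    cones = np.stack([gaussian(565, 40), gaussian(540, 40), gaussian(445, 30)])

    def cone_response(spectrum):
        # Each cone integrates the incoming spectrum against its sensitivity curve
        return cones @ spectrum

    # Narrow-band "real" yellow light vs. a display's three narrow primaries
    yellow = gaussian(580, 5)
    primaries = np.stack([gaussian(620, 10), gaussian(545, 10), gaussian(450, 10)])

    # Solve for primary intensities that reproduce the yellow light's cone responses
    A = cones @ primaries.T                 # 3x3: each cone's response to each primary
    weights = np.linalg.solve(A, cone_response(yellow))
    mix = primaries.T @ weights             # a completely different spectrum...

    print(cone_response(yellow))            # ...yet the cone responses match
    print(cone_response(mix))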
manv1|3 years ago
Once they find a way to interact with the processing engine, the quality will jump again.
For the vast majority of users, the phone camera is super awesome and just fine.
michrassena|3 years ago
nomel|3 years ago
zeckalpha|3 years ago
Mirrorless cameras might have been delayed if it weren't for the competition from phones. DSLRs were only around for a few years before camera phones.
Tepix|3 years ago
onphonenow|3 years ago
I’m not sure what the pipeline looks like, but I thought this type of situation was where ProRAW was supposed to be used?
CarVac|3 years ago
This gives you a clean, low-noise image with editing flexibility, but it does have the flaws of deconvolution, stacking, and AI denoising.
Actual raw from a cell phone is insanely noisy and hideously soft from diffraction in the best of cases.
alistairSH|3 years ago
90% of the time, my iPhone photos are fine straight out of the camera as HEIC. But every once in a while, I get something like is described here (or in several other recent similar articles).
_aavaa_|3 years ago
Even for the "true raw" ones, I don't know if they're truly raw. Do they have distortion and light fall-off correction applied?
[0]: https://ai.googleblog.com/2021/04/hdr-with-bracketing-on-pix... [1]: https://dl.acm.org/doi/10.1145/3355089.3356508
hedgehog|3 years ago
Traubenfuchs|3 years ago
The most infuriating thing is that, if you're quick enough, you can usually see the image before the post-processing kicks in, and it looks sharp and good; but this trash software can't be turned off.
zimpenfish|3 years ago
Are you talking about proRAW? Or JPEGs straight from the Camera app?
mortenjorck|3 years ago
It would be really interesting, though, to see an image signal processing expert weigh in on what the algorithm(s) are actually doing in this case.
wnkrshm|3 years ago
One very interesting one is ptychography (in microscopy often Fourier ptychography, since you can use Fourier optics to describe the optical system [0]), which uses a model of the optical system to get an image (iirc 7-10x the resolution) out of many blurry images, while knowing a bit about the optics in front of your image sensor. It can also work in remote sensing to some degree (better with coherent illumination, though).
Edit: This is not just averaging or maxing pixels; it reconstructs the image using phase information recovered from low-res pictures taken with different, known illumination or camera positions.
[0] https://www.youtube.com/watch?v=hece_x37ITg
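For the curious, a very rough sketch of the alternating-projection style recovery loop used in Fourier ptychography (toy code; the flat initial guess, the argument names, and the integer sub-band offsets are simplifications of the real thing):

    import numpy as np

    def fp_recover(low_res_stack, offsets, hi_shape, pupil, n_iters=20):
        # low_res_stack: measured low-res intensity images
        # offsets: (row, col) of each capture's sub-band in the high-res spectrum
        # pupil: boolean aperture mask, same shape as one low-res image
        h, w = low_res_stack[0].shape
        # Start from a flat guess of the high-resolution object spectrum
        spectrum = np.fft.fftshift(np.fft.fft2(np.ones(hi_shape, dtype=complex)))

        for _ in range(n_iters):
            for img, (r, c) in zip(low_res_stack, offsets):
                sub = spectrum[r:r+h, c:c+w] * pupil        # sub-band seen by this capture
                low = np.fft.ifft2(np.fft.ifftshift(sub))   # predicted low-res field
                # Keep the estimated phase, enforce the measured amplitude
                # (img holds intensities, hence the square root)
                low = np.sqrt(img) * np.exp(1j * np.angle(low))
                new_sub = np.fft.fftshift(np.fft.fft2(low))
                spectrum[r:r+h, c:c+w][pupil] = new_sub[pupil]
        return np.fft.ifft2(np.fft.ifftshift(spectrum))     # high-res complex image

Each low-res capture constrains a different patch of the object's Fourier spectrum, which is why the stitched result ends up with far more resolution than any single frame.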
londons_explore|3 years ago
They'll all be manufactured as a one-piece glass moulding and a single CCD chip, and the whole thing will be very cheap to make, having moved all the difficulty into software.
kblev|3 years ago
npteljes|3 years ago
https://opencamera.org.uk/
wyager|3 years ago
hapticmonkey|3 years ago
hilbert42|3 years ago
I have previously discussed having taken photos with a smartphone where certain objects within some images have been so modified by the processing algorithm as to be almost unrecognizable, so I won't repeat those various scenarios here. Instead, I'd like to dwell on the implications of algorithmic image processing for a moment.
Let's briefly look at the issues:
1. Despite a recent announcement by Canon about a large increase in dynamic range in imaging (https://news.ycombinator.com/item?id=34527687), I'm unaware of any current imaging-sensor breakthrough that would vastly improve both resolution and dynamic range. Thus, essentially, we have to live with what we're already capable of physically squeezing into our present smartphones.
2. Manufacturers are improving both image sensors and optics, but only incrementally. Thus, with current tech and in the absence of truly significant breakthroughs, we have to live with the limitations as outlined in the article (aberrations, lens flare, sensor insensitivity, etc.).
3. Essentially, we're stymied both by the limitations of current tech and by physical (smartphone) size. Usually, to overcome such limitations, we'd fall back on the old truism 'there's no substitute for capacity' and just make things bigger, as we did with photographic emulsions, past camera lenses, loudspeakers, pipe organs, etc., but that's not possible here.
4. Outside of incremental improvements in hardware, the Law of Diminishing (hardware) Returns having arrived, manufacturers have had to resort to computational methods. The trouble is that, with the present algorithms, the Law of Diminishing (computational) Returns also seems to be upon us already. So what does this mean? Quo vadis?
5. Clearly, in its current form computational/algorithmic processing has hit a stumbling block, or at least a major hiatus. Here, further incremental improvements using current methods are likely, and there's little doubt they'll be applied to recreational photography (smartphones and such). Unfortunately, however, we now have a serious (and very obvious) problem with the authenticity of images taken by these cameras.
Simply, when software starts guessing what's within images, we've not only lost visual authentication but we also have serious downstream issues. It raises questions about whether or not photographic evidence based on computational imaging can be relied upon, or even submitted, as evidence in a court of law (I'd reckon that, without ancillary corroborating/conjunctive evidence, such images would not pass muster if the Rules of Evidence tests were applied).
How serious is this? Clearly, it depends on circumstance, but long before 'guessing what's in the image' became the vogue, simple compression was already 'suspect' in, for example, serious surveillance work, because compression artifacts in an image raised doubts as to what objects actually were: could objects be identified with 100% certainty, and if not, what confidence figure could be placed on such measurements/identifications?
(Such matters are not hypotheticals or idle speculation; I recall in nuclear safeguards a debate over compression artifacts in remote monitoring equipment. Here, authenticating and identifying objects must meet strict criteria, and a failure to authenticate (fully identify) them means a failure of surveillance, which is a big deal! For example, the failure to distinguish between, say, round cables and pipes with 100% certainty could be a serious problem, as the latter could be used to transport nuclear materials, and thus it'd be deemed a failure of surveillance. That's not out of the bounds of possibility in a reprocessing plant.)
Obviously, the need to authenticate what's in an image with 100% certainty isn't a daily occurrence for most of us, but as these tiny cameras become more and more important and ubiquitous, we'll start seeing them used in areas where their images must be able to be authenticated.
Post haste, we need rules and standards about how these computational algorithms process images and how they should be applied.
6. What's the future? On the hardware side we need better sensors with higher resolution and sensitivity, and improved optics (that, say, use metamaterials, etc.). Such developments are on their way, but don't hold your breath.
Computational/algorithmic processing has the potential to do much, much better, but again don't hold your breath. There's considerable potential to correct focus and aberration problems etc. using both front-end and back-end computational methods (front-end: correcting lenses etc. on the fly; back-end: post-capture processing), but much work still has to be done. Note: such methods also don't rely on guessing.
What people often forget is that when a lens cannot fully focus or suffers aberrations, etc. information in the incoming light is not lost—it's just jumbled up (remember your quantum information theory).
In the past, untangling this mess has been seen as an almost insurmountable problem, and it's still a very, very difficult one to resolve. Nevertheless, I'd wager that eventually computational processing of this order will be commonplace; moreover, it'll likely provide some of the most significant advances in imaging we're ever likely to witness.
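As one concrete example of that kind of non-guessing untangling, here's a minimal sketch of Wiener deconvolution with a known (or measured) point spread function; the function and parameter names are illustrative only:

    import numpy as np

    def wiener_deconvolve(blurred, psf, snr=100.0):
        # Embed the PSF in a full-size array and centre it on the origin so
        # that its FFT represents the blur the optics applied to the scene
        kernel = np.zeros_like(blurred, dtype=float)
        kernel[:psf.shape[0], :psf.shape[1]] = psf
        kernel = np.roll(kernel, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)),
                         axis=(0, 1))

        H = np.fft.fft2(kernel)
        G = np.fft.fft2(blurred)
        # Wiener filter: acts like 1/H where the lens passed signal cleanly,
        # but backs off where |H| is small so noise isn't amplified
        F_hat = np.conj(H) * G / (np.abs(H) ** 2 + 1.0 / snr)
        return np.real(np.fft.ifft2(F_hat))

It only redistributes information the lens actually delivered; where the optics passed nothing (|H| near zero) it backs off rather than inventing detail, which is exactly the distinction between this class of correction and the guessing discussed above.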
formerly_proven|3 years ago
Two interesting developments here are the pixels in Starvis 2 sensors, which, as a first afaik, use a 2.5D structure to increase full-well capacity by a lot. And another, non-production sensor by Sony where they developed a self-aligning process and the pixels are actually split across two layers, with the top layer carrying only the photodiode and the bottom layer entirely dedicated to the readout transistors. That's promising for lower readout noise and also for increasing full-well capacity.
swayvil|3 years ago
How far can a noise reduction algorithm go? Can we use a white painted wall as a mirror?
michrassena|3 years ago
aaroninsf|3 years ago
abc_lisper|3 years ago
killjoywashere|3 years ago
You mean, "resolution"?
jsmith99|3 years ago
ipsum2|3 years ago
bediger4000|3 years ago
swayvil|3 years ago
moistly|3 years ago
The word “simply” is doing a lot of work in that last sentence, I’m sure!
https://plato.stanford.edu/entries/computational-philosophy/
(BTW, I just posted the same to the front page; if the subject interests you and we're lucky, it'll generate some discussion.)