
Log is the "Pro" in iPhone 15 Pro

1163 points | robenkleene | 2 years ago | prolost.com

409 comments

[+] ta8645|2 years ago|reply
I've never owned an Apple device. I don't take photographs or video with my phone very often. But this video presentation was captivating. It was clear, concise, without any nonsense, and thoroughly interesting.
[+] pen2l|2 years ago|reply
The guy in the video is Stu. Not only does he have an impressive resume (https://www.imdb.com/name/nm0556179/ known for originality, e.g. he did Sin City's look), he is also the original author of Magic Bullet, one of the most widely used pieces of software in the industry for doing easy color work. If there's one person who knows about color work, LUTs in creative work, color encoding systems, etc., it's him, so naturally he knows how to present the relevant subject matter without nonsense.
[+] dclowd9901|2 years ago|reply
I don’t know if it’s the same thing, but capture on my Nikon D7100 always felt more “manipulable” than capture on an iPhone or the like, which I suspected was a downstream effect of using RAW format with a larger image sensor. Interpreting log through this understanding, this post felt pretty intuitive to read. I don’t know if it’s accurate, but it feels accurate…
[+] Graziano_M|2 years ago|reply
This video is excellent. About halfway through I was thinking, "Oh so this is like RAW for video" and then seconds later he gets to explaining how it's not exactly RAW.
[+] thrdbndndn|2 years ago|reply
The concept of Log seems needlessly confusing from a (still) digital image processing perspective, in which I have some experience.

Firstly, the name is "Log" (for logarithmic), but isn't that what gamma has done in color spaces like sRGB since forever? "Normal" video standards like BT.709 also have non-linear transfer functions. I don't get why "log" is stressed here. Maybe it just means a different/higher gamma coefficient (the author didn't talk much about the "log" part in the article).

And the main feature of it, at least according to this article, is that it clips the black and white levels less, leaving more headroom for post-processing.

This is definitely very useful (and is the norm if you want to do something like, say, high-quality scanning), but I fail to see how it warrants a new "format". You should be able to do that with any existing video format (given you have enough bit depth, of course).

[+] kllrnohj|2 years ago|reply
For some reason you're getting a lot of wrong or just bad replies. But the answer to your question is: yes, both sRGB/gamma 2.2 and log are non-linear, but almost in opposite directions. Gamma 2.2 is a power curve, not logarithmic. As in, it spends most of its bits on the lower half of the brightness range, whereas log actually spends more bits on the highlights.

It actually looks more like HLG in this way.

https://www.artstation.com/blogs/tiberius-viris/3ZBO/color-s... has some plots of the curves to compare visually
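The difference in where the two curves spend their code values can be sketched in a few lines of Python. The sRGB formula below is the standard one; the log curve is a made-up illustrative one (not Apple Log or any camera's actual transfer function):

```python
import numpy as np

# sRGB opto-electronic transfer function (the standard piecewise curve).
def srgb_encode(linear):
    linear = np.asarray(linear, dtype=float)
    return np.where(linear <= 0.0031308,
                    12.92 * linear,
                    1.055 * np.power(linear, 1 / 2.4) - 0.055)

# A toy logarithmic curve (illustrative only, NOT a real camera log).
def toy_log_encode(linear, a=8.0):
    linear = np.asarray(linear, dtype=float)
    return np.log1p(a * linear) / np.log1p(a)

# How much of the 0..1 code range each curve spends on the darkest
# 10% of scene light: sRGB devotes noticeably more of its codes there,
# so the log curve has relatively more left over for the highlights.
dark = 0.10
print(f"sRGB code at 10% light: {float(srgb_encode(dark)):.3f}")
print(f"log  code at 10% light: {float(toy_log_encode(dark)):.3f}")
```

Both curves map 0 to 0 and 1 to 1; the interesting part is how the code values are distributed in between.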

[+] jlouis|2 years ago|reply
> This is definitely very useful (and is the norm if you want to do something like, say, high quality scanning), but I failed to see how it warrants a new "format".

This warrants a separate answer. Cameras are getting to the point where they can capture far more information than we can display. Hence, we need a lot of bit depth to accurately store this added precision. But adding bits to the data signal requires a lot of extra bandwidth.

In principle, we should just store all of this as 16/32bit FP, and many modern NLEs use such a pipeline, internally. But by creating a non-linear curve on integer data, we can compress the signal and fine-tune it to our liking. Hence we can get away with using the 8-12bit range, which helps a lot in storage. With log-curves, 12bit is probably overkill given the current sensor capabilities.

There's a plethora of log-formats out there, typically one for each camera brand/sensor. They aren't meant for delivery, but for capture. If you want to deliver, you'd typically transform to a color space such as rec.709 (assuming standard SDR, HDR is a different beast). The log-formats give you a lot of post-processing headroom while doing your color grading work.
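The point about getting away with 8-12 bits can be demonstrated with a toy log curve (illustrative only, not any vendor's actual log format): quantizing a deep shadow through a log curve preserves far more relative precision than quantizing it linearly at the same bit depth.

```python
import numpy as np

BITS = 10
LEVELS = 2 ** BITS - 1  # 1023 code values

def quantize(x):
    """Round to the nearest 10-bit code value and back."""
    return np.round(x * LEVELS) / LEVELS

# A toy log encode/decode pair (illustrative only, not a real camera log).
A = 64.0
def log_encode(linear):
    return np.log1p(A * linear) / np.log1p(A)

def log_decode(code):
    return np.expm1(code * np.log1p(A)) / A

scene = 0.005  # a deep shadow: 0.5% of full scale

# Store it linearly in 10 bits:
lin_err = abs(quantize(scene) - scene) / scene

# Store it through the log curve in 10 bits:
log_err = abs(log_decode(quantize(log_encode(scene))) - scene) / scene

print(f"relative error, linear 10-bit: {float(lin_err):.3%}")
print(f"relative error, log 10-bit:    {float(log_err):.3%}")
```

The log path loses far less of the shadow value, which is exactly the headroom that survives into grading.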

[+] jlouis|2 years ago|reply
The transfer function in your (rec.709) color space is indeed non-linear. However, the pixel values you store are in a linear relationship with each other. The difference between values 20 and 21 is the same as the difference between values 120 and 121, assuming an 8-bit signal. I.e., the information is the same for all pixels. Further down the chain, these values are then mapped onto a gamma curve, which is non-linear.

What the "log"-spaces are doing is to use a non-linear relationship for the pixel values, as a form of lossy compression. If the signal has to factor through 8bit values, using a compression scheme before it hits the (final) gamma curve is a smart move. If we retain less precision around the low and high pixel values and more precision in the middle, we can get more information from the camera sensor in a certain region. Furthermore, we can map a higher dynamic range. It often looks more pleasing to the eye, because we can tune the setup such that it delivers a lot of precision and detail where our perception works the best.

In short: we are storing (8-bit/10-bit) pixel values. The interpretation of these values is done in the context of a given color space. In classic (rec.709) color spaces, the storage is linear and then mapped onto a non-linear transfer function. In the "log" spaces, the storage is non-linear and is then mapped onto a non-linear transfer function. In essence, we perform lossy compression when we store the pixel in the camera.

[+] ben7799|2 years ago|reply
RAW formats on digital cameras are also storing data in a log format. The RAW conversion process normally converts that to a color space, along with (for most cameras) running the de-Bayer algorithm.

The built in converter that produces JPG files in the camera does this too.

Our eyes perceive light as linear when it's really logarithmic.

There is really no difference between video and stills here; it's just that dealing with RAW formats is more normalized at the consumer level for stills at this point.

[+] dist-epoch|2 years ago|reply
> but I failed to see how it warrants a new "format". You should be able to do that with any existing video format

It's about support.

The .zip format supports LZMA/ZStandard compression and files larger than 4 GB. But if you use that, a lot of software with .zip support will fail to decompress them.

The same goes for log. While in theory you could probably make .mp4 or .mkv files with H.264 encoded in log, I bet a lot of apps would not display them correctly, if at all.

[+] alok-g|2 years ago|reply
From the article:

>> ... in DaVinci Resolve ... choose Apple Log for the Input Gamma ...

Indeed, it just sounds to be a different choice of the curve, perhaps more suited for the HDR capabilities available today.

PS: I did not read the article in detail. My first reaction was to just search there for 'gamma' in the article to see how 'log' is being compared to it.

[+] OJFord|2 years ago|reply
> Standard iPhone video is designed to look good. A very specific kind of good that comes from lots of contrast, punchy, saturated colors, and ample detail in both highlights and shadows.

I remarked to my wife, who was showing me a video recently, that you could tell it was taken on an iPhone. I don't think it's just the 'punchiness'; for me the main thing is the way it seems to attempt to smooth out motion. The 'in' thing seems to be to sort of spin around showing what's around you while selfie-vlogging and tik-tokking and what-notting, and iPhones make it look like you did it with a steadicam rig that's not quite keeping up.

[+] pnpnp|2 years ago|reply
Another thing they've done more recently is HDR video (to my cave man brain, this means brighter brights).

They've paired this with much higher brightness on the screens, which makes the videos look much more realistic. I first noticed this on my M1 Pro screen, which absolutely blew me away (1600 nits peak brightness).

That's the biggest telltale "filmed on iPhone" trait I'm noticing right now. Yes, you can create HDR videos in other ways, and I'm sure it will be more popular on other platforms soon.

[+] mattigames|2 years ago|reply
Someone used an iPhone to record their desktop screen playing Call of Duty, and the top comment on Reddit was how it made the game look Disneyesque: a spot-on assessment.
[+] throw0101a|2 years ago|reply
> I remarked to my wife showing me a video recently that you could tell it was taken on an iPhone

It's also relatively well understood that certain camera companies (Nikon, Canon, Sony, Fuji) have a certain 'look' to them in how they process the raw image sensor data to generate a JPEG (there are differences in the final colours).

[+] DrawTR|2 years ago|reply
I know exactly what you mean by this! I can always tell if it was taken on an iPhone -- not that it looks bad, or anything, but there's always a few little cues that make it obvious. As you mentioned, I think the motion is a large part of it.
[+] leokennis|2 years ago|reply
To add: a few generations ago, hand-held video shot on iPhones was not (or hardly) effectively stabilized. But now iPhones have good stabilization. I think the tradeoff (the too-smooth motion thing) is worth it.
[+] basisword|2 years ago|reply
That’s a specific camera mode (action mode I think). Does the standard video mode also do heavy stabilisation?
[+] amaterasu|2 years ago|reply
If I was a prosumer/hobbyist video equipment company, I'd be terrified about what Apple does next. They already have significant penetration into the editing market (both with Final Cut, and codec design), they control a number of the common codecs, and they have _millions_ of devices in the field along with substantial manufacturing capability. The cinema end aren't in trouble yet IMO, but the rest should be concerned...
[+] HALtheWise|2 years ago|reply
It's always surprised me that there's not more interest in log-scale/floating-point ADCs built directly into camera sensors. Both humans and algorithms care a lot more about a couple-bit difference in dark areas than light, and we happily use floating point numbers to represent high-range values elsewhere in CS.
[+] piperswe|2 years ago|reply
I didn't know it could record straight to USB-C storage! That gets rid of a major reason to spend crazy money on a 1TB phone, and it's definitely a game changer for anyone shooting 4K ProRes.
[+] pyrophane|2 years ago|reply
> With its high bit depth and dynamic range, log footage has many of the benefits of raw. But Apple Log is not raw, and not even “straight off the sensor.” It’s still heavily processed — denoised, tone-mapped, and color adjusted.

I wonder if this is because, at the end of the day, it is still a tiny little camera with a small sensor and a small lens, and so without the processing magic the image would look pretty terrible under most circumstances.

[+] ayoisaiah|2 years ago|reply
Log seems like a strong reason to finally switch from Android to iPhone if you're a photography/filmmaking enthusiast like myself. The ecosystem is so much more mature and the gap seems to be growing not shrinking.

Android has Raw Video with MotionCam which also produces insanely good results¹ (even better than iPhone's ProRes video), but everything else just sucks.

[1]: https://youtu.be/O5fnGDR4i9w?feature=shared

[+] sandworm101|2 years ago|reply
I am not an influencer. I am not a fashion model. I am not an interior designer. I don't use my cellphone camera to generate "content". I use it to document things. I need it to take clear pictures that accurately represent things that I see. We are now moving away from auto-focus and auto shutter speeds toward on-the-fly retouching and editing of material by the camera. This is dangerous. Pictures taken by such cameras can no longer be considered accurate representations. Correction of shadows, the replacement of dull color with vibrant, the smoothing of textures ... every photo is now a work of art crafted by the machine. They are a distorted representation. This will come back to haunt us.

Think of this: a cop body camera that auto-adjusts faces to display them more clearly at night. Sounds like a good idea. Then something happens. The cop says "I couldn't see the guy's face" but the body camera shows the face clear as day. Yes, the camera did take a more clear and useful photo, but it is not a proper depiction of the reality experienced by the officer.

[+] willio58|2 years ago|reply
> every photo is now a crafted work of art by the machine.

This was always the case. Unless you have a very specific camera setup where you're trying to avoid it, there have always been certain characteristics that come through in photos from cameras. In fact, it's the main selling point of some cameras. Hasselblad, Polaroid, Canon, and Sony all have their own 'looks' when it comes to output.

> The cop says "I couldn't see the guy's face" but the body camera shows the face clear as day.

I'll use a similar but opposite argument here. Ever since iPhones came out, they could never really capture dark-skinned people as we see them with our eyes. Unless you had perfect lighting, you could clearly see issues with the sensor catching the contrast in their faces. With all the retouching you speak of, iPhones have gotten much better at showing some people closer to how we see them in reality. So when that cop claims "I couldn't see the guy's face, the damn camera is too good!", I'd be very hesitant to believe him.

[+] lang_agnostic|2 years ago|reply
> We are now moving away from auto-focus and auto shutter speeds toward on-the-fly retouching, editing, of material by the camera

This is a great point but it's not what the article is about. This is about bringing existing features of digital cinema cameras to a portable phone.

[+] cptskippy|2 years ago|reply
> I need it to take clear pictures that accurately represent things that I see. We are now moving away from auto-focus and auto shutter speeds toward on-the-fly retouching, editing, of material by the camera. This is dangerous.

You could argue that up until now you were not able to take photos or video that accurately represented the world you see, but only through the rose-colored lenses of the device manufacturer. The photos and videos that you take today with your phone or camera have distortions applied automatically, based on presets provided by the software used to capture the media. Sometimes you get options like Vibrant, Indoor, Portrait, and Landscape mode to choose how the images or video are manipulated. You don't get to see what the camera actually saw, only what the device manufacturer wants you to see.

Log video is like Raw photos. As this capability becomes more prevalent, I could see it becoming a requirement for criminal investigators and others to capture evidence using a Log or Raw mode.

What I would argue is that, if it's not there already, we need signatures and metadata stored in the EXIF of captured photos and video that tell how the image was captured. With that you could determine to what extent the media has been manipulated.

[+] blurri|2 years ago|reply
Your camera (including film cameras) never could take a fully accurate picture representing what you see. Digital sensors and film don't perceive what our eyes do. It's always been up to you, the photographer, to ensure that. If you choose to shoot on auto, that's your choice to let the camera guess at the accuracy. Most people don't like actual reality, so they under-, over-, long- and short-expose to choose what reality they represent. They light things artificially and they put makeup on. They might even stage scenes. Even in the pure film days, humans altered the output: whether for realism or artistic purposes, dodging and burning were effectively retouching practices in film.

Yes, smartphone cameras are using computation to get a more "correct" output, unless it's being marketed as a feature to alter the image, such as face smoothing. Camera makers are always trying to make their sensors (or film) better perceive the range our human eyes can, or at least give us the choice, through data, to make the decision between realism and art.

Your bit about the police officer is 100% irrelevant to your main point.

[+] zerd|2 years ago|reply
I was trying to take a passport photo, and one of the requirements is "has not been touched up". But when I took the photo with my phone, I noticed that it had very helpfully touched up my face by removing almost all of my wrinkles and making my skin nice and soft. Even with all "enhancements" off. This was on a Samsung S10. I tried with an iPhone SE; it was slightly less visibly touched up, so I used that, but it still definitely had a "beauty" filter built in. It's probably implemented in an ASIC, so you basically can't turn it fully off.
[+] igornadj|2 years ago|reply
If your need to document the world accurately is important to you, you should be using a dedicated device for that purpose.
[+] kqr|2 years ago|reply
Almost all the benefits mentioned in the video are (a) lack of post-processing and (b) high dynamic range. Is that what "log" means in videography?
[+] jorlow|2 years ago|reply
Log is lower contrast, so it's less likely to clip (be a fully saturated color, or pure white or black). And clipping inherently limits your max dynamic range.

Log also means a "look" is not baked into the image. Since you're starting from scratch, it's 1) easier to tweak the images so you can cut between two cameras from different manufacturers without distracting differences, and 2) possible to give the image more of your personality.

As a general note, I've found that in the world of "cinematography", tech terms aren't used very rigorously and there's a lot of cargo cult which comes from the benefit of one tech being conflated as a benefit of something else. It's often hard to sift through the noise when learning.
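The clipping point above is easy to see numerically. A minimal sketch, with a display-style power curve and a toy log curve that reserves headroom above diffuse white (both curves are illustrative, not real camera math):

```python
import numpy as np

highlight = 4.0  # scene-linear: 4x brighter than diffuse white

# Display-referred encode (Rec.709-style power curve) clips at 1.0:
display = min(highlight ** (1 / 2.4), 1.0)  # detail above white is gone

# A toy log encode with 16x headroom above white (illustrative only):
A = 64.0
log_code = np.log1p(A * highlight) / np.log1p(A * 16)

# The highlight survives (code < 1.0) and decodes back to the original:
recovered = np.expm1(log_code * np.log1p(A * 16)) / A
print(display, float(log_code), float(recovered))
```

The display encode returns exactly 1.0 (the highlight is indistinguishable from white), while the log code stays below 1.0 and can be graded back down later.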

[+] jlouis|2 years ago|reply
In videography the term "log" is heavily overloaded and you'd want to ask for more detail in order to figure out exactly what is meant.

A pixel value, be it integer or floating point, means little on its own. There's a context for that value which is a color space. In the typical process, you have several color spaces in play: the camera has one for capture. There's one for color processing (the "working" space). And there's one for the display. When a pixel goes through the pipeline, it's processed via color space transformations.

In the "classic" color spaces, the pixel values have a linear relationship, and all of them carry the same amount of information. The "log" color spaces all have a non-linear (gamma) curve: they retain less information at very low and very high pixel values, but subsequently retain more information in the middle. It's a form of compression.

The human eye doesn't respond equally to all levels of brightness, so throwing away detail at the ends for more detail in the middle is usually a great choice. We retain information in the signal at the brightness level where the eye is able to perceive small details and texture, while throwing away information in the signal where it isn't.

We can now map more dynamic range into the same number of bits, thanks to our non-linear compression. How large that dynamic range is, is given by the underlying color space we are operating in.

If you go up in camera quality, you will typically see pixels use 10bits or more for their values. Combined with a log-curve, this leads to more information density, which allows capture of an even higher dynamic range. In turn, post-processing can now fix e.g. exposure to a much larger extent.

Finally, a LUT is a piecewise-linear approximation. A "real" color space transformation will use the underlying mathematical curves for much greater precision.
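That last point (a LUT as a linear approximation of the real curve) can be checked directly. Here a small 1D LUT is sampled from a toy decode curve and applied with linear interpolation, then compared against the exact curve (the curve itself is illustrative, not a real camera's):

```python
import numpy as np

# Exact transform: a toy log-to-linear decode curve (illustrative only).
A = 64.0
def exact_decode(code):
    return np.expm1(code * np.log1p(A)) / A

# Build a small 1D LUT by sampling the exact curve at 33 points, then
# apply it with linear interpolation between the sample points.
lut_in = np.linspace(0.0, 1.0, 33)
lut_out = exact_decode(lut_in)

def lut_apply(code):
    return np.interp(code, lut_in, lut_out)

# Worst-case error of the LUT against the exact curve:
codes = np.linspace(0.0, 1.0, 1001)
err = float(np.max(np.abs(lut_apply(codes) - exact_decode(codes))))
print(f"max error of a 33-entry LUT: {err:.1e}")
```

The error is small but nonzero, which is why serious color pipelines prefer the mathematical transform (or a much denser LUT) over a coarse one.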

[+] rebuilder|2 years ago|reply
No, ”log” just means some form of logarithmic response curve when encoding color data. You don’t necessarily get better dynamic range per se, but you get a more useful distribution of the light samples your sensor is taking.
[+] alexashka|2 years ago|reply
> Is that what "log" means in videography?

> a) Lack of post processing

No. The absence of processing (modifications to make it look 'better') is the default for all non-consumer devices.

> b) high dynamic range

Yes. In practice log is about choosing which bits of color information to retain and which to throw out, to optimize for space.

Log optimizes for retaining detail in very dark and very bright areas by sacrificing detail in the midtones.

Non-log optimizes for midtones. That's all it is.

So if you have a high contrast scene (bright blue sky, someone sitting in the shade), you'll want to use log. In an average/regular contrast scene, you use non-log, that way you get more detail in the midtones.

In photography, there is no need to optimize for space (video is at least 24 frames/sec, photography is a few frames/sec at most, usually), so log is not a thing - we just capture all the things, all of the time.

[+] jillesvangurp|2 years ago|reply
A little bit. The log format is non-linear. This means there is more detail in the shadows relative to the really bright areas. This mimics the human eye and brain, which also do not have a linear range of sensitivity.

Basically, the common unit of light in cameras (a stop) is one click on the aperture wheel; e.g. going from f/11 to f/16 halves the amount of light. Some cameras of course have a few settings in between. It looks linear to us but it is effectively logarithmic. The dynamic range of the human eye is much larger than that of the typical camera, screen, or print medium. The human eye has a range of about 20-22 stops (between black and white). A good camera might get between 12 and 14 stops. A decent screen might get to something like 8-10, and print media are more like 5-7. Taking photos and shooting videos involves a lot of creative choices about what looks natural to us. HDR is basically taking and combining multiple exposures in a way that still looks natural to us on a medium that has less dynamic range than our eyes (-ish; a lot of HDR photography looks a bit unnatural for this reason).

Digital photo processing is about compressing and moving light around to make the most of the much more limited dynamic range of the screen or print medium you are targeting relative to the camera that you used to capture that.

When you do that, most of the interesting information is going to be captured in the darker portions of the image. You typically expose for neutral grey values which is only about 18% of the light. That means half of the darker information (shadows) is in that 18% range of values. And the other half is in the brighter part. Except our eyes are much more perceptive of the darker bits. So, a linear format is not ideal to store that. A log format allocates more bits to the dark half and less to the other 82%. That's a good thing because that allows you to do things like brighten shadows and pull out detail there.

The log format does this by applying a log function to the raw sensor readings. That's why the footage looks so flat: all the values end up relatively close to the 18% mark (neutral). You "undo" this by applying a suitable LUT that scales the values appropriately. You deepen the shadows to near black and brighten the bright stuff to near white. The difference is that you now have full control over this process; you can move the white, grey, and black points around. And you can apply color math to the log values before you apply the LUT. This is not that different from how you'd process a linear format, except now your starting point is better, as you are using more bits for the darker parts of the image than for the lighter parts. This gives you more of the captured dynamic range to play with in post-processing.

The weakness of the iPhone is that while it stores the log format, it's not really capable of switching between LUTs on camera while you are shooting. I'm guessing this just takes too much CPU/battery. So you have to wait until post-processing to see what the end result is going to look like. Some high-end cameras have a lot of in-camera processing that you can tweak in post-processing.
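The "move the white, grey, and black points around" part follows from a basic property of log encodings: a multiplicative exposure change in linear light becomes an additive offset in log space. A minimal sketch with a pure log2 curve (not Apple Log or any real camera curve):

```python
import math

# With a pure log2 encoding, doubling the light (one photographic stop)
# shifts the encoded value by a constant amount, regardless of where on
# the brightness scale you started.
def log2_encode(linear, black=2.0 ** -14):
    # Clamp to a small noise floor so log2 is always defined.
    return math.log2(max(linear, black))

mid_grey = 0.18
shadow = 0.02

# Add one stop of exposure (2x light) to both values:
grey_shift = log2_encode(mid_grey * 2) - log2_encode(mid_grey)
shadow_shift = log2_encode(shadow * 2) - log2_encode(shadow)

print(grey_shift, shadow_shift)  # both shifts are the same size
```

This is why an exposure correction on log footage is a simple uniform offset, which is much better behaved than trying to fix exposure on display-referred footage.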

[+] t0bia_s|2 years ago|reply
Who is the target audience? Most Apple users won't spend time in post-production color grading their footage. Pros will stay with dedicated technology made for cinematography.
[+] dannyw|2 years ago|reply
Does this also disable the excessive sharpening of iPhone's video processing?

Even 'ProRAW' photos are sharpened and aggressively denoised, which ruins detail.

[+] ChuckMcM|2 years ago|reply
This was a great read, clear, concise, and entertaining. It also struck me how "over powered" phones are these days. I get that the latest chip with the latest GPUs/APUs/IPUs/whatever are really capable, but the weird "I don't need a cinema capable device in my phone" feeling starts to get overwhelming. If nothing else I feel like we're going to force Apple (and others) to go back to easily replaceable battery technology because a "phone" will meet its user's needs for a decade or more and the parts that wear out will need to be replaced several times.
[+] justsomehnguy|2 years ago|reply
The video is so good I watched it to the end despite not even having iPhone nor having any plans to have it or shoot videos. Packed and succinct.

But it makes me wonder how soon we'll see... SSD iPhone cases? Because you can always duct-tape an external drive to the phone, but it would block the screen. *grin* Sure, you can use double-sided tape, but... And slightly tangential: how short and compact can a USB-C cable be? Sure, there are tons of angled ones on the market, but I assume they aren't guaranteed to give you full 10 Gbps Gen 2 speeds.

[+] fallingmeat|2 years ago|reply
I don't know anything about photography, but curious to know what the cheapest "pro" alternative is. The phone is now $1200! This feature is cool, but if you wanted that feature, is it cheaper in a purpose-built device?
[+] hatsune|2 years ago|reply
Log does not work great with heavy computational video right now. That was the case with RAW on phones too, until manufacturers found ways to bake computed (i.e. stacked, stabilised, etc.) data into RAW, like Xiaomi, Huawei, Pixel, and Apple.

That's the weakest point of phones: by exposing the log curve, you showcase exactly the poor latitude of a phone sensor. With the price and the additional rigging (and cooling) required, just save up and get a BMPCC.

The argument for getting out a phone and shooting handheld for night photography (first seen in the Huawei P30) or slow motion, and getting a really great image, is valid.

The argument for getting out a phone with cases, cooling, an external SSD, a mounted battery, matte boxes (especially given the strong glaring on iPhone), and a camera stabiliser (because a phone stabiliser is not tailored for this weight), totalling over 1.5 kg, does not sound valid.

Log is quite useless outside of controlled or professional shooting, as exposure matters a lot more, and the lack of IRE exposure tools (i.e. false color) makes it not feasible.

[+] keyle|2 years ago|reply
Great video btw. Well explained and as succinct as possible.
[+] elAhmo|2 years ago|reply
I am by no means someone who shoots a lot of videos, but I have been playing a lot in the past few days with the camera app mentioned in this article, Blackmagic Camera, and I am super excited to do some shots that might seem a bit more professional.
[+] sudosysgen|2 years ago|reply
It's a bit of a gimmick. These phones just don't have the noise performance to make log video work outside of very, very specific conditions. I had it on my old LG V30 too, and it was only remotely useful in full sunlight (and since we're talking about very low processing, not much has changed since then).

This is inevitable because the noise floor is just too high to have a large usable dynamic range unless illumination is high.

Combined with video compression, it's just not great. It's not really even unique to smartphones; many DSLRs/MILCs had similar issues when they first started supporting log video, but obviously it's going to be much worse for a smartphone.