The article title is a bit misleading; I was expecting an actual simulation of film-stock processing and rendering in Python. This is more about using 3D LUTs than about film simulation itself.
For what it's worth, I've updated the post and put a disclaimer up front:
> Disclaimer: The post is more about understanding LUTs and HaldCLUTs and writing methods from scratch to apply these LUTs to an image rather than coming up with CLUTs themselves from scratch.
The thing about LUTs is that they're only mathematically valid if you know what kind of data the input is, and I don't just mean whether it's an 8-bit or a 12-bit image.
It needs to be aware of the color space and the EOTF (an extended idea of "gamma"), which is why LUTs are only used in very controlled scenarios. In videography, for example, the input color settings are fully specified (e.g. Sony's S-Log), so the LUT is a reproducible, mathematically sound operation.
"RAW" photos from cameras are what we call linear color space, where the RGB values correspond linearly to the amount of light received by each photosite. If you try to use a LUT designed for RAW on an sRGB JPEG image, you're gonna have some problems, at least without screwing with the color space.
It's why I more or less gave up on using LUTs in photo editing; it's just too unreliable.
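To make the color-space point above concrete, here's a minimal numpy sketch (my own illustration, not from the article): decode the sRGB transfer function back to linear light, apply a linear-domain LUT, then re-encode. `lut_fn` is a hypothetical stand-in for whatever lookup you actually use.

```python
import numpy as np

def srgb_to_linear(srgb):
    """Invert the sRGB EOTF: display-encoded values in [0, 1] -> linear light."""
    srgb = np.asarray(srgb, dtype=np.float64)
    return np.where(srgb <= 0.04045,
                    srgb / 12.92,
                    ((srgb + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(lin):
    """Forward sRGB encoding (inverse of srgb_to_linear)."""
    lin = np.asarray(lin, dtype=np.float64)
    return np.where(lin <= 0.0031308,
                    lin * 12.92,
                    1.055 * lin ** (1.0 / 2.4) - 0.055)

def apply_linear_lut(srgb_pixels, lut_fn):
    """Apply a LUT designed for linear data to sRGB-encoded pixels:
    decode, look up, re-encode."""
    return linear_to_srgb(lut_fn(srgb_to_linear(srgb_pixels)))
```

Applying `lut_fn` directly to the encoded values would feed it mid-grey at ~0.5 instead of the ~0.21 linear value it expects, which is exactly the kind of mismatch described above.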
LUTs are quite fun to play with. If you look for videos on "3D LUT Creator" you will find some cool things done using LUTs.
If you are looking for a great and free tool to create LUTs, have a look at https://grossgrade.com/en/
It is not easy to find, IMO; I only knew it existed because I had used it before, and it took me ages to find it again...
Also, while I had no luck with 3D LUT Creator (trial) under Wine, Grossgrade works fine :-)
Some feedback on the image with the caption:
Left – the original image, right – the image after applying the 12-bit identity CLUT
This looks the most convincingly film-like. The last sample (Fuji Velvia 50) absolutely does not look like film (let alone Velvia 50); the main culprit is the shadows underneath the truck. I understand you're just applying RawTherapee's LUT there, but maybe you need to turn the intensity down or play with the brightness.
It skips the interesting question of how the LUT tables are made, but it's still a nice introduction to the topic.
I guess to do film simulation you could photograph a bunch of color calibration targets (e.g. IT8) under different lighting conditions with both the film and the digital sensor, and then try to match them somehow. That is assuming the film is still available.
Film is definitely still available, and you can use a target as a starting point for a LUT, though the workflow is not straightforward. The core of the problem is that you're only sampling a fraction of the number of colors in the LUT, so to derive the rest of the LUT entries you need to interpolate and extrapolate.
The trick is which algorithm you use to take the sparse 3D mesh of the calibration target and warp/interpolate the rest of the values. Trilinear would be the most naïve (and lowest quality) approach.
There's a ton more detail about how to actually match digital to film in Steve Yedlin's blog [1], including a cool video of sparse color interpolation in 3D (toward the bottom of the page).
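As a concrete reference point for "trilinear is the naïve approach", here is a small numpy sketch (my own illustration) of trilinear sampling of an N×N×N×3 LUT. Sampling an identity LUT just returns the input, which makes a handy sanity check.

```python
import numpy as np

def sample_lut_trilinear(lut, rgb):
    """Sample a 3D LUT (shape N x N x N x 3, indexed [r, g, b]) at an
    RGB triple in [0, 1] using trilinear interpolation."""
    n = lut.shape[0]
    pos = np.clip(np.asarray(rgb, dtype=float), 0.0, 1.0) * (n - 1)
    lo = np.floor(pos).astype(int)
    hi = np.minimum(lo + 1, n - 1)
    f = pos - lo                      # fractional position inside the cell
    out = np.zeros(3)
    # Accumulate the 8 surrounding lattice points, weighted by volume.
    for corner in range(8):
        idx, w = [], 1.0
        for axis in range(3):
            if corner >> axis & 1:
                idx.append(hi[axis]); w *= f[axis]
            else:
                idx.append(lo[axis]); w *= 1.0 - f[axis]
        out += w * lut[tuple(idx)]
    return out

# Identity LUT: every lattice point maps to its own colour.
n = 5
r, g, b = np.meshgrid(*[np.linspace(0, 1, n)] * 3, indexing="ij")
identity = np.stack([r, g, b], axis=-1)
```

Since trilinear interpolation is exact for any function that is linear in each channel, the identity LUT round-trips perfectly; the quality problems show up when the LUT encodes strong local curvature between lattice points.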
The Velvia simulation at the end is very good, nice work!
As someone who still regularly shoots film and also owns a Fuji X Series camera, I don't find the film simulations that Fujifilm puts in the X models to be any good, so I feel there is still a lot of worthwhile work to be done here.
I enjoy my Fuji X system camera and the colours it produces. Sometimes, however, I'd like to do some RAW processing on Linux (darktable), but of course this means I lose the in-camera film simulation.
Since the camera can store the same photo in two different formats (RAW+JPEG), I'm wondering whether it would be worthwhile to use a large number of these file pairs to derive a LUT that maps Fuji RAW files to Fuji-like JPEG results.
Is there anybody knowledgeable here to tell me whether this approach is doomed from the start or if it could be promising?
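The pair-matching idea above can at least be prototyped crudely: bin each RAW colour into a lattice cell and average the JPEG colours that land in it, leaving unseen cells at identity. A hedged numpy sketch follows; the binning scheme, grid size, and identity fallback are all my assumptions, and a serious attempt would start from demosaiced, white-balanced RAW values and use smarter interpolation for empty cells.

```python
import numpy as np

def fit_lut_from_pairs(raw_px, jpeg_px, n=17):
    """Estimate an n x n x n LUT from matched pixel pairs.

    raw_px / jpeg_px: (num_pixels, 3) float arrays in [0, 1].
    Each RAW colour is snapped to its nearest lattice point, and the
    JPEG colours landing there are averaged. Cells no pair falls into
    keep the identity mapping, so sparse coverage degrades gracefully.
    """
    axis = np.linspace(0.0, 1.0, n)
    r, g, b = np.meshgrid(axis, axis, axis, indexing="ij")
    lut = np.stack([r, g, b], axis=-1)          # start from identity
    counts = np.zeros((n, n, n))
    sums = np.zeros((n, n, n, 3))
    idx = np.clip(np.round(raw_px * (n - 1)).astype(int), 0, n - 1)
    for (i, j, k), out in zip(idx, jpeg_px):
        counts[i, j, k] += 1
        sums[i, j, k] += out
    seen = counts > 0
    lut[seen] = sums[seen] / counts[seen][:, None]
    return lut
```

Real RAW+JPEG pairs cover colour space very unevenly, so most cells stay at identity; that sparsity is the main obstacle, the same interpolation problem discussed above for calibration targets.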
> I don't find the film simulations that Fujifilm puts in the X models to be any good
I feel like you might be in the minority; I continually hear overwhelmingly positive remarks about Fuji's film simulations. I'm not aware of anything comparable at that price point, and I have yet to see any non-pro film sim come close to the stock Fuji sims.
Back when Instamatic got popular, I was using an N900 and of course didn't have access to the app. So I made my own by piping images over SSH to a server running a couple of ImageMagick scripts that applied one of a few LUTs I'd cooked up, and optionally some vignetting.
It's hard for me to see this as doing a good job of simulating film when it doesn't mention grain. Film has a granular structure where each "pixel" is a grain (crystal). Film is essentially already digital, but with a higher count of less regular pixels.
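As a toy illustration of grain as signal-dependent noise (purely my sketch, not a physical emulsion model; the midtone-weighted amplitude is an assumption):

```python
import numpy as np

def add_grain(img, strength=0.08, seed=0):
    """Overlay monochrome noise whose amplitude peaks in the midtones,
    loosely mimicking where film grain is most visible. `img` is an
    (H, W, 3) float array in [0, 1]. A real emulsion model would need
    spatially correlated, per-layer grain, not i.i.d. noise."""
    rng = np.random.default_rng(seed)
    h, w = img.shape[:2]
    noise = rng.standard_normal((h, w, 1))          # one grain field for all channels
    lum = img.mean(axis=-1, keepdims=True)
    amplitude = strength * 4.0 * lum * (1.0 - lum)  # zero at pure black and white
    return np.clip(img + amplitude * noise, 0.0, 1.0)
```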
You'd expect to be a stop down, because there's so little information to work with in the highest bits of digital capture. (I don't remember gamma being applied, which is instrumental in handling the log/lin mismatch between perception and recording.) There's simply much less data in the highest ranges of digital unless you deliberately go about capturing additional information. ETTR (expose to the right) was the earliest widely adopted technique; the group of photographers I used to follow closely took extensive readings of the precise sensitivity of the RGGB channels, enabling maximum information capture with tuned filtration and custom raw-file converters. When the big camera companies hit the next limit in specifications, I'm hoping they will finally address this capture-optimisation issue, at least by providing better information and interfaces for developers and expert users who program.
Velvia was a very important product...
Velvia was launched by Fujifilm guerrilla-marketing the Los Angeles Olympic Games. Those who remember will recall that Kodak was the huge official sponsor; what my world in design and publishing (and software for the same) felt was a terrible misread of public sentiment by Kodak, and its unmistakable arrogance, quickly dismantled the company commercially thereafter...
How we see the world is a lot more important to people than any research or surveys could establish...
I have been mightily impressed with the latest Fujifilm film-stock emulations on the recently released GFX 100S. This 102MP "medium format" camera, roughly the size of a larger SLR film body, together with the simultaneously launched 85mm f/1.7 lens, is a combination of image-capture capabilities I think many HNers would be interested in if they could get hands-on experience with one.

Optical design is hitting diffraction limits so quickly that the best new lenses often don't become any sharper stopped down than they are wide open; f/2.0 is becoming the sharpest aperture, where historically it was f/8, or very occasionally f/5.6, that was capable of the sharpest picture. For non-photographers: Fujifilm makes, or made, the Hasselblad cameras and lenses since the H series of autofocus models, and is considered possibly the best cine-lens manufacturer if you are simply seeking perfection of sheer resolution.

Thirty years ago the longest Usenet thread on the medium-format digest, entitled "breaking the 50lppmm barrier", ran to 200 printed pages (yes, it was worth printing in its entirety!) and concluded, via countless measurements and calculations, that 50 pairs of separated lines visibly resolved by a lens at one meter from the test chart was as good as it gets. Today 200 line pairs per millimetre is increasingly common; the human eye with average 20/20 vision resolves about 8lppmm at 1m. I'm currently evaluating the purchase of a Fujifilm lens capable of projecting close to 200lppmm onto the sensor right through its zoom range, which is completely phenomenal. Directors of photography have been deploying all manner of tricks to soften the image of actors' faces, e.g. using diffusion tuned only to the wavelengths reflected by human skin; I'm convinced my iPhone is playing with subsurface scattering of burst fill-flash light somehow in portrait mode.
The whole thing with film is the 3D grain structure involved. Technicolor was "only" a halftone matrix of transferred organic dyes in the final printing of the projection positive.
At the end of the '90s, before Kodak finally expired commercially, Fujifilm was pressing ahead non-stop, developing increasingly complex multi-layer photo-film structures, including whole additional layers that were light-sensitive rather than merely separating barriers. I remember being truly excited about what was going to happen in photo-film technology until as late as 2002.

I (my company) owned a Heidelberg Tango drum scanner until 2007, and I'm not sure you can seriously use anything else for archival-quality scanning, although I may be involved in acquiring the Blackmagic Design 35mm cine film scanner in the not-distant future; resolution and color depth haven't improved since the days when the Tango was a halo acquisition for my business, and in theory a lot could be improved, that's for sure.

The 50lppmm limit opined by Usenet would put the limit of a 35mm film image at around 25 megapixels, if I'm not wrong, and 24MP seems to be a very happy number for 90 percent of professional print work today. Cinema can be different, because our perception of resolution in motion isn't much explored, and the low light output of projection yields vastly fewer distinguishable colours (though, because of MacAdam ellipses, a few of those don't matter: https://en.m.wikipedia.org/wiki/MacAdam_ellipse )
[+] [-] app4soft|5 years ago|reply
A few issues on this site:
1) "Right" and "Left" are wrongly mapped (i.e. "Left" is the original image in all cases).
2) On a 1280px-wide screen the image pairs are shown in vertical order instead of horizontal.
[+] [-] lonesword|5 years ago|reply
2) The image-comparison widget is a WordPress feature. Not sure if there's anything I can do to fix this.
[+] [-] rasz|5 years ago|reply
[+] [-] rubatuga|5 years ago|reply
https://github.com/yoonsikp/pycubelut
I'm trying to add a GPU-acceleration feature using wgpu-py, but it was unfortunately still too buggy when I last tried in January.
[+] [-] kelsolaar|5 years ago|reply
[+] [-] lonesword|5 years ago|reply
[+] [-] zokier|5 years ago|reply
[+] [-] CarVac|5 years ago|reply
[+] [-] carlob|5 years ago|reply
[0] https://en.wikipedia.org/wiki/Piaggio_Ape
(ape = bee, vespa = wasp: one is for work, the other for leisure, but same company)
[+] [-] lonesword|5 years ago|reply
[+] [-] hatsunearu|5 years ago|reply
[+] [-] uyt|5 years ago|reply
[+] [-] felixr|5 years ago|reply
[+] [-] splintercell|5 years ago|reply
[+] [-] zokier|5 years ago|reply
[+] [-] turnsout|5 years ago|reply
[1] http://www.yedlin.net/NerdyFilmTechStuff/index.html
[2] http://www.yedlin.net/OnColorScience/index.html
[+] [-] camkerr|5 years ago|reply
https://teamdeakins.libsyn.com/joachim-jz-zell-color-scienti...
Lots of good info in that episode on LUTs, ACES, and colour in film & TV.
[+] [-] yesimahuman|5 years ago|reply
[+] [-] justtocomment|5 years ago|reply
[+] [-] unknown|5 years ago|reply
[deleted]
[+] [-] xyzzy_plugh|5 years ago|reply
[+] [-] rebuilder|5 years ago|reply
Worked OK, was completely pointless of course.
Edit: I meant Hipstamatic. It's been a while.
[+] [-] nwiswell|5 years ago|reply
There is a wonderful transfer matrix method library in Python for reflectometry simulation too, if that's what you were hoping to find.
https://github.com/kitchenknif/PyTMM
[+] [-] jbunc|5 years ago|reply
[+] [-] steveBK123|5 years ago|reply
[+] [-] sldksk|5 years ago|reply
[+] [-] Cullinet|5 years ago|reply
There's a good rundown of the Fujifilm film-stock simulations their digital cameras can perform here: https://www.bhphotovideo.com/explora/photography/tips-and-so...