jamesg's comments
jamesg | 7 years ago | on: The Remarkable Persistence of 24x36
I’ve printed a good number of images shot on m43 at about that size, and that’s not been my experience. The specifics definitely matter, but I’ve found Olympus’s M.Zuiko 75/1.8 to be competitive with a 70-200/2.8 in terms of sharpness, distortion, and so on (DxOMark seems to broadly agree: https://www.dxomark.com/Lenses/Olympus/Olympus-MZUIKO-DIGITA... vs https://www.dxomark.com/Lenses/Nikon/AF-S-VR-Zoom-Nikkor-70-...). You do have a larger minimum DoF, but with the 2x crop factor the 75/1.8 behaves like a 150/3.6 in full-frame terms, which is not that far off.
I would like a bit more resolution, but that’s also true of my Nikon D4S (also 16MP).
Clearly FF and m43 have different strengths, and you’ll get the best results by playing to the strengths of each. I concede that all else being equal, larger sensors do afford you more flexibility (lower light performance, for instance). On the other hand, traveling with my Nikon FF kit is a huge PITA (mostly due to the lenses; the body is a constant cost I can mostly deal with).
I’d love to hear more about the limitations you’ve hit with larger prints on m43.
jamesg | 7 years ago | on: Ask HN: What are good projects to understand CUDA and GPU Programming?
1. It allows for easy experimentation with the order in which work is done, which turns out to be a major factor in performance. IMO, this is one of the trickier parts of programming (GPU or not), so tools that accelerate experimentation accelerate learning too (see the sketch after this list).
2. It allows you to write your algorithm once and emit code to run on OpenGL, OpenCL, CUDA, Metal, various SIMD flavors, and a bunch of more exotic targets. CUDA effectively limits you to desktop/laptop computers, and these days I’d rather bet on needing a mobile version at some point than not.
3. It eliminates a ton of boilerplate code, so you can get started quickly.
4. It’s what the pros use. Much of Adobe’s image processing code is in Halide now, for instance (source: pretty much any presentation extolling the virtues of Halide). The Halide authors cite a particular algorithm, the Local Laplacian Filter, where an intern’s Halide implementation, written in one afternoon, beat a hand-optimized C++ implementation that had taken months to develop. I don’t know whether the specifics have been exaggerated, but directionally I believe it: it was pretty transformational in the codepath I used it for.
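To make point 1 concrete, here’s a minimal sketch adapted from Halide’s canonical blur example (the function and variable names are just illustrative). The algorithm is stated once; the schedule below it is the part you rewrite while experimenting with the order of work:

```cpp
#include "Halide.h"
using namespace Halide;

int main() {
    ImageParam input(UInt(16), 2, "input");
    Func blur_x("blur_x"), blur_y("blur_y");
    Var x("x"), y("y"), xi("xi"), yi("yi");

    // The algorithm: a 3x3 box blur, written once.
    blur_x(x, y) = (input(x, y) + input(x + 1, y) + input(x + 2, y)) / 3;
    blur_y(x, y) = (blur_x(x, y) + blur_x(x, y + 1) + blur_x(x, y + 2)) / 3;

    // The schedule: the order in which work is done. Experimenting here
    // (tile sizes, vector widths, parallelism) never touches the
    // algorithm above.
    blur_y.tile(x, y, xi, yi, 256, 32).vectorize(xi, 8).parallel(y);
    blur_x.compute_at(blur_y, x).vectorize(x, 8);

    // Retargeting (point 2): the same algorithm compiles for other
    // backends (CUDA, OpenCL, Metal, ...) by changing the Target.
    blur_y.compile_jit(get_jit_target_from_environment());
    return 0;
}
```

Swapping in a different schedule is a one-line change, which is exactly what makes the experimentation loop so fast.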
I feel like developing an intuition for the “shape” of algorithms that will perform well before diving into the specifics of low-level tools like CUDA will serve you well.
jamesg | 7 years ago | on: Don't learn Dvorak
If it's a Mac, Colemak is one of the pre-installed layouts, so if I need to work on someone else's computer I just enable it while I'm using the machine and remove it afterwards. Otherwise I can manage, but QWERTY slows me down quite a bit, and I have to look down at my hands pretty frequently. You immediately notice the extra workload, though: Colemak is pretty low-effort, but QWERTY just feels like finger moshing to me now.
I'm pretty sure it's possible to remain proficient in both, but QWERTY was sufficiently destructive to my hands that I use it as little as possible.
jamesg | 7 years ago | on: Xamarin Forms: it works
It also has the benefit of being substantially the same on Windows and Mac, so you can use a Mac directly for Mac / iOS development and everything works pretty much the way it did on Windows. VS on Mac, unfortunately, is pretty dissimilar to VS on Windows.
jamesg | 7 years ago | on: Xamarin Forms: it works
I think it's a reasonable strategy to say "hey, we're not going to be able to solve all the problems, and it's probably not the right way to spend our time even if we could. What we will do is make integration with other solutions straightforward so we're not the bottleneck". Making a strategic choice not to be the bottleneck seems like a good call regardless. That said, what I've found to be a hassle is stuff like getting OpenGL working on UWP from C# (tractable, but that's really not how I want to be spending my time).
But yeah, wow what a difference it's made having Microsoft throw their resources at it. My current codebase lets me build for iOS, Mac & UWP at present (I'll get to Android at some point), with really not very much effort. Being able to debug on the Mac version rather than waiting for iOS deploys is just about enough to repay the time investment on its own. I'm anxiously awaiting them getting their web platform support up to snuff -- I'd dearly love to never have to JavaScript again. :)
jamesg | 7 years ago | on: Don't learn Dvorak
Colemak is (relatively) easy to learn if you know QWERTY, and it's been life-changing for me: I can work for more hours of the day, and I suspect more years of my life with Colemak.
Interestingly, I tried configuring my phone for Colemak a while ago and had to switch it back. The relatively small movements Colemak requires meant that swipe typing was just about useless -- it just couldn't discriminate between words.
jamesg | 7 years ago | on: Interactive Camera Simulator
It's a double-edged sword though: under-exposing will add more shadow noise.
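A rough sketch of why, assuming photon shot noise (Poisson, so variance $S$) plus read noise $r$, both in electrons:

$$ \mathrm{SNR} \approx \frac{S}{\sqrt{S + r^2}} $$

Halving the exposure halves $S$. In the highlights ($S \gg r^2$) that only costs a factor of $\sqrt{2}$ in SNR, but in the shadows ($S$ comparable to $r^2$ or smaller) it costs close to a full factor of 2, which is why the noise shows up there first.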
Iliah Borg (one of LibRaw's authors) has a good write-up on it: https://www.rawdigger.com/howtouse/deriving-hidden-ble-compe...
DxOMark also maintains a database of their own measurements for each camera, including the actual ISO sensitivity at each nominal ISO setting, eg: https://www.dxomark.com/Cameras/Nikon/D850---Measurements
jamesg | 7 years ago | on: Vocore2, a ridiculously small Linux PC
Overall it’s an impressive little package; however, I’ve found that you need to underclock the CPU to make it stable. Or possibly use a giant heat sink, but that would somewhat counteract the benefits of such a tiny board. Do you know if Neutis have found a good solution for keeping the H5 stable?
In broader strokes, though, I’m pretty excited that we’re starting to see boards with open hardware designs (the VoCore and the Beagle, for instance). Being able to use these to bootstrap more complex board designs feels a bit like how web apps became a lot easier once the so-called LAMP stack was robust enough to build on top of. There are additional hurdles with hardware, for sure, but each barrier removed is meaningful progress.
jamesg | 7 years ago | on: Photo School
You're probably right that the first image wouldn't be significantly different on any other lens: it looks to be shot at a fairly narrow aperture in unchallenging lighting conditions, so you're not hitting any of the aspects of photography where lens design has really advanced. A really cheap lens might show distortion or chromatic aberration (admittedly less important in B&W) at the edges, but beyond that, you'd be fine.
But I saw this article recently, and while it's a sample size of 1, I nonetheless found it somewhat surprising: https://petapixel.com/2018/08/15/is-the-sensor-or-the-lens-t... -- unfortunately it doesn't specify how the images were processed, so it's hard to draw conclusions (eg: if they were all SOOC JPEGs, I'd be willing to believe that the older image processor in the D610 has worse highlight reconstruction, or possibly worse demosaicing). Still, prior to reading that article I'd have unconditionally recommended investing in glass before getting a new body; evidently my mental model was at least slightly deficient.
FWIW, my 2c: take photos that leverage the gear you've got, and conversely, choose gear to get the kinds of images you want. Eg: I love my Nikkor 85/1.4G, despite its shortcomings, and you'll never get the same results on a smartphone (85/1.4 gives a razor-thin depth of field). If what you want is a super creamy background with tons of detail in the in-focus regions, and you like the framing at 85mm, then that's pretty much the only way to get it (that I know of, anyway). However, if you're only shooting wide angle, with everything in focus, the differences will definitely be more subtle. They'll be there (eg: a Zeiss 21/2.8 on a D850 will pull out detail the smartphone just won't know is there, and the D850 sensor has a lot more dynamic range), but they'll be less readily apparent, especially if you're viewing the images on a smartphone screen.
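To put a rough number on that razor-thin depth of field, here's the standard thin-lens DoF approximation with illustrative assumed values (subject distance u = 2 m, circle of confusion c = 0.03 mm):

$$ \mathrm{DoF} \approx \frac{2 N c u^2}{f^2} = \frac{2 \times 1.4 \times 0.03 \times 2000^2}{85^2}\ \mathrm{mm} \approx 47\ \mathrm{mm} $$

Under five centimetres of sharp field at a couple of metres -- the look a smartphone can only approximate computationally.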
One last point: you're no doubt aware, but there are more dimensions to a lens's quality than its resistance to chromatic aberration (re: fancy APO optics), speed, or resolving power. I have a couple of the Voigtlander lenses for Micro 4/3, and while they have a lot of shortcomings (so much coma!), the way they render the background has a particular quality that's hard to replicate; not always desirable, but sometimes fun. I also enjoy playing with some of those shortcomings: shot wide open, they render close objects with a kind of halo-like glow. Likewise, the bokeh on my Zeiss 100/2 MP is pretty special, despite terrible chromatic aberration (I believe both are consequences of the design using no aspherical elements). There's also the color rendition of different lenses, but much of that can be corrected or simulated in post if you're patient enough. And every now and then I shoot with an old Zeiss Jena 200/2.8 via an adapter precisely because its shortcomings yield a particular look: colors are a bit more muted, it's a bit less contrasty, and chromatic aberration is extremely pronounced. It looks like a photo taken in the 60s, which is kind of neat.
Apologies for the long post! I love this stuff. :)
Edit: one more link. Ming Thein's review of the Zeiss 100/2 MP gives some more details on that lens, and I think some of the shots he's included are a great example of its particular background rendering. Despite the occasional annoying elliptical highlight, it has a "swirly" quality that I really like and just don't get from any of my other lenses. https://blog.mingthein.com/2012/07/27/revisited-and-reviewed...
jamesg | 15 years ago | on: Android vs iOS: A Developer's Perspective
I just don't see why laziness should be restricted to users. Developers are lazy too.
You're right that there are only 4 rules (or more or less depending on your formulation), but I don't care. I'd rather take the time to have another martini. Or, y'know, implement features that make my users happy.
And it definitely gets harder when there are more moving parts. You're right that the rules are simple, but the execution of those rules gets more complex as you add more components, more threads, remoting, etc. I never said it was impossible, or up there with Fermat's Last Theorem or anything like that. Just that this is work the computer could be doing for me. I want to be lazy, but Apple won't let me.
jamesg | 15 years ago | on: Android vs iOS: A Developer's Perspective
CoreLocation is a good abstraction. However, its primitives for specifying what trade-offs you are willing to make in acquiring a location are, well, primitive.
A couple of random examples:
- CoreLocation doesn't tell you the source of a location sample (GPS, WiFi, etc); it gives you an estimate of accuracy. Notably, it doesn't give you any measure of the accuracy of that accuracy estimate. This matters: we have seen samples that were off by a whole hemisphere -- I'm not kidding! I understand that exposing these details is kind of "ugly", but obscuring them removes signals we could use to judge the reliability of the data and to figure out which techniques might "clean" it. I'm willing to concede that CL is a good API for general use, but when you're building consumer products, that doesn't cut it. The guy in New York who was reported as being in Antarctica (again, seriously) doesn't care that the iPhone doesn't provide us the tools to fix that; he just wants it to work (and he's no longer a user).
- Related to the first point, but separate: the algorithm for seeking the desired accuracy is a black box. That makes it really easy to use for basic stuff, but you have no way of knowing the cost, in milliwatts, of passing a given value to that argument. There are ways to mitigate this (which we've had to explore), and we can experiment to learn approximately what the drain is, but hiding that information has obstructed our development process. Consider also that CL doesn't let me specify how long I'm willing to wait for a fix at the desired accuracy, and it doesn't let me set a power budget for a location fix. I realise that Android doesn't provide those exact abstractions either, but the tools it does provide make it easier (by which I mean "possible") for me to build them myself.
I also don't think you've successfully made the argument that the OS will know better than we do. It's a generic tool that makes assumptions, and it will have made compromises that don't necessarily work for us; it will not perform optimally for every use case. Going back to something I said before, it seems to optimise for reaching the desired accuracy quickly. For background location-tracking apps like ours, that is not the priority -- power is. Neither CoreLocation's abstraction nor its documentation provides for this use case.
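To make that concrete, here's a hypothetical sketch of the knobs I mean. None of these types or fields exist in CoreLocation (or in Android's API); it's just the shape of interface the argument calls for:

```cpp
// Hypothetical API sketch -- nothing here exists in CoreLocation.
enum class FixSource { Gps, WiFi, Cell, Unknown };  // provenance CL hides

struct LocationRequest {
    double desired_accuracy_m;  // the one knob CoreLocation does expose
    double max_wait_s;          // missing: how long we'll wait for a fix
    double power_budget_mw;     // missing: what we're willing to spend
};

struct LocationFix {
    double latitude_deg;
    double longitude_deg;
    double accuracy_m;        // CL reports this estimate...
    double accuracy_sigma_m;  // ...but not how trustworthy it is
    FixSource source;         // ...or where it came from
};
```

With explicit wait and power knobs, a background tracker could say "give me whatever fix you can within this budget" instead of reverse-engineering a black box.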
jamesg | 15 years ago | on: Android vs iOS: A Developer's Perspective
I had to learn Java specifically for this project. Python is my preferred hammer for most nails, but not an option on mobile. I've also been a professional C programmer before, wreaking havoc in the kernel. I've got opinions on Objective-C, but that's a subject that deserves a whole separate post.
jamesg | 15 years ago | on: Android vs iOS: A Developer's Perspective
You could well be right that they're not using it correctly -- that sounds entirely plausible. I guess my point was more that, whatever the cause, the net effect is that the Apple simulator is unrealistically fast and the Android emulator is unrealistically slow. Neither really encourages great development if you rely on it.
jamesg | 15 years ago | on: Android vs iOS: A Developer's Perspective
Thanks for your comments, and yes, the typo in that first sentence was pretty bad. Sorry about that. I've ceded the grammatical high ground for the foreseeable future with that gaffe.
jamesg | 15 years ago | on: Ask HN: Best Developer Linux Laptop?
I have the X61, which is great. If I were buying one today, I'd probably get one of the X300 series; they seem to have a slower CPU than the X61 (and X200), but faster graphics and a better screen resolution. CPUs are fast enough that it almost doesn't matter these days (for web dev, anyway), but faster graphics are always good -- I really feel the screen redraw when switching desktops with it plugged into my 26-inch monitor (still, this machine is like 2.5 years old).
For me, having a lightweight and portable machine is pretty key. I also have an MBP (15 inch), but it's so much more work to throw into my backpack and lug around. I always have my Thinkpad with me, which is a huge part of what makes it valuable.
... Man, I am such a fanboy! :)