jamesg's comments

jamesg | 7 years ago | on: The Remarkable Persistence of 24x36

> If you try to print big (let's say A3+) you'll rapidly find the limits of the m43 system

I’ve printed a good number of images shot on m43 at about that size, and that’s not been my experience. The specifics definitely matter, but I’ve found Olympus’s M.Zuiko 75/1.8 to be competitive with a 70-200/2.8 in terms of sharpness, distortion, etc, for instance (dxomark seem to broadly agree: https://www.dxomark.com/Lenses/Olympus/Olympus-MZUIKO-DIGITA... vs https://www.dxomark.com/Lenses/Nikon/AF-S-VR-Zoom-Nikkor-70-...). You do have a larger minimum DoF, but 150/3.6 equivalent is not that far off.
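For anyone wondering where that 150/3.6 figure comes from, it's just the 2x Micro 4/3 crop factor applied to both focal length and f-number. A quick sketch:

```python
# Micro 4/3 sensors have a 2x crop factor vs full frame; for matching
# framing and depth of field, scale focal length AND f-number by it.
M43_CROP = 2.0

def ff_equivalent(focal_mm, f_number, crop=M43_CROP):
    """Full-frame-equivalent focal length and (DoF-equivalent) f-number."""
    return focal_mm * crop, f_number * crop

print(ff_equivalent(75, 1.8))  # -> (150.0, 3.6)
```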

I would like a bit more resolution, but that’s also true of my Nikon D4S (also 16MP).


Clearly FF and m43 have different strengths, and you’ll get the best results by playing to the strengths of each. I concede that all else being equal, larger sensors do afford you more flexibility (lower light performance, for instance). On the other hand, traveling with my Nikon FF kit is a huge PITA (mostly due to the lenses; the body is a constant cost I can mostly deal with).

I’d love to hear more about the limitations you’ve hit with larger prints on m43.

jamesg | 7 years ago | on: Ask HN: What are good projects to understand CUDA and GPU Programming?

Since you mentioned image processing in particular, I’d recommend looking into Halide instead of (or as well as) CUDA. A few reasons:

1. It allows for easy experimentation with the order in which work is done (which turns out to be a major factor in performance) -- IMO, this is one of the trickier parts of programming (GPU or not), so tools that accelerate experimentation also accelerate learning.

2. It allows you to write your algorithm once and emit code to run on OpenGL, OpenCL, CUDA, Metal, various SIMD flavors, and a bunch more exotic targets. CUDA effectively limits you to desktop/laptop computers, and at this point I’d rather bet on needing a mobile version at some point than not.

3. It eliminates a ton of boilerplate code, so you can get started quickly.

4. It’s what the pros use. Much of Adobe’s image processing code is in Halide now, for instance (source: pretty much any presentation extolling the virtues of Halide). The Halide authors cite a particular algorithm -- the Local Laplacian Filter -- where an intern’s one-afternoon Halide implementation beat a hand-optimized C++ implementation that had taken months to develop. I don’t know if the specifics of that have been exaggerated, but directionally I believe it: it was pretty transformational in the codepath I used it for.

I feel like developing an intuition for the “shape” of algorithms that will perform well before diving into the specifics of low-level tools like CUDA will serve you well.

http://halide-lang.org/
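On point 1, here's a toy illustration of why the order of work matters even outside GPUs. This is pure Python rather than Halide, and the gap is far larger in compiled code or on a GPU, but the locality effect it shows is exactly the kind of choice Halide's schedules let you explore cheaply:

```python
import time

N = 1500
grid = [[1] * N for _ in range(N)]  # stored row by row

def sum_row_major(g):
    # Visits elements in storage order: the inner loop walks one row.
    total = 0
    for row in g:
        for v in row:
            total += v
    return total

def sum_col_major(g):
    # Same arithmetic, but every access jumps to a different row.
    total = 0
    n = len(g)
    for j in range(n):
        for i in range(n):
            total += g[i][j]
    return total

for fn in (sum_row_major, sum_col_major):
    start = time.perf_counter()
    result = fn(grid)
    print(f"{fn.__name__}: {result} in {time.perf_counter() - start:.3f}s")
```

Same answer both ways; the column-major traversal is typically noticeably slower, purely because of the order the work is done in.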

jamesg | 7 years ago | on: Don't learn Dvorak

At this point I'm pretty terrible at QWERTY on a physical keyboard. I'm fine with it on phones and tablets for some reason, but my brain is pretty hard-wired for Colemak on physical keyboards. I also use a Kinesis Advantage keyboard, which adds to the context switch when I have to use a different computer (though this is less significant, for sure).

If it's a Mac, Colemak is one of the pre-installed layouts, so if I need to work on someone else's computer I'll just enable it while I work on it and remove it afterwards. Otherwise I can manage, but it does slow me down quite a bit, and I'll have to look down at my hands pretty frequently. You immediately notice the extra workload though: Colemak is pretty low effort, but QWERTY just feels like finger moshing to me now.

I'm pretty sure it's possible to remain proficient in both, but QWERTY was sufficiently destructive to my hands that I use it as little as possible.

jamesg | 7 years ago | on: Xamarin Forms: it works

You may want to try using JetBrains' Rider as your IDE. I've hit similar issues with XF in VS, but Rider has been a much better experience for me. For instance: it won't lose its shit if you double-click a XAML file :)

Also has the benefit of being substantially the same on Windows and Mac, so you can use a Mac directly for Mac / iOS development and everything works pretty much the way it did on Windows. VS on Mac is unfortunately pretty dissimilar to VS on Windows.

jamesg | 7 years ago | on: Xamarin Forms: it works

That's fair, but I think that XF has a good story around integrations with other frameworks to fill its gaps: SkiaSharp has the SkiaSharp.Views.Forms namespace, for instance, and there's solid documentation on integrating the two, right there on Microsoft's Xamarin.Forms docs. Actually, speaking of docs, Charles Petzold's book on XF is an awesome resource -- I've not seen commensurate investments in documentation made by XF's competitors (admittedly it's not something I track that closely). Similarly XF's ability to add a native control directly to a StackLayout via extension methods makes it fairly straightforward to just drop in a native component if that's what you want to do.

I think it's a reasonable strategy to say "hey, we're not going to be able to solve all the problems, and it's probably not the right way to spend our time even if we could. What we will do is make integration with other solutions straightforward so we're not the bottleneck". Making a strategic choice not to be the bottleneck seems like a good call regardless. That said, the things I've found to be a hassle are things like getting OpenGL working on UWP from C# (tractable, but that's really not how I want to be spending my time).

But yeah, wow what a difference it's made having Microsoft throw their resources at it. My current codebase lets me build for iOS, Mac & UWP at present (I'll get to Android at some point), with really not very much effort. Being able to debug on the Mac version rather than waiting for iOS deploys is just about enough to repay the time investment on its own. I'm anxiously awaiting them getting their web platform support up to snuff -- I'd dearly love to never have to JavaScript again. :)

jamesg | 7 years ago | on: Don't learn Dvorak

If you're considering learning Dvorak, I'd strongly recommend considering Colemak instead. I tried Dvorak with the hope that it would mitigate RSI, but it spreads the work among fingers pretty unevenly -- I really ended up just moving the problem rather than fixing it (mostly to my right pinky).

Colemak is (relatively) easy to learn if you know QWERTY, and it's been life-changing for me: I can work for more hours of the day, and I suspect more years of my life with Colemak.

Interestingly, I tried configuring my phone for Colemak a while ago and had to switch it back. The relatively small movements you make with Colemak meant that the swipe typing thing was just about useless -- it just couldn't discriminate between words.

jamesg | 7 years ago | on: Interactive Camera Simulator

Additionally, most cameras these days will under-expose by a pretty substantial amount. Digital darkroom software maintains a database of cameras with an entry for how much each camera under- or over-exposes (mostly under), which is applied before any of your adjustments are layered on top. Adobe's DNG spec calls this "baseline exposure". I used to always under-expose by about a third of a stop, reasoning that while I could probably recover shadow detail (even if it were noisy), once the sensor has clipped, there's nothing I can do to recover lost highlights. With modern cameras, this doesn't really make sense any more: the camera will just meter that way to begin with.

It's a double-edged sword though: under-exposing will add more shadow noise.
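To make the "can't recover clipped highlights" point concrete: in linear sensor space an EV adjustment is just a power-of-two multiply, so anything already at the white level stays pinned there. A toy sketch (the values and 0-to-1 white level are illustrative, not from any particular camera):

```python
def apply_exposure(linear_values, stops, white_level=1.0):
    """Shift linear sensor values by `stops` EV, clipping at white_level.

    One stop is a factor of two in linear light, so the adjustment is
    just a multiply -- which is why detail the sensor clipped at capture
    can never be multiplied back into existence.
    """
    scale = 2.0 ** stops
    return [min(v * scale, white_level) for v in linear_values]

print(apply_exposure([0.25, 0.6, 1.0], 0.35))  # shadows/midtones move; white stays pinned
```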

Iliah Borg (one of LibRaw's authors) has a good write-up on it: https://www.rawdigger.com/howtouse/deriving-hidden-ble-compe...

DXOMark also maintains a database of their own measurements of each camera, including actual ISO sensitivity for each nominal ISO sensitivity, eg: https://www.dxomark.com/Cameras/Nikon/D850---Measurements

jamesg | 7 years ago | on: Vocore2, a ridiculously small Linux PC

I’ve recently been experimenting with this board, which looks to be quite similar (Allwinner H5): https://www.friendlyarm.com/index.php?route=product/product&...

Overall it’s an impressive little package, however I’ve been finding that you need to underclock the CPU to make it stable. Or possibly use a giant heat sink, but that would somewhat counteract the benefits of such a tiny board. Do you know if Neutis have found a good solution for keeping the H5 stable?

However, in broader strokes, I’m pretty excited that we’re starting to see these boards with open hardware designs (the VoCore and the Beagle boards, for instance). Being able to use these to bootstrap more complex board designs feels a bit like how web apps became much easier to build once the so-called LAMP stack was robust enough to build on top of. There are additional hurdles with hardware, for sure, but each barrier removed is meaningful progress.

jamesg | 7 years ago | on: Photo School

Whilst it can be somewhat subjective, and different styles of photography will benefit different amounts from exotic gear, it's surprisingly complex even in fairly narrow domains.

You're probably right that the first image wouldn't be significantly different on any other lens: it looks to be shot with a fairly narrow aperture in fairly unchallenging lighting conditions, so you're not hitting any of the aspects of photography where lens design has really advanced. A really cheap lens might show distortion or chromatic aberration (admittedly less important in B&W) at the edges, but beyond that, you'd be fine.

But I saw this article recently, and it's a sample size of 1, but nonetheless I found it somewhat surprising: https://petapixel.com/2018/08/15/is-the-sensor-or-the-lens-t... -- unfortunately it doesn't specify how these images were processed, so it's hard to draw conclusions (eg: if it was all SOOC JPEGs, I'd be willing to believe that the older image processor in the D610 has worse highlight reconstruction, or possibly worse demosaicing). However, prior to reading that article I'd have unconditionally recommended investing in glass before getting a new body; evidently my mental model for this was at least slightly deficient.

FWIW, my 2c: take photos that leverage the gear you've got, and conversely choose gear to get the kinds of images you want. Eg: I love my Nikkor 85/1.4G lens, despite its shortcomings, and you'll never get the same results on a smartphone (I mean, 85/1.4 is a pretty razor thin depth of field). If what you want is a super creamy background and tons of detail on the in focus regions, and you like the framing at 85mm, then that's pretty much the only way to get it (that I know of anyway). However, if you're only shooting wide angle, with everything in focus, then the differences will definitely be more subtle. They'll be there (eg: a Zeiss 21/2.8 on a D850 will pull out detail that the smartphone just won't know is there, the D850 sensor will have a lot more dynamic range, etc), but those differences will be less readily apparent, especially if you're viewing these images on a smartphone screen.
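For anyone curious just how razor thin that 85/1.4 plane of focus is, the standard thin-lens depth-of-field formulas make it concrete. A sketch; the 0.030 mm circle of confusion is a common full-frame convention (a viewing-condition assumption, not physics):

```python
import math

def dof_near_far(focal_mm, f_number, subject_mm, coc_mm=0.030):
    """Near/far limits of acceptable focus, thin-lens approximation."""
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = subject_mm * (hyperfocal - focal_mm) / (hyperfocal + subject_mm - 2 * focal_mm)
    if hyperfocal <= subject_mm:
        return near, math.inf  # everything beyond the near limit is acceptably sharp
    far = subject_mm * (hyperfocal - focal_mm) / (hyperfocal - subject_mm)
    return near, far

near, far = dof_near_far(85, 1.4, 2000)  # 85mm f/1.4 focused at 2 m
print(f"~{far - near:.0f} mm of acceptable focus")
```

At a 2 m portrait distance the whole zone of acceptable focus comes out at only a few centimetres, which is why an eye can be sharp while the eyelashes aren't.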

One last point: you're no doubt aware, but there are more dimensions to a lens's quality than its resistance to chromatic aberration (re: fancy APO optics), speed, or resolving power. I have a couple of the Voigtlander lenses for Micro 4/3, and while they have a lot of shortcomings (so much coma aberration!), the way they render the background has a particular quality to it that's hard to replicate; not always desirable either, but sometimes fun. I also enjoy playing with some of those shortcomings: they render with a kind of halo-like glow around close objects when shot wide open. Likewise, the bokeh on my Zeiss 100/2 MP is pretty special (I believe this is a consequence of it not using any aspherical elements), despite the fact that it suffers from terrible chromatic aberration (possibly also related to the lack of aspherical elements). There's also the color rendition of different lenses, but much of that can be corrected / simulated in post if you're patient enough.

And every now and then I shoot with an old Zeiss Jena 200/2.8 lens on an adapter precisely because its shortcomings yield a particular look. Colors are a bit more muted, it's a bit less contrasty, and chromatic aberration is extremely pronounced. It looks like a photo that would have been taken in the 60s, which is kind of neat.

Apologies for the long post! I love this stuff. :)

Edit: one more link. Ming Thein's review of the Zeiss 100/2 MP gives some more details on that lens, and I think some of the shots he's included are a great example of that lens's specific background rendering. Despite occasional annoying ellipses, it has a "swirly" property to it that I really like and I just don't get on any of my other lenses. https://blog.mingthein.com/2012/07/27/revisited-and-reviewed...

jamesg | 7 years ago | on: Ricoh releases SDKs for Pentax cameras

As for camera makers releasing SDKs, there are a few, though I’ve found them to be unimpressive (binary-only blobs for Windows, etc). More interesting to me are cameras that speak standard(ish) protocols. I’ve been looking into this recently; here’s what I found:

- Panasonic cameras (the GH4 is the one I’m looking at) expose an HTTP API over WiFi, and can upload images to an SMB share while you shoot. The API is fairly extensive: it lets you control focus, etc.

- Nikon cameras have a pretty bad story on WiFi (at least up to the D810, which is the newest one I own), so I haven’t got far with them.

- I hear, though haven’t verified, that Canon cameras speak PTP/IP, which is pretty neat.

- Olympus also has a WiFi control mode, but annoyingly it seems to disable the on-body controls when you use it (tested on an OMD EM1 Mk 1).

- I’m yet to get my hands on a Sony camera (will probably buy an A7 later this year), but I’ve seen videos that suggest it should be fairly straightforward to control remotely.

jamesg | 15 years ago | on: Android vs iOS: A Developer's Perspective

It's totally about laziness! But that's what computers are all about -- I could store printed versions of all of my documents in a filing cabinet, and go and manually sort them every time I needed a different ordering, but I'm lazy! I use a database!

I just don't see why laziness should be restricted to users. Developers are lazy too.

You're right that there are only 4 rules (or more or less depending on your formulation), but I don't care. I'd rather take the time to have another martini. Or, y'know, implement features that make my users happy.

And it definitely gets harder when there are more moving parts. You're right that the rules are simple, but the execution of those rules gets more complex as you add more components, more threads, remoting, etc. I never said it was impossible, or up there with Fermat's Last Theorem or anything like that. Just that this is work the computer could be doing for me. I want to be lazy, but Apple won't let me.

jamesg | 15 years ago | on: Android vs iOS: A Developer's Perspective

So, I will say that I bundled documentation and "openness" into one box, which I probably shouldn't have. The connection is that, absent the "openness" we were really looking for, we sought documentation to describe what's going on. That said...

CoreLocation is a good abstraction. However, its primitives for specifying what trade-offs you are willing to make in acquiring a location are, well, primitive.

A couple of random examples:

- CoreLocation doesn't tell you the source of the location sample (GPS, WiFi, etc). It gives you an estimate of accuracy. Of note, it doesn't give you a measure of the accuracy of the accuracy. This matters because we have seen examples where the data is off by a whole hemisphere -- I'm not kidding! I understand that exposing these details is kind of "ugly", but obscuring them removes signals we could use to figure out the reliability of the data, and what techniques we might be able to use to "clean" it. I am willing to concede that CL is a good API for general use, but when you're building consumer products, that doesn't cut it. The guy in New York who was reported as being in Antarctica (again, seriously) doesn't really care that the iPhone doesn't provide us the tools to fix that; he just wants it to work (and he's no longer a user).

- Related to the first point, but separate: the implementation of the algorithm for seeking to the desired accuracy is a black box. This makes it really easy to use for basic stuff, but you have no way of knowing the cost, in milliwatts, of passing a given value to the desired-accuracy parameter. There are ways to mitigate this (which we've had to explore), and we can experiment to learn approximately what the drain is, but hiding that information has obstructed our development process. Consider also that CL doesn't allow me to specify how long I am willing to wait to get the location fix at the desired accuracy. It does not let me set a power budget for a location fix. I realise that Android doesn't provide those exact abstractions, but the tools it does provide make it easier (by which I mean "possible") for me to build them myself.

I also don't think you've successfully made the argument that the OS will know better than us. It's a generic tool, which makes some assumptions. It will have made compromises that don't necessarily work for us. It will not perform optimally for every use case. Going back to something I said before, it seems to optimise for getting to the desired accuracy quickly. For background location-tracking apps like ours, that is not a priority. Power is. Neither CoreLocation's abstraction nor its documentation provides for this use case.
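None of this excuses the API, but for concreteness, here's the kind of "cleaning" step the missing signals force you to build yourself. This is a sketch with hypothetical names (`filter_fixes`, the tuple shape) and a made-up speed ceiling; it gates each fix on the travel speed implied from the last accepted one, which would have caught the Antarctica sample:

```python
import math

EARTH_RADIUS_M = 6_371_000
MAX_SPEED_MPS = 250.0  # generous ceiling, roughly airliner cruise speed

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in metres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    a = (math.sin((p2 - p1) / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(math.radians(lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def filter_fixes(fixes, max_speed=MAX_SPEED_MPS):
    """Drop samples that imply impossible travel from the last good fix.

    fixes: list of (timestamp_s, lat, lon) tuples -- a hypothetical shape;
    the point is the speed gate, not the data model.
    """
    kept = []
    for t, lat, lon in fixes:
        if kept:
            t0, lat0, lon0 = kept[-1]
            dt = t - t0
            if dt <= 0 or haversine_m(lat0, lon0, lat, lon) / dt > max_speed:
                continue  # you can't get from New York to Antarctica in one tick
        kept.append((t, lat, lon))
    return kept

fixes = [
    (0, 40.71, -74.00),    # New York
    (10, -75.10, 123.35),  # "Antarctica" -- a wildly wrong sample
    (20, 40.72, -74.01),   # back to a plausible fix
]
print(filter_fixes(fixes))  # the bogus sample is gone
```

With the source of each sample exposed, you could tighten or loosen the gate per source instead of using one blanket ceiling.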

jamesg | 15 years ago | on: Android vs iOS: A Developer's Perspective

Don't know on the first one, but you can use ctrl + cursor keys to move between spaces. Fairly sure that's the default, but if not, it can be set up in the preferences for Spaces.

jamesg | 15 years ago | on: Android vs iOS: A Developer's Perspective

I'll make just one refutation to this: I am not a Java programmer :)

I had to learn Java specifically for this project. Python is my preferred hammer for most nails, but not an option on mobile. I've also been a professional C programmer before, wreaking havoc in the kernel. I've got opinions on Objective-C, but that's a subject that deserves a whole separate post.

jamesg | 15 years ago | on: Android vs iOS: A Developer's Perspective

QEMU is a truly awesome piece of code. Everything Fabrice Bellard does is incredible.

You could well be right that they're not using it correctly -- that sounds entirely plausible. I guess my point was more that, whatever the cause, the net effect is that the Apple Simulator is unrealistically fast, and the Android Emulator is unrealistically slow. Neither really encourages great development if you rely on them.

jamesg | 15 years ago | on: Android vs iOS: A Developer's Perspective

The inception link was just a bit of fun :)

Thanks for your comments, and the typo in that first sentence was pretty bad. Sorry about that. I've ceded the grammatical high ground for the foreseeable future with that gaffe.

jamesg | 15 years ago | on: Ask HN: Best Developer Linux Laptop?

Oh, forgot to mention: I've been running Ubuntu on it exclusively for the last few years. Suspend and resume have been flaky in some releases of Ubuntu and OK in others (I've given up trying, honestly), but otherwise the hardware support has been flawless.

jamesg | 15 years ago | on: Ask HN: Best Developer Linux Laptop?

I love my Thinkpad. It's had all sorts of torturous treatment and just keeps on ticking. It was kind of amazing, the first time I spilt water on it, to have it drain out through the holes in the bottom and just keep going (I've also stepped on it, dropped it, etc; I'm actually very careful with my computers, but when you work with a machine 14+ hours a day for a few years, sooner or later you're going to do something stupid with it).

I have the x61, which is great. If I were buying one today, I'd probably get one of the x300 series; they seem to have a slower CPU than the x61 (and x200), but faster graphics, and better screen resolution. CPUs are fast enough that it almost doesn't matter these days (for web dev anyway), but faster graphics are always good -- I really feel the screen redraw when switching desktops with it plugged into my 26 inch monitor (still, this machine is like, 2.5 years old).

For me, having a lightweight and portable machine is pretty key. I also have an MBP (15 inch), but it's so much more work to throw into my backpack and lug around. I always have my Thinkpad with me, which is a huge part of what makes it valuable.

... Man, I am such a fanboy! :)
