jmileham's comments

jmileham | 4 years ago | on: Delayed Job vs. Sidekiq

Just to note that at-least-once delivery is the best-case scenario if you configure Delayed::Job correctly; in our fork we simply made it impossible to configure otherwise. The only alternative is at-most-once delivery, which puts you in Sidekiq territory with the potential for silent job loss. Exactly-once semantics can only be achieved by domain-aware code, and the way to do that with any form of background job system is to build jobs that are internally idempotent.
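As a sketch of what "internally idempotent" can look like under at-least-once delivery (all names here are hypothetical, and a plain Hash stands in for the database):

```ruby
# Under at-least-once delivery the same job may run more than once, so
# perform checks for its own side effect before acting. RecordPaymentJob
# and LEDGER are illustrative; LEDGER stands in for a database table.
LEDGER = {} # payment_id => amount in cents

class RecordPaymentJob
  def initialize(payment_id, amount)
    @payment_id = payment_id
    @amount = amount
  end

  def perform
    return if LEDGER.key?(@payment_id) # already applied: redelivery is a no-op
    LEDGER[@payment_id] = @amount
  end
end

job = RecordPaymentJob.new("pmt_123", 42_00)
2.times { job.perform } # simulate a duplicate delivery
```

The guard at the top of perform is what makes duplicate delivery harmless; in a real system it would be a unique-key check or upsert against your primary database.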

jmileham | 4 years ago | on: Delayed Job vs. Sidekiq

As you say, looks like redis-raft is capable of solving durability and consistency in a replicated environment as of 2020, which is welcome news: https://jepsen.io/analyses/redis-raft-1b3fbf6

At Betterment we use our OSS, mostly-compatible fork of Delayed::Job (referenced elsewhere in the comments) to enqueue and work millions of jobs a day, and we sleep much better at night knowing that jobs are delivered at least once if and only if the related transaction commits, a guarantee you can't get without some form of integration with your primary database.
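A minimal sketch of that enqueue-iff-commit property, with a block-based fake transaction standing in for the database (all names hypothetical):

```ruby
# Because the job row is written in the same transaction as the business
# data, a rollback discards both: no orphan job, no missing job.
ORDERS = []
JOBS   = []

def transaction
  orders_snapshot, jobs_snapshot = ORDERS.dup, JOBS.dup
  yield
rescue
  # Rollback: the business data and its job disappear together.
  ORDERS.replace(orders_snapshot)
  JOBS.replace(jobs_snapshot)
end

transaction do
  ORDERS << { id: 1 }
  JOBS << { handler: "ConfirmOrderJob", order_id: 1 } # same connection, same commit
end

transaction do
  ORDERS << { id: 2 }
  JOBS << { handler: "ConfirmOrderJob", order_id: 2 }
  raise "validation failed" # rolls back the order AND its job
end
```

With a Redis-backed queue the second enqueue would have survived the rollback, which is exactly the failure mode being avoided.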

jmileham | 8 years ago | on: No correlation between headphone frequency response and retail price

> It's really cool hearing what they heard in the studio control room for the final mix. And often surprising.

But that's not quite what you're hearing - you're typically hearing what happens after the final mix is shipped to a mastering engineer, who listens to the recording on a variety of intentionally flawed sound systems (probably including the "car test": playing the tune on a car stereo with road noise, about as hostile an environment as people will expect to enjoy music in). The engineer then threads the needle to come up with the most pleasing sound they can muster for the intended market.

In the process the recording will have been compressed and EQed quite a bit, and likely will sound a good bit richer at a given loudness than it did when the mix was done - you should be able to "hear through" the mix better than before, unless the mastering engineer was simply going for loudness-at-all-costs, in which case, it might just be loud.

Anyway, not to take away from your point - good headphones, or even just headphones with different frequency response than you're used to, will open up different details of a mix, for sure, and flat response will give you the best chance to hear any details that weren't pushed to the fore intentionally, which can indeed be eye-opening.

jmileham | 8 years ago | on: No correlation between headphone frequency response and retail price

As long as there's recorded audio up there, you might as well reproduce it well, and even standard CD audio extends beyond 20 kHz.

Also there's plenty of individual variance in people's hearing. No need to fit to the lowest common denominator even if the majority can't hear it. As a tall person, I appreciate that airplane seats aren't pitched to cut your kneecaps off after 6'0" (ok, maybe that's generous to the airline industry, but you get the point).

jmileham | 8 years ago | on: No correlation between headphone frequency response and retail price

I wonder if there's an effect where in-ear headphones are cheaper to produce but have an advantage in accurate low-frequency response.

Of course all this is confounded by the fact that music will tend to sound best on speakers/headphones with a response curve most like the speakers/headphones that the mastering engineer used (or more accurately, the set of speakers/headphones that the engineer compromised among). You will probably tend to have the best experience listening to music with the popular devices within a given musical subculture, because mastering engineers will be targeting those devices.

jmileham | 9 years ago | on: Programming with a love of the implicit

I definitely don't see it as hypocritical. Humanely designed systems are thoughtful in where they deploy implicitness. Implicitness can cut down on boilerplate for experts while simultaneously cutting down on confusing minutiae for newcomers.

On baked-in conventions, the implied design of a thoughtfully designed framework likely has wisdom that you shouldn't too quickly dismiss, and should at least fully wrap your brain around (along with the complementary design decisions in the rest of the framework) before diverging. The most baked-in assumptions in a well-designed system are the ones that exist for the strongest reasons.

jmileham | 10 years ago | on: Why your query language should be explicit

This is a great way to spin not having a query planner as a feature, but I'm glad to have one every day that I write and compose semantic bits of SQL that can and should have different execution plans depending on the context in which they're evaluated.
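To illustrate the kind of composition I mean (fragment names are hypothetical): the same semantic predicates combined into different queries, with the database's planner free to pick a different execution strategy for each.

```ruby
# Reusable semantic predicates, composed into different queries. The SQL
# text is the same fragment every time, but the planner can choose a
# different plan for each composition (index scan, seq scan, etc.).
ACTIVE = "status = 'active'"
RECENT = "created_at > now() - interval '30 days'"

def users_where(*predicates)
  "SELECT * FROM users WHERE #{predicates.join(' AND ')}"
end

puts users_where(ACTIVE)         # the planner might use a status index here
puts users_where(ACTIVE, RECENT) # ...and a very different plan here
```

Without a planner, each composition would need its own hand-chosen execution strategy.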

jmileham | 11 years ago | on: Don't link to line numbers in GitHub

Trite one-liners aside, I think you'll find in practice that some good teams rebase commits on feature branches all the time in order to keep the commit history readable. The implicit contract in that case is that nobody else considers that branch to be usable until it's merged.

It all comes down to the definition of publicity. If your team considers a feature branch to be privately owned by the requestor except for the purposes of code review, it works great.

jmileham | 11 years ago | on: Don't link to line numbers in GitHub

This works great unless you're rebasing and the commit hash you reference falls out of use. In PR comments that I expect to rebase again before release, but where I don't expect changes earlier in the file I'm referencing, I often live on the edge and link to the mutable diff. And since PR comments can be edited later, neither choice is necessarily game over - you can fix the line number or the commit hash you point to after the fact.

jmileham | 12 years ago | on: The Case Against ISP Tolls

I'll take the non-false dichotomy, please. Having enough capacity to meet the streaming demand of your current customers during peak times is a solvable problem that doesn't require a full gigabit for everybody, all the time. The pricing model required to support this while still looking attractive to customers I'll leave to Comcast's well-funded marketing department.

In order to make that happen, of course, we'd have to live in a fanciful world where shifting last-mile delivery cost to content providers wasn't an option so the painful process of exposing this cost to customers couldn't be hidden in a rat's nest of perverse incentives that benefit the most entrenched corporations. (Ironically, and despite its protestations, Netflix's ability to pay this rent is a barrier to entry for its own future competitors.)

jmileham | 12 years ago | on: The Case Against ISP Tolls

The decision to push 4k streams certainly would carry repercussions for Netflix both from a storage and transit standpoint, even absent paying interconnect fees to Comcast.

As a cable subscriber, I expect that when paying for N Mbps of bandwidth, I'm entitled to N Mbps of bandwidth of the content of my choosing. If Comcast's pricing model needs to change to a cost-per-gigabyte model in order to cope with the increased quantity of data customers consume, so be it. But sneaking the costs onto Netflix's tab effectively shifts Comcast's costs to all Netflix customers, allowing Comcast to artificially lower their prices relative to smaller ISPs without the market share necessary to effectively extract rent from Netflix.

jmileham | 12 years ago | on: Tarsnap: No heartbleed here

I don't think he'd dispute that he'd have more to worry about had he not been lucky enough to be on an unaffected version of OpenSSL. I believe his point was that the use of stunnel to terminate SSL connections mitigates some of the attack vectors that could've been used to recover customer information in the event of a compromise at the OpenSSL layer, and that the architecture of Tarsnap itself absolutely precludes recovery of customer backups in any event. And that these facts aren't an accident.

The important takeaway from this post is that it pays to employ layers of security when building software systems.

jmileham | 12 years ago | on: Life of Brian (Krebs)

The description of security by obscurity in this article reads a lot like Kerckhoffs's principle, which when employed correctly is actually a virtue. Not to defend cybercrime, but completely covering your tracks (digitally or otherwise) is a very tricky problem - one that people have long tried to solve with both malicious and benevolent intent - and failings in that vein aren't necessarily at the level of amateurishness that the term implies.

jmileham | 12 years ago | on: A Hybrid File Storage Backend for Django

One caveat comes to mind about the approach this technique supports. Anonymizing your production data and distributing it to developer laptops is something you should think hard about before doing it, and approach very carefully if you do. Sometimes the sensitive information you should be protecting isn't just the users' creds and addresses. Sometimes it's the ways in which they've used your app, the content they've created, and the graph of other accounts with whom they've associated.

A typical anonymized DB dump is likely to share primary key values with production, and often an adversary knowing that user X posted something they'd rather keep private on your app will be able to simply look up their identity on your production server without privileged access.
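One mitigation is to remap primary keys during anonymization so IDs in the dump can't be joined back to production rows; a rough sketch (the table shape here is hypothetical):

```ruby
require "securerandom"

# Replace production IDs with fresh random ones, consistently, so the
# relationship graph survives but nothing joins back to production.
def anonymize(users)
  id_map = Hash.new { |h, k| h[k] = SecureRandom.uuid } # same old ID => same new ID
  users.map do |u|
    new_id = id_map[u[:id]]
    { id: new_id,
      email: "user-#{new_id[0, 8]}@example.com",           # scrub PII too
      friend_ids: u[:friend_ids].map { |fid| id_map[fid] } } # preserve graph shape
  end
end

prod = [{ id: 1, email: "a@real.com", friend_ids: [2] },
        { id: 2, email: "b@real.com", friend_ids: [1] }]
anon = anonymize(prod)
```

This only addresses the key-linkage problem, of course - the content and graph structure themselves can still be identifying, which is the harder issue raised above.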

Generating useful fake data is also hard, of course, and won't hit your edge cases like real production data, but then again, the code you're writing now won't really be exercised by the unwashed masses until you release it, so using production data is mostly a protection against regressions. If you've got sensitive content in your app, you should consider stronger test coverage in lieu of production dumps. Of course performance regressions can be hard to keep under wraps with automated testing (though you can defend against things like N+1 query problems), so YMMV.

jmileham | 13 years ago | on: Hat, not CAP: Introducing Highly Available Transactions

Good point, repeatable read is a pretty useful guarantee, though I would be loath to give up global integrity constraints. Read committed seems to put a lot of work on the application developer's plate, though, and it's not clear to me what impact the different isolation levels have on system performance.