220's comments

220 | 9 years ago | on: Connecting Kubernetes services with linkerd

DNS for service discovery is fraught with peril. Many implementations don't respect TTLs, or don't actually use multiple records, or don't do anything smart like power-of-two-choices load balancing.
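For reference, power-of-two-choices is simple: sample two random backends and send the request to the less loaded one. A toy sketch (the backend addresses and load counts here are made up for illustration):

```python
import random

def pick_backend(backends, load):
    """Power-of-two-choices: sample two random backends, return the less loaded."""
    a, b = random.sample(backends, 2)
    return a if load[a] <= load[b] else b

# toy demo: one backend is badly overloaded
backends = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
load = {"10.0.0.1": 100, "10.0.0.2": 2, "10.0.0.3": 3}
picks = [pick_backend(backends, load) for _ in range(1000)]
print(picks.count("10.0.0.1"))  # the hot backend always loses the comparison
```

Because the hot backend always loses whichever pairwise comparison it lands in, it effectively stops receiving traffic, which naive DNS round robin can't do.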

Even if you pick all your impls carefully, you have to wait for the TTL to expire instead of having changes pushed to you.

If you wanted to implement the watch as a library, it's maybe ~500 lines of code per language using the k8s client lib. I could hammer it out in two days.
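The core of such a library is small: consume ADDED/MODIFIED/DELETED events and keep a live address set, instead of polling DNS. A toy in-memory sketch of that pattern (the event shape here is hypothetical; a real version would drive it from the official kubernetes client's watch on the Endpoints resource):

```python
class EndpointsWatcher:
    """Toy sketch of the k8s watch pattern: maintain a live address set
    from a stream of events rather than re-resolving on a timer.
    The event dict shape below is an assumption for illustration."""

    def __init__(self):
        self.addresses = {}  # service name -> set of "ip:port"

    def handle(self, event):
        kind, svc = event["type"], event["service"]
        if kind == "DELETED":
            self.addresses.pop(svc, None)
        else:  # ADDED / MODIFIED events carry the full current address set
            self.addresses[svc] = set(event["addresses"])

w = EndpointsWatcher()
w.handle({"type": "ADDED", "service": "api", "addresses": ["10.1.0.4:8080"]})
w.handle({"type": "MODIFIED", "service": "api",
          "addresses": ["10.1.0.4:8080", "10.1.0.7:8080"]})
print(w.addresses["api"])
```

The client-side load balancer then reads from `addresses`, which is updated as soon as the API server pushes an event, rather than after a TTL expires.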

220 | 9 years ago | on: Connecting Kubernetes services with linkerd

I don't think SRV records are the right answer; your networking layer should be k8s-aware and issue a watch on Endpoints; it'll be updated immediately-ish when the server set changes. This is similar to how Finagle's ZooKeeper ServerSets are supposed to work.

What linkerd buys you is that you don't have to write this kind of k8s-aware, Zipkin-logging library for every language you're running in production. But I think it's straddling a very narrow segment: small users shouldn't care about this and should just rely on round robin; Zipkin is a PITA to run anyway. Large users will probably want to write their own libraries (Zipkin is a lot better if you emit traces from within your process).

220 | 9 years ago | on: Building Scala Projects: Maven vs. SBT

My current startup and my last one were both smaller orgs. I set up both build systems, and initially used SBT at the previous one. In both cases I think it helped that we had engineers from larger companies familiar with a working monorepo; if you've seen one done well, you have some aspiration as to what to shoot for with Pants, even if it's more than what's currently available.

I'd use SBT again for a locally contained, single-project setup. Once you learn the arcana, it works well, has a ton of plugins, and the REPL is nice. I don't think it scales well with new engineers or number of projects, though.

220 | 9 years ago | on: Building Scala Projects: Maven vs. SBT

> operator-overloading enabling eDSL-creation

This is almost certainly in the negative column for me. SBT has the dubious distinction of being the only build system that takes me hours to figure out how to change a setting. Inevitably I have to fall back to reading the source, and given all the macro magic that can be difficult to unwind.

I've been using Pants, mostly because it's polyglot and has better multi-project support. The OSS community is a lot smaller, and it has a fair number of bugs, but I find the code much more approachable.

220 | 9 years ago | on: Https hurts users far away from the server

If you use cert pinning, as in the DigiNotar/Iran/Gmail incident, you're still protected against a compromised trusted CA, assuming you've communicated with the server in the past, which is realistic for a real-world attack.

It's an attack that's difficult to deploy because it's easy to detect if you're looking in the right places, and as soon as it's detected, you know the CA has been compromised, and the attacker loses a large investment.
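The pinning check itself is easy to sketch: the client remembers a hash of the server's public key and rejects any certificate carrying a different key, no matter which CA signed it. A minimal HPKP-style sketch (the key bytes below are placeholders, not real DER):

```python
import base64
import hashlib

def spki_pin(spki_der: bytes) -> str:
    """HPKP-style pin: base64(sha256(SubjectPublicKeyInfo DER))."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode()

def pin_matches(spki_der: bytes, pinned: set) -> bool:
    # A cert signed by a compromised-but-trusted CA still fails this check
    # unless the attacker also controls the server's actual key pair.
    return spki_pin(spki_der) in pinned

legit_key = b"placeholder-legit-server-spki"      # stand-in for real DER bytes
pins = {spki_pin(legit_key)}
print(pin_matches(legit_key, pins))                # the real server's key
print(pin_matches(b"attacker-minted-key", pins))   # a fraudulently issued cert
```

This is why the attack isn't transparent: the MITM cert chains to a trusted root, but its key doesn't match the pin the client already holds.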

220 | 9 years ago | on: Https hurts users far away from the server

Is there a threat model where you control the network enough to fake domain validation, but only when the target initiates the request to Let's Encrypt?

Otherwise it doesn't matter whether you use Let's Encrypt, as the attacker could just initiate the validation regardless of your CA and end up with a valid certificate (which would still fail cert pinning).

Edit: Oh I see, it's more about whether DV should ever get the green indicator.

220 | 9 years ago | on: Https hurts users far away from the server

> FWIW, my personal website uses let's encrypt, so it would be yellow or worse.

This shouldn't affect your security stance.

There's a common misconception that you entrust your private keys to your CA and they can somehow transparently MITM you. But they only have your public key, not your private key, so they can't do that.

The security threat from trusted CAs is that they can MITM anyone, regardless of whether you use them or not. BUT the attack isn't transparent, and things like cert pinning are effective in the real world at preventing it.

220 | 9 years ago | on: The AWS and MongoDB Infrastructure of Parse

We were a Parse user for (many) apps, and tried to run Parse Server briefly before just letting all the features built on it die.

I think one of the major instabilities not mentioned is the request fanout. On startup, the Parse iOS client could fan out to upwards of 10 requests, generally all for individual pieces of installation data. Your global rate limit was 30 by default. Even barely-used apps would get constantly throttled.
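The arithmetic is brutal: at 10 requests per launch against a 30 req/s global limit, three app launches in the same second exhaust the budget. A minimal sliding-window limiter sketch showing this (the 30/s limit is the default described above; the limiter itself is a generic illustration, not Parse's actual implementation):

```python
from collections import deque

class SlidingWindowLimiter:
    """Generic sliding-window rate limiter for illustration."""

    def __init__(self, limit=30, window=1.0):
        self.limit, self.window = limit, window
        self.stamps = deque()  # timestamps of admitted requests

    def allow(self, now):
        # drop timestamps that have aged out of the window
        while self.stamps and now - self.stamps[0] >= self.window:
            self.stamps.popleft()
        if len(self.stamps) < self.limit:
            self.stamps.append(now)
            return True
        return False

lim = SlidingWindowLimiter()
# four app launches in the same second, each fanning out 10 requests
results = [lim.allow(now=0.0) for _ in range(4 * 10)]
print(results.count(False))  # 10 requests throttled: only 3 launches fit
```

So even a handful of concurrent users opening the app could trip the global limit before serving any real traffic.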

It was even worse running Parse Server, since on MongoDB those 10 writes per user end up sitting in a collection write queue, blocking reads until everything is processed. It was easier to kill the entire feature set built on it.

I know there were a ton of other issues but I've forced myself to forget them.
