kalmar's comments

kalmar | 7 years ago | on: Ask HN: How do you organise your hard drive?

This used to be me until I started using screenshot shortcuts that put images on the clipboard instead of into files. You can paste directly into Jira, Slack, Hangouts, WhatsApp Web, and many other places.

kalmar | 7 years ago | on: Hg advent init

Author here. Those are great tips, thanks! I actually just installed evolve from pip last night. I'll probably borrow from your hgrc, or at least let it inspire me :-)

> I'm in the opposite boat where I've used mercurial for nearly 10 years but any time I try to use git I get lost.

Very interesting! Do you end up using it as a client to git repos?

kalmar | 7 years ago | on: Running a database on EC2? Your clock could be slowing you down

Back when we ran the Citus cluster on EBS, we lost some EBS volumes as well. This manifested as the disk not responding, followed several days later by an email from AWS with the subject "Your Amazon EBS Volume vol-123456789abcdef" telling you the disk was lost irrecoverably.

But yeah, you need to be ready for your disks to go away no matter where they are: ephemeral, EBS, physical, whatever.

kalmar | 7 years ago | on: Running a database on EC2? Your clock could be slowing you down

Post author here. It's ephemeral, yes. It survives reboots, so that's not a problem. It doesn't survive instance-stop, so if a machine is being decommissioned by AWS we do indeed lose its data. As for how we protect against it, the main thing is replication: the data is stored on more than one machine. If we lose a machine for whatever reason, the shards from that machine are copied from a replica to another DB instance.

kalmar | 7 years ago | on: Running a database on EC2? Your clock could be slowing you down

Hi post author here! First off, we actually do use RDS for other databases. As you point out, having a lot of the operational stuff taken care of for you is great.

The post is specifically about our Citus cluster, which stores the analytics data for all our customers. Most of the reasons we do this have been given by other folks in the replies:

  * RDS doesn't support the Citus extension
  * data is stored on ZFS for filesystem compression
  * we get significantly higher disk performance from these instances' NVMe-attached storage, which isn't available on RDS

kalmar | 7 years ago | on: Linux sandboxing improvements in Firefox 60

I'm curious why chroot is used instead of mount namespace and pivot_root(2). This would let them get away without CAP_SYS_CHROOT, while also providing stronger filesystem isolation.

kalmar | 8 years ago | on: Top Ten Time Series DBs

Honest question: how do people use influxdb for monitoring and alerting? Our metrics feed into influx, and I cannot get answers to simple questions like “what is the failure rate?” because arithmetic across measurements isn't possible [0]. I could shoehorn things into a schema to make it work, but in the limit I end up with one mega measurement.

[0]: https://github.com/influxdata/influxdb/issues/3552

kalmar | 8 years ago | on: Terraform Gotchas and How We Work Around Them

Oh interesting. Note to self: see if there's an option to disable `terraform apply` without a plan.

I always refresh when running the pre-apply plan, but while iterating I use that. Do you always run your `tffreshplan` command before applying?
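For anyone following along, the saved-plan workflow is Terraform's built-in way to guarantee you only ever apply what you reviewed (file name is arbitrary):

```shell
# Refresh state and write the reviewed plan to a file.
terraform plan -out=tfplan

# Apply exactly that plan. If the real infrastructure has drifted
# since the plan was created, the apply fails rather than doing
# something you didn't review.
terraform apply tfplan
```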

kalmar | 8 years ago | on: Terraform Gotchas and How We Work Around Them

I don't think it would work in this case, as the `ebs_block_device` block isn't a resource. In fact, the TF state doesn't even have the volume IDs for them!

An alternative would have been to `terraform import` all the volumes, then define the attachments and hope it all worked when running `terraform plan`. I don't 100% remember now why we didn't do that.

kalmar | 8 years ago | on: Terraform Gotchas and How We Work Around Them

Whoa, can you comment on lines in a comment? /me dashes off to investigate GitHub

And thanks for the suggestion. So far it's been on a someday maybe list, but if it really does help that much, maybe we'll bump it to someday maybe soon.
