karlmdavis|1 year ago
Biggest limiter is memory, where the need for it grows linearly with table index size. Postgres really, really wants to keep the index pages hot in the OS cache. It gets very sad and weird if it can't: it will sometimes unpredictably resort to full table scans.
We are running on AWS Aurora, on a db.r6i.12xlarge. Nowhere even close to maxed out on potential vertical scaling.
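The "index pages hot in cache" pressure described above is observable: Postgres exposes per-index block fetch counters (`idx_blks_hit` / `idx_blks_read` in the `pg_statio_user_indexes` view), and their ratio falls once an index stops fitting in memory. A minimal sketch of that arithmetic, using made-up counter values for illustration:

```python
def index_cache_hit_ratio(idx_blks_hit: int, idx_blks_read: int) -> float:
    """Fraction of index block fetches served from Postgres's buffer
    cache rather than read from disk (or the OS page cache behind it).

    The inputs mirror the idx_blks_hit / idx_blks_read counters in
    Postgres's pg_statio_user_indexes view.
    """
    total = idx_blks_hit + idx_blks_read
    if total == 0:
        return 1.0  # no fetches yet; treat as fully cached
    return idx_blks_hit / total

# Hypothetical counter values, for illustration only.
ratio = index_cache_hit_ratio(idx_blks_hit=990_000, idx_blks_read=10_000)
print(f"{ratio:.2%}")  # prints "99.00%"
```

A sustained drop in this ratio for a hot index is one early warning that the working set has outgrown memory, before the planner starts surprising you with sequential scans.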
brightball|1 year ago
EDIT: Here's what I was thinking about. It's chunked in 10 GiB increments that are replicated across AZs.
> Fault-tolerant and self-healing storage
Aurora's database storage volume is segmented in 10 GiB chunks and replicated across three Availability Zones, with each Availability Zone persisting 2 copies of each write. Aurora storage is fault-tolerant, transparently handling the loss of up to two copies of data without affecting database write availability and up to three copies without affecting read availability. Aurora storage is also self-healing; data blocks and disks are continuously scanned for errors and replaced automatically.
https://aws.amazon.com/rds/aurora/features/
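The guarantees in that quote fall out of Aurora's quorum model: six copies of each 10 GiB segment (two per AZ across three AZs), with writes needing 4 of 6 copies and reads needing 3 of 6. A small sketch of that arithmetic (quorum sizes per AWS's published Aurora design; the function name is mine):

```python
def aurora_segment_status(surviving_copies: int) -> dict:
    """Availability of a single Aurora storage segment given how many
    of its 6 replicas survive. Aurora uses a 4/6 write quorum and a
    3/6 read quorum across 2 copies in each of 3 AZs."""
    TOTAL_COPIES = 6
    WRITE_QUORUM = 4
    READ_QUORUM = 3
    assert 0 <= surviving_copies <= TOTAL_COPIES
    return {
        "writable": surviving_copies >= WRITE_QUORUM,
        "readable": surviving_copies >= READ_QUORUM,
    }

# Losing 2 of 6 copies leaves 4: both quorums still hold.
print(aurora_segment_status(6 - 2))  # {'writable': True, 'readable': True}
# Losing 3 leaves 3: reads survive, writes do not.
print(aurora_segment_status(6 - 3))  # {'writable': False, 'readable': True}
```

That matches the quoted text exactly: loss of two copies doesn't affect write availability, and loss of three still leaves the read quorum intact.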
mrbonner|1 year ago
I think the reason behind the Aurora pick is to support arbitrary aggregation, filtering, and low-latency reads (p90 < 3000 ms). We could not pick a distributed DB based on Presto, Athena, or Redshift, mainly because of the latency requirements.
The other contender I considered was Elasticsearch. But I do think using it in this case would be akin to the saying about fitting a square peg in a round hole.
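For readers unfamiliar with the notation: "p90 < 3000ms" means 90% of requests must complete within 3 seconds, regardless of how slow the tail is. A minimal nearest-rank percentile sketch, with hypothetical latency samples:

```python
import math

def percentile(samples: list, p: float) -> float:
    """Nearest-rank percentile: the smallest sample value that is
    greater than or equal to p percent of all samples."""
    if not samples:
        raise ValueError("no samples")
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

# Hypothetical read latencies in milliseconds.
latencies = [120, 250, 400, 800, 1100, 1500, 1900, 2400, 2800, 3500]
print(percentile(latencies, 90))  # prints 2800: meets a p90 < 3000 ms
                                  # target even though the worst request
                                  # took 3.5 s
```

This is why a percentile target is a much more workable SLO for interactive reads than a mean or a maximum: one slow outlier doesn't blow the budget.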
LunaSea|1 year ago
Is it IoT / remote sensing related?