I came here for the bad takes, and I have not been disappointed. Dynamo slays when you know your access patterns and need consistent performance with no operational burden. Turns out, that's the case most of the time. Think of it as application state rather than a database. It's not simple key-value like Redis: GSIs with compound keys give you access to data across multiple dimensions, on virtually unlimited data, with consistent performance. Its weakness is querying across dimensions you didn't plan for. If you need that regularly, it sucks. If you need it once in a while, write a migration.
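To make the compound-key idea concrete, here's a minimal sketch of how one item in a single-table design can serve two access patterns at once. The entity names and key prefixes (`CUSTOMER#`, `STATUS#`, `GSI1PK`, etc.) are illustrative assumptions, not anything from the comment above:

```python
from datetime import date

def order_keys(customer_id: str, order_id: str, status: str, placed: date) -> dict:
    """Build the key attributes for one order item.

    PK/SK serve the main pattern ("all orders for a customer");
    the GSI's compound key (GSI1PK/GSI1SK) serves a second pattern
    ("all PENDING orders in a date range") without a second table.
    """
    return {
        "PK": f"CUSTOMER#{customer_id}",   # base-table partition key
        "SK": f"ORDER#{order_id}",         # base-table sort key
        "GSI1PK": f"STATUS#{status}",      # GSI partition key: query by status
        "GSI1SK": f"DATE#{placed.isoformat()}#ORDER#{order_id}",  # GSI sort key: range by date
    }

keys = order_keys("c42", "o1001", "PENDING", date(2023, 5, 1))
# Base table answers: Query(PK = "CUSTOMER#c42", SK begins_with "ORDER#")
# GSI answers:        Query(GSI1PK = "STATUS#PENDING", GSI1SK begins_with "DATE#2023-05")
```

Both queries hit a single partition key and read a contiguous sort-key range, which is why performance stays flat as the table grows. The flip side is exactly the weakness named above: a dimension not baked into some key (say, order total) can't be queried without a scan or a new GSI.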
cldcntrl|1 year ago
Here's most of the time out in the real world:
- Low-cardinality partition keys leading to hot partitions, trashing capacity utilization.
- Bad key design means access patterns are off the table forever, because nobody wants to take on a data migration with BatchWriteItem.
- Read/write spikes causing throttling errors. The capacity model is hard to reason about: people don't understand how capacity relates to partitions and item sizes, wrongly assume "On-Demand" mode makes throttling impossible, or expect Provisioned Capacity Autoscaling to react instantly.
- Multiple GSIs to cover multiple access patterns = "why is our bill so high?" (every GSI stores and writes its own copy of the projected data).
I've seen these issues over and over again while working with real organizations.
Of course it's impressive technology; it's just so littered with traps that I've stopped recommending it except in very specific cases.