top | item 8729420

Databases at 14.4Mhz

297 points | chton | 11 years ago | blog.foundationdb.com | reply

81 comments

[+] ChuckMcM|11 years ago|reply
This is FoundationDB's announcement that they are doing full ACID databases with a capability of 14.4M writes per second. That is insanely fast in the database world. Running in AWS with 32 c3.8xlarge configured machines. So basically NSA-level database capability for $150/hr. But perhaps more interesting is that those same machines on the open market are about $225,000. That's two racks, one switch, and a transaction rate that lets you watch every purchase made at every Walmart store in the US, in real time. That is assuming the stats are correct[1], and it wouldn't even be sweating (14M customers a day vs 14M transactions per second). Insanely fast.

I wish I was an investor in them.

[1] http://www.statisticbrain.com/wal-mart-company-statistics/
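A quick back-of-the-envelope on those cost figures (a sketch using only the two numbers in the comment above):

```python
# Rough check of the cost figures quoted above (both numbers from the comment).
aws_rate_per_hour = 150        # 32 x c3.8xlarge on AWS, per the post
hardware_cost = 225_000        # same machines bought outright, per the comment
breakeven_hours = hardware_cost / aws_rate_per_hour
print(breakeven_hours)         # 1500.0 hours
print(breakeven_hours / 24)    # 62.5 days of continuous rental before buying wins
```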

[+] vilda|11 years ago|reply
Not all ACID transactions are equal. This is just a key-value store-like test. It shows the potential to scale, yet says nothing about performance in the real world.

32 c3.8xlarge instances have 1920GB of memory. Given 1 billion 16B keys with 8..100B values, the whole dataset fits entirely into memory.

The Cassandra test mentioned [1] sustained loss of 1/3 instances. That's very impressive! Would love to see how F-DB handles this type of real-life situation (hint hint for follow up blog post).

[1] http://googlecloudplatform.blogspot.cz/2014/03/cassandra-hit...
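For what it's worth, the memory arithmetic above checks out (a quick sketch; the 60 GB per-instance figure is the published c3.8xlarge spec, not from the thread):

```python
# Check that the test dataset fits in cluster RAM (figures from the comment).
instances = 32
ram_per_instance_gb = 60                   # c3.8xlarge has 60 GB RAM
total_ram_gb = instances * ram_per_instance_gb
keys = 1_000_000_000
bytes_per_pair = 16 + 100                  # 16 B key + up-to-100 B value (worst case)
dataset_gb = keys * bytes_per_pair / 1e9
print(total_ram_gb)   # 1920
print(dataset_gb)     # 116.0 -- comfortably in memory
```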

[+] hendzen|11 years ago|reply
This is very impressive, however...

See this tweet by @aphyr: https://twitter.com/aphyr/status/542755074380791809

(All credit for the idea in this comment is due to @aphyr)

Basically because the transactions modified keys selected from a uniform distribution, the probability of contention was extremely low. AKA this workload is basically a data-parallel problem, somewhat lessening the impressiveness of the high throughput. Would be interesting to see it with a Zipfian distribution (or even better, a Biebermark [0])

[0] - http://smalldatum.blogspot.co.il/2014/04/biebermarks.html
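To put a number on the contention point (a rough sketch, not from the tweet): with 20 keys drawn uniformly from a billion-key space, the chance that two concurrent transactions touch a common key is tiny:

```python
# Probability that two transactions, each writing 20 uniformly random keys
# out of 1 billion, share at least one key (independent draws assumed).
keys = 1_000_000_000
writes = 20
p_no_overlap = (1 - writes / keys) ** writes   # second txn avoids all 20 of the first's keys
print(f"{1 - p_no_overlap:.1e}")               # ~4.0e-07
```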

[+] Dave_Rosenthal|11 years ago|reply
Actually, @aphyr's analysis is wrong. In fact it's actually worse than he says. There is an exactly 0% chance of conflicts between two transactions that each only write 20 random keys (as they do in this test). This is perhaps counterintuitive, but, because there are no reads, any serialization order is possible and therefore there is no chance of conflict.

Equally wrong is his assumption that this has any bearing on the performance of FoundationDB. In fact FoundationDB will perform the same whether the conflict rate is high or low. This isn't to say that FoundationDB somehow cheats the laws of transaction conflicts, just that it has to do all the work in either case. There is no trick or cheat on this test--this same performance will hold on a variety of workloads with varying conflict rates as well as those including reads.

[+] unknown|11 years ago|reply

[deleted]

[+] wmf|11 years ago|reply
Unfortunately this problem isn't specific to FoundationDB; the old "industry-standard" TPC-C benchmark has a similar low-contention design which has led to years of unrepresentative performance tuning and benchmarketing.
[+] jrallison|11 years ago|reply
Absolutely. Choosing the proper data model based on your access patterns is one of the best ways of keeping FDB performance high by reducing the likelihood of contention.
[+] vosper|11 years ago|reply
Yeah, this would have been more interesting if they could tell us what the throughput is given various levels of contention.
[+] jrallison|11 years ago|reply
We've been using FoundationDB in production for about 10 months now. It's really been a game changer for us.

We continue to use it for more and more data access patterns which require strong consistency guarantees.

We currently store ~2 terabytes of data in a 12 node FDB cluster. It's rock solid and comes out of the box with great tooling.

Excited about this release! My only regret is I didn't find it sooner :)

[+] lclarkmichalek|11 years ago|reply
Why do you need a 12 node cluster for 2 TB of data?
[+] bsaul|11 years ago|reply
Just watched the linked presentation about "flow" here: https://foundationdb.com/videos/testing-distributed-systems-...

Is it really the first distributed DB project to have built a simulator?

Because frankly, if that's the case, it seems revolutionary to me. Intuitively, it seems like bringing the same kind of quality improvement as unit testing did to regular software development.

PS: I should add that this talk is one of the best I've seen this year. The guy is extremely smart, passionate, and clear. (I just loved the Hurst exponent part.)

[+] jchrisa|11 years ago|reply
At Couchbase we have a plethora of simulators. Simulations of cluster topology changes, simulations of failure scenarios, simulations of workloads to estimate required cluster sizes, etc. Here's one: https://github.com/couchbaselabs/cbfg
[+] dchichkov|11 years ago|reply
I'm only familiar with other key-value storage engines, not FoundationDB, but it seems like the goals are: "distributed key-value database, read latencies below 500 microseconds, ACID, scalability".

I remember evaluating a few low latency key-value storage solutions, and one of these was Stanford's RAMCloud, which is supposed to give 4-5 microseconds reads, 15 microseconds writes, scale up to 10,000 boxes and provide data durability. https://ramcloud.atlassian.net/wiki/display/RAM/RAMCloud Seems like, that would be "Databases at 2000Mhz".

I've actually studied the code that handles the network, and it's written pretty nicely; as far as I know, it should work over both 10GbE and Infiniband with similar latencies. And I'm not at all surprised they could get a pretty clean-looking 4-5us latency distribution with code like that.

How does it compare with FoundationDB? Is it completely different technology?

[+] jrallison|11 years ago|reply
Don't know much about RAMCloud, but from the description:

"Many more issues remain, such as whether we can provide higher-level features such as secondary indexes and multiple-object transactions without sacrificing the latency or scalability of the system. We are currently exploring several of these issues."

Sounds like it doesn't provide multi-key ACID transactions, at the very least.

[+] imaginenore|11 years ago|reply
But it doesn't store the data on disk.

It should be compared to Memcached, not to a real storage engine.

[+] felixgallo|11 years ago|reply
This looks very interesting and congratulations to the FoundationDB crew on some pretty amazing performance numbers.

One of the links leads to an interesting C++ actor preprocessor called 'Flow'. In that table, it lists the performance result of sending a message around a ring for a certain number of processes and a certain number of messages, in which Flow appears to be fastest with 0.075 sec in the case of N=1000 and M=1000, compared with, e.g. erlang @ 1.09 seconds.

My curiosity was piqued, so I threw together a quick microbenchmark in erlang. On a moderately loaded 2013 macbook air (2-core i7) and erlang 17.1, with 1000 iterations of M=1000 and N=1000, it averaged 34 microseconds per run, which compares pretty favorably with Flow's claimed 75000 microseconds. The Flow paper appears to maybe be from 2010, so it would be interesting to know how it's doing in 2014.
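For anyone who wants to try reproducing something similar, here's a rough Python analogue of that ring benchmark (my own sketch, not the commenter's Erlang code or Flow's actual test): N tasks in a ring pass a token around for M full laps.

```python
import asyncio
import time

async def ring_benchmark(n_procs: int, n_laps: int) -> float:
    """Pass a token around a ring of n_procs tasks for n_laps full laps;
    return elapsed seconds."""
    queues = [asyncio.Queue() for _ in range(n_procs)]

    async def node(i: int) -> None:
        nxt = queues[(i + 1) % n_procs]
        while True:
            msg = await queues[i].get()
            if msg == 0:                      # shutdown token: forward once, then stop
                if i != n_procs - 1:
                    await nxt.put(0)
                return
            # the last node in the ring decrements the lap counter
            await nxt.put(msg - 1 if i == n_procs - 1 else msg)

    tasks = [asyncio.create_task(node(i)) for i in range(n_procs)]
    start = time.perf_counter()
    await queues[0].put(n_laps)               # token carries the remaining lap count
    await asyncio.gather(*tasks)
    return time.perf_counter() - start

if __name__ == "__main__":
    elapsed = asyncio.run(ring_benchmark(100, 100))   # 10,000 message hops
    print(f"{elapsed:.3f}s")
```

Note this measures asyncio queue hops, not OS processes, so it isn't directly comparable to either the Erlang or the Flow numbers; it's just a template for the M-messages-around-an-N-ring shape of the test.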

[+] emn13|11 years ago|reply
There's got to be some mistake there somewhere - on your part, or on theirs - because there's no way erlang improved from 1.09 seconds to 34 microseconds on pretty much any benchmark between 2010 and 2014. Even a factor of 1000 (the message count) isn't enough to account for that difference - something's fishy.
[+] shortstuffsushi|11 years ago|reply
As someone who has no idea about the cost of high-scale computing like this, is $150/hr reasonable? It seems to me like an amount that's hard to sustain, but I have no idea if that's a steady, all-the-time rate, or a burst rate, or what. Or if it's a setup you'd actually ever even need -- from the examples they mention (like the tweets), it seems they're above the need by a fair amount. Anyone else in this sort of situation care to chip in on that?
[+] landryraccoon|11 years ago|reply
Per the article, it's about 1/20th the cost that Google would charge you for the same number of writes.
[+] primitivesuave|11 years ago|reply
There are literally thousands of companies that each hand over millions of dollars to Oracle on a semi-regular basis, for enormous server setups to store enormous data sets. This is a major improvement.
[+] w8rbt|11 years ago|reply
I thought it was DB connections over radio waves just above 20 meters. Also, it's MHz, not Mhz.
[+] maliki|11 years ago|reply
This sounds great compared to my anecdotal experience with DB write performance; but is there a collection of database performance benchmarks that this can be easily compared to?

The best source for DB benchmarking I know of is http://www.tpc.org/. The methodology is more complicated there, but the top results are around 8 million transactions per minute on $5 million systems. This FoundationDB result is more like 900 million transactions per minute on a system that costs $1.5 million a year to rent (so, approx $5 million to buy?).

The USD/transactions-per-minute metric is clear, but without a standard test suite (schema, queries, client count, etc.), comparing claims of database performance makes my head hurt.
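A back-of-the-envelope check of those numbers (just restating the thread's figures; the yearly rental cost assumes 24/7 usage at the $150/hr quoted elsewhere in the thread):

```python
# Sanity-check the comparison above using only figures from the thread.
writes_per_sec = 14_400_000
writes_per_min = writes_per_sec * 60       # the "transactions per minute" figure
cost_per_hour = 150
cost_per_year = cost_per_hour * 24 * 365   # continuous rental
print(writes_per_min)   # 864000000 -- i.e. "more like 900 million"
print(cost_per_year)    # 1314000  -- in the ballpark of the ~$1.5M/yr quoted
```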

[+] Dave_Rosenthal|11 years ago|reply
I think you mean "900 million transactions per minute". Of course that overstates things since each TPC-C transaction entails a lot more than one write. TPC-C is about 2/3 read and 1/3 write, and each TPC transaction might do 20 low-level read+write operations (I'm actually not sure, but I think that's in the ballpark.)

In the NoSQL world many people have converged on a workload of 90% reads/10% writes to individual keys. We show 90/10 results on our performance page [1] but in this test we do 100% writes to stress the "transaction engine", which processes writes.

Since we have our SQL Layer [2] as well, we will run some more-comparable SQL tests in the future.

[1] https://foundationdb.com/key-value-store/performance

[2] https://foundationdb.com/layers/sql

[+] illumen|11 years ago|reply
Impressive.

However I think there's still plenty of room to grow.

320,000 concurrent sessions isn't that much by modern standards. You can get 12 million concurrent connections on one Linux machine, and push 1 gigabit of data.

Also, 167 megabytes per second (116B * 14.4 million) is not pushing the limits of what one machine can do. I've been able to process 680 megabytes per second of data into a custom video database, plus write it to disk on one 2010 machine. That's doing heavy processing at the same time on the video with plenty of CPU to spare.

PCIe over fibre can do many millions of messages per second. You can fit 2TB-memory machines in 1U (and more).

Since this is a memory + eventually dump to disk database, I think there is still a lot of room to grow.

[+] tuyguntn|11 years ago|reply
I wish I could work with such great professionals as a junior for 10 years, right after school!
[+] oconnor663|11 years ago|reply
Any chance of an open source release for the database core? :)
[+] lttlrck|11 years ago|reply
"Or, as I like to say, 14.4Mhz."

Sorry, I don't like that at all.

[+] ctdonath|11 years ago|reply
Inclined to agree. Title made me think it was a retro-computing analysis of running dBASE II on an IBM PC/AT or some such.
[+] simcop2387|11 years ago|reply
I would agree. While 14.4 million writes per second is impressive, it definitely doesn't fit the definition of the Hz unit (cycles per second).
[+] imanaccount247|11 years ago|reply
Why the deliberately misleading comparisons? If you are doing something genuinely impressive, then you should be able to be honest about it and have it still seem impressive. One tweet is not one write. Comparing tweets per second to writes per second is complete nonsense. How many writes a tweet causes depends on how many followers the person who is tweeting has. The 100 writes per second nonsense is even worse. Do you just think nobody is old enough to have used a database 15 years ago? 10,000 writes per second was no big deal on off-the-shelf hardware of the day, never mind on an actual server.
[+] jrochkind1|11 years ago|reply
It's just a comparison to give you a sense of what those numbers look like - an order-of-magnitude comparison. I didn't find it misleading, and didn't think it meant that you'd be able to run Twitter on their DB on one commodity server. But it's an order-of-magnitude estimate to give you a sense of the scale.