jstephens | 14 years ago | on: Thanks HN: we just launched the One-Time Secret beta based on your feedback
jstephens's comments
Depending on what database you use, you can do partial replication (only the tables that aren't secrets) to a slave and then back up the slave. We use MySQL, where that is possible.

If you combine that with a high rotation rate on your binlogs (again MySQL) and purge the older logs, you effectively have a slave with all of the "permanent" data plus only two hours of binlogs covering everything. In case of disaster you restore from the slave, then replay the binlogs you kept (a couple of hours) to recover recent secrets, and you're back where you started. But since you never replicated the secrets and never kept more than two hours of binlogs, you have no way of recovering secrets outside of that window.
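Roughly, the MySQL side of this could look like the config sketch below. This is an illustration under assumptions, not our actual config: the table name `otsdb.secrets` is hypothetical, and since `expire_logs_days` only has day granularity, the two-hour window is enforced with a scheduled `PURGE BINARY LOGS` instead.

```
# --- slave my.cnf (sketch) ---
[mysqld]
# Never replicate the secrets table, so it never reaches the slave or its backups.
# "otsdb.secrets" is a hypothetical db.table name.
replicate-ignore-table = otsdb.secrets

# --- master my.cnf (sketch) ---
[mysqld]
log-bin = mysql-bin
# Rotate binlogs aggressively so each file covers a short span.
max_binlog_size = 100M
```

Then a cron job on the master can keep only the last ~2 hours of binlogs:

```
0 * * * * mysql -e "PURGE BINARY LOGS BEFORE NOW() - INTERVAL 2 HOUR"
```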
jstephens | 14 years ago | on: Ask HN: Is it feasible to use redis as the only datastore?
Redis also supports Pub/Sub, lists, sorted sets, and hashes. Not exactly a strict "key value" store.
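For anyone who hasn't seen these, here's an illustrative redis-cli session (key names made up) showing each structure:

```
# Lists: push/pop from either end
LPUSH jobs "send-email"
RPOP jobs

# Sorted sets: members ordered by score
ZADD leaderboard 100 "alice"
ZRANGE leaderboard 0 -1 WITHSCORES

# Hashes: field/value maps under one key
HSET user:1 name "alice"
HGETALL user:1

# Pub/Sub: in another client run SUBSCRIBE news, then:
PUBLISH news "hello"
```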
jstephens | 14 years ago | on: MessagePack: Fast and Compact Serialization
For Perl it's fast as long as you have a small payload. Here are my benchmarks using progressively bigger data structures.
perl: 5.008008
Storable: 2.21
JSON::XS: 2.25
Data::MessagePack: 0.34
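The harness is roughly the sketch below (not my exact script; the sample structure is made up). `cmpthese(-1, ...)` runs each sub for at least one CPU second, matching the output above:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Benchmark qw(cmpthese);
use Storable qw(freeze);
use JSON::XS qw(encode_json);
use Data::MessagePack;

my $mp  = Data::MessagePack->new;
my $src = { map { $_ => [ 1 .. 10 ] } 'a' .. 'j' };    # hypothetical payload

# Serialize comparison; deserialize is the same idea with thaw/decode/unpack.
cmpthese(-1, {
    storable => sub { freeze($src) },
    json     => sub { encode_json($src) },
    mp       => sub { $mp->pack($src) },
});
```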
==== Size ====
.-----------------+-----------+---------+---------.
| src | storable | json | msgpack |
+-----------------+-----------+---------+---------+
| 1 | 4 | fail | 1 |
| 3.14 | 8 | fail | 5 |
| {} | 7 | 2 | 1 |
| [] | 7 | 2 | 1 |
| [('a')x10] | 37 | 41 | 21 |
| {('a')x10} | 14 | fail | 11 |
| +{1,+{1,+{}}} | 29 | 14 | 7 |
| +[+[+[]]] | 19 | 6 | 3 |
'-----------------+-----------+---------+---------'
==========================
Testing: theirs:HASH (4995 bytes)
==== Serialize ====
Benchmark: running json, mp, storable for at least 1 CPU seconds...
json: 1 wallclock secs ( 1.14 usr + 0.00 sys = 1.14 CPU) @ 26947.37/s (n=30720)
mp: 1 wallclock secs ( 1.11 usr + 0.00 sys = 1.11 CPU) @ 70446.85/s (n=78196)
storable: 1 wallclock secs ( 1.06 usr + 0.00 sys = 1.06 CPU) @ 18442.45/s (n=19549)
Rate storable json mp
storable 18442/s -- -32% -74%
json 26947/s 46% -- -62%
mp 70447/s 282% 161% --
==== Deserialize ====
Benchmark: running json, mp, storable for at least 1 CPU seconds...
json: 1 wallclock secs ( 1.02 usr + 0.00 sys = 1.02 CPU) @ 23424.51/s (n=23893)
mp: 1 wallclock secs ( 1.02 usr + 0.00 sys = 1.02 CPU) @ 35136.27/s (n=35839)
storable: 1 wallclock secs ( 1.06 usr + 0.00 sys = 1.06 CPU) @ 25357.55/s (n=26879)
Rate json storable mp
json 23425/s -- -8% -33%
storable 25358/s 8% -- -28%
mp 35136/s 50% 39% --
==========================
Testing: xdbi:HASH (24138 bytes)
==== Serialize ====
Benchmark: running json, mp, storable for at least 1 CPU seconds...
json: 1 wallclock secs ( 1.06 usr + 0.00 sys = 1.06 CPU) @ 2414.15/s (n=2559)
mp: 1 wallclock secs ( 1.07 usr + 0.00 sys = 1.07 CPU) @ 8373.83/s (n=8960)
storable: 1 wallclock secs ( 1.06 usr + 0.00 sys = 1.06 CPU) @ 7244.34/s (n=7679)
Rate json storable mp
json 2414/s -- -67% -71%
storable 7244/s 200% -- -13%
mp 8374/s 247% 16% --
==== Deserialize ====
Benchmark: running json, mp, storable for at least 1 CPU seconds...
json: 1 wallclock secs ( 1.11 usr + 0.00 sys = 1.11 CPU) @ 4842.34/s (n=5375)
mp: 2 wallclock secs ( 1.13 usr + 0.00 sys = 1.13 CPU) @ 4324.78/s (n=4887)
storable: 1 wallclock secs ( 1.05 usr + 0.00 sys = 1.05 CPU) @ 4654.29/s (n=4887)
Rate mp storable json
mp 4325/s -- -7% -11%
storable 4654/s 8% -- -4%
json 4842/s 12% 4% --
==========================
Testing: pdf:HASH (35239 bytes)
==== Serialize ====
Benchmark: running json, mp, storable for at least 1 CPU seconds...
json: 1 wallclock secs ( 1.04 usr + 0.00 sys = 1.04 CPU) @ 9845.19/s (n=10239)
mp: 1 wallclock secs ( 1.05 usr + 0.00 sys = 1.05 CPU) @ 5119.05/s (n=5375)
storable: 1 wallclock secs ( 1.10 usr + 0.00 sys = 1.10 CPU) @ 7518.18/s (n=8270)
Rate mp storable json
mp 5119/s -- -32% -48%
storable 7518/s 47% -- -24%
json 9845/s 92% 31% --
==== Deserialize ====
Benchmark: running json, mp, storable for at least 1 CPU seconds...
json: 1 wallclock secs ( 1.08 usr + 0.00 sys = 1.08 CPU) @ 3110.19/s (n=3359)
mp: 1 wallclock secs ( 1.07 usr + 0.00 sys = 1.07 CPU) @ 2790.65/s (n=2986)
storable: 1 wallclock secs ( 1.08 usr + 0.00 sys = 1.08 CPU) @ 3554.63/s (n=3839)
Rate mp json storable
mp 2791/s -- -10% -21%
json 3110/s 11% -- -13%
storable 3555/s 27% 14% --
==========================
Testing: globallib:HASH (197319 bytes)
==== Serialize ====
Benchmark: running json, mp, storable for at least 1 CPU seconds...
json: 1 wallclock secs ( 1.09 usr + 0.00 sys = 1.09 CPU) @ 1760.55/s (n=1919)
mp: 1 wallclock secs ( 1.06 usr + 0.00 sys = 1.06 CPU) @ 791.51/s (n=839)
storable: 1 wallclock secs ( 1.05 usr + 0.00 sys = 1.05 CPU) @ 1279.05/s (n=1343)
Rate mp storable json
mp 792/s -- -38% -55%
storable 1279/s 62% -- -27%
json 1761/s 122% 38% --
==== Deserialize ====
Benchmark: running json, mp, storable for at least 1 CPU seconds...
json: 1 wallclock secs ( 1.71 usr + 0.00 sys = 1.71 CPU) @ 301.75/s (n=516)
mp: 1 wallclock secs ( 1.03 usr + 0.00 sys = 1.03 CPU) @ 404.85/s (n=417)
storable: 1 wallclock secs ( 1.12 usr + 0.00 sys = 1.12 CPU) @ 544.64/s (n=610)
Rate json mp storable
json 302/s -- -25% -45%
mp 405/s 34% -- -26%
storable 545/s 80% 35% --
jstephens | 14 years ago | on: Poll: How many hours do you work a week?
Since I use the RSS feed, I spend at most an hour a day reading stuff on here, but I usually work a ~40 hour week.
jstephens | 14 years ago | on: MySQL--Replication - Peer-to-peer based, multi-master replication for MySQL
No. Each master has an offset, so when you build a ring you give each master a different starting point. This keeps your ids from colliding. So master1 would create ids 1, 11, 21, 31, 41, 51, etc., and master2 would create ids 2, 12, 22, 32, 42, 52, etc.
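In stock MySQL this is the `auto_increment_increment` / `auto_increment_offset` pair. A sketch for a ring with up to 10 masters, using the ids from the example above:

```
# master1 my.cnf
[mysqld]
auto_increment_increment = 10   # step between generated ids
auto_increment_offset    = 1    # master1 generates 1, 11, 21, ...

# master2 my.cnf
[mysqld]
auto_increment_increment = 10
auto_increment_offset    = 2    # master2 generates 2, 12, 22, ...
```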