mazieres's comments
mazieres | 2 years ago | on: C++20 Idioms for Parameter Packs
One place this kind of pattern is useful is a generic accessor type for fields that you then want to extend to tuples. For example, accessor(x) might return x.field, and accessor.name might be "field". To make that work for tuples, you need a string for every possible tuple index. E.g.:
template<typename T, size_t N> struct tuple_accessor;

template<typename... T, size_t N>
struct tuple_accessor<std::tuple<T...>, N> {
  constexpr decltype(auto) operator()(auto &&t) const {
    return std::get<N>(std::forward<decltype(t)>(t));
  }
  static constexpr const char *name = index_string<N>();
};
mazieres | 5 years ago | on: My tutorial and take on C++20 coroutines
Also, here's a more authoritative source than Stack Overflow for the return type of main: https://timsong-cpp.github.io/cppwp/n4861/basic.start.main#2
mazieres | 5 years ago | on: My tutorial and take on C++20 coroutines
This is confusing, because it invites the question "the same as what?" In fact, you can migrate a coroutine across threads, or even create a thread specifically for the purpose of resuming a coroutine.
But it is true that from the point at which you resume a coroutine to the point at which it suspends itself, it will use one particular stack for any function calls made within the coroutine. That's why you can't suspend a coroutine from inside a nested function call: the call's frame lives on that stack.
mazieres | 6 years ago | on: RFC 8548: Cryptographic Protection of TCP Streams (Tcpcrypt)
An example would be if you don't have a certificate. Maybe you just have a pre-shared secret or a Kerberos ticket. Or worse, maybe you just have a password, and so need to use a PAKE protocol or something to authenticate the session. Or maybe you are using an RPC protocol, like NFS, that doesn't let you add TLS in a backwards-compatible way, but where the client and server do share some secret file handle.
mazieres | 6 years ago | on: RFC 8548: Cryptographic Protection of TCP Streams (Tcpcrypt)
So the goals of the tcpcrypt project back then were 1) solve the performance problem by making it practical to encrypt essentially all TCP traffic, 2) make undetectable widespread eavesdropping impractical, 3) provide a pathway for insecure applications to achieve high network security with minimal effort, and 4) avoid the traps applications commonly fall into, such as low-quality pseudo-random seeds or leaking session keys through memory errors.
We solved #1 by carefully constructing the protocol to minimize public key overhead on the server and allow optimized server authentication. In particular, with low-exponent RSA (popular at the time), the server-side computation was only encryption (which is much, much cheaper than decryption). Moreover, for strong security (with server-side authentication), servers could perform batch signing in which a single signature could be amortized over many connections. The result was dramatically better performance for the original version of tcpcrypt, which you can see described here:
http://www.scs.stanford.edu/~dm/home/papers/bittau:tcpcrypt....
We solved #2 by encrypting everything, and providing a mechanism to tie authentication to that encryption. So if an ISP systematically mounted a man-in-the-middle attack, they would cause authentication to fail. Moreover, it would be easy to run tests by, for instance, logging session IDs at connections between various endpoints and comparing them after the fact. That still meant ISPs could violate the privacy of unauthenticated connections, but then at least we would know about it and could decide, as a society, whether we wanted this kind of eavesdropping.
#3 was really the ultimate goal--strong security everywhere even with insecure legacy protocols. The problem with legacy protocols is that they didn't all contain a way to add a "STARTTLS" verb. So what we did was give tcpcrypt an "application-aware" bit that would allow out-of-band signaling that the application knows about tcpcrypt. This bit would let legacy application maintainers shoehorn in stronger security in a completely backwards-compatible way and, if the project really took off, ultimately disable the insecure old unauthenticated protocol.
Finally, #4 was handled by keeping session keys in the kernel (so they can't leak in core-dumps, uninitialized data, buffer errors, etc.), and providing a fresh session ID that can be authenticated even without a good source of randomness.
As time went on, the performance argument (#1) became much less relevant, with the result that tcpcrypt now uses ECC crypto instead of RSA. (Though the batch-authentication optimization is still available.) However, #2 actually became more relevant (at least in 2013), and #4 even more so with all these catastrophic bugs (like Heartbleed, or Debian disabling randomness in OpenSSL). It's also still much, much harder than it should be to write software that encrypts network traffic, particularly where one doesn't just want the standard anonymous-client to server-with-X.509-certificate setup. If people adopt tcpcrypt, setting up an authenticated, encrypted TCP connection will become comparable to checking a unix password--just a few lines of code.
A lot of this is clearer if you read the companion TCP-ENO RFC, rather than tcpcrypt. The goals and rationale and overall negotiation mechanism were broken out into a separate document to allow better upgradability to future encryption protocols: https://tools.ietf.org/html/rfc8547
mazieres | 7 years ago | on: Understanding the Stellar Consensus Protocol
mazieres | 7 years ago | on: Understanding the Stellar Consensus Protocol
Now what makes SCP different from traditional BFT replication is not just that the quorums are defined in a decentralized way, but that they require a transitive closure of dependencies. So if you depend on stronghold and stronghold depends on IBM and binance also depends on IBM, then even if you don't think you care about binance, you will still remain in sync with them.
mazieres | 8 years ago | on: Keybase is now supported by the Stellar Development Foundation
There are obviously annoying ways in which keybase could support cryptocurrency, but you are making a lot of assumptions about what Keybase is going to do that are not based on the blog post. For example, wallets do not mine coins and Stellar does not support mining.
Why don't you wait to see what comes out and submit a feature request if you don't like it, instead of flipping out about some hypothetical feature you won't like?
mazieres | 8 years ago | on: Keybase is now supported by the Stellar Development Foundation
Ripple has only just now, in 2018, published their decentralized consensus algorithm (Cobalt), which as far as I know is not even in production use yet, and doesn't provide optimal safety. (In settings where Cobalt is guaranteed safe, SCP would be too, but not vice versa.) Their production network still uses a protocol that, by Ripple's own analysis (https://arxiv.org/pdf/1802.07242), fails to guarantee safety without >90% agreement on the UNL.
mazieres | 8 years ago | on: Keybase is now supported by the Stellar Development Foundation
What's worse is that colored coins could distort the incentive structure to make it profitable to bribe miners, because the benefit to an attacker of subverting consensus could far outweigh the value of 12.5 BTC/block.
mazieres | 8 years ago | on: Stellar Protocol: A Federated Model for Internet-Level Consensus (2016) [pdf]
One way to view how SCP avoids leader election is to consider that it is effectively emulating the leader. SCP has two phases, a nomination and a balloting phase. The nomination phase is effectively like one or more instances of an asynchronous broadcast protocol (which don't require a leader since multiple nodes can choose to broadcast). The balloting phase is like Paxos, except that the value to propose is embedded in the ballot number so nodes don't require a leader to tell them what is being proposed--they can each emulate the leader themselves.
mazieres | 8 years ago | on: Stellar Protocol: A Federated Model for Internet-Level Consensus (2016) [pdf]
Running a validator protects a token issuer against double redemptions, as might happen in a mining-based blockchain where anonymous miners fork the blockchain and thus create twice as many tokens. That's fine for pure crypto tokens, where you can create Ethereum [classic] or Bitcoin cash out of thin air. But if you were using colored coins or ERC20 tokens to represent claims on bank deposits, these forks would be a problem.