mazieres's comments

mazieres | 2 years ago | on: C++20 Idioms for Parameter Packs

Your constexpr function is not legal because it doesn't initialize all the array elements, even though the compiler might let you get away with it. That's easily fixed. More importantly, though, your static_value::value is not a const char *.

mazieres | 2 years ago | on: C++20 Idioms for Parameter Packs

Actually not. Try to implement a compile-time initialized const char* that represents some compile-time value like sizeof(X) for some data structure X. You'll see that you need recursion.

A place where this kind of pattern is useful is in having a generic accessor type for fields, and wanting to extend it to tuples. So for example accessor(x) might return x.field, and accessor.name might be "field." To make it work for tuples, you need a string for every possible tuple index. E.g.:

  template<typename T, size_t N> struct tuple_accessor;
  template<typename...T, size_t N>
  struct tuple_accessor<std::tuple<T...>, N> {
    constexpr decltype(auto) operator()(auto &&t) const {
      return get<N>(std::forward<decltype(t)>(t));
    }
    static constexpr const char *name = index_string<N>();
  };

mazieres | 2 years ago | on: C++20 Idioms for Parameter Packs

Obviously there's a whole section on comma folds that you didn't read. However, more importantly, there are things that you can't do without recursion. For example, how would you implement the `index_string` function in the section on recursing over template parameters without using recursion?

mazieres | 5 years ago | on: My tutorial and take on C++20 coroutines

Wow, that talk is a fantastic link. He actually gets negative overhead from using coroutines, because the compiler has more freedom to optimize when humans don't prematurely break the logic into multiple functions.

mazieres | 5 years ago | on: My tutorial and take on C++20 coroutines

> Stackless means when you resume a coroutine you're still using the same OS thread stack

This is confusing, because it raises the question "same as what?" In fact, you can migrate a coroutine across threads, or even create a thread specifically for the purpose of resuming a coroutine.

But I suppose it is true that from the point at which you resume a coroutine to the point at which it suspends itself, it will use a particular stack for any function calls made within the coroutine. That's also why you can't suspend a coroutine from inside an ordinary function it has called: that nested call's frame lives on the stack, not in the coroutine frame.

mazieres | 5 years ago | on: My tutorial and take on C++20 coroutines

Agreed. I think the killer use is for event-driven programs that already do a lot of "stack ripping". If you are already manually putting all your "local" variables into an object and slicing your functions into many methods, then the grossness of C++20 coroutines will be easy to digest given how much more readable they can make your code.

mazieres | 5 years ago | on: My tutorial and take on C++20 coroutines

You are right that you are effectively stuck in a single coroutine (non-stack) frame. But you can chain multiple such coroutine frames, because one coroutine can co_await another.

mazieres | 5 years ago | on: My tutorial and take on C++20 coroutines

Umm... if you download the code you will see that main returns int, but the main1...main6 functions invoked by main return void because they don't need to return a value.

mazieres | 5 years ago | on: My tutorial and take on C++20 coroutines

I really liked Josuttis's book on C++17 (https://www.cppstd17.com/). The only C++20 book I know of is the Grimm book linked in the referenced blog post (https://leanpub.com/c20). I'm glad I read it, but honestly it's a bit rough still. To be fair, though, it's still only a draft so will probably improve. And I'm assuming you already have C++11. There are a lot of resources for that one. I happened to use Stroustrup 4th edition, but I'm willing to believe there are better books, particularly for people already familiar with C++03.

mazieres | 6 years ago | on: RFC 8548: Cryptographic Protection of TCP Streams (Tcpcrypt)

> I'd love for you to expand more on this. What I can't think of is a realistic scenario where it's possible to validate the session id in a secure way, but there is also some reason that we couldn't just use TLS with client certificates.

An example would be if you don't have a certificate. Maybe you just have a pre-shared secret or a Kerberos ticket. Or worse, maybe you just have a password, and so need to use a PAKE protocol or something to authenticate the session. Or maybe you are using an RPC protocol, like NFS, that doesn't let you add TLS in a backwards-compatible way, but where the client and server do share some secret file handle.

mazieres | 6 years ago | on: RFC 8548: Cryptographic Protection of TCP Streams (Tcpcrypt)

Some historical context here. First, the project was begun over 10 years ago, long before QUIC, before widespread use of https for most web pages, and before widespread use of ECC certificates. (E.g., americanexpress.com was an http web page, and while the form had you submit your password to an https URL, obviously attackers could tamper with the page in transit and make you submit your username/password elsewhere. Similarly gmail and google search were unencrypted--only the authentication page had https.) At the time, performance was a huge deal, because the cost of public key operations severely limited the number of https requests/second that a server could handle.

So the goals of the tcpcrypt project back then were 1) solve the performance problem by making it practical to encrypt essentially all TCP traffic, 2) make undetectable widespread eavesdropping impractical, 3) provide a pathway for insecure applications to achieve high network security with minimal effort, and 4) avoid the traps applications commonly experience such as low-quality pseudo-random seeds, or leaking session keys through memory errors.

We solved #1 by carefully constructing the protocol to minimize public key overhead on the server and allow optimized server authentication. In particular, with low-exponent RSA (popular at the time), the server-side computation was only encryption (which is much, much cheaper than decryption). Moreover, for strong security (with server-side authentication), servers could perform batch signing in which a single signature could be amortized over many connections. The result was dramatically better performance for the original version of tcpcrypt, which you can see described here:

http://www.scs.stanford.edu/~dm/home/papers/bittau:tcpcrypt....

We solved #2 by encrypting everything, and providing a mechanism to tie authentication to that encryption. So if an ISP systematically mounted a man-in-the-middle attack, they would cause authentication to fail. Moreover, it would be easy to run tests by, for instance, logging session IDs at connections between various endpoints and comparing them after the fact. That still meant ISPs could violate the privacy of unauthenticated connections, but then at least we would know about it and could decide, as a society, whether we wanted this kind of eavesdropping.

#3 was really the ultimate goal--strong security everywhere even with insecure legacy protocols. The problem with legacy protocols is that they didn't all contain a way to add a "STARTTLS" verb. So what we did was give tcpcrypt an "Application-aware" bit that would allow out-of-band signaling that the application knows about tcpcrypt. This bit would allow legacy application maintainers to shoe-horn in stronger security in a completely backwards-compatible way, and if the project really took off could then ultimately disable the insecure old unauthenticated protocol.

Finally, #4 was handled by keeping session keys in the kernel (so they can't leak in core-dumps, uninitialized data, buffer errors, etc.), and providing a fresh session ID that can be authenticated even without a good source of randomness.

As time went on, the performance argument (#1) became much less relevant, with the result that tcpcrypt now uses ECC crypto instead of RSA. (Though the batch-authentication optimization is still available.) However, #2 actually became more relevant (at least in 2013), and #4 even more so with all these catastrophic bugs (like heartbleed, or Debian disabling randomness in OpenSSL). It's also still much, much harder than it should be to write software that encrypts network traffic, particularly where one doesn't just want the standard anonymous-client to server-with-X.509-certificate setup. If people adopt tcpcrypt, setting up an authenticated, encrypted TCP connection will become comparable to checking a unix password--just a few lines of code.

A lot of this is clearer if you read the companion TCP-ENO RFC, rather than tcpcrypt. The goals and rationale and overall negotiation mechanism were broken out into a separate document to allow better upgradability to future encryption protocols: https://tools.ietf.org/html/rfc8547

mazieres | 7 years ago | on: Understanding the Stellar Consensus Protocol

The thing is that reputation isn't formed in a vacuum. E.g., in the case of Stellar's blockchain, you have companies issuing assets like digital dollars or carbon credits or shares in commercial real estate ventures. The tokens have value because people trust their counterparties. Even in the case of XLM, Stellar's "native" cryptocurrency, ultimately people believe it has value because they can trade it for other assets on Stellar's built-in DEX or sell it for fiat currency or other crypto at exchanges. It doesn't matter how many Sybil nodes an attacker creates, if I place Kraken and Coinbase in my quorum slice, I will remain in sync with their validators and know that I can subsequently choose to deposit all of my tokens on those exchanges for trading.

mazieres | 7 years ago | on: Understanding the Stellar Consensus Protocol

The Sybil attack doesn't work against SCP because, unlike proof-of-stake, the validators are not anonymous. E.g., are you using Stronghold dollars? Then put their validators in all of your quorum slices and you will be guaranteed not to be forked from them. Eventually, every exchange and issuer should designate one or more validators. By including the validators of the institutions you care about in your quorum slices, you know you will be able to redeem and trade the tokens at those places.

Now what makes SCP different from traditional BFT replication is not just that the quorums are defined in a decentralized way, but that they require a transitive closure of dependencies. So if you depend on Stronghold, Stronghold depends on IBM, and Binance also depends on IBM, then even if you don't think you care about Binance, you will still remain in sync with them.
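To make the transitive-closure point concrete, here's a toy sketch (names and types are mine, not stellar-core's) of the FBAS quorum test from the SCP paper: a node set Q is a quorum iff every member of Q has at least one of its slices wholly contained in Q.

```cpp
#include <algorithm>
#include <map>
#include <set>
#include <string>
#include <vector>

using Node = std::string;
using Slice = std::set<Node>;

// Q is a quorum iff each member of Q has some slice contained in Q.
bool is_quorum(const std::set<Node> &q,
               const std::map<Node, std::vector<Slice>> &slices)
{
    for (const Node &n : q) {
        auto it = slices.find(n);
        if (it == slices.end())
            return false;
        bool ok = false;
        for (const Slice &s : it->second)
            if (std::includes(q.begin(), q.end(), s.begin(), s.end())) {
                ok = true;
                break;
            }
        if (!ok)
            return false;
    }
    return true;
}
```

With slices you→{you, stronghold}, stronghold→{stronghold, ibm}, ibm→{ibm}, the set {you, stronghold} is not a quorum (stronghold's slice reaches outside it), but {you, stronghold, ibm} is: the closure pulls IBM in whether or not you listed it yourself.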

mazieres | 8 years ago | on: Keybase is now supported by the Stellar Development Foundation

If you are so allergic to cryptocurrency, why didn't it bother you when Keybase started writing their root into the Bitcoin blockchain (https://keybase.io/docs/server_security/merkle_root_in_bitco...)?

There are obviously annoying ways in which keybase could support cryptocurrency, but you are making a lot of assumptions about what Keybase is going to do that are not based on the blog post. For example, wallets do not mine coins and Stellar does not support mining.

Why don't you wait to see what comes out and submit a feature request if you don't like it, instead of flipping out about some hypothetical feature you won't like.

mazieres | 8 years ago | on: Keybase is now supported by the Stellar Development Foundation

How is Stellar still "in the gate," more than two years after deploying their decentralized Byzantine agreement algorithm?

Ripple has only just now, in 2018, published their decentralized consensus algorithm (Cobalt), which as far as I know is not even in production use yet, and doesn't provide optimal safety. (In settings where Cobalt is guaranteed safe, SCP would be too, but not vice versa.) Their production network still uses a protocol that, by Ripple's own analysis (https://arxiv.org/pdf/1802.07242), fails to guarantee safety without >90% agreement on the UNL.

mazieres | 8 years ago | on: Keybase is now supported by the Stellar Development Foundation

Note that in general there is no way to name a particular branch of a blockchain fork. In cases with a protocol change coordinated well in advance, a counterparty anticipating the fork could announce that their tokens on one branch will be useless. However, if you just have two competing mining pools duking it out with the same protocol, there will be no way to name the branches ahead of time.

What's worse is that colored coins could distort the incentive structure to make it profitable to bribe miners, because the benefit to an attacker of subverting consensus could far outweigh the value of 12.5 BTC/block.

mazieres | 8 years ago | on: Stellar Protocol: A Federated Model for Internet-Level Consensus (2016) [pdf]

What's interesting is that SCP implements consensus without electing a leader. There are, of course, numerous asynchronous protocols that do this, like Ben-Or, Rabin, Mostéfaoui, and most recently HoneyBadger, but it is rarer for synchronous protocols like SCP. However, it is necessary for SCP's setting, because if you don't even have agreement among nodes over what nodes do and don't exist in the system, how could you hope to elect a leader?

One way to view how SCP avoids leader election is to consider that it is effectively emulating the leader. SCP has two phases, a nomination and a balloting phase. The nomination phase is effectively like one or more instances of an asynchronous broadcast protocol (which don't require a leader since multiple nodes can choose to broadcast). The balloting phase is like Paxos, except that the value to propose is embedded in the ballot number so nodes don't require a leader to tell them what is being proposed--they can each emulate the leader themselves.

mazieres | 8 years ago | on: Stellar Protocol: A Federated Model for Internet-Level Consensus (2016) [pdf]

Yes. Every validator in Stellar has a copy of the complete ledger. However, different validators may be authoritative for different types of token. Say bank_A runs a validator and issues digital dollars on Stellar, while bank_B runs a validator and issues digital euros on Stellar. Each validator will store both banks' token holdings and prevent double spends. However bank_A should offer to redeem its digital dollars for real currency only when the redemption transaction commits on its own validator, and similarly for bank_B.

Running a validator protects a token issuer against double redemptions, as might happen in a mining-based blockchain where anonymous miners fork the blockchain and thus create twice as many tokens. That's fine for pure crypto tokens, where you can create Ethereum [classic] or Bitcoin cash out of thin air. But if you were using colored coins or ERC20 tokens to represent claims on bank deposits, these forks would be a problem.
