kazuho's comments

kazuho | 10 years ago | on: Show HN: Neverbleed – privilege separation engine for OpenSSL and LibreSSL

I assume you are referring to Keyless SSL. https://blog.cloudflare.com/keyless-ssl-the-nitty-gritty-tec...

For Keyless SSL, it is necessary to make RSA operations asynchronous, since the operations are requested over a TCP connection (which may incur substantial delays).

OTOH Neverbleed delegates the operations to a process within the same server using Unix sockets, so there is no fear of such delays. And the server spawns a dedicated thread for each client thread. In other words, the delay is practically _no worse_ than what it is without Neverbleed.

As for _how much worse_ it is: calculations related to TLS handshakes may block the server for a few milliseconds. That may sound bad, but generally speaking it is negligible compared to the latency of a public network.
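The in-host delegation described above can be sketched as follows. This is a toy analogue of the idea (POSIX-only, textbook RSA with tiny made-up parameters), not Neverbleed's actual protocol or wire format: the network-facing process ships digests over a Unix socketpair to a forked key process, which is the only process that ever holds the private exponent.

```python
import hashlib
import os
import socket

# Toy textbook-RSA parameters, for illustration only.
N, E, D = 3233, 17, 2753  # N = 61 * 53; E * D == 1 (mod lcm(60, 52))

def key_process(conn):
    # The only process that ever touches the private exponent D.
    while True:
        data = conn.recv(4096)
        if not data:
            break
        digest = int.from_bytes(hashlib.sha256(data).digest(), "big") % N
        conn.sendall(pow(digest, D, N).to_bytes(2, "big"))  # "sign"

parent_sock, child_sock = socket.socketpair()
pid = os.fork()
if pid == 0:  # child: serve signing requests over the Unix socket
    parent_sock.close()
    key_process(child_sock)
    os._exit(0)

child_sock.close()
msg = b"ClientHello"
parent_sock.sendall(msg)  # delegate the private-key operation
sig = int.from_bytes(parent_sock.recv(2), "big")

# The network-facing process verifies using only the public key (N, E).
digest = int.from_bytes(hashlib.sha256(msg).digest(), "big") % N
assert pow(sig, E, N) == digest
parent_sock.close()
os.waitpid(pid, 0)
print("signature obtained without holding the private key")
```

Because both processes live on the same host, the round trip is just a local context switch, which is why the added latency is negligible next to a network round trip.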

kazuho | 10 years ago | on: H2O 1.4.0 released with support for forward secrecy

Libh2o (the protocol implementation) supports both libuv and evloop (our tailor-made event loop).

The default event loop of libh2o is libuv, since libuv is popular and has bindings to other protocols (which you would need if you want to implement an application using libh2o).

OTOH the standalone server uses evloop for performance.
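To give a feel for what an "evloop"-style design means, here is a minimal single-threaded, selector-based event loop. The names and structure are purely illustrative, not libh2o's actual API: one selector, one callback per file descriptor, driven in-process by a client to show a single echo round trip.

```python
import selectors
import socket

sel = selectors.DefaultSelector()

def on_accept(server):
    conn, _ = server.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, on_read)

def on_read(conn):
    data = conn.recv(4096)
    if data:
        conn.sendall(data)  # echo; a real server would parse the protocol here
    else:
        sel.unregister(conn)
        conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ, on_accept)

# Drive the loop with an in-process client to demonstrate one round trip.
client = socket.create_connection(server.getsockname())
client.sendall(b"ping")
client.setblocking(False)
reply = b""
while not reply:
    for key, _ in sel.select(timeout=1):
        key.data(key.fileobj)  # dispatch to the registered callback
    try:
        reply = client.recv(4096)
    except BlockingIOError:
        pass
print(reply)  # b'ping'
```

A bespoke loop like this avoids the abstraction overhead of a general-purpose library, which is the usual motivation for preferring it in a standalone server.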

kazuho | 11 years ago | on: Show HN: Dependency-based prioritization makes HTTP/2 much faster than SPDY

Thank you for the suggestion.

In the case of the benchmark, I ran the server as a VM instance on the client machine so that the results would be reproducible (with network latency added using the `tc qdisc` command).

PS. PageSpeed is a really nice service, and I agree that it is a useful tool for evaluating speed in the real world.

kazuho | 11 years ago | on: Extracting the SuperFish certificate

Thank you for the comment.

I think you misunderstood my comment.

It is true that the software is used for MITM. It is true that _Superfish_ is in the middle, decrypting the communication.

OTOH, the author claimed that it might be possible for _others_ as well to MITM the communication using the recovered key. My point is that such a situation is unlikely, under the premise that public-key cryptography was used correctly (from a technical standpoint, not an ethical one).

EDIT: Even if the recovered private key was used by the MITM server running locally to communicate with the web browser, it would not follow that others could use the key to decrypt data transmitted over the wire, since all communication encrypted with the key would terminate within the local machine.

EDIT2: Ah, sorry, now I understand. The root certificate installed by the adware corresponds to the recovered private key. That means others can MITM the communication via DNS spoofing etc., using a server certificate signed with the recovered key.
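To make the EDIT2 scenario concrete, here is a toy sketch. Everything in it (the tiny textbook-RSA numbers, `bank.example`) is made up for illustration, and real certificate signing involves X.509 structures and padding, but the core problem is the same: once the private exponent shipped with every install is recovered, anyone can produce a signature that validates against the public key in the installed root certificate.

```python
import hashlib

# Toy textbook RSA, tiny parameters, for illustration only.
N, E = 3233, 17   # public key embedded in the installed root certificate
D = 2753          # the private exponent recovered from the adware

fake_cert = b"CN=bank.example"  # a hypothetical rogue server certificate body
h = int.from_bytes(hashlib.sha256(fake_cert).digest(), "big") % N
forged_sig = pow(h, D, N)          # anyone holding D can produce this
assert pow(forged_sig, E, N) == h  # a client trusting the root accepts it
```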

kazuho | 11 years ago | on: Extracting the SuperFish certificate

While the blog post is interesting, I am skeptical of the author's claim that the recovered private key could be used to decrypt user data transmitted over the wire, since a private key cannot be used to encrypt data sent to somebody else.

All a private key can do by itself is decrypt data sent from others, or digitally sign data.
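Those two capabilities can be shown with toy textbook RSA (tiny made-up parameters, illustration only):

```python
# Toy textbook RSA: public key (N, E), private exponent D.
N, E, D = 3233, 17, 2753

# 1. Decrypt data that others encrypted *to* the key's owner:
m = 42
c = pow(m, E, N)           # anyone encrypts with the public key
assert pow(c, D, N) == m   # only the private key decrypts

# 2. Sign data that anyone can verify with the public key:
sig = pow(m, D, N)
assert pow(sig, E, N) == m

# What the private key cannot do: produce ciphertext readable only by
# some third party -- that would require *their* public key.
```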

I would suspect that the bundled private key was used to digitally sign data to show that it was actually generated by the software. The approach is not perfect (since the private key may be recovered, as the author demonstrated), but in general it would work effectively for keeping out third-party software.

If the developer's intention was to encrypt the data transferred through the public network, then he/she should have used TLS with server-side authentication, optionally with clear-text credentials transmitted over the encrypted channel to authenticate the software (e.g. basic authentication over HTTPS).

If it is proven that private information can be decrypted from data transmitted over the public network using the recovered private key, then this would be an interesting case of misuse of public-key cryptography.

kazuho | 11 years ago | on: OpenSSL Security Advisory

Kind of off-topic, but I wonder when they will release OpenSSL 1.0.2.

1.0.2 includes important features (e.g. support for ALPN, which is mandatory for HTTP/2, and support for cross-root certificate validation).
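For reference, this is what ALPN looks like from an application once the underlying OpenSSL is 1.0.2 or later (shown via Python's standard `ssl` module; the protocol identifiers `h2` and `http/1.1` are the registered ALPN names):

```python
import ssl

ctx = ssl.create_default_context()
# set_alpn_protocols() requires OpenSSL >= 1.0.2 underneath;
# ssl.HAS_ALPN reports whether the linked library supports it.
if ssl.HAS_ALPN:
    ctx.set_alpn_protocols(["h2", "http/1.1"])
# After the handshake, SSLSocket.selected_alpn_protocol() returns the
# protocol the peer agreed on (e.g. "h2"), or None if ALPN was not used.
```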

kazuho | 11 years ago | on: Proxygen, Facebook's C++ HTTP Framework

I am sorry to point this out, but the patent license of Proxygen does not look similar to that of the Apache License, for two reasons.

- the license is terminated when one files a claim against _any_ of Facebook's software or services (IIRC the Apache License is terminated only when the claim is filed against the software itself)

- the license also terminates when you claim that "any right in any patent claim of Facebook is invalid or unenforceable"

The second clause seems very aggressive (or pro-patent) to me, which makes me feel sorry for the developers of Proxygen, since IMO such a clause would harm the acceptance of the software outside Facebook.

It would be great if you reconsider the patent license.

Disclaimer: I am a developer of H2O, an open-source HTTP/1 and HTTP/2 library, so there is obviously a conflict of interest here. But I wanted to leave a comment anyway since, honestly, I would feel sorry if my friends at Facebook had to go with this kind of license.
