mikewest's comments

mikewest | 5 years ago | on: Feedback wanted: CORS for private networks (RFC1918)

IPv6 does indeed complicate things. I suspect we'll end up trying a few things before finding the right answer, starting with a) allowing network admins to configure IP ranges that correspond to the network they control, and b) examining the local network to infer a private range.

Happily(?), IPv4 networks are still pervasive, and this proposal seems clearly valuable in those environments.
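For context, the RFC1918 private IPv4 ranges the proposal keys off of are easy to illustrate with Python's standard library (a sketch for illustration only; the proposal concerns browser behavior, not any particular API):

```python
import ipaddress

# The three private IPv4 ranges defined by RFC1918.
RFC1918 = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1918(addr: str) -> bool:
    """Return True if addr falls inside any RFC1918 private range."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in RFC1918)

print(is_rfc1918("192.168.1.10"))  # True
print(is_rfc1918("8.8.8.8"))       # False
```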

mikewest | 5 years ago | on: Feedback wanted: CORS for private networks (RFC1918)

The core assertion behind this proposal is that devices and services running on a local network can continue making themselves available to external networks if and only if they can update themselves to make that desired relationship explicit. If they can't update themselves, they also can't fix security bugs, and really must not be exposed to the web.

mikewest | 5 years ago | on: Feedback wanted: CORS for private networks (RFC1918)

Correct. In the status quo, you will be best-served by looking at solutions similar to what Plex is shipping (https://blog.filippo.io/how-plex-is-doing-https-for-all-its-...). ACME's DNS-based challenges might even make this easier today than it was when that mechanism was designed.
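For the curious, the DNS-based challenge mentioned above (DNS-01, per RFC 8555 §8.4) boils down to publishing a TXT record whose value is the base64url-encoded SHA-256 digest of the key authorization. A minimal sketch, with placeholder inputs (real values come from the ACME server and your account key):

```python
import base64
import hashlib

def dns01_txt_value(token: str, account_thumbprint: str) -> str:
    """Compute the TXT record value for an ACME DNS-01 challenge:
    base64url(SHA-256(token + "." + JWK thumbprint)), without padding."""
    key_authorization = f"{token}.{account_thumbprint}"
    digest = hashlib.sha256(key_authorization.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

# Placeholder values for illustration only.
print(dns01_txt_value("example-token", "example-thumbprint"))
```

The resulting value goes in a TXT record at `_acme-challenge.<domain>`, which is what lets a device prove control of a name without serving anything over HTTP.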

Longer-term, it seems clear that it would be valuable to come up with ways in which we can teach browsers how to trust devices on a local network. Thus far, this hasn't been a heavy area of investment. I can imagine it becoming more important if we're able to ship restrictions like the ones described here, as they do make the capability to authenticate and encrypt communication channels to local devices more important.

mikewest | 5 years ago | on: Feedback wanted: CORS for private networks (RFC1918)

The proposal does not attempt to force private network resources to use TLS. That would be an excellent outcome, but it's difficult to achieve in the status quo, and it's a separate problem best addressed separately.

The proposal _does_ require pages that wish to request resources across a network boundary to be delivered securely, which in turn requires resources that wish to be accessible across network boundaries to be served securely (as they'd otherwise be blocked as mixed content). This places the burden on those resources that wish to be included externally, which seems like the right place for it to land.

mikewest | 6 years ago | on: Improving privacy and security on the web

Unfortunately, crawling isn't a terribly effective way of evaluating breakage, as the crawler doesn't sign in, and therefore doesn't attempt to federate sign-in across multiple sites. That's part of the reason that we're not shipping this change today, but proposing it as a (near-)future step.

To that end, we've implemented the change behind two flags (chrome://flags/#same-site-by-default-cookies and chrome://flags/#cookies-without-same-site-must-be-secure) so that we can work with developers to help them migrate cookies that need to be accessible cross-site to `SameSite=None; Secure`.

Ideally, by the time we're confident enough to ship this change, it won't unintentionally break anything.

mikewest | 6 years ago | on: Improving privacy and security on the web

We're proposing treating cookies as `SameSite=Lax` by default (https://tools.ietf.org/html/draft-ietf-httpbis-rfc6265bis-03...). Developers would be able to opt-into the status quo by explicitly asserting `SameSite=None`, but to do so, they'll also need to ensure that their cookies won't be delivered over non-secure transport by asserting the `Secure` attribute as well.

https://tools.ietf.org/html/draft-west-cookie-incrementalism spells out the proposal in a bit more detail.
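Concretely, a cookie that needs to keep working in cross-site contexts under this proposal would be set with both attributes. A sketch using Python's standard library (cookie name and value are illustrative):

```python
from http.cookies import SimpleCookie

# A cookie that must remain available cross-site under the proposal:
# it has to assert SameSite=None, and to do that it must also be Secure.
cookie = SimpleCookie()
cookie["session"] = "abc123"
cookie["session"]["samesite"] = "None"
cookie["session"]["secure"] = True

print(cookie.output())
# Emits a header along the lines of:
# Set-Cookie: session=abc123; SameSite=None; Secure
```

A cookie without an explicit `SameSite` attribute would instead be treated as `SameSite=Lax` and withheld from most cross-site requests.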

mikewest | 9 years ago | on: Let 'localhost' be localhost

That's done via port-forwarding. That is, Chrome is talking to the loopback interface on a particular port. The server listening at that port forwards the requests across the debugging bridge to the phone, and ferries the response back across in the same way.

It should be unaffected by the suggestion in this document.
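The forwarding arrangement described above can be sketched as a tiny loopback relay. This is a toy illustration only; Chrome's actual implementation ferries traffic across the adb debugging bridge rather than a plain TCP proxy, and the ports here are hypothetical:

```python
import socket
import threading

def pipe(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes from src to dst until src signals end-of-stream."""
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def start_forwarder(listen_port: int, target_host: str, target_port: int) -> socket.socket:
    """Listen on the loopback interface and relay each connection to the target."""
    listener = socket.socket()
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("127.0.0.1", listen_port))
    listener.listen()

    def accept_loop():
        while True:
            try:
                client, _ = listener.accept()
            except OSError:
                break  # listener was closed
            upstream = socket.create_connection((target_host, target_port))
            # One thread per direction: request bytes out, response bytes back.
            threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
            threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

    threading.Thread(target=accept_loop, daemon=True).start()
    return listener

# Hypothetical usage: relay local port 8080 to some bridge endpoint.
#   start_forwarder(8080, "127.0.0.1", 9222)
```

From the browser's point of view, it's only ever talking to the loopback interface; the relay is what carries the traffic elsewhere.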

mikewest | 9 years ago | on: Let 'localhost' be localhost

> Did you just assume IPv4? That's IPist!

The next line in the document is "IPv6 loopback addresses are defined in Section 3 of [RFC5156] as '::1/128'." :)

mikewest | 9 years ago | on: Let 'localhost' be localhost

That's not perfectly true. RFC5735 defines 127/8 as loopback addresses, but it leaves the door open for other addresses to be assigned to the loopback interface. And indeed doing so is a common pattern for network devices, and a less common (but very useful) pattern for services.
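The 127/8 and ::1/128 definitions are reflected in, for example, Python's standard library (shown only to illustrate the ranges under discussion):

```python
import ipaddress

print(ipaddress.ip_address("127.0.0.1").is_loopback)    # True
print(ipaddress.ip_address("127.53.0.1").is_loopback)   # True: anywhere in 127/8
print(ipaddress.ip_address("::1").is_loopback)          # True: the sole IPv6 loopback
print(ipaddress.ip_address("192.168.0.1").is_loopback)  # False
```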

I don't mean to overconstrain the definition of loopback. If you have a good mechanism for specifying a specific IP range as loopback, and that mechanism can be understood by client software and resolution APIs, then I don't see any reason not to allow it.

The salient distinction from my perspective is between traffic within a specific host and traffic that traverses the network. If you have suggestions for language that draws that distinction more clearly than the document currently does, I'm happy to incorporate them. :)

> I also find it curious that this draft allows only address queries (presumably A and AAAA) under .localhost. I'd like to know the rationale for that restriction. For example, there may well be applications that only use SRV records.

https://twitter.com/dbrower/status/781001487157719040 raises similar concerns. The rationale is simple: I wanted to make the smallest change possible to RFC6761, and item 3 of https://tools.ietf.org/html/rfc6761#section-6.3 already contains the address query restriction.

It's probably reasonable to reconsider it, that's just a larger change than the one I was specifically trying to make. :)

mikewest | 13 years ago | on: Content Security Policy

Does a page's CSP break your extension in Chrome Canary? We've done quite a bit of work to allow extensions to transparently bypass a page's policy, and I'd much rather fix the bugs in Chrome than have you kill a page's policy via the webRequest API.

I'd very much appreciate it if you could point me at things that aren't working in Canary. :)

mikewest | 13 years ago | on: Blink: A rendering engine for the Chromium project

The user agent string is, for the moment, remaining exactly the same format. For better or worse, all those crufty bits are currently necessary for compatibility with sites doing a poor job of sniffing out functionality.

mikewest | 13 years ago | on: Blink: A rendering engine for the Chromium project

Hi Maciej. Sorry if my comments read as though I was implying that you were wrong or bullheaded to choose WebKit2. That wasn't my intention; there are of course good technical arguments for choosing either architecture, and I'll choose my words more carefully next time the question comes up.

mikewest | 13 years ago | on: Blink: A rendering engine for the Chromium project

1. Chromium will continue to build via gyp.

2. I'm not sure what you mean.

3. We can't, and don't want to, change the license of code that's already been released. That said, most (all?) of WebCore isn't LGPL. See my favourite file, http://trac.webkit.org/browser/trunk/Source/WebCore/page/Con... for example.

4. WebKit and Chromium have historically had differing opinions regarding what makes a "good" comment. I think you can expect Blink's code to tend more and more towards Chromium-style as time goes on, but it's not going to happen overnight.

5. ~5 million lines of code that we don't currently compile or run in Chromium. That's a bit, but it won't have much effect on the binary size.

6. Short term, not much will change. Longer term, a few things will probably happen: for instance, the widget tree will likely be removed, and we'll likely be able to step back and reevaluate some changes in light of that.

mikewest | 13 years ago | on: Blink: A rendering engine for the Chromium project

Generally, I think WebKit2 and Chromium simply disagree about where to hook into the platform, and what the responsibilities of the embedder should be. The description at http://trac.webkit.org/wiki/WebKit2 is written from Apple's point of view, but I think it's broadly fair.

The position we're taking is that the Content layer (in Chromium) is the right place to hook into the system (http://www.chromium.org/developers/content-module). That's where we've drawn the boundary between the embeddable bits and the Chrome- and browser-specific bits.

Regarding the history, I'd suggest adding some questions to the Moderator for tomorrow's video Q/A: http://google.com/moderator/#15/e=20ac1d&t=20ac1d.40&... The folks answering questions there were around right at the beginning... I only hopped on board ~2.5 years ago. :)
