
HTTPS by default

302 points| jhalderm | 4 months ago |security.googleblog.com

255 comments

[+] tialaramex|4 months ago|reply
I have had HTTPS-by-default for years and I can say that we're past the point where there's noticeable year-to-year change for which sites aren't HTTPS. It's almost always old stuff that pre-dates Let's Encrypt (and presumably just nobody ever added HTTPS). The news site which stopped updating in 2007, the blog somebody last posted to in 2011, that sort of thing.

I think it's important to emphasise that although Tim's toy hypermedia system (the "World Wide Web") didn't come with baked-in security, ordinary users have never really understood that. It seems to them as though http://foo.example/ must be guaranteed to be foo.example, so making that true by upgrading to HTTPS is way easier than somehow teaching billions of people that it wasn't true, and then what they ought to do about that.

I am reminded of the UK's APP scams. "Authorized Push Payment" was a situation where ordinary people think they're paying, say, "Big Law Firm" but a scammer has actually persuaded them to send money to an account the scammer controls. Historically the UK's payment systems didn't care about names, so to them a payment to "Big Law Firm" acct #123456789 was the same as a payment to "Jane Smith" acct #123456789, even though you'd never get a bank to open you an account in the name of "Big Law Firm" without documents the scammer doesn't have. To fix this, today's UK payment systems treat the name as a required match, not merely something for your records. So when you say "Big Law Firm" but try to pay Jane's account because you've been scammed, the software says "Wrong, are you being defrauded?", and you're safe, because you have no reason to fill in "Jane Smith" when that's not who you intend to give money to.

We could have tried to teach all the tens of millions of UK residents that the name was ignored and so they need other safeguards, but that's not practical. Upgrading payment systems to check the name was difficult but possible.

[+] sam_lowry_|4 months ago|reply
I run my blog over unencrypted HTTP/1.1 just to make the point that we do not have to depend on third parties to publish content online.

And I noticed that WhatsApp is even worse than Chrome: it opens HTTPS even if I share HTTP links.

[+] isodev|4 months ago|reply
While Google and friends are happy to push for HTTPS, it's dramatically easier to scam people via ads or AI-generated content. Claiming plain HTTP is scary seems like a straw man, tbh.
[+] speleding|4 months ago|reply
I don't like this change. There are a lot of SaaS businesses that allow you to create a CNAME along the lines of "saas_app_name.yourbusiness.com". For example, Fastmail and Zoho do that, and our business offers that feature as well. When you arrive at our site we do a redirect to a proper https URL.

But a browser will not accept a redirect from a domain with an incorrect certificate (and rightly so), so this will start failing if https becomes the default, unless we generate certificates for all those customers, many thousands in our case. And then we need to get those certificates to the AWS load balancer where we terminate https (not even sure if it can handle that many). I think we may need to retire that feature.
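One alternative to pre-loading thousands of certificates onto a load balancer is to terminate TLS yourself and select the certificate per-connection from the SNI name the client sends. A rough sketch using Python's `ssl` module; the `/etc/certs/<hostname>/` layout and the hostname-per-customer scheme are assumptions for illustration, not anyone's actual setup:

```python
import ssl

CERT_DIR = "/etc/certs"  # hypothetical layout: one directory per customer hostname

def load_ctx_for_host(hostname):
    """Build a TLS context holding the certificate for one customer domain."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(f"{CERT_DIR}/{hostname}/fullchain.pem",
                        f"{CERT_DIR}/{hostname}/privkey.pem")
    return ctx

def sni_callback(ssl_sock, hostname, base_ctx):
    # Called mid-handshake with the SNI name the client sent; swap in
    # the per-customer context so the matching certificate is served.
    if hostname is not None:
        try:
            ssl_sock.context = load_ctx_for_host(hostname)
        except FileNotFoundError:
            pass  # no cert on disk: fall through to the default cert

server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
server_ctx.sni_callback = sni_callback
```

Certificates can then be issued lazily (e.g. on first request per domain) instead of all being provisioned up front.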

[+] 8organicbits|4 months ago|reply
> The news site which stopped updating in 2007, the blog somebody last posted to in 2011, that sort of thing.

Interesting, that hasn't been my experience. There's a certain group of stubborn techies who have active sites lacking HTTPS. One example is Dave Winer's blog:

http://scripting.com/

He's doing some really interesting things over at https://feedland.com, so I'm glad I clicked through the TLS warning on his blog.

[+] philippta|4 months ago|reply
While this is great for end users, this just shows again what kind of monopoly Google has over the web by owning Chrome.

I work at a company that also happens to run a CDN and the sheer amount of layers Google forces everyone to put onto their stack, which was a very simple text based protocol, is mind boggling.

First there was simple TCP+HTTP. Then HTTPS came around, adding a lot of CPU load on servers. Then they invented SPDY, which became HTTP/2, because websites exploded in asset use (mostly JS). Then they reinvented layer 4 with QUIC (in-house first), which resulted in HTTP/3. Now this.

Each of them adds more complexity and data framing onto what used to be a simple message/file exchange protocol.

And you cannot opt out, because customers put their websites into a website checker and want to see all green traffic lights.

[+] zoeysmithe|4 months ago|reply
>First there was simple TCP+HTTP. Then HTTPS came around, adding a lot of CPU load onto servers.

You can't do e-commerce without encryption. You live under capitalism. It's weird to me to see capitalists not wanting to accept payments for goods. As far as the complexity argument goes, wait until you see what goes on in your CPU! Or the codebase of your average website. There is no real simplicity, and chasing simplicity just ties people's hands.

This weird worship of simplicity just doesn't make sense. By this argument we should never have left the mainframe green-screen terminal world. Or the PDP era. Or the abacus era, for that matter. An arbitrary line drawn in the sand is a nearly pure emotional appeal, the libertarian housecat meme applied to technology.

Instead, this is a train with no final destination, and those who think otherwise are just engaging in nostalgia.

[+] IgorPartola|4 months ago|reply
I distinctly remember trying to sign up for Pandora’s premium plan back in 2012 and their credit card form being served and processed over HTTP. I emailed them telling them that I wanted to give them my money if they would just fix the form. They never got back to me or fixed it for several more years, while I gave my money to Spotify. Back then HTTPS was NOT the norm and it was a battle to switch people to it. Yes, it is annoying for internal networks and a few other things, but it is necessary.
[+] fuzzzerd|4 months ago|reply
I remember that even back in the early 2000s, HTTPS for credit card forms was pretty common. Surprised a company like Pandora wasn't with it by the 2010s.
[+] billyhoffman|4 months ago|reply
In the early to mid 2000s I would believe this. But for a major e-commerce provider in 2012? That seems vanishingly improbable.

PCI DSS is the data security standard required by credit card processors for you to be able to accept credit card payments online. Since version 1.0 came out in 2004, Requirement 4.1 has been there, requiring encrypted connections when transmitting card holder information.

There certainly was a time when you had two parts of a commerce website: one site with all of the product stuff, catalogs, categories, and descriptions, all served over HTTP (www.shop.com), and then usually an entirely separate domain (secure.shop.com) where the actual checkout process started, which used SSL/TLS. This was due to the overhead of SSL in the early 2000s and the cost of certificates. It largely went away once Intel processors got hardware-accelerated instructions for things like AES, certificates became more cost-effective, and then Let's Encrypt made it simple.

Occasionally during the 2000s and 2010s you might see an HTML form that was served over HTTP whose target was an HTTPS URL, but even that was rare, simply because it was a lot of work to make it that complex instead of having the checkout button just take you to an entirely different site.

[+] ottah|4 months ago|reply
Mmmm, great: that, plus mandatory key rotation every 90 days, plus needing to get a cert from an approved CA, means just that much more busy work to have an independent web presence.

I don't like people externalizing their security policy preferences. Yes this might be more secure for a class of use-cases, but I as a user should be allowed to decide my threat model. It's not like these initiatives really solve the risks posed by bad actors. We have so much compliance theater around email, and we still have exactly the same threats and issues as existed twenty years ago.

[+] techbrovanguard|4 months ago|reply
You understand that key rotation can and should be automated, right?
[+] danpalmer|4 months ago|reply
HTTPS doesn't have mandatory key rotation every 90 days. LetsEncrypt does for reasons that they document, but you can go elsewhere if you'd prefer.
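And once automated, the cadence is mostly invisible: an ACME client just re-checks expiry on a timer. A toy sketch of that decision; the 30-day window mirrors certbot's long-standing default, but treat the numbers as illustrative:

```python
from datetime import datetime, timedelta, timezone

# A common renewal threshold (e.g. certbot's default); illustrative only.
RENEW_WINDOW = timedelta(days=30)

def should_renew(not_after, now=None):
    """True once the certificate is within RENEW_WINDOW of expiring."""
    if now is None:
        now = datetime.now(timezone.utc)
    return not_after - now <= RENEW_WINDOW
```

A cron job or systemd timer runs this check daily; renewal only actually happens in the final window, so the 90-day lifetime never requires manual attention.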

> I as a user should be allowed to decide my threat model

Asking you if you want to proceed is allowing you to decide your threat model.

> We have so much compliance theater around email, and we still have exactly the same threats and issues as existed twenty years ago.

...and yet we have largely eliminated entire classes of issue on the web with the shift to HTTPS, to the point where asking users to opt-in to HTTP traffic is actually a practical option, raising the default security posture with minimal downside.

[+] zoeysmithe|4 months ago|reply
As someone who has run email servers, I can guarantee you none of this is theater. If you remove all the anti-spam backing, email becomes a useless service. At least the kind of 'accept mail from anyone' smtp thing we all decided to standardize on.
[+] hylaride|4 months ago|reply
This is all automatable and is well documented for almost every setup. If you're on a cloud provider/CDN it's even easier as they'll handle all this for you at pretty much no cost.

You can also still use your own threat model. You can use self-signed certs, import your own CA, etc. The issue is that browsers need to service the mass market, including the figurative grandma who won't otherwise understand fake bank certificates.

As for email, yes...that is a complete shitshow and I'm still surprised it works as well as it does.

[+] antisol|4 months ago|reply
Impressive. I don't need to post my opinion on this anymore - you did it so much better than I ever could.
[+] protocolture|4 months ago|reply
Prediction: Wi-Fi captive portal vendors will not react to this until 90% of their customer base has had its funding dry up.

It is incredibly common for public Wi-Fi captive portals to be built on a stack of hacks, some of which require the inspection of HTTP and DNS requests to function.

Yes, better tools exist, but they aren't commonly used, and they require portal, WAP and client support. Most vendors just tell people to turn the new fancy stuff off, disable HTTPS and proceed with HTTP.

[+] GaryBluto|4 months ago|reply
To be fair, most people connecting to captive portal networks are likely doing so on their phones, and I don't think iOS even allows non-Safari browsers for captive Wi-Fi login. I'm unsure how they'll fix this for Android, though.
[+] EGreg|4 months ago|reply
What are you talking about? You can easily build captive portals by setting up a custom DNS server, and HTTPS has nothing to do with it! In fact, local networks have been doing this very thing for years now. Apple even supports detecting this interception so the operating system can show a captive portal to the user. The OS maker gives network admins an official way to enforce captive portals, and it's not going away with HTTPS.
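For anyone curious, the DNS trick is tiny: answer every A query with the gateway's address, so whatever hostname the client tries lands on the portal. A toy sketch of the wire format (RFC 1035), not a production resolver; the portal IP is whatever your gateway happens to use:

```python
import struct

def build_response(query, portal_ip):
    """Answer any single-question DNS A query with portal_ip."""
    # Header: keep the query's ID, set flags QR=1 RD=1 RA=1 (0x8180),
    # one question, one answer, no authority or additional records.
    header = query[:2] + b"\x81\x80" + struct.pack(">HHHH", 1, 1, 0, 0)
    question = query[12:]  # echo the question section back verbatim
    answer = (
        b"\xc0\x0c"                          # name: compression pointer to the QNAME at offset 12
        + struct.pack(">HHIH", 1, 1, 60, 4)  # TYPE A, CLASS IN, TTL 60s, RDLENGTH 4
        + bytes(int(part) for part in portal_ip.split("."))
    )
    return header + question + answer
```

Wrap that in a UDP socket loop on port 53 and every lookup on the network resolves to the portal box, which is exactly the hack HTTPS (and DoH) breaks.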
[+] drusepth|4 months ago|reply
Doesn't it already do this? I keep a domain or two on HTTP to force network-level auth flows (which don't always fire correctly when hitting HTTPS) and I've gotten warnings from Chrome about those sites every time for years... Only if I've been to the site recently does the warning not show up.
[+] deathanatos|4 months ago|reply
Right now it only shows a little bubble in the URL bar saying "Not Secure", I think. (So, that is a "warning", in a sense.) TFA is saying there will now be an interstitial if you attempt an HTTP connection.

HSTS might also interact with this, but I'd expect an HSTS site to just cause Chrome to go for HTTPS (and then that connection would either succeed or fail).

> to force network-level auth flows (which don't always fire correctly when hitting HTTPS)

The whole point of HTTPS is that these shouldn't work. Vendors need to stop implementing weird network-level auth by MitM'ing the connection; DHCP has an option to signal to someone joining a network that they need to go to a URL to do authentication. These MitM'ers are a scourge, and often cause a litany of poor behavior in applications…
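The DHCP mechanism referenced here is RFC 8910's Captive-Portal API option, DHCPv4 option code 114, which carries the portal URL directly. A simplified sketch of extracting it from a DHCPv4 options blob (real parsers handle option overloading and more edge cases):

```python
CAPTIVE_PORTAL_OPTION = 114  # DHCPv4 option code from RFC 8910

def find_portal_url(options):
    """Walk TLV-encoded DHCPv4 options; return the captive-portal URL, if any."""
    i = 0
    while i < len(options):
        code = options[i]
        if code == 0:          # pad octet, has no length byte
            i += 1
            continue
        if code == 255:        # end-of-options marker
            break
        length = options[i + 1]
        value = options[i + 2 : i + 2 + length]
        if code == CAPTIVE_PORTAL_OPTION:
            return value.decode("ascii")
        i += 2 + length
    return None
```

With this, the OS learns the portal URL at join time and can show it without anyone intercepting traffic.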

[+] dadrian|4 months ago|reply
Chrome has shown the HTTP warning in Incognito mode for about a year, and has shown the warning if you're in Advanced Protection mode for about 2-3 years.
[+] bo1024|4 months ago|reply
> What's worse, many plaintext HTTP connections today are entirely invisible to users, as HTTP sites may immediately redirect to HTTPS sites. That gives users no opportunity to see Chrome's "Not Secure" URL bar warnings after the risk has occurred, and no opportunity to keep themselves safe in the first place.

What is the risk exactly? A man-in-the-middle redirect to a malicious https site?

[+] cowl|4 months ago|reply
This is, to be honest, a little unfortunate. While HTTPS is very important, do we really need to verify that blog X, which I may read once a year, is really who they say they are? For many sites it doesn't make a lot of sense, but we are here due to human nature.
[+] marginalia_nu|4 months ago|reply
http://www.slackware.com/ is probably the biggest website I'm aware of that does not serve encrypted traffic[1]. but there are a few other legitimately useful resources that don't encrypt.

[1] (Except on the arm subdomain for some reason)

[+] tepmoc|4 months ago|reply
I love HTTPS, but I also hate that it's basically killed on-site caching and given CDNs more power, since they're now the only way to distribute content closer to users.
[+] superkuh|4 months ago|reply
HTTPS is great. HTTPS without HTTP is terrible for many human-person use cases, and pretending those use cases don't exist is anti-human. But for corporate-person use cases HTTPS-only is more than fine, it's required. So they'll force it on us all in all contexts. But in our own personal setups we can choose to be the change we want to see in the world and run HTTP+HTTPS. Even if most of the web becomes an HTTPS-only, ID-centric corporate wasteland, it doesn't take that many people to make a real web. It existed before them and still does. There are more humans' websites out there now than ever; it's just getting harder and harder to find and see them using their search and browser defaults. It's not okay, but maybe this is finally a solution to eternal September, and we can all just live peacefully on TCP/IP HTTP/1.1 HTTP+HTTPS with HTML while corporate persons diverge off into UDP-land with HTTP/3, HTTPS-only, CA-TLS-only QUIC for delivering JavaScript applications.
[+] rr808|4 months ago|reply
HTTPS really sucks for our intranet. Every little web app and service needs certificates, and you can't use Let's Encrypt.
[+] itintheory|4 months ago|reply
You may not want to, but you can use public certs and URLs on your intranet. You can't necessarily do http-01 challenges, but DNS-based challenges are feasible. There are also other ACME providers that will let you skip challenges for DCV'd domains.
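Concretely, a dns-01 challenge only requires publishing one TXT record whose value is derived from the challenge token and your ACME account key's JWK thumbprint (RFC 8555); the intranet host never needs inbound HTTP. A sketch with made-up placeholder inputs:

```python
import base64
import hashlib

def dns01_txt_value(token, jwk_thumbprint):
    """TXT record value for an ACME dns-01 challenge (RFC 8555, section 8.4)."""
    # Key authorization = token "." base64url(JWK thumbprint of the account key)
    key_authorization = f"{token}.{jwk_thumbprint}"
    digest = hashlib.sha256(key_authorization.encode("ascii")).digest()
    # base64url without padding, as ACME requires
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
```

You publish the result at `_acme-challenge.<your-internal-name>` in public DNS, the CA validates it, and the issued cert works fine on hosts only reachable internally. In practice a client like certbot or acme.sh, with a DNS-provider plugin, does all of this for you.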
[+] keyle|4 months ago|reply
I'm sure there will be a setting flag to stop blocking http sites, or maybe even a domain exclusion which will let you set up your intranet to work on http.

Maybe everything .local will already be allowed.

[+] austin-cheney|4 months ago|reply
The only challenge to https, as compared to http, is certificates. If not for certificates I could roll out a server with https absolutely anywhere in seconds including localhost and internal intranets.

On another note I would much prefer to skip https, as the default, and go straight to WSS (TLS WebSockets). WebSockets are superior to HTTP in absolutely every regard except that HTTP is session-less.

[+] shadowgovt|4 months ago|reply
Good stuff.

Anyone have a good recipe for setting up HTTPS for one-off experiments on localhost? I generally don't bother because there isn't much of a compromise story there, but it's always been a security weakness in how I do tests, and if Chrome is going to start reminding me stridently I should probably fix it.

[+] marginalia_nu|4 months ago|reply
How exactly are unencrypted localhost connections a security weakness? To intercept the data on a loopback connection you'd need a level of access where encryption wouldn't really add much privacy.
[+] zamadatix|4 months ago|reply
Chrome treats localhost as a secure origin (regardless of HTTPS) by default - don't overthink it.
[+] tracker1|4 months ago|reply
As good an idea as this is... I do hope that localhost/127.0.0.1 will be excluded for devs/testers.
[+] cube00|4 months ago|reply
> What's worse, many plaintext HTTP connections today are entirely invisible to users, as HTTP sites may immediately redirect to HTTPS sites. That gives users no opportunity to see Chrome's "Not Secure" URL bar warnings after the risk has occurred, and no opportunity to keep themselves safe in the first place.

Two hosting providers I use only offer HTTP redirects (one being so bad it serves up a self-signed cert on the redirect if you attempt HTTPS), so hopefully this kicks them into gear to offer proper secure redirects.

[+] matthewaveryusa|4 months ago|reply
If I set a DNS entry that points to a private IP (e.g., A internal.domain.com 192.168.0.5), will the "allow private site" setting succeed or fail for http://internal.domain.com?

Either way, I agree with this update. It's better to put the burden of knowledge on those hosting things locally and tinkering with DNS than on those who have no idea that resolving a domain does not imply ownership of said domain.

[+] fooofw|4 months ago|reply
What defines private sites, I wonder – beyond "such as local IP addresses like 192.168.0.1, single-label hostnames, and shortlinks like intranet/"?
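I don't know Chrome's exact rules, but for IP literals the usual approximation is the standard private/loopback/link-local ranges, which the Python stdlib can classify. A rough sketch of that approximation, not Chrome's actual logic; the single-label rule is my guess at what "hostnames like intranet/" means:

```python
import ipaddress

def looks_private(host):
    """Rough guess at a 'private site': RFC 1918 / loopback / link-local
    IP literals, or a dotless single-label name like "intranet"."""
    try:
        return ipaddress.ip_address(host).is_private
    except ValueError:
        # Not an IP literal: treat single-label (dotless) names as internal.
        return "." not in host
```

So 192.168.0.1 and 127.0.0.1 would count as private, 8.8.8.8 would not, and a bare "intranet" would.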