item 19502657

Keybase is not softer than TOFU

614 points | malgorithms | 7 years ago | keybase.io

293 comments

[+] el_cujo|7 years ago|reply
"How often do resets happen? Answer: if you're using WhatsApp or Signal, all the freaking time.

With those apps, you throw away the crypto and just start trusting the server: (1) whenever you switch to a new phone; (2) whenever any partner switches to a new phone; (3) when you factory-reset a phone; (4) when any partner factory-resets a phone, (5) whenever you uninstall and reinstall the app, or (6) when any partner uninstalls and reinstalls. If you have just dozens of contacts, resets will affect you every few days."

I guess I don't have "dozens of contacts", but getting a new phone/resetting a phone isn't really that common of a thing in my circle. I feel like for the average user, they wouldn't do this with their phone more than like once every year or two. So I guess if you have like 600 people you talk to on these apps regularly then that works out to daily, but for me at least this isn't that big of a deal and was pretty much understood from the outset.

[+] flafla2|7 years ago|reply
This isn't really the issue. From further down on the article:

""" There's a very effective attack here. Let's say Eve wants to break into Alice and Bob's existing conversation, and can get in the middle between them. Alice and Bob have been in contact for years, having long ago TOFU'ed.

Eve simply makes it look to Alice that Bob bought a new phone:

Bob (Eve): Hey Hey

Alice: Yo Bob! Looks like you got new safety numbers.

Bob (Eve): Yeah, I got the iPhone XS, nice phone, I'm really happy with it. Let's exchange safety numbers at RWC 2020. Hey - do you have Caroline's current address? Gonna surprise her while I'm in SF.

Alice: Bad call, Android 4 life! Yeah 555 Cozy Street.

So to call most E2E chat systems TOFU is far too generous. It's more like TADA — Trust After Device Additions. This is a real, not artificial, problem, as it creates an opportunity for malicious introductions into pre-existing conversations. Unlike real TOFU...by the time someone is interested in your TOFU conversation, they can't break in. With TADA, they can. """

The quote you linked is relevant because it means that you can't simply ignore this problem; resets are fairly common, common enough that you can't just delete the key-loser's account (for example). However it doesn't have anything to do with the actual security flaw (if we want to call it that, it's really more of a UX / messaging problem) being discussed.
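The quoted TADA attack boils down to clients silently re-pinning a contact's key on every "new device" announcement. A toy sketch (not any real app's code; names and fingerprints are made up) of why that undoes the TOFU pin:

```python
# Toy sketch of why silent key resets break TOFU: the client pins a
# fingerprint on first contact, but replaces it without any hard check
# whenever a "new device" announces itself.
class TofuClient:
    def __init__(self):
        self.pinned = {}  # contact -> public key fingerprint

    def receive(self, contact, fingerprint, message):
        if contact not in self.pinned:
            self.pinned[contact] = fingerprint   # trust on first use
        elif self.pinned[contact] != fingerprint:
            # "Looks like you got new safety numbers" - and we carry on
            self.pinned[contact] = fingerprint
        return message

alice = TofuClient()
alice.receive("bob", "fp-bob", "hey, it's Bob")        # genuine first use
alice.receive("bob", "fp-eve", "hey hey, new phone!")  # Eve re-pins herself
assert alice.pinned["bob"] == "fp-eve"                 # Eve now owns the pin
```

The original first-use pin never helps here, because the reset path is exactly as trusting as the first-contact path.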

[+] MisterTea|7 years ago|reply
> I guess I don't have "dozens of contacts", but getting a new phone/resetting a phone isn't really that common of a thing in my circle.

What income bracket is your circle? Just going to reminisce a bit here...

I come from a family business that used a lot of manual labor from the local neighborhood here in south Queens, NYC. I spent a lot of time with guys in their late teens and 20s, mainly black and hispanic kids looking to make some decent side money. They came from low-income backgrounds and were poorly educated. This leads to a very unstable life where money really is scarce.

They frequently changed phone numbers due to: "crazy" ex, owes money to scary people, owes child/alimony support, owes the government, commits crimes and uses frequent burners, did jail time, or the #1 reason; out of minutes and no money to put on the prepaid plan. They used prepaid plans as they could refill accounts using cash at physical phone stores or check cashing stores because they don't have a computer or bank account. No phone service means no internet. Most of those guys were good people who I got along with just fine. Ignorance really is a terrible thing.

[+] CharlesColeman|7 years ago|reply
> I guess I don't have "dozens of contacts", but getting a new phone/resetting a phone isn't really that common of a thing in my circle. I feel like for the average user, they wouldn't do this with their phone more than like once every year or two.

It's not too common with me, and I let my contacts know under my old key that I'm about to do it before I switch, so they know a warning is coming. Then I just reverify the next time we meet.

I'd probably be more diligent if I actually had something to hide, but I don't, so I'm not.

[+] mhluongo|7 years ago|reply
I've had to do it 4-5 times myself (busted phone, water damage, upgrade). I think it's possible to export the keys (or at the very least the message history) to avoid this, but if I need to verify with someone I actually fall back on... Keybase :)
[+] seany|7 years ago|reply
I've migrated signal from 3 different phones, along with history and crypto. Signal supports backups that are movable.
[+] dabernathy89|7 years ago|reply
I lost my message history on Whatsapp recently just because I decided to rollback from the Android Q beta. Doesn't always have to be a new piece of hardware.
[+] idlewords|7 years ago|reply
Also (I believe) when you swap out SIM cards, which is routine behavior in parts of the world.
[+] idlewords|7 years ago|reply
Much of the criticism of how gently WhatsApp and Signal handle key resets misses the mark. Widespread adoption of end-to-end encrypted messaging is an effective countermeasure to passive collection and blanket surveillance. In order to get that widespread adoption, you can't be showing people skull-and-crossbones warnings every time they swap out a SIM card.

Speaking from my experience getting journalists and political campaigns set up with signal in 2017-2018, the early scary key change warnings were off-putting to people and made them reluctant to continue with the messenger. At the time, I was in contact with 100-150 people via signal and quickly ran out of patience with anyone who insisted on a safety number check. But the UI at the time encouraged that level of paranoia.

I continue to believe that making key changes as painless as possible for users is the correct approach as long as there are ways to harden this behavior in the settings, for the benefit of the far smaller set of people to whom MITM attacks are a credible threat.

[+] BostonEnginerd|7 years ago|reply
I agree with your assessment about key change management. With that said, I do like the device history trail that Keybase uses. Keybase has a better multi-device story - in that it has a multi-device story at all. I understand what they're trying to do by preserving message history, though I do prefer my conversations to be ephemeral by default.

I'm OK with the compromise that Signal has made with key management -- there are people that I really care to have private communication with, and people who I prefer to have private communication with. I verify the former, and don't bother with the latter unless we happen to be bored together in the same room.

On a side note, thanks for your efforts over the last two years!

[+] comex|7 years ago|reply
I agree that there will always be some instances of key changes, and that you can't make that too scary or else it just drives users away.

But it's not all or nothing. Signal could do better at improving their UX to reduce the number of key changes people make in practice. If they did, they could probably make the prompt slightly more scary (but not Keybase-level scary) – but even if they didn't change the prompt, the mere fact that users wouldn't see it as frequently means they'd be more likely to pay attention when they did.

In particular, according to other posts in this thread, the backup/restore mechanism that allows transferring a key between devices currently has poor UX, only works on Android, and is buggy. Obviously that should be fixed. Once that's done, they should make the app actively prompt you to transfer the key as part of the setup process, using a QR code scanned from either the previous phone or a linked computer. You would still have the option not to do so, and some users would inevitably choose that option (if only out of laziness), but if transferring the key is seamless enough, a decent fraction of users would do it.

[+] a3_nm|7 years ago|reply
> you can't be showing people skull-and-crossbones warnings every time they swap out a SIM card

But isn't Keybase's solution precisely about ensuring that this doesn't happen? If you change your phone's SIM card, the phone still remembers its secret key.

[+] peterwwillis|7 years ago|reply
A solution to this would be selective enforcement. Let the user decide if they want "light" (auto accept new keys), "strong" (skull and crossbones), or "paranoid" (auto-block any new keys) security.

The same should be applicable to browsers, I think. When logged into your bank, you should be able to click a button that says "Make this website use paranoid security" or similar, which would apply very strict policies that prevent most HTTPS attacks, and maybe even enable protection against phishing and similar domains, or something.

I don't know why apps keep painting users into a corner with one generic option rather than giving them more choices.
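The three levels described above could be a simple per-contact setting. A hypothetical sketch (the policy names and handler are invented for illustration, not from any real app):

```python
from enum import Enum

class KeyPolicy(Enum):
    LIGHT = "auto-accept new keys"
    STRONG = "warn loudly, require confirmation"
    PARANOID = "block until verified out of band"

def may_send_after_key_change(policy: KeyPolicy, user_confirmed: bool = False) -> bool:
    """Decide whether a message may go to a contact whose key just changed."""
    if policy is KeyPolicy.LIGHT:
        return True                 # auto-accept, today's WhatsApp/Signal default
    if policy is KeyPolicy.STRONG:
        return user_confirmed       # skull-and-crossbones, user must click through
    return False                    # PARANOID: hard block until reverified

assert may_send_after_key_change(KeyPolicy.LIGHT)
assert not may_send_after_key_change(KeyPolicy.STRONG)
assert may_send_after_key_change(KeyPolicy.STRONG, user_confirmed=True)
assert not may_send_after_key_change(KeyPolicy.PARANOID, user_confirmed=True)
```

The point of the sketch is that the "paranoid" branch ignores even an in-app confirmation, forcing an out-of-band reverification path.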

[+] eadmund|7 years ago|reply
> Widespread adoption of end-to-end encrypted messaging is an effective countermeasure to passive collection and blanket surveillance.

Widespread adoption of MitM-capable encrypted messaging is an effective enabler of active collection and targeted surveillance.

[+] cyphar|7 years ago|reply
Matrix handles this by exposing the device keys to the user so they can make decisions about whether to trust new devices (and I believe identity key changes mean you wouldn't be in your rooms anymore -- but in order to change identity keys you would have to delete your entire Matrix account on the homeserver).

If a new device has shown up, your messages will be blocked from being sent until you verify the new device. To be fair, it is too easy to blaze past the warning -- and it can happen often in large rooms. As a result, it's a little bit cumbersome at the moment, but with device cross-signing coming down the pipe and the new verification system (which is much better than Signal's IMHO -- you just check both devices have the same string of 7 emoji on their screen) it's getting a lot better.
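The emoji check works because both devices derive the same short authentication string from the shared secret of the verification handshake. A simplified sketch (the real Matrix SAS derivation uses HKDF and a standardized 64-emoji table; this toy version just hashes a shared secret):

```python
import hashlib

# Toy emoji table; the real Matrix spec defines 64 emoji with names.
EMOJI = ["🐶", "🐱", "🦁", "🐴", "🦄", "🐷", "🐘", "🐰"]

def sas_emoji(shared_secret: bytes, count: int = 7):
    """Map a shared secret to a short emoji string both devices can display.
    Simplified: real Matrix derives the bits via HKDF with a fixed info string."""
    digest = hashlib.sha256(shared_secret).digest()
    return [EMOJI[b % len(EMOJI)] for b in digest[:count]]

# Both devices ran the same key agreement, so both show the same 7 emoji:
device_a = sas_emoji(b"secret-from-key-agreement")
device_b = sas_emoji(b"secret-from-key-agreement")
assert device_a == device_b
```

A MITM who sits in the key agreement ends up with different shared secrets on each side, so the two emoji strings won't match when the users compare screens.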

[+] eeeeeeeeeeeee|7 years ago|reply
I still think Keybase is right — multi-device or some kind of multi-trust model is best so the key revocations aren’t happening so often. I remember this problem with PGP and most people did not take the key verification seriously.

And the problem with SSH that was pointed out in the article is funny now because of cloud services where servers are constantly being destroyed, so keys are changing frequently, unless you save and persist that private key for the server in your configuration. Which I’ve realized a lot of companies are simply not doing, which leads to people straight up ignoring key verifications in their ssh config.

[+] UncleMeat|7 years ago|reply
This works great for incredibly tech savvy people who have an offline way of verifying public keys. This is completely and utterly useless for 99.9% of whatsapp's 1b+ users. Heck, how many times have security-aware software engineers blazed through the "THIS KEY IS NOT TRUSTED" warning from ssh?
[+] uhoreg|7 years ago|reply
Yes, our (Matrix's) cross-signing feature, which we're currently working on, sounds similar to the solution presented by Keybase in this blog post.
[+] zepearl|7 years ago|reply
Just fyi, about "Matrix" (not directly related to this specific post, but I still wanted to mention something about it...):

I started testing "Matrix" using the "Riot" app on Android and the web-frontend ~4 weeks ago and so far things look good, which I think is fantastic!!!

I was especially happy to see that I could not read messages posted prior to me entering any encrypted chatrooms => gave me a real "good general feeling".

The "Riot" app for Android (I'm using it on SailfishOS) is quite intuitive so far and has never crashed. Battery consumption is relatively heavy (I used to charge my phone once per week; now it's once every ~3-4 days), but I guess users who have an active Internet connection more often (because of other apps) will be impacted less.

All in all, I'm beginning to think about trying to push it to my friends - the first attempt might be to use an encrypted group chat for special topics (e.g. discussions about algorithms, stock market discussions, etc. - all the things we don't dare to post in WhatsApp) to have a technical advantage vs. WhatsApp & Co.

[+] IshKebab|7 years ago|reply
Yeah that sounds like it doesn't really solve it at all (yet).
[+] zobzu|7 years ago|reply
yup, one big issue is the warning coming up too often, not making it more visible or forcing it in like matrix does.

because then you just end up with less people on the system, so you'll talk to them over unsafe channels.

keybase's opinion is definitely my opinion so im a little biased there, but so far i can't think of anything better than making resets occur as rarely as possible by using multiple devices for recovery.

[+] ajvs|7 years ago|reply
Yeah usability-wise it's not the greatest right now until those features get added, but security-wise it's better than Signal/WhatsApp in warning you about detected new keys.
[+] DoubleMalt|7 years ago|reply
I love Keybase.

Actually I tried to install the app a couple of days ago.

But there is no version on F-Droid and the version on Play Store has Firebase Analytics baked in

Is there a plan for a clean F-Droid version?

[+] sdenton4|7 years ago|reply
(A very tiny fwiw): you /can/ create a backup in signal and use it to transfer seamlessly to a new device, without triggering new safety number checks. The user flow sucks, but it is possible.
[+] unsignedint|7 years ago|reply
The requirement to use a phone number at all, let alone as an identifier, is a major complaint I have about many messaging apps. It really limits the usable cases, as I have plenty of people I would love to interact with but don't necessarily want to provide my phone number to.

I love Keybase for this aspect, but something I don't like about it is its device name handling. They don't allow decommissioning old device names, so I end up having 'MyLaptop' 'MyLaptop 1' 'MyLaptop 2'...

[+] broahmed|7 years ago|reply
Let's not forget what Signal and the Signal Protocol (used by WhatsApp) have achieved: making end-to-end encrypted chat EASY and accessible for the masses, for many of whom "security" is password123. It's important in our post-Snowden world.
[+] godelski|7 years ago|reply
I'm not sure why this is downvoted. The reason WA has so much popularity is that it is easy to use and you could use wifi. Signal is not as easy to use (less features) but more secure and so privacy conscious people like it. But people don't like switching to Signal because "it is hard". Making a more secure app is good, but we have to question "do we want people using pretty good e2e or do we want to make the perfect app first?"
[+] orblivion|7 years ago|reply
The argument is that people don't bother to keep their verified connections up to date. But come on, how often do MITM attacks happen? For those rare cases where people are doing stuff important enough that it becomes a possibility, I would guess the security conscious individuals would become more diligent.

For the rest of us, it seems that doing it on occasion is still worth it. As I understand, Signal is designed to never indicate over the wire who has checked safety numbers. Thus, a MITM anywhere on the network creates a risk of becoming discovered, which is a cost in itself.

[+] hopler|7 years ago|reply
What's the point of using an encrypted chat at all if you don't care if it's hacked?
[+] nickik|7 years ago|reply
The point is to create a system that is private for all of society. You don't drive a safe car only when you're on a risky road.
[+] miopa|7 years ago|reply
I'm wondering why the article fails to mention that there is a sufficiently good and easy mechanism to compare and verify the new safety numbers. You just talk to your peer and read the numbers - and the peer can verify them.

This will fail when AI software gets really good at imitating voice in real-time during casual talk, but we're not there yet (or - if that is my threat model, I'll find an out of band way to verify)
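The reason reading the numbers aloud works is that both sides compute the same short digest over both parties' public keys, so either side can catch a mismatch. A toy illustration (Signal's real format iterates SHA-512 over each party's identity key and identifier separately; this sketch just hashes both keys in a canonical order):

```python
import hashlib

def safety_number(key_a: bytes, key_b: bytes) -> str:
    """Toy safety number: hash both public keys in a canonical order so
    each side computes the same short groups of digits to read aloud."""
    material = b"".join(sorted([key_a, key_b]))
    digest = hashlib.sha256(material).hexdigest()
    return " ".join(digest[i:i + 5] for i in range(0, 30, 5))

# Alice and Bob each compute it locally from the keys their app shows them:
alice_view = safety_number(b"alice-pub", b"bob-pub")
bob_view = safety_number(b"bob-pub", b"alice-pub")
assert alice_view == bob_view  # reading it over a call verifies both keys
```

If a MITM substituted either key, the value computed on one phone would no longer match what the other person reads out.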

[+] zaroth|7 years ago|reply
A sufficiently good mechanism from a cryptographic standpoint, but which from a usability perspective totally falls down because people never do it.

It’s a very worthy goal to make these events rare and scary so that users might actually bother reading those numbers out loud to each other.

[+] Pxtl|7 years ago|reply
I'm not a security guy, but wouldn't the most seamless approach be to encrypt the key collection with a master password and store the encrypted key collection on the server? So on a new device you'd download the encrypted key collection and then decrypt it locally?

If they forget their password, they can re-upload it from a validated device with a new master password.
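The scheme described above is essentially password-based key wrapping. A stdlib-only sketch under stated assumptions: the function names are invented, and the SHA-256 counter keystream stands in for what should be a vetted AEAD like AES-GCM in a real app:

```python
import hashlib
import hmac
import secrets

def _keystream(key: bytes, n: int) -> bytes:
    # Toy counter-mode keystream from SHA-256; a real app should use AES-GCM.
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def wrap_keys(master_password: str, key_bundle: bytes):
    """Encrypt the device key bundle under a password-derived key."""
    salt = secrets.token_bytes(16)
    k = hashlib.scrypt(master_password.encode(), salt=salt, n=2**14, r=8, p=1, dklen=64)
    enc_key, mac_key = k[:32], k[32:]
    ct = bytes(a ^ b for a, b in zip(key_bundle, _keystream(enc_key, len(key_bundle))))
    tag = hmac.new(mac_key, salt + ct, hashlib.sha256).digest()
    return salt, ct, tag  # opaque to the server, safe to store there

def unwrap_keys(master_password: str, salt: bytes, ct: bytes, tag: bytes) -> bytes:
    """Download the blob on a new device and decrypt it locally."""
    k = hashlib.scrypt(master_password.encode(), salt=salt, n=2**14, r=8, p=1, dklen=64)
    enc_key, mac_key = k[:32], k[32:]
    if not hmac.compare_digest(tag, hmac.new(mac_key, salt + ct, hashlib.sha256).digest()):
        raise ValueError("wrong password or tampered backup")
    return bytes(a ^ b for a, b in zip(ct, _keystream(enc_key, len(ct))))

bundle = b"device-identity-keys"
salt, ct, tag = wrap_keys("hunter2", bundle)
assert unwrap_keys("hunter2", salt, ct, tag) == bundle
```

The server only ever sees `salt`, `ct`, and `tag`; the obvious trade-off, as the reply below notes, is that the master password becomes a single point of failure.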

[+] syn0byte|7 years ago|reply
Biggest issue is the single point of failure for total access to all devices. Get/guess/beat the master password out of someone and it's game over on all connected devices.
[+] eeeeeeeeeeeee|7 years ago|reply
Isn’t this pretty close to how Apple iMessage works? I agree it’s a decent compromise to give encryption to the masses, but it has its downsides if Apple is compelled by a government to manipulate that.

All security is compromise though. And I think Apple has done something amazing with what is provided given their install base.

[+] hprotagonist|7 years ago|reply
I actually _do_ reverify safety numbers out-of-band every time they change.
[+] threwawasy1228|7 years ago|reply
I would actually like to see some numbers or a survey on how many people actually do this. Anecdotally, I don't think I know a single person who ever verifies safety numbers out of band. Of the approximately 50 people I regularly talk with on Signal (not including large group chats of people I don't know as well), not a single person has ever tried to reverify me, nor have I tried to reverify any of them.

Would be cool to see what the numbers are for reverifications.

[+] octorian|7 years ago|reply
And Signal does allow you to mark a safety number as "verified", which I think does put up an extra barrier. I don't think the article makes any mention of this.
[+] newscracker|7 years ago|reply
I'm curious how you verify them out of band, because that would reveal your threat model and why or what for you take such pains (not arguing that out of band verification is useless or wrong).

The following questions are not intended as attempts to poke a hole on the requirement for out of band verification. Do you meet in person and speak in whispers or show some code that's hidden from the view of others (and cameras)? Or do you use a voice call or SMS (both expressly designed to support surveillance)? Or do you use another end to end encrypted app or email (if yes, how do you verify the keys for that)?

[+] jplayer01|7 years ago|reply
I feel like, unless I'm about to transmit some data that is absolutely crucial, the act of regular conversation through Signal is enough to verify the person. Unless I'm missing something?
[+] solatic|7 years ago|reply
Why can't we just have government certificate authorities for the average Joe?

Ultimately, while people may have very little trust for the government in general, the one thing that people do trust the government for is establishing identity. We use government ID papers to establish our right to work, to open bank accounts, to enter legal agreements, and to cross borders. Why should communications be any different? We don't need to trust the government with the content of the communications (and we shouldn't), by not providing the government with the private keys. But why can't I get the government to sign a public key for me?

The issue it raises is whether people will eventually get locked out of society if the government decides to get antagonistic with somebody by revoking their public key and refusing to issue a new one, given a society where such a scheme is popular. But we don't have any sort of such protection today - the government can seize your passport, seize your driving license, freeze your bank accounts. A society in which the government solves identity issues for the digital age is only a net improvement over the status quo.

[+] inetknght|7 years ago|reply
> Similarly, in SSH, if a remote host's key changes, it doesn't "just work," it gets downright belligerent:

Funny enough, I have ranted to friends/coworkers about sysadmins completely replacing machines and not telling anyone. How do I know it happens? BECAUSE OF THIS EXACT WARNING.
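For what it's worth, OpenSSH lets you make that belligerence even stricter. An illustrative `~/.ssh/config` fragment (the option names are real OpenSSH options; the hostname is made up):

```shell
# ~/.ssh/config - refuse to connect at all when a pinned host key changes
# (or when a host is unknown), instead of prompting interactively:
Host *
    StrictHostKeyChecking yes

# When a host legitimately rotates its keys, clear the stale pin explicitly:
#   ssh-keygen -R old.example.com
```

With this set, a replaced machine fails the connection outright until someone deliberately removes the old known_hosts entry, which makes silent server swaps even harder to miss.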

[+] tosh|7 years ago|reply
If there was an iPad app that works in landscape mode I’d be able to switch most of my groups from other apps to keybase.
[+] xrd|7 years ago|reply
Smart stuff and fun:

"Did you though, or did you just scroll down here?"

[+] DINKDINK|7 years ago|reply
If a single party in a chat has their keys reset, that's not a MITM attack, because a MITM would need to rekey to both parties. If the two clients can use at least one uncompromised service to communicate that their counterparty's keys have been reset, they might be able to detect a MITM.

Keybase's example Cozy Street is not a MITM attack (i.e. an attacker has inserted themselves between two parties and can get the plaintext), it's just impersonation.

If it was a real MITM attack, both Alice and Bob would get rekey notifications; unless they both confirm whenever they get a rekey notification, a MITM attack is possible.

I also think that crypto is stuck in the early 90s by assuming real-world meetups are the only way to authenticate keys. If you know what your chat friend sounds and looks like, are willing to submit a video, and don't think your adversary can fake such a proof, a simple video of you reading your pubkey/safety number is sufficient. Is that scalably practical? No, but it is possible.

That said: keybase is doing cool novel work, I commend them for advancing the state of the art.
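The rekey-notification observation above can be made concrete: a full MITM has to rekey toward both parties, so if the two clients can compare notes over one uncompromised side channel, a one-sided reset and a simultaneous two-sided reset look different. A toy classifier (invented for illustration, not from any protocol):

```python
def classify_rekey(alice_saw_rekey: bool, bob_saw_rekey: bool) -> str:
    """Cross-check rekey notifications exchanged over a side channel."""
    if alice_saw_rekey and bob_saw_rekey:
        return "both rekeyed: consistent with a full MITM (or coincidence)"
    if alice_saw_rekey or bob_saw_rekey:
        return "one-sided rekey: impersonation or a genuine new device"
    return "no rekey observed"

assert classify_rekey(True, True).startswith("both")
assert classify_rekey(True, False).startswith("one-sided")
assert classify_rekey(False, False) == "no rekey observed"
```

The catch, of course, is finding a side channel the attacker doesn't also control, which is the comment's point about needing at least one uncompromised service.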

[+] grenoire|7 years ago|reply
I don't know how much weight such warnings will hold, given how well we know people ignore cookie warnings and the rest...
[+] arendtio|7 years ago|reply
Actually, I think it is wrong to call it TOFU, as it simply doesn't require the user to opt in to anything. Instead, it seems more like the thing the XMPP people call 'blind trust before verification' [1]. I am not quite sure if it is exactly the same, as 'blind trust before verification' changes its behavior as soon as you have explicitly verified the keys.

That way everybody can use somewhat e2e encrypted messages, but if you really care about security, you validate your keys and get real, trusted e2e encryption.

[1] https://gultsch.de/trust.html