How about, instead of talking about whether WikiLeaks is good or bad or whether you support them, we talk about the content of the post?
From what I've read so far, this is pretty freaking cool. It's super interesting to read these docs and see the thought process involved, especially since the product they're building is so different from what people are making on a day-to-day basis. It actually looks pretty fun to work on. Also, I think it's neat to read about their need to develop frameworks that can be used around the agency to accomplish things.
Unfortunately, I didn't read anything about self-modifying code, which is probably the most difficult malware to detect, and probably to write. Maybe it's in there, though; I didn't read the whole document. I came to the comments about halfway through to find dozens of people talking about whether they support WikiLeaks or not, which is fine (free country), but I'd like to hear what people who actually work with this kind of stuff think.
A framework for compiling to self-modifying, yet correct, code would be super cool. I wonder if it always has to be written by hand? Probably not, but maybe that's a separate tool WikiLeaks has yet to release.
Self-modifying the underlying machine code isn't what it used to be. Besides the difficulty of writing it, there are lots of caveats about how it interacts with the cache and the instruction pipeline. It also requires setup, because with modern memory protection all machine code is read-only. Changing the memory protection for some machine code to be executable and writable at once will set off alarms (and isn't even possible on systems with W^X). So you need to change it to just writable, make your modifications, then change it back to just executable. That's less suspicious; it just looks like what JIT compilers do. But all in all, self-modifying code doesn't really buy you anything.
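For anyone who hasn't seen it, that W-then-X dance is easy to sketch. This is a minimal POSIX/Linux illustration via ctypes (the page contents are placeholder NOP bytes, not a real code generator, and the `mmap`/`mprotect` calls assume a Linux-style libc):

```python
import ctypes
import mmap

# POSIX/Linux assumption: mmap/mprotect resolved from the C library.
libc = ctypes.CDLL(None, use_errno=True)
libc.mmap.restype = ctypes.c_void_p
libc.mmap.argtypes = [ctypes.c_void_p, ctypes.c_size_t, ctypes.c_int,
                      ctypes.c_int, ctypes.c_int, ctypes.c_long]
libc.mprotect.argtypes = [ctypes.c_void_p, ctypes.c_size_t, ctypes.c_int]

PAGE = mmap.PAGESIZE

# 1. Map one page read+write, NOT executable: no W+X page exists yet.
addr = libc.mmap(None, PAGE, mmap.PROT_READ | mmap.PROT_WRITE,
                 mmap.MAP_PRIVATE | mmap.MAP_ANONYMOUS, -1, 0)
assert addr not in (None, ctypes.c_void_p(-1).value), "mmap failed"

# 2. "Generate" code while the page is writable. 0x90 is the x86-64
#    NOP opcode; these are placeholder bytes only.
ctypes.memmove(addr, b"\x90" * 16, 16)

# 3. Flip the page to read+execute, the way a JIT does. At no point
#    was the page writable and executable at the same time.
rc = libc.mprotect(addr, PAGE, mmap.PROT_READ | mmap.PROT_EXEC)
assert rc == 0, "mprotect failed"
print("page went W -> X without ever being W+X")
```

The suspicious variant a scanner looks for is the same sequence with `PROT_WRITE | PROT_EXEC` in one call.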
The exception is packers and other obfuscation techniques, which are related to self-modifying code. The general idea is that you take your real program, compress/encrypt/mangle/etc. it, and store that data in an executable. The code in that executable decompresses/decrypts/demangles the data, sets it as executable, and then runs it. Unlike traditional self-modifying code, packing is orders of magnitude easier for the malware developer to write. The advantage is that an antivirus tool can't statically determine what your real program does unless it understands how you mangled it, which is hard to do in general. To "unpack" an executable you've got three general techniques:
1. Packers tend to get reused a lot, so just have a person write an unpacker for popular packers by hand, and do some pattern matching to figure out which packer an executable is using. This doesn't work for everything, but it's fairly simple.
2. Dynamic Analysis. Run the executable and watch the contents of memory as the program unpacks itself; the real program should pop right out. Of course, you have to run the executable in some sort of sandbox environment, and there are ways for the malware to detect that and alter its behavior. This also isn't the most efficient process, so you can't really do it to executables during, say, an antivirus scan.
3. Symbolic Analysis. Basically static analysis on steroids: figure out what the executable will do without actually running it. The malware can't stop this with sandbox detection, but it's super slow and still an active area of research.
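The packing idea fits in a few lines. Here's a toy sketch where the "program" is Python source rather than machine code, and the payload and XOR key are made up for illustration:

```python
import zlib

# Both the payload and the XOR key are invented for this illustration.
SECRET_SOURCE = 'print("hello from the real program")'
KEY = 0x5A

def pack(src: str, key: int) -> bytes:
    """Mangle the real program: compress it, then XOR every byte."""
    return bytes(b ^ key for b in zlib.compress(src.encode()))

def stub(blob: bytes, key: int) -> None:
    """The unpacking stub: un-XOR, decompress, then run the payload.
    A real packer would mark memory executable and jump to machine
    code; exec() of Python source stands in for that here."""
    exec(zlib.decompress(bytes(b ^ key for b in blob)).decode())

blob = pack(SECRET_SOURCE, KEY)

# A static scan of the packed blob sees none of the original strings:
assert b"hello" not in blob and b"print" not in blob

stub(blob, KEY)  # prints: hello from the real program
```

Note that the stub itself is the part that stays in cleartext, which is exactly why technique 1 above works: you fingerprint the unpacking stub, not the payload.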
Edit: part of my comment is corrected by comment below - Thanks openasocket!
Another comment about the content of this article:
Three quarters of the way down the wiki page there is code for "adding foreign language" to the code. The options are to add code comments in Arabic/Chinese/Russian/Korean/Farsi. My gut reaction is that the purpose of this added language is to obfuscate the true source of the code - i.e. the code has Chinese comments in it, so it must be from China. Ahh. I guess this makes sense to do. The only problem now is that the Chinese/Russian/Farsi/etc. characters they included in their code are public. (Obviously the CIA will now change the foreign-language words they insert.)
I'd posit that if someone had an X-year-old (e.g. X=7) copy of some malware, and the malware had these specific foreign-language comments as shown by the article, there's a good possibility the source of the malware is the US government.
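The trick, and the way it backfires, both fit in a toy sketch. All decoy strings and the detection rule below are hypothetical stand-ins, not the actual strings from the leak:

```python
# Hypothetical decoy comments (not the real leaked strings).
DECOYS = {
    "russian": "// проверка соединения",  # "connection check"
    "chinese": "// 初始化模块",            # "initialise module"
    "farsi":   "// تنظیمات اولیه",         # "initial settings"
}

def add_decoy_comments(c_source: str, language: str) -> str:
    """Prepend a decoy comment to every function-looking line."""
    decoy = DECOYS[language]
    out = []
    for line in c_source.splitlines():
        if line.lstrip().startswith(("int ", "void ", "static ")):
            out.append(decoy)
        out.append(line)
    return "\n".join(out)

def looks_like_leaked_toolkit(sample: str) -> bool:
    """Once the decoy strings are public, they flip from misdirection
    into an attribution signature: just grep old samples for them."""
    return any(decoy in sample for decoy in DECOYS.values())

c_src = "int main(void) {\n    return 0;\n}"
tagged = add_decoy_comments(c_src, "chinese")
print(tagged)
assert looks_like_leaked_toolkit(tagged)
```

That second function is the commenter's point: anyone holding an old sample can now test it against the published strings.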
This framework seems comparable to many open-source obfuscation solutions. I would hope to see more advanced techniques; then again, maybe their requirements called for ensuring things did not look too obfuscated (the more tricks used, the more likely a signature could be detected for their tradecraft).
Personally I do not believe self-modifying code would make much sense in their use case. In fact, this would not be possible on iOS due to kernel-based security protections.
OK. In that vein, here's a question: if you use any of these tools as an American citizen, beyond what you use them for, are you breaking any laws? That is, could you be guilty of something like sedition simply by using these illegally obtained things?
I've really turned on WikiLeaks. It'd be one thing if all the major powers had equivalent leaks published, but focusing on the US basically serves Chinese and Russian interests far more than it does the citizens of the US. String obfuscation isn't stemming from some corrupt deal that needs sunlight... this is just a disservice to their original mission.
We have fairly extensive evidence that Sony was hacked by a Russian-based APT group. It is likely they were paid to do so by the North Koreans. Check out https://www.operationblockbuster.com/wp-content/uploads/2016... for more info. TL;DR attribution is based on shared C2 and staging server infrastructure, a shared code base with unique implementations, and even shared public keys.
Disclaimer: I know and have worked with the people on Operation Blockbuster.
Sony had partnered with the US government to create a film that they thought and hoped could galvanize a revolutionary mood in North Korea (by making a comedy about the CIA assassinating the leadership and showing that mock assassination on screen). The DPRK considered this an attack (similar to the US considering the disclosure of hacked DNC emails an attack) and responded with a cyber attack on the contracting firm.
Russia remains a black spot, due to the language/Cyrillic alphabet? And they do most secret stuff with typewriters and photocopies these days, so I've heard. Snowden's revelations had a big impact there.
It's much harder to leak Russian material because a lot of it is in paper form. After the Snowden revelations, the Russians returned to typewriters for all their top-secret stuff [0]:
>A source at Russia's Federal Guard Service (FSO), which is in charge of safeguarding Kremlin communications and protecting President Vladimir Putin, claimed that the return to typewriters has been prompted by the publication of secret documents by WikiLeaks, the whistle-blowing website, as well as Edward Snowden, the fugitive US intelligence contractor.
>The FSO is looking to spend 486,000 roubles – around £10,000 – on a number of electric typewriters, according to the site of state procurement agency, zakupki.gov.ru. The notice included ribbons for German-made Triumph Adler TWEN 180 typewriters, although it was not clear if the typewriters themselves were this kind.
>“After scandals with the distribution of secret documents by WikiLeaks, the exposes by Edward Snowden, reports about Dmitry Medvedev being listened in on during his visit to the G20 summit in London, it has been decided to expand the practice of creating paper documents.”
>Unlike printers, every typewriter has its own individual pattern of type so it is possible to link every document to a machine used to type it.
Now, their hacking tools are obviously not in paper form but I bet they're much more tightly controlled than the CIA/NSA tools. They probably have a much smaller team of people who have access to such tools so it's much harder for them to leak. It's also easier to do counterintelligence on people who do have access and you can bet every one of those people is monitored to some degree.
The US has thousands of contractors who work for the CIA/NSA/DIA and other intelligence agencies, and many, supposedly, can easily walk out with some of the most sensitive documents the USG possesses. [1] One of these contractors, supposedly, leaked these files to WikiLeaks [2]. The FBI is now on a hunt to figure out who it was.
The Russians don't have a huge network of contractors. I couldn't find the exact figure, but by a quick estimation, the Russians have 100x fewer people doing intelligence work. They also have much, much smaller budgets because of their economy. So it's easier for them to keep secrets from leaking.
CIA probably (most definitely?) has moles inside of FSB so FSB secrets do leak. Just not to WikiLeaks.
Or more bad news for the Trump administration with evidence of communication etc coming from Russian servers? ;)
Further down someone asked: "What would be the advantage to making your exploits appear to come from other countries?". If you want to sow doubt about the validity of evidence presented this seems like a good way to do so (not that we shouldn't be skeptical given the tools available).
Do you need THE best software-development talent to be able to build comprehensive surveillance like the big agencies? Like THE Cristiano Ronaldo or THE Michael Jordan of programming.
Or is this more about funds and the power to set such a system in motion?
My thought is that much of the problem is tactical, logistical, organisational, and capabilities-oriented.
Consider the problem domain:
1. There's a vast amount of information flowing around the world. Much of it remains at best poorly protected, and until recently, that was even more the case.
2. Much of surveillance revolves around access to the channels themselves. Which means places such as satellite uplink/downlink centres, transoceanic cable landfalls, major switching hubs, telecoms hubs (AT&T's notorious San Francisco closet), etc.
3. Then you've got the problem of simply ingesting the information. For that, you need a fat pipe of your own, and massive storage.
4. Then the problem of classifying and prioritising the information, or identifying and tracing specific targets. Again, in both cases, scale matters more than capability, where scale is both a matter of data (transmission, storage, processing) and above all access.
If you want to tap a specific landline, or cellphone, or cloud / online storage provider, do you have the tactical assets in place to be able to do so? E.g., official or unofficial liaisons with the organisation in question. If official, how do you maintain that relationship (what balance of carrots and sticks)? If unofficial, do you risk burning through such assets by utilising them? Google, to take an example, apparently frowns on employees directly accessing user data, and could well discipline or terminate any staff or contractors who do so. This doesn't mean that the NSA doesn't have and cannot use such assets, but it can likely use each one only a small number of times, possibly only once. That raises the costs for any such access, though again, scale offers a potential counterweight. (Rinse, wash, and repeat for all non-Google organisations; I'm raising them as an example here on account of their apparently stringent internal controls.)
5. Technical capabilities. For any given channel, there are the fundamental information-theoretical problems of establishing a link, transferring, and comprehending data. Depending on the complexities involved, this may be easy or hard, but there's almost certainly a fixed setup cost for any given service. This also means that the surveillance entity will likely target technical sources by some balance of total size (likelihood that any given target will be on it) and specific interest (that a particular target is there).
Such resources are again finite, which suggests yet another way to defeat surveillance: by embracing rapid change, you increase the workfactor required for technical penetration.
I'm arguing my own way through this, but in general, I'd think that size matters more than skill, though the two complement, and there are almost certainly instances in which brute intelligence and capability in conceiving of exploits is an essential factor.
We gain a lot from this. We can for example manufacture "Russian hysteria" - "Look we found a Russian rootkit on a DNC server". We can attack our allies and then make it look like the Chinese did it, and so on. It is immensely useful.
It can confuse attribution so that trust is spoiled, energy is spent uselessly, and another country takes the blame, or enable false-flag attacks to justify other strategic moves.
Doesn't necessarily have to be instigation -- could just be misdirection. If you're targeting Russians, then making malware that looks like Russian hackers' work seems like a no-brainer if you don't want to attract attention from intelligence services.
Misleading information is a weapon in any nation's arsenal. I have to say I'm a little taken aback and almost feel like your comment might be trolling.
You can't conduct personal attacks like this on Hacker News. The odds that you're right in any particular case are low and not worth the considerable damage it does to the community. Please don't do this again.
This is uncivil and the sort of thing we ban accounts for. You can't attack another user like this on HN, regardless of how wrong you consider them. Please don't do it again.
There are many reasons why users post with new accounts on HN, not all such users are 'foreign' (whatever that word means on an international site). Accusing others of astroturfing or shilling without evidence is not allowed. Somebody simply having an opposing view does not count as evidence. There are plenty of opposing views.
Wikileaks is an organization built to destabilize the US government. Romantic and idealist stuff aside, they are playing the role of "useful idiot" for other intelligence agencies. And that is very dangerous.
Look at all the people complaining that WikiLeaks is anti-western and/or foreign-supported agents, with no proof of this whatsoever.
If anything, WikiLeaks has shown a superior journalistic record in publishing whatever comes across their desk, so I see people criticising WikiLeaks on this, the weakest of points, as intellectually dishonest at best.
https://en.wikipedia.org/wiki/Metamorphic_code
https://web.archive.org/web/20070602060312/http://vx.netlux....
https://github.com/rbanffy/nsaname
90% of intelligence community cyber security spending is on offensive projects, so this revelation should not be too surprising. (http://www.reuters.com/article/us-usa-cyber-defense-idUSKBN1...)
[0] http://www.telegraph.co.uk/news/worldnews/europe/russia/1017...
[1] http://www.federaltimes.com/articles/fbi-arrests-contractor-...
[2] https://www.wsj.com/articles/authorities-questioning-cia-con...
Could be a fun one for game DRM? Or apps where an API key is hidden in the binary?
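That's roughly why a bare key needs at least some mangling: a naive `strings`-style scan pulls it straight out of the binary. A quick sketch (the "binary" bytes and the key are made up for illustration):

```python
import re

def ascii_strings(blob: bytes, min_len: int = 8) -> list[str]:
    """Roughly what the `strings` tool does: runs of printable ASCII."""
    pattern = rb"[ -~]{%d,}" % min_len
    return [m.group().decode() for m in re.finditer(pattern, blob)]

# A made-up "binary": non-printable header bytes around a bare key.
binary = (b"\x7fELF\x02\x01\x00\x90"
          + b"api_key=sk_live_1234567890"
          + b"\x00\x90")
print(ascii_strings(binary))  # ['api_key=sk_live_1234567890']
```

XOR-mangling the key and decoding it at runtime defeats this naive scan, which is the packing idea from upthread in miniature.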
For example, the official pretext for WWII was manufactured as a false flag: https://en.wikipedia.org/wiki/Gleiwitz_incident and the US did it at the start of the Vietnam War: https://en.wikipedia.org/wiki/Gulf_of_Tonkin_incident