Is there an effective way to "mostly" airgap, if you need Internet connectivity for your work? This is a comment I posted on a similar thread a few weeks ago.
=========================================
Just curious, how would airgapping be practical if you need Internet connectivity for your "real work"?
For example, let's say you run a quant trading firm and the algorithms you're concerned about being stolen need connectivity to download live trading info, and then after processing that info they need to communicate buy/sell orders to the outside world.
Are there any methods that could be used that would prevent all communication with a secure system (with an airgap level of certainty) besides the strictly defined data you need to do your "real work"?
-----
gaius 19 days ago | link
Sure, you would just use Radianz, and that is in fact what everyone does. This is a very solved problem! Bloomberg also operates a private network, and there are others too. These systems can operate perfectly well without access to the public Internet.
A couple of jobs ago I worked at a financial services firm with 2 networks and 2 PCs on everyone's desk. Rednet for outside connectivity, and an internal network for real work, and never the twain shall meet.
NO-ONE needs the Internet for real work, let's be honest, just for goofing off. Time we all started to prioritize security over mere convenience.
-----
wikiburner 19 days ago | link
Yep, maybe trading wasn't the best example, although they are still effectively at the mercy of the security of their data provider's network - which admittedly is probably quite good.
Let's say you're a P.I., journalist, researcher, law enforcement, or intel agency, and need to automate news or people searches for some reason. If you were able to very strictly define the data you're expecting to receive, isn't there any way you could automatically pass this data on to a secure system without opening yourself up to exploits?
I've not played in this space for a looong time but...
There are four things you want to do -
1. Get a herd of cash together. The stuff that follows is not cheap.
2. Set up a hardware data diode (an appliance that only allows data to travel in one direction). [1]
3. Set up an air gap like Whale Comm's appliance used to do (two 1U rack-mount servers, back-to-back, which [dramatization alert] automates plugging a USB stick into one server, copying data onto it, pulling it out, sticking it into the other server, and copying the data onto it - at ~10Mb/s, if memory serves). [2]
4. Any time anything traverses the trust boundary, convert from one format to another, so PDF becomes RTF, DOC becomes TXT, PNG becomes GIF, and so on. The point is that converting attachments into other formats drops malicious payloads, or stops them from exploiting vulnerabilities in the apps that open the original formats.
[1] Tenix used to do one, but they cost crazy money (millions). I don't know much about this space anymore, but this might provide some pointers: http://en.wikipedia.org/wiki/Unidirectional_network
[2] Whale Communications was acquired by Microsoft. The product is now called ForeFront Unified Access Gateway, and while still a good application firewall, no longer provides that air gap (http://en.wikipedia.org/wiki/Microsoft_Forefront_Unified_Acc...). I've no idea who else can do this.
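Point 4 works because re-parsing and re-serializing keeps only what the converter understands; anything the parser doesn't recognize is dropped on the floor. A minimal sketch of the idea, using JSON as a stand-in format (the field names are made up):

```python
import json

# Whitelist of fields the receiving side is allowed to see (hypothetical schema).
ALLOWED_FIELDS = {"symbol", "price", "timestamp"}

def sanitize(raw_bytes: bytes) -> bytes:
    """Parse untrusted input and re-serialize only whitelisted,
    type-checked fields. Padding, appended payloads, and anything
    else the parser doesn't understand is dropped."""
    record = json.loads(raw_bytes)
    clean = {}
    for key in ALLOWED_FIELDS:
        value = record[key]  # KeyError -> reject the whole message
        if not isinstance(value, (str, int, float)):
            raise ValueError("unexpected type for %s" % key)
        clean[key] = value
    return json.dumps(clean, sort_keys=True).encode("ascii")

# Untrusted input with an extra field smuggled in:
raw = b'{"symbol": "ACME", "price": 12.5, "timestamp": 1700000000, "payload": "..."}'
print(sanitize(raw))  # the "payload" field is silently dropped
```

The same principle is what makes PDF-to-RTF or PNG-to-GIF conversion useful as a security boundary: the converter only carries across what it understood.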
A few years ago, I met someone who worked on imaging devices for satellites for a military contractor - which was about all he would say about his specific work. He indicated that the building they did their work in had _no_ internet access. Generally, if they wanted to refer to things on the internet, they had to go to some internet-connected computers in another building, print out whatever they wanted, and bring it back to the building they worked in.
Perhaps it's unrealistic to expect security without tradeoffs of convenience.
What you want is a firewall that uses deep packet inspection so that only data meeting your specification gets through - e.g. an XML file with the correct schema. Unfortunately, you can't exactly trust off-the-shelf software for this stuff. However, a firewall can be far simpler than a modern OS, so firewalls tend to be much more secure, and you are reasonably safe just updating the inspection code. If you have a sufficient budget or are overly paranoid, you can build your firewall from scratch. Also, what you do at the endpoint matters: opening websites in IE is going to be far less secure than, say, doing day trading using custom software.
That said, the usual rules of defense in depth still apply: ensure that your machine can only talk to a whitelist of IP addresses, etc.
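A minimal sketch of that inspection step, assuming the incoming data is an XML quote with a hypothetical symbol/bid/ask layout (note that the XML parser itself is attack surface, so keep it as simple as the data allows):

```python
import xml.etree.ElementTree as ET

def passes_inspection(payload: bytes) -> bool:
    """Admit only an XML document matching one rigid, hand-checked shape."""
    try:
        root = ET.fromstring(payload)
    except ET.ParseError:
        return False
    if root.tag != "quote":
        return False
    # exactly these child elements, nothing more, nothing less
    if {child.tag for child in root} != {"symbol", "bid", "ask"}:
        return False
    # numeric fields must actually be numeric
    try:
        float(root.find("bid").text)
        float(root.find("ask").text)
    except (TypeError, ValueError):
        return False
    return True

good = b"<quote><symbol>ACME</symbol><bid>12.4</bid><ask>12.6</ask></quote>"
bad  = b"<quote><symbol>ACME</symbol><script>owned()</script></quote>"
print(passes_inspection(good), passes_inspection(bad))  # True False
```

A real deployment would validate against a formal schema on a hardened box; the point is that the accept-set is defined positively, not by trying to enumerate bad inputs.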
I'm a big fan of Qubes OS (http://qubes-os.org/trac). It's an interesting distro that's based on the Xen hypervisor and allows you to maintain different VM zones that integrate with the desktop.
You could have a Banking VM that only runs a certain browser, where the firewall only lets traffic to and from your bank's site. Your Work VM could be set to only allow traffic through a VPN connection to your work. Your Bitcoin VM could be set to not have any network traffic at all. You could even have a Tor VM with a browser.
What about having two computers, one connected to the internet, one disconnected. Then develop and install two custom services - one obtains the required information, e.g. the trading data, from the internet or sends data to the internet. The other service runs on the disconnected computer and communicates with the software actually using and producing the data, probably in the form of a proxy server.
Finally, connect both computers using something simple like a null modem cable and make both services communicate over this link using a very simple proprietary protocol. Assuming the disconnected computer is not already compromised before you start using the system, and you have not been extremely sloppy when you designed and implemented the two services, it should be quite hard to compromise the disconnected computer.
One way would be to find valid data (passing the protocol checker in the receiving service) to be transferred from the connected to the disconnected computer that triggers a bug in some data-consuming software, leading to code injection and execution, which in turn sends secret data over the null modem link to the (compromised) connected computer. That seems to be quite a complex attack to me, especially if the data traveling over the link is something simple like a stock price time series, which enables very simple protocols and thorough validation.
To avoid some classes of bugs, e.g. buffer overflows, in the services linking both computers I would implement them using a managed runtime like .NET. This will of course expose the system to vulnerabilities in the underlying runtime.
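A sketch of what the validator in the receiving service might look like, in Python for brevity, assuming a hypothetical one-tick-per-line SYMBOL,PRICE,UNIXTIME wire format:

```python
import re

# Grammar for one tick: SYMBOL,PRICE,UNIXTIME followed by a newline.
# The field layout is made up -- the point is that a protocol this
# rigid can be validated exhaustively before data crosses the link.
TICK = re.compile(rb"[A-Z]{1,5},\d{1,8}\.\d{2},\d{10}\n")

def accept(line: bytes) -> bool:
    """Admit a line only if it is a well-formed tick of sane length."""
    return len(line) <= 32 and TICK.fullmatch(line) is not None

print(accept(b"ACME,101.25,1378200000\n"))         # True: well-formed tick
print(accept(b"ACME,101.25,1378200000;rm -rf\n"))  # False: trailing junk
```

Everything that fails the check is discarded; the software behind the link never sees a byte sequence outside this tiny grammar.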
Depending on the bandwidth you wanted: the first thing I thought of was displaying the outgoing orders on one computer's monitor - perhaps encoded at a higher data density than plain text - aiming a webcam at that monitor, and using some form of image recognition to copy the orders across.
Knuth on his air gap: "I currently use Ubuntu Linux, on a standalone laptop—it has no Internet connection. I occasionally carry flash memory drives between this machine and the Macs that I use for network surfing and graphics; but I trust my family jewels only to Linux." : http://www.informit.com/articles/article.aspx?p=1193856
In a post-Stuxnet world, can we trust flash drives? If I remember correctly, that virus would jump onto flash drives to spread to the next few computers it touched. I think I might prefer an Ethernet connection with the outgoing wires cut.
I read a comment just like this a couple of months ago. Someone saying that until this point they thought Richard Stallman was a complete paranoid nutjob but it turns out he was completely correct. I guess that's why he will be seen as a visionary in so many areas.
Speaking of Stallman and air gaps (http://stallman.org/stallman-computing.html):
> I generally do not connect to web sites from my own machine, aside from a few sites I have some special relationship with. I fetch web pages from other sites by sending mail to a program (see git://git.gnu.org/womb/hacks.git) that fetches them, much like wget, and then mails them back to me. Then I look at them using a web browser, unless it is easy to see the text in the HTML page directly. I usually try lynx first, then a graphical browser if the page needs it.
I've been setting up networks since 2002 the following way:
Internal network, NOT connected to the internet.
External (small network) is connected to the internet, and has "terminal server" (Windows Terminal Server if I must, Xrdp if I can let the external servers be Linux).
Firewall between outside world and external network, configured to allow reasonable work on that network. Firewall between external network and internal network only allows internal network initiated connections to the RDP port (3389) on the external network.
Also, an rsync setup that allows some controlled transfer of files between inner and outer networks (preferable to USB drives - the USB ports should be disabled logically and physically, although I didn't always get to do that). This rsync setup goes through a different port, with a cable that is usually not connected (the air in "airgap"). When files need to go in or out, I plug the cable for a few minutes, and unplug when not needed.
From experience, this lets you keep a network reasonably secure, without having to put two PCs on everyone's desk.
Of course, there's risk: there might be a way to root the inside machines through a bug in RDP, after rooting the outside machines. However, it works well against "standard" attacks and malware that assume internet connectivity. Even if they get in (through a USB drive, as Schneier says was done in the Iranian and US army facilities), they can't just call out to the internet.
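The firewall policy described above can be sketched as iptables rules (subnets are made up: 10.0.0.0/24 for the internal network, 192.168.1.0/24 for the external one; adapt to your topology):

```shell
# Default deny for traffic crossing the inner/outer firewall.
iptables -P FORWARD DROP

# Internal machines may *initiate* RDP sessions to the external network...
iptables -A FORWARD -s 10.0.0.0/24 -d 192.168.1.0/24 \
    -p tcp --dport 3389 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
# ...and only replies to those sessions come back.
iptables -A FORWARD -s 192.168.1.0/24 -d 10.0.0.0/24 \
    -p tcp --sport 3389 -m conntrack --ctstate ESTABLISHED -j ACCEPT

# The controlled rsync transfer runs over its own port (e.g. rsync's 873),
# usable only while its normally-unplugged cable is connected.
iptables -A FORWARD -s 10.0.0.0/24 -d 192.168.1.0/24 \
    -p tcp --dport 873 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
```

Note the asymmetry: the inside may open connections out to one port; nothing on the external network can ever open a connection inward.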
It is completely possible that mentioning Windows in the article was meant to be only a smokescreen. I'm sure a person in his position would absolutely not want to publicly declare the exact solution he is using. In reality, it might as well be something else entirely, like Slackware or some USB-bootable distro. Yes, this might be security through obscurity, but considering that he admitted he isn't familiar with the inner workings of TrueCrypt etc., it is the safest bet. Not disclosing exactly what you are using doesn't allow an adversary with unlimited resources to adapt and optimize to break this specific scheme.
I have no insight into Schneier's top security setup, but I know for a fact he uses Windows on a regular basis. His portable computer is a standard Sony Vaio running Windows.
That doesn't seem plausible. He could non-specifically say "don't use Windows. Ideally use [some stock linux distro] or investigate other unix operating systems that can be configured for safety." That wouldn't really give away anything.
Windows is a bad choice for an air gapped system. A much better one would be Slackware, where it's even simple to maintain an air gapped system. Maintaining a Linux distribution with a package manager, e.g. Debian, without internet is much more trouble.
I had scripts to maintain an air gapped Debian 10 years ago, but can no longer recommend them, as Debian now has signed archives, and the script breaks the signature.
You can still use optical media (CD/DVD) or USB keys to install packages with APT, so I don’t see how Slackware would have any advantage over Debian there.
There's even an apt-offline[0] to create a list of 'needed' packages on one system, then download these packages on another one and transport them to the air-gapped system. Of course, you will still have to decide whether to trust these downloaded packages, and unless you trust at least some Debian Developers to do the right thing, this will be hard to do even with GPG signatures on all packages.
[0] http://packages.debian.org/wheezy/apt-offline
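The round trip looks roughly like this (paths are illustrative; check the apt-offline man page for the exact options on your version):

```shell
# On the air-gapped machine: record what APT would need.
apt-offline set /tmp/offline.sig --update --upgrade

# Carry offline.sig to a connected machine; download everything listed.
apt-offline get /tmp/offline.sig --bundle /tmp/bundle.zip

# Carry bundle.zip back to the air-gapped machine and feed it to APT.
apt-offline install /tmp/bundle.zip
apt-get upgrade
```

Signatures are verified on the air-gapped side, so the connected machine is only a dumb courier - which is exactly what you want.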
The OS on the air gapped system is not that important, since you don't have to deal with regular internet threats, and anybody who wants to attack you will use 0-days anyhow. But I think that the air gapped computer should have a different OS than the computer which writes the USB sticks (and is connected to the internet). Just to force the attacker to burn two 0-days.
> 1. When you set up your computer, connect it to the Internet as little as possible. It's impossible to completely avoid connecting the computer to the Internet, but try to configure it all at once and as anonymously as possible.
There's no technical reason you can't keep your airgapped computer completely off the internet for its entire life cycle. I'd even go so far as to commit heresy and say that this is just plain bad security advice that Mr. Schneier is giving out here. Instead, you should probably get your install media from a trusted source and use that to install the OS and any initial updates (maybe that's a manufacturer's install CD or a Linux ISO that you burned yourself - avoid anything that isn't write-once). If the OS on your airgapped machine has an unpatched remote vulnerability, you're already putting that system at risk by connecting it to the internet even once.
Don't discard that trusted install media - if you need to create another airgapped machine, you're using the same airgapped data to perform the install. I realize that Bruce was discussing setting up a stand-alone computer, but I thought I'd share my experience: Years ago, around the same time that Blaster was a nuisance, I managed a network of airgapped machines. If any one of them had been hit because I chose to just let it download updates off the internet, the entire network would have been compromised. This would be much worse if you were worried about a targeted attack - every time you connect a fresh computer to the internet with the intent of moving that box over to the secure network, you're giving the attacker another opportunity to gain access.
For transferring data back and forth, I've used CDs in the past, but toyed with the idea of using a dedicated serial cable for transfers instead. Tar up the files, connect the cable, tell the remote machine to listen, shoot them over, then disconnect the cable. There's no network stack on the connection, so no independent programs can send data across the channel; if extra data is added, the result on the other end likely won't untar; and there's no auto-execution of programs to worry about. The only things I have to worry about being compromised are my copies of tar and cat. Removable media in general has issues - Schneier mentions a few examples in the article of successful compromises using USB sticks.
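The transfer described above is just tar piped over the link. A sketch, using a regular file to stand in for the serial device (on real hardware you'd use e.g. /dev/ttyS0 after configuring the line with stty):

```shell
# Stand-in for the null-modem device; replace with /dev/ttyS0 for real use.
LINK=/tmp/serial-link

mkdir -p /tmp/outbox
echo "secret notes" > /tmp/outbox/notes.txt

# Sending side: stream the archive onto the link.
tar -C /tmp -cf - outbox > "$LINK"

# Receiving side: read from the link and unpack. Anything that isn't a
# valid tar stream simply fails to extract.
mkdir -p /tmp/inbox
tar -C /tmp/inbox -xf - < "$LINK"

cat /tmp/inbox/outbox/notes.txt   # -> secret notes
```

The trust surface is exactly tar plus the serial driver - no DHCP, no ARP, no autorun.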
> If the OS on your airgapped machine has an unpatched remote vulnerability, you're already putting that system at risk by connecting it to the internet even once.
Even if it's behind a NAT firewall with no external ports open? And you only connect via SSL (or SSH) to specific known hosts?
For the really paranoid, to the extent that your data can be represented as a text file, you can print it on paper from your internet connected machine and OCR it into your air gapped machine, and vice versa. In this case, you only have to worry about your printer or scanner having a backdoor. If you are very confident in your OCR accuracy, you can encrypt it prior to printing and decrypt it after scanning.
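A toy sketch of that encrypt-print-scan-decrypt round trip. The SHA-256-based XOR stream here is a stand-in for real encryption (use a proper cipher in practice); base32 keeps the printout to an OCR-friendly alphabet with no upper/lowercase ambiguity:

```python
import base64
import hashlib

def keystream(key: bytes):
    # SHA-256 in counter mode as a toy keystream -- NOT production crypto.
    counter = 0
    while True:
        yield from hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1

def to_paper(plaintext: bytes, key: bytes) -> str:
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, keystream(key)))
    # base32 output: A-Z and 2-7 only, easy for OCR to get right
    return base64.b32encode(ciphertext).decode("ascii")

def from_paper(printed: str, key: bytes) -> bytes:
    ciphertext = base64.b32decode(printed)
    return bytes(c ^ k for c, k in zip(ciphertext, keystream(key)))

page = to_paper(b"meet at dawn", b"shared-key")
print(page)                         # uppercase letters and digits only
print(from_paper(page, b"shared-key"))
```

Even if the printer or scanner is backdoored, it only ever sees ciphertext.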
"Don't worry too much about patching your system; in general, the risk of the executable code is worse than the risk of not having your patches up to date."
Not good advice.
If you plan to open anything other than text files on the machine, un-patched software is almost as big a risk as transferring executables. The only difference is that it seems less dangerous to you.
I came here to say the exact same thing. The problem with complicated file formats isn't that they contain "macros"; it's that the code that parses and interprets those files is prone to memory corruption.
First, instead of using removable media from which data could still be recovered, I'd get a second Ethernet switch. Whenever I wanted to move data from my regular machine to my secure machine, I'd have to move the cable on my regular machine from one switch to the other. Thus it would be physically impossible to be connected to both internal and external networks simultaneously, and I wouldn't be leaving any persistent physical data trail like a USB stick or CD-ROM.
The second thing I'd do is a double air gap. Think of it as an airlock: you can't open the inner door until you're sure no contaminants got through the outer door. The intermediate host would have a single purpose: run malware checks. Thus, only data that had already been checked in a secure environment would even be allowed to touch the real secure machine.
I worked on an "Air Gapped" network. We didn't call it that.
As the internet and open source took off it became more and more painful.
To get files over to the network, we'd have to download from the internet, then burn to DVD and bring it over. The thinking was that DVDs, with their write-once capability, would prevent unwanted files from hopping aboard. This didn't help if the file you were transferring was infected, but files were virus-checked before burning.
Oddly, files went Windows -> DVD -> HP-UX machines, meaning the virus scan on Windows was somewhat useless.
But having no access to CPAN or online research on your main work machine was hard.
Are you sure? How much more difficult would it be for an intelligence agency to get an open source hacker to "accidentally" inject a vulnerability disguised as a bug, than to pressure MS to write a backdoor? (or to get MS to hire a mole)
I don't think this matters against the NSA. An adversary either has the resources to exploit some target, or not.
If the NSA has a lot of exploits to choose from for each of Linux, Mac, and Windows then it doesn't matter which one you're using.
Think of this in Bayesian terms. You have some prior belief that MS Windows software is less secure than other software. What we've gotten as a result of all these leaks is new likelihoods, so we have to update our posterior.
I.e. A is more secure than B doesn't matter if both A and B are easily exploitable by your adversary.
What about isolation? With heavy use of virtualization one can make the air gapped machine even more secure:
- Only open documents in a virtual machine
- Only interface with the document transfer media (cd/dvd etc.) through virtual machines. Don't ever mount or use this media on your host.
- Clone a new throw-away virtual machine for opening EACH document and delete it after reading the document
About his points:
1) This is nonsense. It's possible to set up an OS (for example, Linux) with zero internet connectivity: just download the ISO on another computer, verify checksums and signatures, burn it onto optical media, and you're set.
8) Also, use one-time media. Write once on the internet host, fill up and finalize media, read once on the air gap host, destroy media.
Also, I don't think Schneier is recommending Windows for this task. He's just assuming that most people out there are using Windows and can use these tips to improve their security. For his own high security setup(s), I'm pretty sure he'd have the common sense to not use Windows.
I don't know for sure, but it's possible that a Windows machine, if it did accidentally leak any metadata about itself, would be less unique than a Linux install. Just a guess though.
If you assume your connected machine is going to get p0wnd, and you rely on the air gap to prevent your secure machine from being penetrated, you could run any OS you like, no matter your opinion of how much the vendor cooperates with the NSA.
Indeed. If he picked Linux or a Mac he'd have the advantage of being able to read most MS proprietary formats without the disadvantage of embedded code executing.
Ironically it is a result of modern encryption that separating things from the internet is so difficult. If every app sends data over an encrypted channel it makes it much harder to audit what exactly it is doing. You can't impose rules if you don't know what the data is or where it will end up.
He forgot to mention keeping the computer in a Faraday cage. If he has Snowden info, it seems likely that intelligence agencies would be monitoring him closely enough to use Van Eck phreaking to spy on his laptop display (or any other part of the computer that leaks info through RF - which is all of them).
How about an inline protocol analyzer that knows the USB mass storage device class protocol, and can detect when a write request is being sent?
That would perhaps also make it possible to optionally prevent such requests from ever reaching the USB stick, thus adding write-protection to legacy sticks.
Probably not 100% trivial given the signalling speed and general complexity of USB, but perhaps solvable using an FPGA? There is a software-only USB stack for 8-bit AVRs, so it doesn't seem totally impossible, either.
No, I don't have a startup manufacturing such a device. :)
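For reference, the check such an analyzer would perform is simple once the 31-byte Command Block Wrapper of the mass-storage Bulk-Only protocol has been captured. Here is the detection logic sketched in Python (a real inline device would of course do this in hardware at wire speed):

```python
import struct

CBW_SIGNATURE = 0x43425355          # 'USBC' in little-endian
WRITE_OPCODES = {0x0A, 0x2A, 0x8A}  # SCSI WRITE(6)/WRITE(10)/WRITE(16)

def is_write_request(cbw: bytes) -> bool:
    """Return True if a 31-byte Command Block Wrapper carries a SCSI WRITE."""
    if len(cbw) != 31:
        raise ValueError("CBW must be 31 bytes")
    sig, tag, xfer_len, flags, lun, cb_len = struct.unpack_from("<IIIBBB", cbw)
    if sig != CBW_SIGNATURE:
        raise ValueError("bad CBW signature")
    opcode = cbw[15]  # first byte of the embedded SCSI command block
    return opcode in WRITE_OPCODES

def make_cbw(opcode: int) -> bytes:
    # Build a minimal test CBW around the given SCSI opcode.
    scsi_cb = bytes([opcode]) + bytes(15)
    return struct.pack("<IIIBBB", CBW_SIGNATURE, 1, 512, 0x00, 0, 10) + scsi_cb

print(is_write_request(make_cbw(0x2A)))  # WRITE(10) -> True
print(is_write_request(make_cbw(0x28)))  # READ(10)  -> False
```

Blocking, rather than merely flagging, such requests would give any legacy stick an external write-protect switch.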
There are sticks out there that have hardware write-enable switches (I keep my medical records on one), so that you can at least control when writes occur.
Schneier is not as paranoid or as particular as I thought he would be:
> 1. When you set up your computer, connect it to the Internet as little as possible. It's impossible to completely avoid connecting the computer to the Internet, but try to configure it all at once and as anonymously as possible. I purchased my computer off-the-shelf in a big box store, then went to a friend's network and downloaded everything I needed in a single session. (The ultra-paranoid way to do this is to buy two identical computers, configure one using the above method, upload the results to a cloud-based anti-virus checker, and transfer the results of that to the air gap machine using a one-way process.)
A friend's house is not "anonymous". If you have the need for an air gap, then you should probably assume that your attackers have the ability to suss out your offline and online social networks. In a not-too-distant future, it's not hard to imagine a surveillance operative being able to expand their examination of network traffic to include not only you, but your associates, and then to detect when an online installation routine was run. At that point, the fact that that computer's fingerprint (however it may be calculated) was never seen again from that friend's home might be one of several flags in a comprehensive surveillance profile.
Though I guess if Schneier is talking about an off-the-shelf computer, I'm assuming he means a desktop computer that can't be assembled in the Starbucks two states away to connect to the WiFi. OTOH, I think I would prefer a Linux laptop as my air-gapped computer.
[+] [-] wikiburner|12 years ago|reply
=========================================
Just curious, how would airgapping be practical if you need Internet connectivity for your "real work"? For example, let's say you run a quant trading firm and the algorithms you're concerned about being stolen need connectivity to download live trading info, and then after processing that info they need to communicate buy/sell orders to the outside world. Are there any methods that could be used that would prevent all communication with a secure system (with an airgap level of certainty) besides the strictly defined data you need to do your "real work"? -----
gaius 19 days ago | link
Sure, you would just use Radianz, and that is in fact what everyone does. This is a very solved problem! Bloomberg also operates a private network, and there are others too. These systems can operate perfectly well without access to the public Internet. A couple of jobs ago I worked at a financial services firm with 2 networks and 2 PCs on everyone's desk. Rednet for outside connectivity, and an internal network for real work, and never the twain shall meet. NO-ONE needs the Internet for real work, let's be honest, just for goofing off. Time we all started to prioritize security over mere convenience. -----
*
wikiburner 19 days ago | link
Yep, maybe trading wasn't the best example, although they are still effectively at the mercy of the security of their data providers network - which admittedly is probably quite good. Let's say you're a P.I., journalist, researcher, law enforcement, or intel agency, and need to automate news or people searches for some reason. If you were able to very strictly define the data you're expecting to receive, isn't there any way you could automatically pass this data on to a secure system without opening yourself up to exploits?
[+] [-] Spearchucker|12 years ago|reply
There are four things you want to do -
1. Get a herd of cash together. The stuff that follows is not cheap.
2. Set up a hardware data diode (an appliance that only allows data to travel in one direction). [1]
3. Set up an air gap like Whale Comm's appliance used to do (two 1U rack-mount servers, back-to-back, which [dramatization alert] automates plugging a USB stick into one server, copying data onto it, pulling it out, sticking it into the other server, and coping the data onto it - at ~10Mb/s, if memory serves). [2]
4. Any time anything traverses the trust boundary, convert from one format to another, so PDF becomes RTF, DOC becomes TXT, PNG becomes GIF, and so on. The point is that converting attachments into other formats drops malicious payloads, or stops them from exploiting vulnerabilities in the apps that open the original formats.
[1] Tenix used to do one, but they cost crazy money (millions). I don't know much about this space anymore, but this might provide some pointers: http://en.wikipedia.org/wiki/Unidirectional_network
[2] Whale Communications was acquired by Microsoft. The product is now called ForeFront Unified Access Gateway, and while still a good application firewall, no longer provides that air gap (http://en.wikipedia.org/wiki/Microsoft_Forefront_Unified_Acc...). I've no idea who else can do this.
[+] [-] _wiv7|12 years ago|reply
Perhaps it's unrealistic to expect security without tradeoffs of convenience.
[+] [-] Retric|12 years ago|reply
That said, the usual rules of defense in depth still apply, ensure that your machine can only talk to a white-list of IP address, etc etc.
[+] [-] deftnerd|12 years ago|reply
You could have a Banking VM that only runs a certain browser and the firewall only lets traffic to-and-from your banks site. You Work VM could be set to only allow traffic through a VPN connection to your work. Your BitCoin VM could be set to not have any network traffic at all. You could even have a Tor VM with a browser.
http://qubes-os.org/trac
[+] [-] danbruc|12 years ago|reply
Finally connect both computers using something simple like a null modem cable and make both service communicate over this link using a very simple proprietary protocol. Assuming the disconnected computer is not already compromised before you start using the system and you have not been extremely sloppy when you designed and implemented the two services, it should be quite hard to compromise the disconnected computer.
One way would be to find valid data (passing the protocol checker in the receiving service) to be transferred from the connected to the disconnected computer that triggers a bug in any data consuming software leading to code injection and execution which in turn sends secret data over the null modem link to the (compromised) connected computer. That seems to be a quite a complex attack to me, especially if the data traveling over the link is something simple like stock price time series which enables very simple protocols and thorough validation.
To avoid some classes of bugs, e.g. buffer overflows, in the services linking both computers I would implement them using a managed runtime like .NET. This will of course expose the system to vulnerabilities in the underlying runtime.
[+] [-] 6d0debc071|12 years ago|reply
[+] [-] unknown|12 years ago|reply
[deleted]
[+] [-] leephillips|12 years ago|reply
[+] [-] jonlucc|12 years ago|reply
[+] [-] nlh|12 years ago|reply
But today, after all that I've read and learned recently, it makes perfect sense.
[+] [-] easytiger|12 years ago|reply
And speaking of stallman and airgaps: http://stallman.org/stallman-computing.html
> I generally do not connect to web sites from my own machine, aside from a few sites I have some special relationship with. I fetch web pages from other sites by sending mail to a program (see git://git.gnu.org/womb/hacks.git) that fetches them, much like wget, and then mails them back to me. Then I look at them using a web browser, unless it is easy to see the text in the HTML page directly. I usually try lynx first, then a graphical browser if the page needs it.
[+] [-] beagle3|12 years ago|reply
Internal network, NOT connected to the internet. External (small network) is connected to the internet, and has "terminal server" (Windows Terminal Server if I must, Xrdp if I can let the external servers be Linux).
Firewall between outside world and external network, configured to allow reasonable work on that network. Firewall between external network and internal network only allows internal network initiated connections to the RDP port (3389) on the external network.
Also, an rsync setup that allows some controlled transfer of files between inner and outer networks (preferable to USB drives - the USB ports should be disabled logically and physically, although I didn't always get to do that). This rsync setup goes through a different port, with a cable that is usually not connected (the air in "airgap"). When files need to go in or out, I plug the cable for a few minutes, and unplug when not needed.
From experience, this lets you keep a network reasonably secure, without having to put two PCs on everyone's desk.
Of course, there's risk: There might be a way to root the inside machines through a bug in RDP, after rooting the outside machines. However, it will work well, against "standard" attacks and malware that assume internet connectivity. Even if they get in (through a USB drive, as schneier says was done in the Iranian and US army facilities), they can't just call out to the internet.
[+] [-] j_s|12 years ago|reply
http://geekswithblogs.net/DesigningCode/archive/2010/04/19/f...
[+] [-] zactral|12 years ago|reply
[+] [-] dublinben|12 years ago|reply
[+] [-] hyperpape|12 years ago|reply
[+] [-] kephra|12 years ago|reply
I had scripts to maintain an air gapped Debian 10 years ago, but can no longer recommend them, as Debian now has signed archives, and the script breaks the sign.
[+] [-] claudius|12 years ago|reply
There’s even a apt-offline[0] to create a list of ‘needed’ packages on one system, then download these packages on another one and transport them to the air-gapped system. Of course, you will still have to decide whether to trust these downloaded packages, and unless you trust at least some Debian Developers to do the right thing, this will be hard to do even with GPG signatures on all packages.
[0] http://packages.debian.org/wheezy/apt-offline
[+] [-] yk|12 years ago|reply
[+] [-] jlouis|12 years ago|reply
[+] [-] csandreasen|12 years ago|reply
There's no technical reason you can't keep your airgapped computer completely off the internet for its entire life cycle. I'd even go so far as to commit heresy say that this is just plain bad security advice that Mr. Schneier is giving out here. Instead, you should probably get your install media from a trusted source and use that to install the OS and any initial updates (maybe that's a manufacturer's install CD or a Linux ISO that you burned yourself - avoid anything that isn't write-once). If the OS on your airgapped machine has a unpatched remote vulnerability, you're already putting that system at risk by connecting it to the internet even once.
Don't discard that trusted install media - if you need to create another airgapped machine, you're using the same airgapped data to perform the install. I realize that Bruce was discussing setting up a stand-alone computer, but I thought I'd share my experience: Years ago, around the same time that Blaster was a nuisance, I managed a network of airgapped machines. If any one of them had been hit because I chose to just let it download updates off the internet, the entire network would have been compromised. This would be much worse if you were worried about a targeted attack - every time you connect a fresh computer to the internet with the intent of moving that box over to the secure network, you're giving the attacker another opportunity to gain access.
For transferring data back and forth, I've used CDs in the past, but toyed with the idea of using a dedicated serial cable for transfers instead. Tar up the files, connect the cable, tell the remote machine to listen, shoot them over, then disconnect the cable. The connection has no network stack to worry about independent programs sending data across the channel; if extra data is added, the result on the other end likely won't untar; there's no auto-execution of programs to worry about. The only things I have to worry about being compromised are my copies of tar and cat. Removable media in general has issues - Schneier mentions a few examples in the article of successful compromises using USB sticks.
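A minimal sketch of that tar-over-a-raw-byte-stream idea. In practice the redirect target would be the serial device (e.g. /dev/ttyS0, set to raw mode with stty); a plain file stands in for the wire here so the sketch is self-contained, and all paths and filenames are assumptions:

```shell
set -e
mkdir -p sender/docs receiver
echo "report" > sender/docs/a.txt

# Sender side: tar the files straight into the byte stream.
# (On real hardware: tar -C sender -cf - docs > /dev/ttyS0)
tar -C sender -cf - docs > wire.bin

# Receiver side: read the stream back and unpack it. There is no network
# stack involved, and a stream with injected bytes will usually fail to untar.
tar -C receiver -xf wire.bin
cat receiver/docs/a.txt   # prints "report"
```

The appeal is exactly what the comment describes: the only code that touches the data path on either end is tar and the shell's redirection, so the attack surface is tiny compared to a network stack or removable media.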
[+] [-] pdonis|12 years ago|reply
Even if it's behind a NAT firewall with no external ports open? And you only connect via SSL (or SSH) to specific known hosts?
[+] [-] john_b|12 years ago|reply
Just remember to burn the paper afterwards.
[+] [-] keyme|12 years ago|reply
Not good advice. If you plan to open anything other than text files on the machine, unpatched software is almost as big a risk as transferring executables. The only difference is that it seems less dangerous to you.
[+] [-] notacoward|12 years ago|reply
First, instead of using removable media from which data could still be recovered, I'd get a second Ethernet switch. Whenever I wanted to move data from my regular machine to my secure machine, I'd have to move the cable on my regular machine from one switch to the other. Thus it would be physically impossible to be connected to both internal and external networks simultaneously, and I wouldn't be leaving any persistent physical data trail like a USB stick or CD-ROM.
The second thing I'd do is a double air gap. Think of it as an airlock: you can't open the inner door until you're sure no contaminants got through the outer door. The intermediate host would have a single purpose: run malware checks. Thus, only data that had already been checked in a secure environment would even be allowed to touch the real secure machine.
[+] [-] acomjean|12 years ago|reply
To get files over to the network, we'd have to download them from the internet and then burn them to DVD and bring it over. The thinking was that DVDs, with their write-once capability, would prevent unwanted files from hopping aboard. This didn't help if the file you were transferring was infected, but files were virus-checked before burning.
Oddly, files went Windows -> DVD -> HP-UX machines, meaning the virus scan on Windows was somewhat useless.
But having no access to CPAN or online research on your main work machine was hard.
[+] [-] JabavuAdams|12 years ago|reply
If the NSA has a lot of exploits to choose from for each of Linux, Mac, and Windows then it doesn't matter which one you're using.
Think of this in Bayesian terms. You have some prior belief that Microsoft Windows software is less secure than other software. What we've gotten as a result of all these leaks is new likelihoods, so we have to modify our posterior.
I.e., "A is more secure than B" doesn't matter if both A and B are easily exploitable by your adversary.
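As a toy sketch of that update (all the probabilities below are illustrative assumptions, not real estimates):

```python
# Prior belief: Windows is the uniquely weak OS.
prior_windows_weaker = 0.6

# Likelihood of the leaked evidence ("exploits exist for every OS")
# under each hypothesis -- made-up numbers for illustration only.
likelihood_if_windows_weaker = 0.5
likelihood_if_all_equally_weak = 0.9

# Bayes' rule: posterior = prior * likelihood / total evidence.
evidence = (prior_windows_weaker * likelihood_if_windows_weaker
            + (1 - prior_windows_weaker) * likelihood_if_all_equally_weak)
posterior = prior_windows_weaker * likelihood_if_windows_weaker / evidence

print(round(posterior, 3))  # prints 0.455
```

Because the leaks are more likely under "everything is exploitable" than under "only Windows is weak", the belief that avoiding Windows buys you much against this adversary goes down, which is exactly the parent comment's point.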
[+] [-] scott_karana|12 years ago|reply
> "Since I started working with Snowden's documents, I have been using [...] BleachBit" -- Bruce Schneier
[+] [-] mrpdaemon|12 years ago|reply
- Only open documents in a virtual machine.
- Only interface with the document transfer media (CD/DVD etc.) through virtual machines. Don't ever mount or use this media on your host.
- Clone a new throw-away virtual machine for opening EACH document and delete it after reading the document.
About his points:
1) This is nonsense. It's possible to set up an OS (for example, Linux) with zero internet connectivity: just download the ISO on another computer, verify checksums and signatures, burn it onto optical media, and you're set.
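That verification step might look something like this. To keep the sketch self-contained, the "ISO" is a stand-in file and the checksum is generated locally; in reality you'd download the distro's published SHA256SUMS file, and the gpg check is shown as a comment because it needs the distro's public key:

```shell
set -e
# Stand-in for the downloaded installer image.
echo "pretend-iso-contents" > distro.iso
sha256sum distro.iso > SHA256SUMS

# Before burning to write-once media, confirm the image matches the checksum:
sha256sum -c SHA256SUMS          # prints "distro.iso: OK"

# And check that the checksum file itself carries a valid signature from
# the distro's signing key, e.g.:
#   gpg --verify SHA256SUMS.sign SHA256SUMS
```

The checksum catches corruption or tampering of the image in transit; the signature is what ties the checksum file itself back to the distro, so both steps matter.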
8) Also, use one-time media. Write once on the internet host, fill up and finalize media, read once on the air gap host, destroy media.
Also, I don't think Schneier is recommending using Windows for this task. He's just assuming that most people out there are using Windows and can use these tips to improve their security. For his own high-security setup(s) I'm pretty sure he'd have the common sense not to use Windows.
[+] [-] gluejar|12 years ago|reply
Get to work, people!
[+] [-] unwind|12 years ago|reply
That would perhaps also make it possible to optionally prevent such requests from ever reaching the USB stick, thus adding write-protection to legacy sticks.
Probably not 100% trivial given the signalling speed and general complexity of USB, but perhaps solvable using an FPGA? There is a software-only USB stack for 8-bit AVRs, so it doesn't seem totally impossible, either.
No, I don't have a startup manufacturing such a device. :)
UPDATE: Ah, I just reinvented the WriteBlocker: http://www.wiebetech.com/products/USB-WriteBlocker.php. Sigh.
[+] [-] danso|12 years ago|reply
> 1. When you set up your computer, connect it to the Internet as little as possible. It's impossible to completely avoid connecting the computer to the Internet, but try to configure it all at once and as anonymously as possible. I purchased my computer off-the-shelf in a big box store, then went to a friend's network and downloaded everything I needed in a single session. (The ultra-paranoid way to do this is to buy two identical computers, configure one using the above method, upload the results to a cloud-based anti-virus checker, and transfer the results of that to the air gap machine using a one-way process.)
A friend's house is not "anonymous". If you need an air gap, then you should probably assume that your attackers have the ability to suss out your offline and online social networks. In a not-too-distant future, it's not hard to imagine a surveillance operative being able to expand their examination of network traffic to include not only you but also your associates, and then to detect when an online installation routine was run. At that point, the fact that that computer's fingerprint (however it may be calculated) was never seen again from that friend's home might be one flag of several in a comprehensive surveillance profile.
Though I guess if Schneier is talking about an off-the-shelf computer, I'm assuming he means a desktop that can't easily be hauled to a Starbucks two states away to connect to the WiFi. OTOH, I think I would prefer a Linux laptop as my air-gapped computer.