Also, remember that some programs don't respect the system's proxy settings and use their own instead. Firefox is one of them; you can find its proxy settings under "Advanced -> Network -> Settings".
This is a stupid question, but what about a guy like me who has no access to a server?
I'd like to buy such a server at low purchase and maintenance cost.
There are probably going to be a lot of people negatively affected by this for quite some time to come. One thing to point out is that there are grades of things. There is "public", and then there is "top hit on Google". Similarly, there is "insecure" and then there is "simple doubleclick tool to facilitate identity theft".
This is kind of a big deal. Not a whole lot of people are aware of this vulnerability and among those who are it's likely only a small subset that knew how to exploit it until now. I suspect all of the coffee shops in the college town where I live will have people using this starting tomorrow.
This is one of many reasons Loopt has used SSL for all[1] traffic from the very beginning. At least WiFi has fairly limited range. Cell networks[2] (and satellite internet[3]) can be sniffed miles away.
1. Not images
2. Older GSM crypto can be hacked in real time with rainbow tables now
3. Usually not encrypted at all
Indeed, Loopt appears to be one of the few high-profile sites to have done this right. SSL for everything, and cookies that are relevant to login sessions are marked secure. This is what we need everywhere!
Nice. A solid demonstration to show next time your webmaster doesn't want to set up SSL everywhere.
HTTPS also traditionally needs distinct IP addresses for distinct hostnames, so that the Host: header (sent as part of the HTTP request) is already protected by the encrypted channel. No more having multiple websites on one IP address. (TLS Server Name Indication addresses this, but older clients still lack support for it.)
SSL is bad for the environment because it requires far more server-side hardware... Well, I'm only half serious about the environment thing. The real question is: how can internet companies make it commercially viable to use SSL for everything? The added hardware and power costs make each user considerably more expensive, possibly to the point where they aren't actually worth it.
You can get SSL certificates for free for one domain, and they work with all browsers (except Opera, IIRC). Also, you can use Perspectives for Firefox, which I think is much better than the current system.
Startcom offers free no-BS certificate signing, and their CA is on most modern browsers, I believe. I think they could be suitable for small scale projects.
Thanks for posting this. It convinced me to upgrade SSL support from "something that would be nice to implement if I was bored someday" (BCC is not exactly security critical -- except, on reflection, the admin pages) to "drop everything and get it done."
I thought the title of this submission was slightly misleading. This is not a security vulnerability from within Firefox, it's a Firefox plugin to reveal security vulnerabilities in a wide range of websites.
Sorry if it was misleading somehow, this is definitely not a vulnerability in Firefox. It's a Firefox extension that makes it easy to execute HTTP session hijacking attacks.
Thanks to the EFF and the Tor Project, we need not worry as much: their HTTPS Everywhere project is a plugin for Firefox: http://www.eff.org/https-everywhere/
The explanation I've always heard for not using HTTPS 100% of the time is that it puts a substantial load on the server, and for many sites it's overkill. Setting aside the subjective topic of "overkill" ... how much more CPU-intensive is it to serve pages over HTTPS compared to HTTP?
The CPU-intensive part of an HTTPS connection is the initial key negotiation/session setup, which uses asymmetric encryption. The symmetric encryption of the actual traffic is comparatively trivial.
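You can see the asymmetry for yourself with OpenSSL's built-in benchmark (this assumes a reasonably recent OpenSSL that supports the -seconds flag; rsa2048 approximates the handshake's asymmetric work, aes-128-cbc the bulk encryption):

```shell
# Compare asymmetric (handshake-style) vs symmetric (bulk) throughput.
# -seconds 1 keeps the benchmark short.
openssl speed -seconds 1 rsa2048 aes-128-cbc
```

The AES lines report hundreds of megabytes per second, while RSA signing is measured in mere operations per second, which is why per-request cost is dominated by session setup.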
Makes a strong case for everyone to start tunneling their traffic back to a trusted network.
I think this should be a call to arms to network, web and system admins everywhere. This is a problem that everyone knows about but nobody wants to do anything about since it requires additional setup. Usually the barrier is a technical issue that the end user can't figure out. However since submitting forms via SSL is something the developer can do without impacting the end user at all, this is a simple fix for just about any website. You need a static IP and an SSL certificate, and they are both cheap.
It seems fine to just enable SSL everywhere. But indulge me for a second in thinking of alternate solutions.
I think all cookies are sent with every request, so cookies can't be used to (securely) pass data to the next page. It'd work just fine on the login page, but every page after that would have to renegotiate to generate a new cookie, meaning you basically just created SSL everywhere.
Haven't thought too hard about passive attacks, but you're not secure against an active MITM like airpwn (http://airpwn.sourceforge.net/Airpwn.html), because the MITM can inject JS into the unencrypted content that steals your JS security scheme's secrets. Effectively, an active MITM allows XSS on plain ol' HTTP sites.
gaoshan | 15 years ago
Open an ssh connection to a server you have access to using something like the following:
ssh -ND 8887 -p 22 user@yourserver
where 8887 is the port on your laptop that you will tunnel through, -p 22 is the port the ssh server is on (22 is the default, but I use a different port, so I'm used to specifying it), and the rest is your username and the address of the server
Set your network to point to the proxy. On a Mac that would be…
... Open Network Preferences…
... Click Advanced…
... Click Proxies…
... Check the SOCKS Proxy box then in the SOCKS Proxy Server field enter localhost and the port you used (8887)
... OK and Apply and you are done!
Now you can surf safely.
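The System Preferences steps above can also be scripted (macOS-specific; this assumes your network service is named "Wi-Fi" - check yours with `networksetup -listallnetworkservices`):

```shell
networksetup -setsocksfirewallproxy "Wi-Fi" localhost 8887
networksetup -setsocksfirewallproxystate "Wi-Fi" on
# ...and to turn it back off when you leave the cafe:
networksetup -setsocksfirewallproxystate "Wi-Fi" off
```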
alanstorm | 15 years ago
conanite | 15 years ago
hdeshev | 15 years ago
I am using the Tomato firmware (http://www.polarcloud.com/tomato) which has an SSH daemon.
chrisbroadfoot | 15 years ago
someone_here | 15 years ago
jonknee | 15 years ago
alias socks="ssh -ND 8887 -p 22 user@yourserver"
That way I can just open a shell and type "socks" and be good to go (well and then do the system preferences deal, but I have an AppleScript that does that automatically).
cturner | 15 years ago
16s | 15 years ago
SSH_Server -> FaceBook == Unencrypted
SSH proxies are not end to end encryption. They only protect part of the path. Not sure why this is being down voted. It's true. The tunnel is only between the client and the SSH server. The HTTP websites that you visit beyond the SSH server see your clear text packets.
gasull | 15 years ago
Silence Is Defeat provides SSH accounts for a small donation.
(I am not affiliated with them)
Dornkirk | 15 years ago
I'm going traveling for all of next month. The only sites I'll be logged into are my Hotmail account and possibly my bank (Chase) - both use https, so I suppose I'm in the clear? (Also, when I click "log out" on these sites it logs me out, but if my session has been hijacked, will it log the hijacker out of his copy of my session as well?)
pygy_ | 15 years ago
The Pandaboard[1] looks like a good fit, but the instructions to install a Linux distro are a bit scary [2]. I guess I could do it, from my Mac, but I'm a bit afraid to mess things up with the low-level disk utilities.
Does someone sell SD cards with a distro pre-installed? Or an equivalent device with an easier setup?
If not, there's probably a market for that...
[1] http://pandaboard.org/
[2] http://omappedia.org/wiki/OMAP_Pandroid_Main#Getting_Started
dhyasama | 15 years ago
ramanujan | 15 years ago
How many millions of dollars and man hours is it going to take to lock down every access point? How many new servers are going to be needed now that https is used for everything and requests can't be cached?
America was a better place when people could keep their doors unlocked, and when someone's first response to a break-in was to blame the criminal. By contrast, it's fashionable among a certain set (no doubt including the author of this mess, Mr. Butler himself) to hold that the real culprits are the door manufacturers. What said facile analysis excludes, of course, is that there is always a greater level of security possible. The level we currently employ reflects our tradeoffs between the available threats and the cost/convenience loss of bolting our doors and putting finials on our gates.
Butler has simply raised the threat level for everyone. He did not invent a new lock or close a hole. He's now forcing lots of people to live up to his level of security. Congratulations to the new Jason Fortuny.
carbon8 | 15 years ago
I've personally been working from cafes and tunneling everything through SSH for years, but in my experience almost no one else does this.
n-named | 15 years ago
I can't think of a more effective way for him to convince them all to update now.
petercooper | 15 years ago
To where? I suspect it's to a server, VPS, or similar, and the connection is unencrypted from there to its endpoint. This being the case, could someone with a server on the same subnet be running a browser remotely (or even just tcpdump) and doing a similar thing with your logins?
(This is just some thinking out loud and I may be totally wrong - correct me ;-))
kogir | 15 years ago
In addition to making session hijacking harder, using SSL keeps crappy proxies from caching private data. Remember when some AT&T users were getting logged in as other users on Facebook's mobile site? The cause was a mis-configured caching proxy.
Raising awareness of issues like this gets them fixed. Until a service's users demand SSL, it won't be offered. Unless the service is Loopt :) It's not a noticeable computational burden, but it does increase latency and cost money (for certs).
cdine | 15 years ago
mmaro | 15 years ago
[1] http://www.radiolabs.com/products/antennas/2.4gig/long-range...
Groxx | 15 years ago
That said, the current cartel-like setup of certificate authorities (protection money and everything!) makes SSL annoying and expensive if you want the browser to not have a fit. Especially for small-scale projects. But there's really no excuse for larger sites.
limmeau | 15 years ago
bluesmoon | 15 years ago
An alternative is to bind the user's session to their IP address, but that isn't foolproof either, because of NAT, DHCP, and certain big ISPs that change IPs on the fly.
What cost-effective solution would you suggest?
StavrosK | 15 years ago
jberryman | 15 years ago
chaosmachine | 15 years ago
Ouch. I think it's time to set up that VPN I've been putting off...
patio11 | 15 years ago
leftnode | 15 years ago
wwortiz | 15 years ago
cdine | 15 years ago
StavrosK | 15 years ago
eapen | 15 years ago
amazon basecamp bitly cisco cnet dropbox enom evernote facebook flickr foursquare github google gowalla hackernews harvest live nytimes pivotal sandiego_toorcon slicemanager tumblr twitter wordpress yahoo yelp
muloka | 15 years ago
Any questions:
http://www.eff.org/https-everywhere/faq
kijinbear | 15 years ago
jorgem | 15 years ago
uptown | 15 years ago
pauldino | 15 years ago
Quoting from that, "On our production frontend machines, SSL/TLS accounts for less than 1% of the CPU load, less than 10KB of memory per connection and less than 2% of network overhead."
caf | 15 years ago
You can amortise the session-setup cost by ensuring SSL session caching is enabled on your server (in Apache, the directive is SSLSessionCache). This lets subsequent connections from the same client re-use the same SSL session.
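A minimal Apache sketch (server-wide config; the shmcb cache type assumes shared-memory cache support is compiled in, and the path and size here are illustrative):

```apache
SSLSessionCache shmcb:/var/run/ssl_scache(512000)
SSLSessionCacheTimeout 300
```

With this in place, a returning client can resume its SSL session with an abbreviated handshake, skipping the expensive asymmetric key exchange.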
thought_alarm | 15 years ago
atomical | 15 years ago
EDIT: Sorry, I was asking specifically how this FF extension works.
EricButler | 15 years ago
audidude | 15 years ago
jmreid | 15 years ago
I've been trying out sshuttle <http://github.com/apenwarr/sshuttle>. It only tunnels TCP traffic, so DNS and UDP traffic still go out over the local network.
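For reference, a typical invocation; the --dns flag (present in later sshuttle versions - an assumption worth checking against your copy) also captures DNS queries, closing the leak mentioned above. The hostname is a placeholder:

```shell
sshuttle --dns -r user@yourserver 0/0   # 0/0 = tunnel all IPv4 traffic
```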
mike_esspe | 15 years ago
ddrager | 15 years ago
Running out of IPv4 space is an issue in this regard, but hopefully with more people wanting SSL it will push providers to IPv6 quicker. Nicely done EricButler!
jdunck | 15 years ago
It just happens that they released w/ support for social networks as a demonstration.
mcmc | 15 years ago
Instead of sending a cookie, send a piece of javascript code (as part of the SSL-cloaked login handshake) that generates a new cookie for each request, and consider each new cookie in this sequence a "one time use" token. You can turn off SSL for subsequent requests and just use one of these new cookies each time to verify identity because an attacker won't have your cookie generator.
This javascript is really just an encryption key and algorithm, and if you implement it correctly, it should take quite some time for snoopers to reverse engineer the encryption key based on a sequence of one-time-use cookies.
Logistically, I suppose you would run into some trouble setting a new cookie for each request depending on how the page is loaded. For instance, if the user pastes a url into a new tab manually, then this system wouldn't have a chance to set the new cookie first.
However, I think you could architect a system that solves this. For instance, put the javascript token generator source in local storage. If a new page loads with an invalid key, that new page can just get the cookie generator code out of local storage and manually refresh the page's content by making a request with a valid token. This should be quick enough for most users not to notice, in the rare case that they circumvent the site's usual navigation.
A downside is obviously that the content itself is still not safe, but at least the account would be. Any thoughts?
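One hypothetical way to make the chained-token idea concrete, swapping the bespoke "cookie generator" for an HMAC (with an HMAC, observed tokens don't reveal the next one at all, rather than merely being slow to reverse-engineer). The secret below is a placeholder standing in for a value delivered once over the SSL login handshake:

```shell
# Derive a fresh one-time token per request by HMAC-ing a request counter
# with a per-session secret that never travels over plain HTTP.
secret="per-session-secret-from-ssl-login"   # placeholder value
counter=0
next_token() {
  counter=$((counter + 1))
  # 64 hex chars; awk grabs the digest from openssl's "(stdin)= <hex>" output
  printf '%s' "$counter" | openssl dgst -sha256 -hmac "$secret" | awk '{print $NF}'
}
next_token   # token for request 1
next_token   # token for request 2; replaying token 1 would fail server-side
```

The server, knowing the same secret and counter, recomputes each expected token and rejects any token it has already seen.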
Groxx | 15 years ago
Local storage, however, could probably be used to do just such a thing, as it exists only locally. In which case you could just have the login page generate an RSA key pair, receive the server's public key in the response, and use that for any kind of secure communication on each page load. The server would have to remember sessions => encryption keys, but that's not too hard.
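A rough CLI analogue of that key exchange, with openssl standing in for what the page's JavaScript would do (filenames and the message are illustrative):

```shell
# Generate a client RSA key pair, publish the public half, and show that
# only the private-key holder can read what was encrypted to it.
openssl genrsa -out client.key 2048 2>/dev/null
openssl rsa -in client.key -pubout -out client.pub 2>/dev/null
printf 'session-token' | openssl pkeyutl -encrypt -pubin -inkey client.pub -out msg.enc
openssl pkeyutl -decrypt -inkey client.key -in msg.enc   # recovers the plaintext
```

In the scheme above the roles are mirrored per direction: the server encrypts to the client's public key and vice versa, so a passive sniffer on the cafe network sees only ciphertext.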
swolchok | 15 years ago
meelash | 15 years ago