One of the things I'm taking from this episode is just how worthless a lot of prognostication about security by a load of "experts" on the internet is, especially of the "trust us to get it right" variety.
Frankly, their whining about how hard crypto is is partly responsible for the monoculture we have. Yes, it's difficult (more so in protocol than in implementation), but they are so off-putting to new people coming into the field it is insane.
Clearly OpenSSL dev is broken, at least partly because everyone assumes everyone else is auditing all 300k lines of it, but also I can't help wondering if this calls for stronger component isolation within cryptosystems. For example, protocol implementation, encoding and decoding seem like they should all be totally isolated, so a disaster like this doesn't mean you could be leaking information from the rest of the system. I imagine many a HSM vendor has been quite pleased by this news.
Surely, Nigel, if you'd just implemented your own encrypted transport, everything would have worked out great. Look how well that worked for Cryptocat.
The most consistent complaint about OpenSSL is that it is developed amateurishly (I think it's more complicated than that, but "amateurish" is a fair summary of the critique). Your response seems to be, "double down on amateurish!"
PS: The only reason you know about heartbleed is that one of the self-proclaimed "experts" found it for you. Neel Mehta is an insider's insider.
The good news (sort of) is that it's an ordinary out-of-bounds read, a missing bounds check. We've known that memory-safety bugs like this are a problem for decades and we've known how to check for them automatically for decades. Even in C's niche there are research prototypes of safe C-like languages. And yet there seems to be little effort to make them production-ready and deploy them; what kind of market failure is that?
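For what it's worth, the bug class is simple enough to model in a few lines: the server builds its reply from a length field the attacker supplies, rather than from the length of the data that actually arrived. A toy sketch (Python standing in for the C `memcpy`; the buffer layout and contents are invented for illustration):

```python
# Simplified model of the Heartbleed bug class. The received payload sits
# at the start of a larger memory region, right next to secret material.
SERVER_MEMORY = b"PAYLOAD" + b"-----SECRET-PRIVATE-KEY-----"

def heartbeat_unsafe(payload: bytes, claimed_len: int) -> bytes:
    # Vulnerable: echoes `claimed_len` bytes starting at the payload,
    # trusting the attacker's length field. Models
    # memcpy(reply, payload_ptr, claimed_len) reading past the buffer.
    return SERVER_MEMORY[:claimed_len]

def heartbeat_safe(payload: bytes, claimed_len: int) -> bytes:
    # The fix: validate the claimed length against what actually arrived.
    if claimed_len > len(payload):
        raise ValueError("claimed length exceeds received payload")
    return payload[:claimed_len]
```

The fix that shipped in OpenSSL 1.0.1g amounts to the second function's check: silently discard any heartbeat whose claimed payload length exceeds the record actually received.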
I think this is the final nail in the coffin for the ideas that "many eyes make all bugs shallow" or that code review alone by "experts" is enough to eliminate even the simplest of bugs. Use static analysis or go home.
There are plenty of other ways to screw up with crypto but they don't pass out the server's private keys just like that.
There are a few different "just trust us" mentalities, but I've become more comfortable with the "don't roll your own if you missed the first 10 minutes of the lecture" worldview.
Separating algorithm from implementation seems like it shouldn't be so difficult, but retrospect is a great teacher. So now we have approaches like NaCl, where the algorithms are designed to be hard to implement badly. And we have projects like LibTom, which aims to implement existing algorithms clearly. Both seem to be enjoying varying degrees of success.
This is (I hope) a once-in-a-lifetime incident, so we have to be careful not to extrapolate from it too hard. On the other hand, it's a seriously big deal. Overall, it seems to strongly challenge a couple of important assumptions.
As you say, it's a strong challenge to the mantra of "never implement your own crypto". I think (think) it still holds for the crypto primitives. If you're reimplementing AES you're probably doing it wrong. But the protocols? I'm not so sure now. Common wisdom seemed to be that if you implement the protocols yourself you'd screw them up, and you should stick with tried-and-true existing implementations. Now it's apparent that "tried" doesn't have to imply "true". Something being used by millions of people for years doesn't prevent it from having a huge vulnerability for years. Are your odds better or worse rolling your own? I'm not so sure now.
Consider Apple's "goto fail" bug, for example. Among a lot of other stuff, they caught some criticism for reimplementing TLS instead of just using OpenSSL. Well, if they had used OpenSSL instead, it turns out that they would have been shipping an even more serious bug for even more time.
It's also interesting to me how it challenges the idea of encrypting stuff by default. For years, people have been saying that as much traffic as possible should be encrypted, even unimportant stuff that nobody cares about. By doing that, the idea goes, using encryption isn't suspicious and you force attackers to spread out their resources. If only a small amount of traffic is encrypted, attackers can focus just on that traffic. Accordingly, a lot of sites that didn't really need it enabled SSL or even required it, including my own. By doing this, a lot of them inadvertently made things much much worse. A site that's only accessible over HTTP is much better off than a site that's accessible over HTTPS but vulnerable to heartbleed. I don't think the general idea is wrong, but it certainly gives me pause, and I think more consideration has to be given to the increase in attack surface you take on when you enable encryption.
In any case, I really hope we see some new crypto projects come out of this, or more resources put into existing OpenSSL alternatives.
"Catastrophic" is the right word. On the scale of 1 to 10, this is an 11.
No, it's not. If you asked me — I was a CISO not many years ago — I'd call this an 8. Schneier means well, but he has a tendency to exaggerate. (Here is an example of him suggesting that SOAP and other web services never be used because they "sneak" through HTTP and are therefore inherently insecure: https://www.schneier.com/crypto-gram-0006.html#SOAP)
A 10 would be a case where a bug was not easily patched, and gave complete control of servers to any interested script kiddie. Thousands or millions of web users would have had enough information for credit card and identity theft to be acted on before the hole could be plugged.
This is not that case. And it's certainly not an "11". Most vital websites have either already been patched or are about to be.
I'm going to have to go ahead and sort of disagree with you. A security library, specifically doing crypto, that is installed/embedded/used everywhere and that trivially leaks plaintext data remotely to anyone who comes knocking (including passwords, keys, CC numbers, etc.) is a complete failure.
Sure, it could hand out shells into a remote system, or hell it could launch a bunch of nuclear rockets as well...that would be very bad. But you seem to miss the point that perhaps someone's password to their shell (or maybe a nuclear launch code) is going over the wire and is intercepted...by a hostile government agency, or a 13 year old playing with a python script. There are endless, devastating scenarios one can think up caused by such a critical bug in the very fabric of the secure communication of the internet.
Heartbleed did, in theory, allow for millions of web users to have their credit cards, passwords, addresses, social security numbers, tax filings and more compromised.
Worse than "only in danger until patched": previously recorded traffic is retroactively vulnerable if, as on many sites, Perfect Forward Secrecy wasn't in use.
If the NSA took advantage of this at all, their logged traffic has become very useful...
"Even 11 is an understatement. Remember the servers involved have potentially been leaking their private key for their certificate! This means anyone can 'fake' being them.
It is not enough to issue new certificates. All of the old certificates could now be used for man-in-the-middle attacks! 2/3rds of the Internet's certificates potentially need to be blacklisted! This is a MAJOR disaster.
It is infeasible to blacklist such a large number of certificates, as every device requires a list of all blacklisted certificates. This means all of the major CAs are going to have to blacklist their intermediate certificate authorities and start issuing all new certificates under new CAs. This means even people who weren't affected will probably have to have their certificates blacklisted.
In short, EVERY existing CA used on the internet may have to be blacklisted, and every single SSL certificate re-issued.
IMO SSL/TLS is now completely broken. The number of certificates that have potentially been exploited and could now be used for man-in-the-middle attacks could be in the millions... the blacklist will run to millions of certificates, and/or the number of blacklisted sub certificate authorities is probably going to be 10,000+. Vendors already hate including just one or two items on the blacklist, let alone this many items.... "
It's not actually two-thirds of the Internet, but the effects could well be bigger than most people imagine at the moment: either mass blacklisting, or quietly ignoring the period of potential exposure.
Hopefully all this results in a push to change some of the principles of certificate verification, and maybe a different approach to OpenSSL development.
Eh, I remember how, about a month ago when that recent GnuTLS bug was found, the almost dominant sentiment on HN was along the lines of "how come anyone is using GnuTLS instead of OpenSSL", "the GnuTLS codebase is horrible, use OpenSSL", "the guy maintaining GnuTLS is an idiot, use OpenSSL", "OpenSSL has more expert eyes on it", etc. Although I prefer OpenSSL (for no particular reason), this all seemed so obviously stupid and shortsighted, not to mention some of it factually wrong. And what do you know: a month later we get an order-of-magnitude-worse bug in OpenSSL, one that was probably also an order of magnitude easier to detect. I made a comment[0] along those lines at the time, thinking to myself that I'd really hate to get to say "told you so" but that unfortunately I probably would. I didn't think it would be this bad, though.
Even if we generate a new key pair and replace our certificate, aren't we still vulnerable to MITM attacks if someone downloaded the old private key and uses the old certificate?
That's why it's so vital for everyone to implement Perfect Forward Secrecy. Yes, it's a little late for that now with regard to this bug, but who knows what other bugs like this will be discovered in the future. Let's at least not make the same mistake twice: take advantage of PFS, which could have prevented most of the damage from Heartbleed.
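For anyone wanting to act on this, forward secrecy is mostly a matter of preferring ephemeral (EC)DHE key exchange in the server's cipher configuration. A minimal nginx-style sketch (the cipher list is one plausible choice for the era, not a vetted recommendation):

```nginx
# Prefer ephemeral key exchange (ECDHE/DHE): session keys are never
# derivable from the server's long-term private key, so a later key
# compromise cannot decrypt previously recorded traffic.
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers "ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:!aNULL:!MD5:!RC4";
```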
Troy Hunt: ”The Heartbleed bug itself was introduced in December 2011, in fact it appears to have been committed about an hour before New Year’s Eve (read into that what you will). The bug affects OpenSSL version 1.0.1 which was released in March 2012 through to 1.0.1f which hit on Jan 6 of this year. The unfortunate thing about this timing is that you’re only vulnerable if you’ve been doing “the right thing” and keeping your versions up to date! Then again, for those that believe you need to give new releases a little while to get the bugs out before adopting them, would they really have expected it to take more than two years? Probably not.”
There is virtually no useful software vulnerability for which you can't conjure up a compelling-sounding narrative of deliberate introduction (or "bugdoor"). It's like numerology. So you should be wary of people insinuating about bugs.
What makes Dual_EC so compelling to experts is the "Nobody but us" nature of the flaw: the bug is cryptographically limited to a small number of actors. FedGov buys hundreds of millions of dollars of COTS gear with OpenSSL embedded, and this bug is so simple that middle-schoolers are exploiting it. You shouldn't even need to ask if it was deliberate.
Schneier.com Has Moved
As of March 3rd, Schneier.com has moved to a new server. If you've used a hosts file to map www.schneier.com to a fixed IP address, you'll need to either update the IP to 204.11.247.93, or remove the line. Otherwise, either your software or your name server is hanging on to old DNS information much longer than it should.
Ok, how should I "authenticate" that the site at the new address is the "real" one?
Forgive my naivety here, but is there any way to tell which of the sites I/we have used over the last 2 years may require new passwords, and whether they've been fixed?
What I'm kinda looking for is a site that lists the major sites (banks, social networks, shops, etc.) and shows a status for each: fine / should change password / await fix before changing password.
Seriously, it's easier to just change all of your passwords than to hunt down a list (that will be incomplete and give you a false sense of security), cross-match against servers you might have an account on, then change their passwords.
Just change them all and be done with it.
The "whether they've been fixed" part is a little tougher, because that lets you know when you should change your password. General sentiment I've been seeing is give it a week for everybody to fix their stuff (even this might be a little long) and then change your passwords. If a given site says either "we weren't affected, here's why" or "we've patched our stuff, we're all good" then you should change your password on that site ASAP.
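One rough way to automate part of the "have they fixed it" check: a certificate whose notBefore date predates the public disclosure (2014-04-07) was, at best, never reissued. A sketch using Python's standard ssl module (this heuristic is mine, not an official test, and it says nothing about whether the server itself is patched):

```python
import socket
import ssl

# Heartbleed was publicly disclosed on 2014-04-07.
DISCLOSURE = ssl.cert_time_to_seconds("Apr 7 00:00:00 2014 GMT")

def cert_reissued_since_disclosure(cert: dict) -> bool:
    """True if the cert's notBefore postdates the Heartbleed disclosure.

    `cert` is the dict returned by SSLSocket.getpeercert().
    """
    return ssl.cert_time_to_seconds(cert["notBefore"]) > DISCLOSURE

def fetch_cert(host: str, port: int = 443) -> dict:
    # Fetch the peer certificate from a live server (network required).
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()
```

A pre-disclosure notBefore on a site known to have run vulnerable OpenSSL suggests the key was never rotated, so hold off changing that password until the cert is reissued.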
I'm 100% positive that many targets have had their keys extracted, but it's hard-to-impossible for the attacker to choose what fragment of memory the server returns, and it depends heavily on the server in question. What works against nginx won't work against lighttpd or apache.
I hit a site I control repeatedly yesterday and couldn't even find any byte sequences in common across hundreds of connections.
Of course, as good practice, all organizations should treat their keys as compromised and issue new ones.
Also, the claim that "it leaves no trace" is a problem: it's trivial to recognize the traffic pattern.
And that's why every single login system should have two-factor auth. I started using Google's Authenticator app for my Google and GitHub accounts and it works just great. I wish I could use it for every account I have.
Just a question (and I don't know too much about this). Is there any chance that certificate authorities who give out warranties could actually have to pay out on them now? Do any of them use OpenSSL?
Without looking at the specifics, the CA can't be held responsible for you leaking the key yourself.
Or do you mean if the CA companies themselves were compromised? That's a big separate issue. Even if the web process is the one that generates the keys (I'm skeptical, but it's possible), any keys made that way would quickly be moved out of memory, unless they were made that day.
I think the warranty only covers losses that occurred during the use of the certificate. If liability weren't limited, Heartbleed could have caused a "Lehman Brothers"-style default across the CAs.
Is there really a point in changing keys and passwords? It seems to me that if an attacker got the passwords, I should assume they've already installed a rootkit on my server.
I'm honestly not sure how to react. I'm not really a sysadmin, but I have a server online.
I suppose I could start a new server, but how can I be sure that the provider has already patched all their holes? If they've been hacked, maybe the images they use for preparing new servers have been compromised, too? Might be better to wait a little before restarting everything from scratch?
Request for clarification from those who understand the bug's workings:
The memory it can expose is limited to that visible to the process using OpenSSL, right? Or does the bug reside low enough in the kernel stack to disregard memory protections?
One question - this keeps talking about attackers being able to "read all of memory." Does anyone know whether that's limited to the process that is running OpenSSL code?
At this point it's safer to say that an intelligence agency is responsible than that they aren't responsible. This is precisely what Schneier, Greenwald, et al. mean when they say that the NSA tactics degrade the security of the overall internet architecture. It's incredibly dangerous.
How can you make such a claim? Do you have any proof that they were involved with this specific bug?
I get that the NSA is after us, but when you consider that the bug is of the exact same class as bugs every C programmer has made at some point in their career, it seems probable that it happened by accident. Where do you see the malicious intent?
My personal inclination: memory safety in C is hard. Enough of these bugs pop up on their own that no encouragement is needed. Whether interested parties knew of it and used it as the key to an all-you-can-eat intelligence buffet is another story.
Nah, I am extremely pro-Snowden & extremely anti-NSA... but I'm also a person that enjoys programming in C.
C is hard. I really think this was just a bug. What _is_ possible, though, is that the NSA has known about this bug for a while and kept it secret. But then, if they knew about the bug, why was nasa.gov vulnerable? I would not expect any .gov domains to be vulnerable, unless it's to create plausible deniability - but this kind of conspiracy logic has no end.
Interesting point for me is that those who serve over HTTP only are not vulnerable, which I think is a good case study in the risk of complexity. A lot of security experts have been calling for HTTPS everywhere, on the basis that it is low cost. Clearly there is a cost to the extra complexity, and in this case a bug in the security layer that results in a worse situation than if there had been no security layer at all.
> Interesting point for me is that those who serve over HTTP only are not vulnerable
...yes they are. You don't even need remote private-key disclosure to MITM an HTTP-only server. The way HTTP Digest auth is specified means you either store all passwords in a retrievable form or drop down to Basic auth, where anyone capable of base64-decoding can read every password.
This situation is by no means worse than if everyone had just used plain HTTP.
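The Basic auth point is concrete: over plain HTTP the credentials travel in a trivially reversible encoding, so any on-path observer recovers them. A quick illustration (the header value is the stock example pair from the HTTP Basic auth spec, not real credentials):

```python
import base64

def read_basic_credentials(header: str) -> tuple:
    # "Decrypting" a Basic auth header is just base64-decoding it:
    # anyone who can see the request on the wire can do this.
    encoded = header.split("Basic ", 1)[1]
    user, _, password = base64.b64decode(encoded).decode().partition(":")
    return (user, password)

# The canonical example header from the HTTP Basic auth spec:
wire_header = "Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ=="
```

Here `read_basic_credentials(wire_header)` yields `("Aladdin", "open sesame")`, which is the whole point: no key material is needed at all.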
That comparison is off. HTTPS secures communications; this bug exposes memory. You still want the former even at the risk of the latter, and HTTP doesn't offer it at all. Minimizing complexity is a worthwhile goal, but definitely not the only one; otherwise we could simply opt not to have communications that need securing at all.
area51org: We can now keep calm and carry on.

orthecreedence: This is a pretty big deal.

acqq: http://arstechnica.com/security/2014/04/critical-crypto-bug-...

nzp: [0] https://news.ycombinator.com/item?id=7346879

danielweber: I was working on a standalone certificate checker last year but couldn't figure this one out.

yp_master: I know, I'll use OpenSSL and HTTPS!

cliveowen: http://blog.cryptographyengineering.com/2014/04/attack-of-we...

dfa0: To the point, when the nature of a thing is to foo and you remove all obstacles from that event, expect positive feedback... and lots of it.

mattgreenrocks (on whether the exposed memory is limited to the process using OpenSSL): Yes. Only that process.

apawloski: But if your site transmits or receives anything that a third party shouldn't see (hint: it probably does), you should start using SSL.