As stated in the announcement and Tom's email to -hackers, the reasons for advance notification are as follows:
* People watching for vulnerabilities and contributors are going to notice that we've stopped automatic updates -- it's better for our project to just tell them all why
* Upgrading relational databases is often not trivial -- we want to give our users time to schedule an upgrade rather than just dropping an important update suddenly
Wouldn't it make somewhat more sense to branch to a private repo without telling the public, make the required changes there, create the packages from that branch, and then later push the changes into the public repo?
The way they are doing it now entices hackers who don't know the exploit but happen to have a recent clone of the repo to go looking for the big hole in hopes of finding it ahead of the fix. Granted, hackers are probably already doing that sort of thing on high-profile projects like PostgreSQL to begin with, but in my experience it is easier to find something exploitable when you already know something exploitable exists than when you're just randomly poking around. At the very least it makes it easier to stay motivated and focused.
Just knowing there is a pre-auth RCE in the code base buys you very, very little. Statistically speaking, there are probably quite a few undiscovered flaws right now in any compact, core Linux distribution. The fact that no one yet knows what they are is precisely what prevents their exploitation. Security holes are numerous, and the ones that have escaped detection generally continue to do so - the rate of co-discovery in the field is very low.
Warning ahead of time is thus often very useful - it allows the infrastructure to prepare to make the changes quickly. This is the same reason that folks like Microsoft consolidate most patches into standardized cycles.
Yes, folks are already attempting to find exploitable weaknesses in these projects. We can assume they exist. Just mentioning that one is confirmed doesn't really lend any insight. The attack surface of that project is pretty huge.
If I had to guess where it is, though, I'd bet it was in a PL module. I'm sure there is quite a bit of activity around finding NativeHelper-like situations.
The repo is only being hidden for a week ("until Thursday morning"), which to me implies that the fix is localized and/or well understood. That's a pretty small window of time, so I'm not sure that it will provide any additional impetus for exploit developers.
This is an interesting tradeoff between responsible security and open-source transparency that the Postgres team is facing. I personally think this is a good way to handle a situation with a serious bug, but there are some questions it raises...
Is Postgres working with downstream teams to have everything in place for a coordinated security release? For instance, are they working with the likes of Debian's security team to not only make the source pullable, but also have releases available to as many users as possible in each platform's preferred formats?
If they are, how do they keep this under wraps? It seems like the kind of thing that would require a fairly wide "pre-disclosure", and managing trust in a large network gets hard.
Those are some good observations. Most likely they have given the information to Debian security. With something like this, there is a degree of trust that is maintained. The Debian security team has access to other zero-days on a regular basis, so ideally they aren't compromised. It wouldn't surprise me if A/B tests were performed on security experts on a regular basis, e.g. two exploits discovered, one sent to half the team, the other sent to the other half. After log(n) iterations, potential leaks are exposed.
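The log(n) leak-hunt described above is just a binary search over suspects. A minimal sketch, where the team names and the `leaks` oracle are hypothetical stand-ins for "did the canary sent to this group show up in the wild?":

```python
def find_leaker(suspects, leaks):
    """Binary-search for a single leaker.

    `leaks(group)` simulates sending a unique canary exploit to
    `group` and returns True if that canary later leaks out.
    """
    pool = list(suspects)
    rounds = 0
    while len(pool) > 1:
        half = pool[: len(pool) // 2]  # send a distinct canary to this half
        rounds += 1
        pool = half if leaks(half) else pool[len(pool) // 2 :]
    return pool[0], rounds

# Hypothetical 8-person team; 'mallory' is the simulated leaker.
team = ["alice", "bob", "carol", "dave", "erin", "frank", "grace", "mallory"]
leaker, rounds = find_leaker(team, lambda group: "mallory" in group)
print(leaker, rounds)  # mallory 3 -- log2(8) iterations
```

In practice each round costs a real disclosure, so this is more thought experiment than standard procedure.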
Diving further, at what level do you say you trust the system? Do you trust your compilers not to inject malicious code? (see http://c2.com/cgi/wiki?TheKenThompsonHack) Do you trust peripheral devices? It's very easy to install a physical keylogger in a system. Do you trust your chipsets? Compromised chipsets exist and can be used against you. (http://blogs.scientificamerican.com/observations/2011/07/11/...)
It's a tough situation to deal with. This is part of the reason layered security solutions are typically employed: even if one system has a zero-day, multiple layers should increase the overall complexity of triggering it. One of those layers is security teams and blackout periods during which information is not released to the general public, even if they aren't always effective.
As much as I don't like stuff being hidden from me, I think this is a good move. The title made me think it was a permanent move, but it's just until this update is completed. The bad part is that it's obviously a very serious vulnerability...
What sort of security vulnerability would justify this extra paranoia? The worst case is that it's something affecting the very common case of Postgres servers that only talk to local services - say, a Unicode or quoting error that made sites which nominally quote their queries correctly vulnerable to SQL injection. That would be as serious as the recent Rails vulnerabilities: drop everything and patch everything everywhere, or definitely be rooted.
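For concreteness, this is the injection class being hypothesized. A minimal sketch using Python's stdlib sqlite3 as a stand-in for a Postgres driver (the table and attacker input are made up); parameterized queries are the standard defense, though the scenario above imagines a server-side bug that could undermine even correct quoting:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

attacker_input = "alice' OR '1'='1"

# Vulnerable: string interpolation lets the input rewrite the query.
unsafe = f"SELECT * FROM users WHERE name = '{attacker_input}'"
print(len(conn.execute(unsafe).fetchall()))  # 1 -- the OR clause matched every row

# Safe: the driver sends the value out-of-band, so nothing matches literally.
safe_rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (attacker_input,)
).fetchall()
print(len(safe_rows))  # 0
```

With psycopg2 against a real Postgres the placeholder is `%s` rather than `?`, but the principle is the same.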
Be ready to patch as soon as it's out; this could be a big deal.
This bug shouldn't be a huge deal: if you aren't already treating your sensitive database server as exploitable from any machine with network access to it, you've already lost.
Even if your DB server is properly restricted, you should still patch quickly, but there is no way that it should be reachable unless you're already heavily compromised.
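As a sketch of what "properly restricted" can mean at the Postgres level, a hypothetical pg_hba.conf that allows only local connections and a single application host (names and addresses are illustrative, not a recommendation for any particular deployment):

```
# TYPE  DATABASE  USER     ADDRESS        METHOD
local   all       all                     peer
host    appdb     appuser  10.0.1.5/32    md5
host    all       all      0.0.0.0/0      reject
```

Network-level controls (firewalls, private subnets) belong in front of this as an additional layer.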
They can tell Debian what is basically in this mail, and Debian can be ready to accept a new package.
EDIT: Here is the answer: http://www.postgresql.org/support/versioning/
8.0 was EOL'd in 2010, but 8.4 will be supported through July 2014.
A bug in query parameter parsing that would allow SQL injection attacks?