Imagine you implement every possible security measure...
Keeping your entire server stack up to date, making sure you have SSL, using strong encryption for logging in, hashing the passwords, making sure your server can only be reached via SSH, adding firewalls, filters, etc. etc.
Then some hacker in Eastern Europe comes along (or some beginner at the NSA/GCHQ), finds out that your .git is exposed, and from it gains all your vital user data and admin data.
Being bashed with a boulder repeatedly would probably be less painful than the torture of knowing "I did it all, but they got me with an HTTP request... because nobody thought of double-checking what our VCS is doing".
How many other glaringly obvious mistakes might be out there right now? I can only imagine.
You seem to imply this is a novel attack vector. But it's really just an instance of a very old mistake:
Don't use the root of your app as document root!
It's really as simple as that. Almost all modern apps have a subdirectory "public/" or similar. That one is meant to be used as document root. You only have to ensure there are no sensitive files in there.
If you fail to introduce such a directory, you're left playing cat-and-mouse: you have to add an extra webserver rule for each kind of sensitive file - VCS metadata, crypto secrets, private keys, and so on. In that setup it's all too easy to forget one, which is exactly what creates the feeling of "How can anybody keep track of this never-ending list of security details?"
In the end, this is a blacklist vs. whitelist thing. As with your firewall, you want one rule that blocks everything and then allows only specific things. The alternative is to allow everything, add rules to deny all the sensitive stuff, and eventually get in trouble for having forgotten one rule (e.g. because an additional service was introduced after the firewall rules were written).
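A minimal sketch of that whitelist setup in nginx syntax (hypothetical paths - adjust to your own layout): the document root points only at public/, so .git, config files, and everything else above it is simply unreachable over HTTP.

```nginx
server {
    listen 80;
    server_name example.com;

    # Serve only the app's public/ subdirectory.
    # /srv/app/.git, /srv/app/config.ini, etc. live above the
    # docroot and can never be requested directly.
    root /srv/app/public;
}
```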
This is the asymmetric nature of security in general.
You only need to make a single mistake and you are hosed. Your attacker can fail an arbitrary number of times and only needs to succeed once.
If you are 99.9% likely to make the right call on anything that could have a security impact, then you only need to make about 1000 decisions before you've probably screwed one up and opened a hole.
Some would say this means true security is impossible.
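The arithmetic behind that claim can be checked with a one-liner (the 99.9% figure is the comment's own assumption):

```shell
# Chance of getting all 1000 independent 99.9% decisions right:
awk 'BEGIN { printf "P(no mistakes) = %.2f\n", 0.999 ^ 1000 }'
# prints: P(no mistakes) = 0.37
```

So under that model there's roughly a two-in-three chance that at least one hole exists.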
A good practice is to disable features that you don't use. I don't think many people need their hidden files to be remotely accessible, so maybe they should either remove the permissions or set a flag in their server so it doesn't allow downloading them.
I gave a conference talk at DerbyCon on exactly this, regarding startups. The number of obvious holes is incredible: founders not knowing what XSS is, writing bad PHP apps with obvious code-execution vulns, or making glaring logic and auth mistakes that allow full account hijacks.
This is what makes security hard: To attack successfully you only have to find one significant mistake, to defend successfully you can't make any mistakes.
Well, that's why you either have a team of competent people making sure all your stuff is up to date, routinely performing pentests, etc., or you delegate as much as possible of those responsibilities to 3rd parties (e.g. Heroku).
Obviously you shouldn't be storing sensitive information in your codebase (I hope everybody knows that), but the problem here is that you might have been doing so way back when you were prototyping, and only later moved the secrets out of the codebase. It's really common to start a codebase by hacking something together with hardcoded secrets.
If you have the proper secret segregation now, but you're deploying by doing a git pull, now you run the risk of not really having segregated secrets all over again.
You probably should revoke all your existing credentials and replace them with fresh ones as soon as you pull them out of the VCS. That way, your attackers have the credentials, but they don't work anymore.
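A quick way to see why rotation is necessary (throwaway demo repo; "hunter2" stands in for a real credential): deleting the file does not delete it from history.

```shell
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
echo "DB_PASSWORD=hunter2" > config.ini
git add config.ini
git -c user.email=dev@example.com -c user.name=dev commit -qm "initial commit"

# "Remove" the secret the naive way:
git rm -q config.ini
git -c user.email=dev@example.com -c user.name=dev commit -qm "remove secret"

# Anyone with a copy of .git can still fish it out of history:
git grep -i "hunter2" $(git rev-list --all)
```

Rewriting history (filter-branch and friends) can scrub the repository itself, but any clone that was already taken keeps the old objects - hence: rotate the credentials first.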
> Obviously you shouldn't be storing sensitive information in your codebase (I hope everybody knows that)
Sadly, in my experience hardcoding secrets such as (database) passwords and encryption private keys is not uncommon at all in web applications. I don’t like criticising other developers, but sometimes the people who get to make these decisions don’t necessarily have the perspective or experience to make the right calls.
Growing a project out of one-shot, prototype-mindset code is really problematic. Every time, I wish I had started with a real project structure and design philosophy.
Is there an automated "security as a service" offering that, had I subscribed to it, would have told me that this is a problem on my websites?
It really annoys me randomly hearing about critical security issues through tech news websites - there should be a more systematic way for "non-security professionals" to ensure their sites are protected to best practice levels.
It looks like for some reason Google actually searches for the character "." (U+FF0E, "fullwidth full stop") when performing those sorts of queries, not "." (U+002E, "full stop").
It seems like if you're storing secrets and the like in your code's repo, the solution is to not do that, rather than just putting a bandaid over it by hiding the repo.
Deploy the secrets separately: they don't belong in your site's codebase.
Hiding the repo is hardly a bandaid. It should never be exposed even if the repo is perfectly secret-free.
Except in the rare cases where it is intentional, e.g. an open source repo where you happen to want people to download it from the same domain rather than from github or git.domain.com.
This returns 403, and in my opinion access logging should not be turned off for it. I would also return 404 instead of 403, so as not to expose that your server is blocking dotfiles.
My suggestion is to put something like this in each server { ... } block.
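The snippet itself is missing from the comment, but given the surrounding text it was presumably along these lines (reconstructed; not the commenter's exact rule):

```nginx
server {
    # ...
    # Deny every dotfile and dot-directory (.git, .svn, .env, .htaccess, ...).
    # 404 rather than 403 avoids advertising that such paths are blocked,
    # and access logging stays on so probing attempts remain visible.
    location ~ /\. {
        return 404;
    }
}
```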
You find it hard to believe that 0.17% of all websites use git? I'm sure 10x that many do; it's just that most of them probably don't misuse git to deploy, and instead use it solely as a source code manager.
If it's exactly the same repository - no. If it contains some extra branches with local changes, or potentially commits with private information / passwords - definitely.
So in general - it's better not to have it in the first place, because it's unlikely that the person doing the commits knows the whole deployment strategy.
90% of security incidents are due to human errors, not to some secretive hacker group spending $10m to crack TLS.
Doing system administration right (eg. no secrets in repos) has a lot more impact on security than implementing all the other complex controls.
I wonder what would happen if you searched for .svn, too. I'm sure you'd run into the same problem in many places. But would it be more or less likely to occur?
I think less likely. svn actually has an `export` command, which gives you the files of a specific revision with no svn metadata. If someone was actually using svn for deployment, they likely knew about it. (http://svnbook.red-bean.com/en/1.7/svn.ref.svn.c.export.html)
In svn's heyday, the standard way to install or update a popular app like WordPress was to download and extract a tarball. Only people who actually participated in the development of the app itself used svn.
Nowadays, lots of open-source projects encourage ordinary webmasters to clone a Github repo and run `git pull` to update.
So I suspect that public .svn folders will be less common.
Some of the other commenters suggest adding git-dir and work-tree to the git commands, but there's a better solution: use the --separate-git-dir option when cloning the repository:

git clone --separate-git-dir=<repo dir> <repository> <working copy>

where <repo dir> is outside of any directory served by the web server and <working copy> is the htdocs root.
This option makes <working copy>/.git a file whose content is:
gitdir: <repo dir>
The advantage is that all git commands work as usual, without the need to set git-dir and work-tree, and that there's nothing special to add to the web server configuration.
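A self-contained demonstration (throwaway paths; in practice the clone source is your real repository and <working copy> is the htdocs root):

```shell
tmp=$(mktemp -d)

# Stand-in for your real remote repository:
git init -q "$tmp/origin"
git -C "$tmp/origin" -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m "init"

# Clone with the repository data kept outside the web root:
git clone -q --separate-git-dir="$tmp/repo.git" "$tmp/origin" "$tmp/htdocs"

ls -ld "$tmp/htdocs/.git"   # a regular file, not a directory
cat "$tmp/htdocs/.git"      # gitdir: .../repo.git
```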
In another project, I found that the server had register globals turned on, and therefore could craft a URL like:
http://example.com/admin?valid_user=1, where valid_user was a PHP variable set to true iff their session cookie could be authenticated in the database.
I think it's terrifying that these things still make it through to production websites.
Someone on StackOverflow says this will tell nginx not to serve hidden files.
location ~ /\. { return 403; }
My question - do I need to put this once at the top of my configuration file and all is good, or does it need to go into multiple places in the nginx config?
It would be great if there was a simple, universal way to say to nginx "don't serve hidden files from anywhere under any circumstances".
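For what it's worth, nginx has no single global switch for this: location blocks are only valid inside server (or nested location) contexts, so the rule has to appear in - or be included into - every server block. A common pattern is a shared snippet (paths and filenames here are illustrative):

```nginx
# /etc/nginx/snippets/deny-dotfiles.conf
location ~ /\. {
    return 403;
}
```

and then in each virtual host:

```nginx
server {
    server_name example.com;
    include snippets/deny-dotfiles.conf;
    # ...
}
```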
Why are people serving web traffic to a folder with a .git folder anyways? I thought it was basic deployment practice to export your code OUT of the VCS before deploying... every shop I've worked at had this in place.
Other solutions just seem hackish to me, but every project is different I suppose.
Don't put secret keys in your repository.
Someone getting a copy of your code should be a big annoyance at worst.
It's really bad out in AppSec land
When googling for "inurl:.git" [1], it returns no results. And on top of that, I need to enter a captcha first?
[1] https://www.google.be/search?q=inurl%3A%22.git%22
For example, check the URLs of these search results: https://www.google.com/search?q=inurl:%22.hello%22
Git is popular, but I find it hard to believe that 1/600 of all websites on the Internet use it.
Of course, doing everything is much better.
It may be possible that the gitdir is still accessible through a misconfiguration or security issue (and now you're pointing attackers at exactly where to look).
Production servers have no business having the .git directory anywhere.
If you're using a modern framework with url routing, you don't need to worry about hiding .git or .hg in your webserver config file.
In one project, I found that requesting http://example.com/?page=admin/index expanded to http://example.com/index.php?page=admin/index, where the real admin was at http://example.com/admin/index.php. This offered complete access to the backend without authentication or authorization - to say nothing of other files in the file system.
I don't get it either - why serve web traffic from a directory that contains your .git?
So far a few people have requested my .git/ directory but none have attempted to plunder the riches they think they'll find within.