top | item 9952356

One in every 600 websites has .git exposed

424 points | jamiejin | 10 years ago | jamiembrown.com | reply

205 comments

[+] phantom_oracle|10 years ago|reply
Imagine you implement every possible kind of security measure...

Keeping your entire server stack up to date, making sure you have SSL, using strong encryption for logging in, hashing the passwords, making sure your server can only be reached via SSH, adding firewalls, filters, etc. etc.

Then some hacker in Eastern Europe comes along (or some beginner at the NSA/GCHQ), finds out that your .git is exposed, and somehow gains access to all your vital user and admin data.

Being bashed with a boulder repeatedly would probably be less painful than the torture of knowing "I did it all, but they got me with an HTTP request... because nobody thought of double-checking what our VCS is doing".

How many other glaringly obvious mistakes might be out there right now? I can only imagine.
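The one consolation is that this particular mistake is trivial to test for: an exposed repository will happily serve .git/HEAD over plain HTTP. A minimal sketch in shell (check_git is a hypothetical helper; substitute your own domains):

```shell
#!/bin/sh
# Probe a site for an exposed .git directory by fetching .git/HEAD.
# An exposed repo answers with a line like "ref: refs/heads/master";
# a properly configured server returns 404, so curl -f fails.
check_git() {
    body=$(curl -fs "$1/.git/HEAD") || { echo "ok: $1"; return 0; }
    case "$body" in
        ref:*) echo "EXPOSED: $1" ;;
        *)     echo "ok: $1" ;;
    esac
}
```

Run it as `check_git https://example.com` against every site you operate.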

[+] vog|10 years ago|reply
You seem to imply this is a novel attack vector. But it's really just an instance of a very old mistake:

Don't use the root of your app as document root!

It's really as simple as that. Almost all modern apps have a subdirectory "public/" or similar. That one is meant to be used as document root. You only have to ensure there are no sensitive files in there.

If you fail to introduce such a directory, you're left with a game of cat and mouse, where you have to add an extra webserver rule for each sensitive file: VCS metadata, crypto secrets, private keys, and so on. In that setup it's easy, and very likely, that you'll forget one. Of course, this then creates the feeling of "How can anybody keep track of this never-ending list of security details?"

In the end, this is a blacklist vs. whitelist thing. As with your firewall, you want one rule that blocks everything and then allow only specific things. The alternative is to allow everything, add rules to deny all the sensitive stuff, and eventually get in trouble for having forgotten one rule (probably because an additional service was introduced after the firewall rules were written).
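In nginx terms, the whitelist approach above is a one-liner: make public/ the document root, and everything else in the project, .git included, sits outside the served tree. A minimal sketch (example.com and the paths are illustrative):

```nginx
server {
    listen 80;
    server_name example.com;

    # Only the app's public/ subdirectory is served; /var/www/app/.git,
    # config files, etc. are unreachable without any extra deny rules.
    root /var/www/app/public;
}
```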

[+] Dylan16807|10 years ago|reply
Wrong lesson.

Don't put secret keys in your repository.

Someone getting a copy of your code should be a big annoyance at worst.

[+] Negitivefrags|10 years ago|reply
This is the asymmetric nature of security in general.

You only need to make a single mistake and you are hosed. Your attacker can fail an arbitrary number of times and only needs to succeed once.

If you are 99.9% likely to make the right call on anything that could have a security impact then you only need to make 1000 decisions before you probably screwed one up and have a hole.

Some would say this means true security is impossible.

[+] totony|10 years ago|reply
A good practice is to disable features that you don't use. I don't think many people need their hidden files to be remotely accessible, so maybe they should either remove the permissions or set a flag in their server so it doesn't allow downloading them.
[+] ejcx|10 years ago|reply
I gave a conference talk at DerbyCon on exactly this, regarding startups. The number of obvious holes is incredible: founders not knowing what XSS is, writing bad PHP apps with obvious code execution vulns, or making glaring logic and auth mistakes that allow full account hijacks.

It's really bad out in AppSec land.

[+] flihp|10 years ago|reply
This is what makes security hard: to attack successfully you only have to find one significant mistake; to defend successfully you can't make any mistakes.
[+] dschiptsov|10 years ago|reply
What "all vital user-data and admin data"?
[+] GuiA|10 years ago|reply
Well, that's why you either have a team of competent people making sure all your stuff is up to date, routinely performing pentests, etc., or you delegate as much as possible of those responsibilities to 3rd parties (e.g. Heroku).
[+] Nate75Sanders|10 years ago|reply
Obviously you shouldn't be storing sensitive information in your codebase (I hope everybody knows that), but the problem here is that you might have been, way back when you were prototyping, before you moved it out of the codebase. It's really common to start a codebase by hacking something together with hardcoded secrets.

If you have proper secret segregation now, but you're deploying by doing a git pull, you run the risk of exposing those old hardcoded secrets all over again.

[+] nostrademons|10 years ago|reply
You probably should revoke all your existing credentials and replace them with fresh ones as soon as you pull them out of the VCS. That way, your attackers have the credentials, but they don't work anymore.
[+] DrJokepu|10 years ago|reply
> Obviously you shouldn't be storing sensitive information in your codebase (I hope everybody knows that)

Sadly, in my experience hardcoding secrets such as (database) passwords and encryption private keys is not uncommon at all in web applications. I don't like criticising other developers, but sometimes the people who get to make these decisions don't necessarily have the perspective or experience to make the right calls.

[+] agumonkey|10 years ago|reply
Growing a project out of one-shot prototyping is really problematic. Every time, I wish I had started with a real project structure and design philosophy.
[+] Dylan16807|10 years ago|reply
Retroactively remove them from the commit history. Your sensitive secrets should not be on every developer machine.
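History rewriting can be sketched with stock git roughly as follows (scrub_file and secrets.env are illustrative names; newer tools like BFG or git-filter-repo do the same job faster, but aren't assumed here). Rotate the secrets anyway: anyone who already cloned still has them.

```shell
#!/bin/sh
# Rewrite every commit on every branch so the named file never existed.
# The remote must then be force-pushed and all clones re-fetched.
scrub_file() {
    git filter-branch --force --index-filter \
        "git rm --cached --ignore-unmatch '$1'" \
        --prune-empty --tag-name-filter cat -- --all
}
```

After running `scrub_file secrets.env`, a `git push --force --all` replaces the published history.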
[+] hoodoof|10 years ago|reply
Is there an automated "security as a service" service that if I subscribed to it, it would have told me that this is a problem on my websites?

It really annoys me randomly hearing about critical security issues through tech news websites - there should be a more systematic way for "non-security professionals" to ensure their sites are protected to best practice levels.

[+] 317070|10 years ago|reply
It seems Google doesn't like people looking into the extent of this problem [1].

When googling for "inurl:.git", it returns no results. And on top of that, I need to enter a captcha first?

[1] https://www.google.be/search?q=inurl%3A%22.git%22

[+] username|10 years ago|reply
The .git directory wouldn't be crawled though, correct?
[+] x0|10 years ago|reply
I've had more success with 'intitle: Index of /.git'
[+] itg|10 years ago|reply
I often get captcha requests when doing any google search with inurl or intitle.
[+] akerl_|10 years ago|reply
It seems like if you're storing secrets and the like in your code's repo, the solution is to not do that, rather than just putting a bandaid over it by hiding the repo.

Deploy the secrets separately: they don't belong in your site's codebase.
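One common way to deploy them separately is to keep credentials in a file outside the docroot (or in the process environment) and read them at startup. A hypothetical sketch; DB_PASSWORD and /etc/myapp/env are made-up names:

```shell
#!/bin/sh
# Source secrets from a file that is never committed and never inside
# the web root, exporting each variable for the app process.
load_secrets() {
    secrets_file=${1:-/etc/myapp/env}   # default path is hypothetical
    set -a                              # auto-export everything sourced below
    . "$secrets_file"
    set +a
    # Fail fast if a required secret is missing, rather than start insecurely.
    : "${DB_PASSWORD:?DB_PASSWORD is not set}"
}
```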

[+] bbcbasic|10 years ago|reply
Hiding the repo is hardly a bandaid. It should never be exposed even if the repo is perfectly secret-free.

Except in the rare cases where it's intentional, e.g. an open-source repo where you happen to want people to download it from the same domain rather than from GitHub or git.domain.com.

[+] nodesocket|10 years ago|reply
In nginx, best to just not serve dot files:

    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }
[+] therealmarv|10 years ago|reply
This returns 403, and in my opinion logging should not be turned off for these requests. I would return 404 instead, so as not to reveal that you are blocking dotfiles at all. My suggestion, to put in each server { ... } block:

    location ~ /\.  { deny all; return 404; }
[+] userbinator|10 years ago|reply
More precisely, it's "one in every 600 websites examined"

Git is popular, but I find it hard to believe that 1/600 of all websites on the Internet use it.

[+] pessimizer|10 years ago|reply
You find it hard to believe that 0.17% of all websites use git? I'm sure 10x that many do; most just don't misuse git as a deployment mechanism rather than solely as a source code manager.
[+] rocky1138|10 years ago|reply
I agree with your first point, but not your last.
[+] Zarel|10 years ago|reply
Related question: Is there any risk to exposing .git if your Git repository is already publicly available (e.g. on GitHub)?
[+] viraptor|10 years ago|reply
If it's exactly the same repository - no. If it contains some extra branches with local changes, or potentially commits with private information / passwords - definitely.

So in general - it's better not to have it in the first place, because it's unlikely that the person doing the commits knows the whole deployment strategy.

[+] iiiggglll|10 years ago|reply
Not going to name names, but there are mobile apps that ship with their .git directory packaged up too.
[+] jvehent|10 years ago|reply
90% of security incidents are due to human error, not to some secretive hacker group spending $10m to crack TLS. Doing system administration right (e.g. no secrets in repos) has a lot more impact on security than implementing all the other complex controls.

Of course, doing everything is much better.

[+] aaronbrethorst|10 years ago|reply
I wonder what would happen if you searched for .svn, too. I'm sure you'd run into the same problem in many places. But would it be more or less likely to occur?
[+] kijin|10 years ago|reply
In svn's heyday, the standard way to install or update a popular app like WordPress was to download and extract a tarball. Only people who actually participated in the development of the app itself used svn.

Nowadays, lots of open-source projects encourage ordinary webmasters to clone a GitHub repo and run `git pull` to update.

So I suspect that public .svn folders will be less common.

[+] quicksilver03|10 years ago|reply
Some of the other commenters suggest adding git-dir and work-tree to the git commands, but there's a better solution: use the --separate-git-dir option when cloning the repository.

For example:

    git clone --separate-git-dir=<repo dir> <remote url> <working copy>
where <repo dir> is outside of any directory served by the web server and <working copy> is the htdocs root.

This option makes <working copy>/.git a file whose content is:

    gitdir: <repo dir>
The advantage is that all git commands work as usual, without the need to set git-dir and work-tree, and that there's nothing special to add to the web server configuration.
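This is easy to verify locally. A throwaway demonstration (all paths are temporary; demo_separate_git_dir is just a wrapper for the steps):

```shell
#!/bin/sh
set -e
# Show that --separate-git-dir leaves only a one-line pointer file named
# .git inside the working copy; the real repository lives elsewhere.
demo_separate_git_dir() {
    tmp=$(mktemp -d)
    git init -q "$tmp/origin"                      # stand-in for the remote
    git -c user.email=a@b -c user.name=a -C "$tmp/origin" \
        commit -q --allow-empty -m init
    git clone -q --separate-git-dir="$tmp/repo.git" "$tmp/origin" "$tmp/htdocs"
    cat "$tmp/htdocs/.git"                         # gitdir: .../repo.git
}
```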
[+] raverbashing|10 years ago|reply
I disagree

It may be possible that the gitdir is still accessible through a misconfiguration or security issue (and you're telling attackers exactly where to look).

Production servers have no business having the .git directory anywhere.

[+] chdir|10 years ago|reply
A comment mentions this deep below, but I think this deserves a bit more attention:

If you're using a modern framework with url routing, you don't need to worry about hiding .git or .hg in your webserver config file.

[+] georgerobinson|10 years ago|reply
I once jumped in on a PHP project where the previous developers had written:

    $page = $_GET['page'];     // attacker-controlled, completely unvalidated
    include ($page.".php");    // classic local file inclusion
Whilst allow_url_include (http://php.net/manual/en/filesystem.configuration.php#ini.al...) was set to false, I could still craft a URL like:

http://example.com/?page=admin/index

which expanded to http://example.com/index.php?page=admin/index, where the real admin was at http://example.com/admin/index.php, and offered complete access to the backend without authentication or authorization, to say nothing of other files on the file system.

In another project, I found that the server had register globals turned on, and therefore could craft a URL like:

http://example.com/admin?valid_user=1, where valid_user was a PHP variable set to true iff their session cookie could be authenticated in the database.

I think it's terrifying that these things still make it through to production websites.

[+] hoodoof|10 years ago|reply
Someone on StackOverflow says this will tell nginx not to serve hidden files.

    location ~ /\. { return 403; }

My question: do I need to put this once at the top of my configuration file and all is good, or does it need to go into multiple places in the nginx config?

It would be great if there were a simple, universal way to tell nginx "don't serve hidden files from anywhere, under any circumstances".

[+] therealmarv|10 years ago|reply
It has to go in every server { ... } section. Also use "deny all;" to really block. See my other answer here in the comments.
[+] unknown|10 years ago|reply

[deleted]

[+] kelyjames|10 years ago|reply
Looks like this returns some results inurl:.git "intitle:index.of
[+] blindhippo|10 years ago|reply
Why are people serving web traffic from a folder with a .git directory in it anyway? I thought it was basic deployment practice to export your code OUT of the VCS before deploying... every shop I've worked at had this in place.

Other solutions just seem hackish to me, but every project is different I suppose.
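The classic export-not-clone deployment can be done with `git archive`, which writes out the tracked tree with no .git metadata at all. A sketch (deploy_export and the target path are illustrative):

```shell
#!/bin/sh
set -e
# Deploy a clean snapshot of <ref> into <destdir>: no .git directory,
# no repository metadata, just the tracked files.
deploy_export() {
    ref=$1
    destdir=$2
    mkdir -p "$destdir"
    git archive --format=tar "$ref" | tar -xf - -C "$destdir"
}
```

For example, `deploy_export HEAD /var/www/site` run inside a checkout yields a servable tree with no VCS residue.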

[+] nmrm2|10 years ago|reply
So that deployment is "just" git pull.

I don't get it either.

[+] sarciszewski|10 years ago|reply
I have a fake /.git on my personal website to troll would-be hackers into wasting their time. (PROTIP: I don't run Laravel there.)

So far a few people have requested my .git/ directory but none have attempted to plunder the riches they think they'll find within.

[+] foobarbecue|10 years ago|reply
What I find more interesting is that GitHub is full of passwords and credentials.