top | item 15365656


megamark16 | 8 years ago

I used to work for a company that had a big security hole that would let you log in as any user as long as you knew the user's UUID (I know, right?). I logged a ticket and ran the issue up the flagpole to let folks know that if someone slipped in some code (we ran a lot of third-party JavaScript) to harvest UUIDs, they could fairly trivially log in as an admin and do some serious damage. The issue sat for months (MONTHS!) until a user finally complained about some non-HTTPS content being loaded on our login page, which sparked a whole security review and gave me an opportunity to bring additional attention to my ticket, which finally got fixed.

This kind of crap is out there, and people don't give it the attention it deserves until they get bitten in the ass. Thankfully, my company didn't get bitten, but if we had, it could have been very bad, and the fact that the issue was called to people's attention and they didn't do anything about it would have made it look that much worse.


eterm|8 years ago

UUIDs aren't exactly guessable; any hole that lets someone "slip in some code" is way more serious than a persistent login token.
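The guessability point can be made concrete; this is a minimal sketch using Python's standard `uuid` module (the caveat about v1 UUIDs is my addition, not something claimed in the thread):

```python
import uuid

# A version-4 UUID carries 122 random bits (6 of the 128 bits are
# fixed version/variant markers), so blindly guessing a valid one is
# infeasible, provided the library draws them from a decent RNG.
# Caveat: older schemes like v1 UUIDs embed a timestamp and MAC
# address and are partially predictable, so "it's a UUID" alone does
# not guarantee unguessability.
u = uuid.uuid4()
assert u.version == 4

search_space = 2 ** 122   # number of possible v4 UUIDs
assert search_space.bit_length() == 123
```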

It's not great to have a non-revocable login token, but a "UUID that lets anyone log in as you" is how a lot of API access tokens work, which is why they usually have a mechanism where you can regenerate them if you know they are compromised.

I don't disagree with your premise that "a lot of crap is out there" though. Working in small to medium enterprises (SMEs) really opens your eyes about the real level of security of most sites.

lostcolony|8 years ago

No.

Yes, a suitably random and therefore 'unguessable' secret is, fundamentally, the underpinning for auth systems, and some of those secrets utilize UUIDs.

No to the idea that these are comparable. Those are not -user identifiers-. A user identifier, vs a 'secret', requires a different perspective in how it's treated: in the API, in the UI, etc.

For -any- sort of security model you figure out what bits of data must be kept secret, vs what bits of data should be treated as 'known'. A user identifier should always fall into the latter camp; a password or other credential falls into the former.

You said it yourself, "usually have a mechanism to regenerate them if you know they are compromised" - you really, REALLY don't want to have to regenerate your user identifiers if they leak out; that's almost invariably going to involve a great deal of complexity, breakages, regressions, etc. You're effectively changing the primary key of every entry in every database you have that this user exists in. Better to just not require them to be kept secret in your security model. And even -that- assumes they were -meant- to be secret; no developer is going to assume that about user identifiers, so you'd better have made it explicit to everyone who ever touched the code, or you just introduced a bunch of avoidable security holes.
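The identifier-versus-secret split described above can be sketched in a few lines. This is a hypothetical illustration using Python's `uuid` and `secrets` modules; the names (`User`, `rotate_token`) are invented for the example, not code from the thread:

```python
import uuid
import secrets

class User:
    """Hypothetical user record separating public ID from secret."""

    def __init__(self):
        # Public identifier: safe to appear in URLs, logs, and
        # foreign keys; never needs to change.
        self.user_id = uuid.uuid4()
        # Secret credential: the only thing that grants access.
        self.api_token = secrets.token_urlsafe(32)

    def rotate_token(self):
        # If the secret leaks, mint a new one; the identifier, and
        # every database row keyed on it, stays untouched.
        self.api_token = secrets.token_urlsafe(32)

u = User()
old_token, old_id = u.api_token, u.user_id
u.rotate_token()
assert u.api_token != old_token   # credential rotated
assert u.user_id == old_id        # identifier stable
```

The design point is that rotation is cheap precisely because nothing else references the token, whereas the identifier is referenced everywhere and so must never be a secret.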

blktiger|8 years ago

Why didn't you fix it? (I don't mean this harshly, just curious.)

Ultimately, this kind of fix is something a professional programmer should, IMO, just do. It's irresponsible to let stuff like this go, and you should do whatever it takes to make management understand. In a healthy organization it shouldn't even be questioned by management: you just tell them you found a security issue that could cost the company billions and has to be fixed immediately. In an unhealthy organization, maybe you just slip it into some other work without telling management.

lostcolony|8 years ago

Not to sound patronizing, but you've clearly never worked in a large enterprise.

Teams are siloed. Code is siloed. The deployment process is siloed. Etc.

Do I know where the code lives? If I do, do I -have access to the code-? Write, as well as read? Will my checking in code trigger a huge change review process that will cause people to yell at me for touching code I'm not in charge of? Will my checked in code be picked up as part of what goes to prod? If not, do I have a way to get the code into that process? Etc.

At very few companies of that scale can you just "check the code out, fix it, create a pull request, and watch it work its way into prod".

megamark16|8 years ago

Great question, and in the end it comes down to politics and team siloing. It was a large corporation with a lot of projects and priorities, and no single security person to raise the issue with. At the time I wasn't in a position to Just Do It and then tell everyone "Hey, this needed to be done, I got it done, now I need a QA resource to test it and then we need to deploy it to prod" without backlash from multiple sources (my boss, the team that owned the product, etc).

Now (and given everything that's happened in the industry in recent years) I would definitely push more, and maybe fix it on my own, but at the time I just shook my head, and sent follow-up emails every few months to try to keep visibility on the issue.

sillysaurus3|8 years ago

Negatory. I vividly remember being chewed out at a mega-company for even downloading the source code to `touch`. Yes, touch.

Obviously I got out of that environment pretty quickly, but you're not in a position to just do things at big companies. Probably most of them.

cm2012|8 years ago

At many big-ish companies, you're not allowed to work on codebases not in your jurisdiction.

richardknop|8 years ago

At large companies with politics and bureaucracies you can't really just come in and create a PR for a bug you've found like you'd do in a smaller shop or a startup.

If the code isn't the responsibility of you (your team / department), all you can do is create work requests / JIRA tickets for the people/team responsible for the code.

Most likely you wouldn't even have access to the repository or the dev/test environments for the affected system.

And it might just sit there for months until it eventually gets picked up and fixed. Or it will never be fixed.