
The OWASP Top 10 is killing me

116 points | sidcool | 8 years ago | insights.hpe.com

87 comments

[+] sixhobbits|8 years ago|reply
Not all vulnerabilities stem from ignorance, although this seems to be the default assumption of the infosec community.

Writing secure code takes more time than writing insecure code. Time is expensive. In every organization I've worked, security has been neglected pretty explicitly. It's not a case of "OK this looks secure", but instead more like "I am aware that our codebase has some security issues but I need to prioritize rushing out this new feature/improving our CPA".

And this is not always the wrong choice. For most organizations, the probability of having someone who is both malicious and competent enough to exploit an XSS vuln visit your site is pretty small. The chance that you'll go under if you don't get that new feature out or improve your CPA is pretty high.

If you want to criticise the state of security (and there is definite room for criticism), I think there is a need for tools and education to allow people to better make these decisions. We need ways to communicate

a) how likely it is that we'll be attacked?

b) what would the consequences be?

For now, when these questions are asked, the answers are almost always "pretty small" and "uh, possibly like really bad, depending on the attacker"

We need ways to translate these into numbers that we can compare with profit margins, etc. This is way more important than actually learning how to mitigate SQLi.

[+] micaksica|8 years ago|reply
> We need ways to communicate

Microsoft developed quite a few of these ideas internally with the TwC (Trustworthy Computing) initiative in the early 2000s, and built a protocol - and development workflow - around threat modeling and security awareness. Most of their internal security-oriented protocol is listed for free:

https://www.microsoft.com/en-us/sdl/

As are some of their tools. For individual developers wanting to have a better sense of what threats their applications may face during the design stage, there’s a good Wiley book on threat modeling:

https://www.amazon.com/Threat-Modeling-Designing-Adam-Shosta...

If you’re really in a hurry, a lot of the typical OWASP vulnerabilities are mitigated by choosing higher-level, long-standing frameworks and abstractions (e.g. Rails, Symfony, ASP.NET MVC) that handle a lot of the things that can hurt you. From there, most of the low-hanging fruit that script kiddies will find can be mitigated simply by following the security best practices documentation for your framework before you start writing code in it.

Anecdotally, auditing web applications for security issues is my day job. The majority of the time, ignorance is the real issue, not speed of development. They simply don’t have any idea what threats they are facing, or any real education in secure coding principles. Very rarely have I dropped vulnerabilities and had teams say “yeah, we know about that”. It’s way more “whoa, I didn’t even know you could do that”. Basic security education really matters.

[+] Spooky23|8 years ago|reply
I think the infosec community is the biggest barrier to improving security.

Security is like a bug light for ambitious idiots now. In large companies the function has been staffed up as a separate vertical with lots of CISSPs and other alphabet soup people who run around chasing nonsense and reporting how valuable they are.

Security expertise needs to be embedded in projects and programs so that leadership with domain knowledge can make smart decisions.

[+] mindcrime|8 years ago|reply
We need ways to translate these into numbers that we can compare with profit margins, etc.

There is a way. I'll refer you to How To Measure Anything by Douglas Hubbard. His model is based on a combination of things:

1. Calibrated Probability Assessments

2. nth order effects

3. building a mathematical model

4. Monte Carlo simulation

Apply his methodology and you can determine the impact of "hard to quantify" variables like "security" and get a probability distribution that can be used to assign values to specific scenarios.

Yeah, it's a little bit complicated and time-consuming; but the best things in life are, no?
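To make the parent's point concrete, here is a toy sketch of a Hubbard-style Monte Carlo estimate. All of the numbers (the 5% annual breach probability, the lognormal loss parameters) are made up for illustration; the point is that the output is a dollar figure you can put next to a profit margin.

```python
import random

def simulate_annual_loss(p_breach, loss_mu, loss_sigma, trials=100_000, seed=42):
    """Monte Carlo estimate of expected annual loss from a breach.

    p_breach:          calibrated estimate of P(at least one breach per year)
    loss_mu/loss_sigma: parameters of a lognormal loss distribution
                        (both are assumptions, not real data)
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        if rng.random() < p_breach:          # a breach happened this simulated year
            total += rng.lognormvariate(loss_mu, loss_sigma)
    return total / trials

# Hypothetical inputs: 5% chance/year, losses centered around e^11 (~$60k)
expected = simulate_annual_loss(0.05, 11, 1.5)
```

With a full model you would simulate each scenario (XSS, credential stuffing, insider) separately and sum the distributions, but even this sketch turns "uh, possibly really bad" into a comparable number.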

[+] collingreene|8 years ago|reply
I don't think anyone in security would disagree with you.

The problem is that measuring something that is sort of definitionally unknowable (how many vulns are in this code, where, and how likely is it that someone outside the company will find one and then exploit it?) is hard. (The referenced book has some ideas, which boil down to "get some experts in a room, ask them, then average the answers.")

A good security team will do their best at this, but it's unfortunately not as easy as "OK, we found all the XSS bugs, which reduces our chances of getting owned by 2.5%".

The further (maybe depressing) question is to what degree getting breached actually harms a company. My favorite argument that the two are tenuously related is this: http://www.cs.umd.edu/~awruef/HNYM.pdf and my favorite example within is Comodo. Comodo was hacked, and the hacker gained the ability to sign certificates of their choice with Comodo's key. Comodo had one job: be worthy of trust and not get hacked. Did it harm them? They are still the #1 cert company. Look at the Target breach or any others.

The only spot where a breach can be company-ending is all these bitcoin companies, which from my spot in application security makes them fascinating test cases. Here are a bunch that blew up after they got hacked: https://magoo.github.io/Blockchain-Graveyard/

[+] irl_zebra|8 years ago|reply
Sorry for the non sequitur, but I got into cryptography and security generally and designed some internal systems that people use. I work in the field, or at least transitioned from SRE to SRE with a security focus. I started following vocal members of the "infosec community" online, mostly on Twitter. It took a while, but I came to the conclusion that most (with a lot of exceptions) of these self-titled security experts had little to no experience in anything. They were more evangelists, repeating what other people said, with no contributions or research of their own. Notable exceptions are people like Tavis Ormandy, of course, but "infosec" seems to have become a title for people who want to be part of an in-crowd and just repeat things over and over like "use multi-factor authentication".
[+] readams|8 years ago|reply
Web platforms in particular have a pretty long list of mistakes that, even if you're smart and careful, if you don't know about them you're virtually certain to make. Injection attacks, XSS, CSRF, etc. are very subtle and nonobvious.
[+] nfriedly|8 years ago|reply
This is one thing I like about IBM: they have a separate security team that audits stuff before you ship it. I was working on a react app where I set up server-side rendering, and then had it JSON-encode the state and dump it into a script tag in the end of the HTML. My thinking at the time was "It's JSON-encoded, and it's all the user's own data anyways, so it's safe."

Eventually I needed something from the querystring and for whatever reason put it into the state. It turns out that a &lt;script&gt; tag from the querystring, in a string, in a blob of JSON, in an HTML page will execute. Oops.

Fortunately IBM's security team caught it before it ever shipped. Now it's been fixed and the app has a CSP header to help nullify any future mistakes.
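For anyone dumping server-rendered state into a script tag like this, here is a minimal sketch of the usual mitigation (this is not IBM's actual fix, which the comment doesn't show): escape the HTML-significant characters as JSON `\uXXXX` escapes, so the payload stays valid JSON but can never close the tag.

```python
import json

def safe_json_for_script(state):
    """Serialize state for embedding inside a <script> tag.

    json.dumps alone is NOT enough: a value containing "</script>"
    closes the tag, and whatever follows executes as markup. Escaping
    '&', '<', and '>' as \\uXXXX keeps the JSON valid (json.loads
    still round-trips it) while making it inert inside HTML.
    """
    return (json.dumps(state)
            .replace("&", "\\u0026")   # escape '&' first so the
            .replace("<", "\\u003c")   # escapes below aren't touched
            .replace(">", "\\u003e"))

# The querystring attack from the comment above, now harmless:
payload = safe_json_for_script({"q": "</script><script>alert(1)</script>"})
```

A CSP header, as the comment notes, is still worth having as defense in depth; this just removes the injection point itself.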

[+] mmcnl|8 years ago|reply
That's weird, shouldn't that be a responsibility of your router? Or did you roll your own?
[+] methodover|8 years ago|reply
We experienced our first successful attack at my startup a few weeks ago.

What got us wasn't anything on the top ten list. As far as I know, it isn't covered anywhere in OWASP.

Users reuse passwords across different websites. An attacker tried a database of usernames/passwords sourced from elsewhere; a small percentage (about 1000 out of more than 10M requests) succeeded. 100 of those had something to steal. Attacker used a botnet, so our IP-based fail-to-ban logic was ineffective.

We thought about lots of ways to deal with this moving forward. My boss (CEO) didn't want to implement any kind of 2 factor authentication, because it's cumbersome and will lower conversion rates. We took a different strategy which is kind of complicated to explain, but it's not nearly as secure.

Anyway. What gets me is like: Password authentication SUCKS. It's a terrible terrible authentication strategy. It's awful. It should not be relied upon. It would be good if humans didn't reuse passwords. But we do. So it sucks.

[+] kogepathic|8 years ago|reply
> Anyway. What gets me is like: Password authentication SUCKS. It's a terrible terrible authentication strategy. It's awful. It should not be relied upon. It would be good if humans didn't reuse passwords. But we do. So it sucks.

There's an easy solution for this! Have I been Pwned offers a free API [0] you can use to check if a password has been in a previous breach.

If the password was previously disclosed in a breach, simply inform the user politely: "Sorry, this password has previously been disclosed and will likely be used by attackers to compromise your account," and make them pick a new one.

No need for 2FA. Simply add a check to prevent users from re-using passwords.

(If you don't want to start sending user passwords to HIBP, you can also download the list and use it internally)

[0] https://haveibeenpwned.com/API/v2#PwnedPasswords
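A sketch of how the check works without sending the password (or even its full hash) to HIBP: the range endpoint takes only the first 5 hex characters of the SHA-1 and returns all matching suffixes, so the comparison happens on your side. The HTTP call itself is left to whatever client you use; these helpers just do the hashing and the response scan.

```python
import hashlib

def sha1_prefix_suffix(password):
    """Split the uppercase SHA-1 hex digest into the 5-char prefix that is
    sent to the API and the 35-char suffix that never leaves your server."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def breach_count(suffix, range_response):
    """Scan the body of GET https://api.pwnedpasswords.com/range/<prefix>
    (lines of the form 'SUFFIX:COUNT') for our suffix. 0 means not found."""
    for line in range_response.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0
```

Usage: fetch the range for the prefix, then call `breach_count(suffix, body)`; any nonzero result means the password has appeared in a breach and should be rejected at registration.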

[+] k4ch0w|8 years ago|reply
Hey man,

Did you add a captcha to your login? You could add one after two failed attempts from an IP.

I understand not wanting 2FA because of lower conversion rates, but I'd recommend implementing it in the future. It's one of the best ways to mitigate an attacker who knows your password. The session cookie is another story.

As for the passwords, I'd enforce standards around what a user is allowed to use. You want it to be at least 10+ chars with special characters. If they reuse the password on another site and that site gets compromised, there really isn't much you can do. However, if we assume the reused password from the other site is hashed with a salt, and you enforce more characters, it is much harder to crack, which prevents the plaintext password from being recovered and used to access your site.

Here's a great demo of what more characters do to a password's cracking time:

https://www.betterbuys.com/estimating-password-cracking-time...

You should also maintain a record of the common IP addresses an account uses and require an email/text to allow access from elsewhere, which would further mitigate your problem. I know you're a startup, so this is more of a luxury feature.

[+] sixhobbits|8 years ago|reply
You can mitigate this almost completely by finding that "database of usernames/passwords sourced from elsewhere" (they're not hard to find) and blacklisting them. Users should not be allowed to use any breached password when they register. A simple message saying "this password was included in a recent breach and is therefore not secure" should suffice to prevent users getting annoyed that they can't use their favourite password on your site.

Enforcing a minimum length of 10 or even 12 is a great way to eliminate nearly all previously leaked passwords from being used on your site, and it further encourages users to use password managers.

Passwords are shit, but they're here to stay for a while still.

[+] angry_octet|8 years ago|reply
It is quite feasible to try to brute-force your own users' passwords. Run a background job that tests against common passwords and close variants, and if a password is too simple, either force a harder one, or force email auth if any of the user's metadata changes (IP, ASN, browser fingerprint).

C.f. https://haveibeenpwned.com/Passwords

[+] iraklism|8 years ago|reply
We deal with this almost every week, as in, we get into systems by searching through email:password leaks and use them.

There are a number of mitigating controls that can be applied here. Most will hamper usability, some will not.

There is a “simple” solution. Enforce 2FA. If not at the login, then before “dangerous” actions (transfer funds , change password , buy X/Y/Z )

[+] alexk|8 years ago|reply
Have you considered more conservative rate-limiting external IPs on login endpoints?

E.g. with nginx it will make sense to set a custom rate limiting zone to prevent many requests from the same IP specifically to the login page:

https://www.nginx.com/blog/rate-limiting-nginx/
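A minimal sketch of such a zone, assuming the login endpoint lives at `/login` and proxies to an upstream here called `app_backend` (both names are placeholders for whatever your setup uses):

```nginx
# One zone keyed by client IP: 10 MB of state, 5 requests per minute.
limit_req_zone $binary_remote_addr zone=login:10m rate=5r/m;

server {
    location /login {
        # Allow a small burst for legitimate retries, reject the
        # rest with 429 rather than the default 503.
        limit_req zone=login burst=10 nodelay;
        limit_req_status 429;
        proxy_pass http://app_backend;
    }
}
```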

This will not fix the root cause, but will make it considerably harder to scan for matching passwords.

You can also set up Fail2Ban to block IPs that failed to authenticate many times:

https://serverfault.com/questions/421046/how-to-limit-nginx-...

[+] e12e|8 years ago|reply
> What gets me is like: Password authentication SUCKS.

There's a conference for that. Has been since 2010. Not quite sure about 2017/2018

> My boss (CEO) didn't want to implement any kind of 2 factor authentication, because it's cumbersome and will lower conversion rates

True. "Security is not a convenience". But there's no easier way to get people authenticating off of something at least a bit secure than straight TOTP 2FA.

[+] gspetr|8 years ago|reply
> What got us wasn't anything on the top ten list.

From my PoV nothing got to you. Yes, you may have got some bad publicity, but the fault lies with users who have poor (non-existent?) OPSEC.

Just as there are 2 types of people: those who don't have backups and those who will make backups, there are those who don't have password compartmentalization and those who will have it.

[+] busterarm|8 years ago|reply
Make 2FA mandatory for users who were breached or are using passwords that are in known password lists.

I don't know how much you spent in support, but U2F Zeros are dirt cheap. You could probably just proactively mail them to your clients and encourage them to use 2FA.

Or offer discounts or other perks to users with 2FA.

[+] skywhopper|8 years ago|reply
Passwords suck, okay. What alternative is there?
[+] beager|8 years ago|reply
The OWASP Top 10 isn't changing because we can't (or won't) stop leaving those issues unpatched. Quite telling that, when talking about how to move beyond the baseline security struggles of the OWASP Top 10, TFA provides only superficial suggestions rather than actual links to libraries, tools, and implementation guides that can be used to quash or audit OWASP Top 10 issues.
[+] ynniv|8 years ago|reply
Many of these are due to the use of unstructured strings, which we use because we're lazy. We're so lazy about it that our modern languages don't even support the ability to distinguish user strings from application strings (Perl's taint mode). The workaround in development has been extensive testing, but this is insufficient in an adversarial environment. The best solution is to bring structure to your strings so that you can reason about how they can be abused.

Parse your strings, kids.

[+] minitech|8 years ago|reply
> We’re so lazy about it that our modern languages don’t even support the ability to distinguish user strings from application strings (perl’s taint mode).

“User strings vs. application strings” is too coarse. You just need to enforce types (a type for SQL – see query builders, a type for HTML – see MarkupSafe, etc.) and provide safe constructors for those types. Safe syntactic sugar for those is supported by JavaScript (template strings), Rust (macros), C++ (overloadable string literals), Haskell (overloadable string literals and Template Haskell), and probably plenty of other modern languages. For the others, explicit type wrappers are generally enough (like the aforementioned MarkupSafe in Python) – the only thing that’s lacking is enforcement by libraries.
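The "type wrapper plus safe constructor" idea can be sketched in a few lines of Python, in the spirit of the MarkupSafe library mentioned above (this is an illustration, not MarkupSafe's actual implementation):

```python
import html

class Markup(str):
    """A str subclass marking a string as already-safe HTML.

    Concatenation escapes anything that isn't already Markup,
    so untrusted input can't reach the page unescaped.
    """
    def __add__(self, other):
        return Markup(str(self) + escape(other))

def escape(value):
    if isinstance(value, Markup):
        return value          # trusted: pass through unchanged
    return Markup(html.escape(str(value), quote=True))

user_input = "<script>alert(1)</script>"
page = Markup("<p>") + user_input + Markup("</p>")
# page is "<p>&lt;script&gt;alert(1)&lt;/script&gt;</p>"
```

A real version also needs `__radd__`, format-method overrides, and so on; the enforcement point is exactly as the parent says: the template engine or query builder must only accept the wrapper type.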

[+] pornel|8 years ago|reply
For correctness it's not a binary yes/no problem of "trusted" vs "untrusted" (or "sanitised" vs "unsanitized"). Necessary escaping is dictated by context (such as HTML, JS, SQL, e-mail header), sometimes even nested multiple times (URL argument in JS in HTML attribute in a QP email body).

Most tools and practices miss that point. This leads to creation of general-purpose (not type-safe) templating systems that can't automatically enforce correct escaping everywhere by default, so they leave it to programmers to do it manually where needed, which is error prone and ensures permanent OWASP entry as long as it exists.

Also, it's impossible to convince programmers to always implement "redundant" escaping for "harmless" values (such as IDs which are assumed to always be alphanumeric), which is a vulnerability waiting to be exploited (e.g. your policy is not to allow '"' in usernames, so you think you never need to escape it, but then some code reads a username from a query string argument and game over).

[+] jlesk|8 years ago|reply
This is why I introduced LockStrings as a key feature of THT (a language that compiles to PHP). It takes the opposite approach to Perl's taint mode. You mark string literals as safe -- everything else is untrusted.

Functions that do risky things (Database, System calls, etc.) only accept LockStrings and are responsible for escaping, so all you have to do is provide the placeholders.

https://tht.help/tutorials/language-tour#lock-strings

[+] taeric|8 years ago|reply
I challenge the "lazy" claim. We do it in large part because it is hard not to. At the border, you are either passing binary data around or you are passing structured strings. Building structured strings to send through a border, though, is often most easily done by building up an unstructured one and then sending it to the other side.

Worse, it is not uncommon to find that the layer you are using to keep things structured becomes a major source of complexity in the application.

[+] gcb0|8 years ago|reply
I love OWASP, but everything they do has zero usability.

At times it looks like a bunch of 7-year-olds trying to mimic a big corporation.

This list is a huge example of it. Instead of a text, they have a repo, which generates a huge PDF, mentioned in a press release, with the release described verbosely in a wiki!

And I went through all those hoops, and I still couldn't find a single link that points me to what "Injection" means.

[+] rst|8 years ago|reply
It's a generalization of SQLi, to cover situations where the queries (or commands, or whatever) built up by unguarded string concatenation are something other than SQL. (Though, oddly, the examples in the current draft seem to all be SQL based.)
[+] ianamartin|8 years ago|reply
In my experience, it’s not been a lack of understanding or knowledge on the devs’ part. It’s been more about how much of a hurry we are in to deploy.

I’ve tried a few different strategies to get around this.

1. Build the backend first. Don’t show a UI that looks anything like it’s functional until you’ve snuck in the requirements that you know are needed but can’t get buy-in for.

Fails because PMs and stakeholders don’t see progress fast enough.

2. Plan security into the design specs and feature list.

Fails because there’s always someone higher up than you who (like when presenting speed as a feature) will cross it off the list because “we’re behind a VPN / our users are too stupid to hack us / the only speed that matters is how fast we can deploy this.”

3. Build the entire front end first with absolutely no backend wiring at all and slowly add the connecting db functionality and take your time adding security checks along the way.

This also fails because once PMs and stakeholders see the pretty stuff, they assume it’s almost done and have no tolerance for “slow” progress.

Direct, straightforward communication about the importance of security doesn’t work.

Obfuscating your team’s process to sneak in best practices doesn’t work.

The bottom line is that, again, much like speed, if your leadership doesn’t see the value or can’t be persuaded to see it, it’s not going to happen, even with very well-educated teams.

This is a cultural issue that an individual contributor can only do so much about by choosing the safest frameworks to start with. And that’s about it.

It’s added a number of items to the list of things I ask in interviews now that I’m on the job market again.

Where does the company prioritize security in web applications? Where does it prioritize speed?

How hard do people have to fight to get these included as product features?

I won’t make a blanket statement that if those answers are not to my liking, I won’t take the job. But you need to know where these things stand as company commitments before you accept a job with a primary role of web developer.

[+] Sacho|8 years ago|reply
> 3. Build the entire front end first with absolutely no backend wiring at all and slowly add the connecting db functionality and take your time adding security checks along the way. This also fails because once PMs and stakeholders see the pretty stuff, they assume it’s almost done and have no tolerance for “slow” progress.

It sounds like this approach should work, because you can sell a bunch of reasons rather than just security. If you don't take the needed time to develop the code, you will have correctness (not enough testing) and maintenance (not enough refactoring) problems alongside security issues. If the company's leadership shuns all three in favor of quicker deployment, then security is most likely not going to be the biggest problem; it would be all the bugs you have to chase down in spaghetti code.

[+] SomeStupidPoint|8 years ago|reply
> Create a culture of writing and deploying secure code.

How?

That may sound glib, but this is really just asking everyone to try, right? I would guess that the vast majority of security mistakes stem from ignorance, not apathy, and that most coders are trying. Relying on people trying clearly isn't working, because there's simply too much to know and it requires too much constant attention.

I think we actually do need better tooling, in terms of things like using type systems to flag sensitive data and automatically suggesting a threat modeling report include that item.

The suggestion that people spend a lot of effort all the time is clearly not going to work -- why can't we ease that barrier by focusing on better tooling so security becomes a natural part of the process, enforced by actual mechanisms?

[+] module0000|8 years ago|reply
You can't control humans unfortunately. Humans write code, and some of them will care more about the quality of work than others do. These people will at some point work above/below/with you, and their mistakes will cause you some sort of inconvenience.

My mother taught medical school, and she had a saying... "What do you call the least qualified idiot who passes my class?", the answer is "Doctor". There are good coders and bad coders, and unless we start somehow forbidding the bad(but still good enough to get hired) ones to work with/for us - this isn't going to change.

[+] IncRnd|8 years ago|reply
We've created the beginnings of this type of culture at my employer. More and more, developers reach out to the security team to point out their security conscious thoughts on how to improve new features or the existing codebases.

But, this took over a decade. It certainly didn't happen overnight. The inception for this is in training, a security team that markets itself, and in constant communication with managers.

Key to this is that devs don't set the direction of their coding projects, which is where many security teams get it wrong. The C-level on down to team managers set the direction. They need to be sold on the security team and the strategic value of their output. Just like passing functional tests, passing security tests needs to be part of a minimum viable product in a way that doesn't stop shipment of products.

> The suggestion that people spend a lot of effort all the time is clearly not going to work -- why can't we ease that barrier by focusing on better tooling so security becomes a natural part of the process, enforced by actual mechanisms?

Because, in 2017, developers need to understand more about security than they did just three years ago. Universities really haven't kept up with this. The reason you can't push security off to tooling, other than the minimal low-hanging fruit, is that human ingenuity is required. Vulnerabilities happen from wrong function calls, yes, but also from using the wrong types of calls for the particular purpose.

[+] mtgx|8 years ago|reply
I think it's less about the culture and it's more about the big language developers making their languages safe by default for 90% of the developers using them.

The "culture" aspect may come in when you're encouraging people to use languages such as Rust over C++, rather than something like "always follow this 50-point security checklist no matter what language you use".

[+] alexnewman|8 years ago|reply
Every big bug I have seen has been known by a developer and not ticketed and triaged. Apathy, sometimes; ignorance, more often; lack of organization, almost always.
[+] tofflos|8 years ago|reply
The article mentions home-grown authentication and authorization mechanisms and suggests that we stick to proven solutions. The problem is that, at least within the Java community, library, framework, and application server authors are not providing easy-to-use solutions that integrate well with applications. Instead there are a bunch of complex solutions that require manual configuration, proprietary extensions, and arcane programming models for something that sits in front of the application, making it difficult for application authors to provide a seamless user experience. No wonder so many people are rolling their own.

This is why JSR-375 was created. It needs to happen! I've tried the reference implementation and it was awesome! If you're working on the JSR or the RI, I'm rooting for you! But I don't know if anyone is working on them these days.

[+] mmcnl|8 years ago|reply
Perhaps security isn't as easy as (often self-proclaimed) security experts think it is. Unlike them, developers don't devote 100% of their time to security. I couldn't care less about people standing on the sideline yelling at me what I can't do. How about proactively seeking out and suggesting meaningful improvements which actually help increase security?

Security in big corporations often boils down to a unit of people ranting about everything and nothing, and telling people what they can't do, while in fact, they should be doing the opposite.

[+] tim333|8 years ago|reply
It's always seemed to me that the web2py approach of providing a secure starter app with auth included and then letting developers break it if they want seems quite a sensible way to go. Not sure how well that works in other frameworks. http://www.web2py.com/book/default/chapter/01#Security
[+] BrandoElFollito|8 years ago|reply
It is a shame that A10 and A7 were rejected.

In our mobile world, APIs are often unprotected because authentication is hard for machine-to-machine transactions. OpenID and the often-misused OAuth are a solution, but they are hard to implement.

A7 addressed an organizational issue completely absent from the top 10.

Since there are so many controversies, they should have made it a top 12.

[+] JeanMarcS|8 years ago|reply
> This means that the malicious script can read the user's cookies, session tokens, stored usernames and passwords, or files on a local hard drive.

I've seen those. On a website for a company who hired me for building their server infrastructure. The password was in clear text in the main cookie.

I flagged it and the dev team corrected it. It was only 3 years ago...

[+] partycoder|8 years ago|reply
A functional prototype is not finished software, but it is for many people considered to be a product.

Functional prototypes in many cases do not even implement their functional requirements properly, let alone the non-functional ones like security.

Security in any form is not a priority for many startups. Especially the ones that aim to be acquired before their hot potato blows up.

[+] vacri|8 years ago|reply
Why should the top 10 change? We still secure our houses with locks, secure our neighbourhoods with police, secure our borders with armies. We drive safer cars these days yet we still secure our road edges with barriers at dangerous points. Why would the categories of risk change on a bi-annual basis?
[+] MattPalmer|8 years ago|reply
One might hope that these low hanging fruit would be addressed, leaving more sophisticated attacks to fill the top 10.

Buffer overflows used to be a major vulnerability. These only stopped being such a major problem when languages that prevented them became widely used.

The lesson is probably that developers and the business don't have the time or inclination to address them, and the best defence is to make the problem impossible rather than relying on good security practices being followed.

[+] ianamartin|8 years ago|reply
Also, a response to some of the mitigations suggested here:

1. Prevent people from reusing passwords from other websites/lists.

Fail: you shouldn’t know if the pw is the same as any other pw. If you can tell, you are already doing it wrong.

PW + random salt protects you against reused passwords. If your application is able to compare other passwords to the current password, not only did the other site fuck up, but you did too.

(re)Captcha: fuck you. Even if it’s after the second failed attempt. Fuck you. I hate you.

You are implementing security theater, making everything worse for the user, and killing your conversion rate for everyone but spammers, who have this down pat.

Pushing X number of rules, whether they require special characters or not: 8 vs. 10 doesn't matter that much.

Push passphrases instead.

Multi-factor is sort-of okay, but the implementations are garbage and the user experience is awful.

I’m not a security expert or a researcher. I’m a data engineer with a lot of web app experience.

But most of the advice in this thread is total garbage.

Web apps need to find a way to make the gold-standard of authentication accessible to users: per-device public/private key pairs.

Until we do that well, we suck at life and our jobs compared to native apps.

I include myself when I say that we have held ourselves to an incredibly low standard.

OWASP is a pathetically low bar. Yet we often fail.

It’s time to step up our game, people. And it’s on us to do it.

[+] jjnoakes|8 years ago|reply
> PW + random salt protects you against reused passwords.

That only protects you if every other site does it. If you salt your passwords and some other site which doesn't is compromised, you are hosed too if your users reused passwords.

> If your application is able to compare other passwords to the current password, not only did the othe site fuck up, but you did too.

Sorry, don't follow. How is it a mistake to compare a password your user is entering to a known blacklist of compromised passwords?