
First American Financial Corp. Leaked Hundreds of Millions of Insurance Records

418 points | PatrolX | 6 years ago | krebsonsecurity.com

164 comments

[+] client4|6 years ago|reply
I did a penetration test for $NATIONALINSURER and they had an FTP site with weak credentials where all the remote offices uploaded claims. Millions of records and scans of SSNs, home addresses, bank information, etc. Their mitigating controls were: we put it behind a firewall.

Then again, I didn't expect much; their MSSQL in prod had SA/SA credentials active.

[+] Someone1234|6 years ago|reply
I did EDI work for several major national and international companies you've definitely heard of. This is all too common; we're talking about millions of dollars of transactions per day flowing over insecure FTP sitting on the internet. VANs originally used dial-up modems to deliver EDI; now they often use insecure FTP.

A few brave companies have tried to put their FTP systems behind VPNs, but the momentum is hard to overcome. What is more popular is firewall rules that only allow large blocks of IPs owned by other vendors they deal with. It is good in theory, until you see how large/diverse some of these blocks are (e.g. all of AWS's Eastern data center).

It was a very loud wake-up call seeing what inter-business stuff looked like. It is the wild west, or a flashback to the 1990s, security-wise.

[+] bloopernova|6 years ago|reply
I'm currently fighting against management dragging their feet on using 2FA.

On HIPAA PHI.

(I know HIPAA doesn't actually mandate 2FA, but it's recommended by many best practices and guides.)

Apparently some tech folks don't like the inconvenience of 2FA.

[+] ownagefool|6 years ago|reply
I've seen retail stores with revenues in the 10s of billions using Telnet for the POS clients in 2019. They also used FTP galore and were worried about the security of the cloud. :)
[+] throwawaymath|6 years ago|reply
Depending on how weak those credentials were, this sounds like something you should report to Krebs as well.
[+] zhte415|6 years ago|reply
A lot of discussion on the technical side, but not on the organisational one.

How could audit, both internal and external, not find this? 2003 to today is 16 years. Audit is a last line of defence and certainly not to be relied upon as a buddy to catch your errors. But... how? This is a major financial institution in the most developed country in the world (the clue's in the name). It should subscribe to the highest integrity and tightest scrutiny. This seems an opportunity for both internal and external auditors to tighten their game.

Outside of audit, surely an employee might have noticed? Was there no formal method to speak up without fear of recrimination? According to Wikipedia [1] there are eighteen thousand employees. Someone never noticed?

This seems an organisational failing, not a technical one.

[1] https://en.wikipedia.org/wiki/First_American_Corporation

[+] jordanbeiber|6 years ago|reply
Isn't the tech part of the organization's way of doing business?

These things are highly related to what’s going down in a thread [1] from yesterday (about “shitty projects”).

I’m sure these guys spend many millions each year on security products, but either the people in the know on the tech side are ignored, or they have no competence left.

In the thread I mention above I have actually posted about my general experience from a major insurance player.

A concrete example:

We were making changes to a custom software and as there were concerns about bandwidth requirements and latency I took it upon myself to figure out what a specific process looked like, from the business perspective.

In short, in the middle of the workflow, customers' journals were written to CD and mailed to physicians. Encryption? Eh, no... Any process in place to ensure safekeeping and return/destruction? Uh, forget about it...

This was in the time when a lot of these “lost usb devices” and hacked systems seemed to pop up daily.

I obviously raised this with the security team, the security officer and the business unit.

No one wanted to touch this finely tuned business process.

It felt like I was working at Fawlty Towers.

Again, that companies have drawn this line between business and tech, “‘cause tech is not core bidniz”, will haunt a lot of big players for years to come.

[1] https://news.ycombinator.com/item?id=19998806

[+] toong|6 years ago|reply
And this is from a huge organisation. There are many more medium-large organisations that still operate under the Chinese-walls model: perimeter defense, but once you are inside the VPN/intranet the security is a lot more relaxed (if any). That is the security culture and very hard to change.

The market forces those orgs to start offering services online. They run those (relaxed-security) services inside their intranet, so they start poking holes in their firewall. The next decade is not going to be pretty in that regard.

[+] nerdponx|6 years ago|reply
You know what's a great incentive to actually care about this stuff? Legal consequences for not caring about it.

Anyone conceivably responsible for ignoring the developer's complaint should be on trial right now.

[+] TomVDB|6 years ago|reply
You can access documents all the way back to 2003.

That doesn’t necessarily mean that this hole has existed that long.

[+] angry_octet|6 years ago|reply
Whenever you are compelled to upload/send a photocopy of an ID document it is sensible to write the date and purpose / file reference on it. If it appears in a document dump at some later date you know the path and date of the leak.
[+] zhte415|6 years ago|reply
This is excellent advice. You might not be able to write on, for example, a passport you're taking a photo of, but you can certainly put a Post-it-type sticker on it.
[+] OrgNet|6 years ago|reply
that is a good idea but most of the time I need to hand over my actual ID, and not just a scan of it
[+] throwawaymath|6 years ago|reply
Yet another security vulnerability caused by:

1. Using sequentially incremented integers as object IDs, and

2. Failing to protect sensitive data using some kind of authentication and authorization check.

This is becoming a trend with data breaches. Several of Krebs' other reports on behalf of security researchers were originally identified by (trivially) walking across object IDs on public URLs.
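The two-part failure described above can be sketched in a few lines. This is a minimal illustration with hypothetical data, not First American's actual system:

```python
# Minimal sketch of the IDOR pattern described above (hypothetical data).
# Sequential IDs make every record guessable; the real defence is checking
# that the requester is authorized for that specific record.

DOCUMENTS = {
    1001: {"owner": "alice", "body": "alice's escrow file"},
    1002: {"owner": "bob", "body": "bob's escrow file"},
}

def fetch_document_vulnerable(doc_id):
    # No ownership check: anyone who guesses doc_id gets the record.
    return DOCUMENTS.get(doc_id)

def fetch_document(doc_id, requester):
    # Authorization check scoped to the individual record.
    doc = DOCUMENTS.get(doc_id)
    if doc is None or doc["owner"] != requester:
        return None
    return doc

# An attacker walking sequential IDs hits every record in the vulnerable version:
leaked = [d for i in range(1000, 1010) if (d := fetch_document_vulnerable(i))]
assert len(leaked) == 2

# With the ownership check, the same walk yields only your own records:
visible = [d for i in range(1000, 1010) if (d := fetch_document(i, "alice"))]
assert len(visible) == 1
```

The point is that the ID scheme only controls how cheap enumeration is; the missing authorization check is what turns it into a breach.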

My cynical take is that Krebs couldn't go public before this afternoon because First American wanted it to hit the news at an opportune time, then get ahead of it with their own messaging. Krebs got in touch with First American on Monday May 19th. The story is only just breaking now on a Friday afternoon at 5 pm; markets are conveniently closed for the weekend.

I expect them to issue a hollow PR statement about valuing security despite being unable to act on security reports until an investigative journalist threatens to go public.

[+] mattmanser|6 years ago|reply
I once made an app not using sequential integers as object ids, as you suggest.

It was an absolute maintenance nightmare. You're constantly having to generate or replicate these things, which adds an extra layer of complexity to everything, almost always unnecessarily.

It's also extremely bad for DB performance: it causes massive page fragmentation, indexes become useless almost immediately after rebuilding them, etc.

For almost everything, sequential int IDs are fine. It's the things you expose to users that you need to be careful with; for those, don't use the primary key to access them. Add another unique key, but keep the int id in there for the database to use and for your own use.

My lesson was to go back to always using int ids, and on a few objects have a separate unique key column to expose to users for sensitive stuff.
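The parent's pattern can be sketched concretely: a sequential integer primary key for the database's benefit, plus a separate random key that's the only thing ever put in a URL. Table and column names here are illustrative, not from any real system:

```python
# Sketch of "int PK internally, random key externally" (illustrative names).
import sqlite3
import uuid

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE documents (
        id        INTEGER PRIMARY KEY,   -- internal, sequential, never exposed
        public_id TEXT UNIQUE NOT NULL,  -- random, safe to put in URLs
        owner     TEXT NOT NULL,
        body      TEXT NOT NULL
    )
""")

def create_document(owner, body):
    public_id = uuid.uuid4().hex
    conn.execute(
        "INSERT INTO documents (public_id, owner, body) VALUES (?, ?, ?)",
        (public_id, owner, body),
    )
    return public_id

def fetch_document(public_id, owner):
    # External lookups use the random key, still scoped to the owner.
    row = conn.execute(
        "SELECT body FROM documents WHERE public_id = ? AND owner = ?",
        (public_id, owner),
    ).fetchone()
    return row[0] if row else None

key = create_document("alice", "closing paperwork")
assert fetch_document(key, "alice") == "closing paperwork"
assert fetch_document(key, "mallory") is None  # wrong owner
assert fetch_document("1", "alice") is None    # sequential guesses fail
```

Joins and foreign keys stay on the narrow integer `id`, so index locality is preserved; only the externally visible surface pays the cost of randomness.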

[+] scarface74|6 years ago|reply
There is nothing wrong with using sequential ids in and of themselves.

The typical web app has the concept of a validated user session per request. How hard is it really to

  Select ... From Documents where documentid = ? and userid = ?

So even if the user does a

  GET /Document/{id+1}
No documents would be returned.

Every web framework that I am aware of lets you add one piece of middleware that validates a user session and won't even route the request if the user isn't validated.
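The two pieces above, a session check that runs before any handler plus queries scoped to the session's user, can be sketched framework-agnostically. The names and the tiny token store here are hypothetical:

```python
# Sketch of the middleware idea above: one session check before any handler,
# plus lookups keyed on (doc_id, user). Stores and names are hypothetical.

SESSIONS = {"token-abc": "alice"}  # session token -> user id
DOCUMENTS = {(42, "alice"): "alice's file", (43, "bob"): "bob's file"}

def require_session(handler):
    def wrapper(token, *args):
        user = SESSIONS.get(token)
        if user is None:
            return (401, None)  # never even routes to the handler
        return handler(user, *args)
    return wrapper

@require_session
def get_document(user, doc_id):
    # The lookup is keyed on (doc_id, user), so GET /Document/{id+1}
    # for someone else's document simply returns nothing.
    body = DOCUMENTS.get((doc_id, user))
    return (200, body) if body else (404, None)

assert get_document("token-abc", 42) == (200, "alice's file")
assert get_document("token-abc", 43) == (404, None)   # id+1 probe fails
assert get_document("bad-token", 42) == (401, None)   # unauthenticated
```

In a real framework the decorator would be the framework's auth middleware and the dict lookup would be the parameterized `WHERE documentid = ? AND userid = ?` query from the comment above.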

[+] bryant|6 years ago|reply
(It's apparent that my initial reply didn't resonate, so I've made substantial edits to my reply for clarity's sake. If you've read it once, give it another read; it's from the angle of an organization with much in the way of legacy impairment.)

> Yet another security vulnerability caused by...

I mean, yes, but these are also some of the easiest vulnerabilities to miss even with out-of-the-box static analysis (code scanning and data analysis), automated dynamic analysis (pentests [edit to clarify for tptacek: automated pentests]), and a basic code review process. They're usually identified in live environments during manual penetration tests or, in more security-mature environments, with custom static analysis checks and custom linting rules.

As for best-case prevention: it's generally accomplished architecturally, e.g. language/framework decisions that enforce secure coding practices by design, or implementing patterns in development which whisk away some of the riskier coding decisions from engineers who may not be qualified to make them, such as mandating authn/z and limiting exceptions to roles and change processes qualified to grant them. Checks such as linting for specific privacy defects (direct object referencing using sensitive data or iterative identifiers as opposed to hashes/GUIDs/etc.) can help catch them during development, and as you might've guessed, such checks tend to be custom for a given environment rather than out of the box.

I distinctly recall a card issuer whose name starts with a C in the United States having an http endpoint which allowed for enumerating account details by iterating full PANs (16 digit card numbers)... around a decade ago. Here we are today, and you're seeing the same bugs continue to arise.

In organizations with immature security practices, remediation is often ruled out simply because the defects' existence isn't known, so practices traditionally reserved for defense-in-depth may need to be relied upon instead (think monitoring web requests for anomalous behavior and blocking traffic when detected) rather than trusting that one can fix all the defects. Even then you'll still lose a few records... but that might be the only solution available to you as a CTO, CIO, or CISO simply because of resource constraints and bureaucracy in an entrenched org, e.g. in the financial or insurance space.

--

tl;dr: these defects are among the harder ones to catch for legacy applications especially in environments with weaker security postures, and they're as old as time. What I'm saying is that as much as we can call companies out for making these mistakes in hindsight, their existence in larger legacy systems is to some extent inevitable and must be managed in other ways.

[+] raesene9|6 years ago|reply
By the sounds of it, another breach from a well-known, not new web application security vulnerability, "Insecure Direct Object Reference".

That vuln has been an explicit part of the OWASP Top 10 since 2007...

Unlike other common web app vulns (e.g. XSS, SQLi), IDOR usually can't be fixed by a development framework (e.g. ASP.NET or Rails); it needs app-specific coding for proper authentication/authorization checks.

[+] dmix|6 years ago|reply
> He said anyone who knew the URL for a valid document at the Web site could view other documents just by modifying a single digit in the link.

Good thing he didn't post this bug online after getting no response. I remember reading about someone who did that on an AT&T website a while back and was sent to jail simply for incrementing an id number in the URL and talking about it on Twitter.

[+] luckylion|6 years ago|reply
That was probably about weev, and they were after him long before that case, so it's not likely that it would get some random person (that the FBI doesn't have a file on and an interest in picking up) in the same trouble.
[+] LinuxBender|6 years ago|reply
That is an incredibly low friction interface to our documents. /s

What are the odds they have access logs going back to 2003?

[+] PatrolX|6 years ago|reply
Pretty good, everything was probably set up and configured with default settings by that unpaid intern they had running their infrastructure back in 2003.
[+] reilly3000|6 years ago|reply
I just closed on my first house this week, and First American was of course my title company. I'll be interested to see if my data is included in this breach settlement or not.

I did notice when I was reviewing my docs that they emailed links to unauthenticated copies of docs, but they were mostly public records so I didn't think twice about it.

So they have my Name, address, email, SSN, copy of ID, copy of check from my bank with account/routing on it and much more, all in the open apparently.

I just went through an SSO implementation with a small team for a large user base. It was a bigger project than we had anticipated, but nonetheless manageable. I can't fathom that a financial institution of that scale could be that lax with basic security. Wouldn't their systems be subject to some regulation and require some kind of audit on a regular basis? Is this a failure of auditing systems, as well as internal security or even basic IT?

[+] JimmyDugan|6 years ago|reply
Programmers' fault? Audit's fault? Security's fault? Pentesters' fault? IT's fault?

Listen, until the C-level funds these programs properly and security is taken seriously by all, issues like this will forever be in the news.

I would be willing to bet their security team, like most, has a long list of security gaps they can't get fixed because of resource issues. I just hope they documented it, or it could fall on them.

Most coding classes just teach how to make things work in Mister Rogers' world. Secure coding is an elective! Most run the DevOps model instead of DevSecOps and only involve security after it is ready to go into production, no matter what flaws security finds.

Why are black-box pentests still taking place? Because the company is required to have a pentest but really doesn't want testers to find things. Their goal is not to improve security but rather to check that box... we had a pentest.

C-level: this keep-the-lights-on budget you give Security/IT is costing you more than properly funding us would! Oh yeah, you put that $ into cyber insurance instead. Lol, let's see how well that works.

[+] mjparrott|6 years ago|reply
If the financial penalty was high enough, they would increase budgets. There is no accountability for losing customers' personal information. If you could make a strong business case around the average risk a company takes on, it would help this discussion more. For each example of "company X had a major financial impact," you need to average it out against "company Y lost hundreds of millions of SSNs and had zero penalty."
[+] JimmyDugan|6 years ago|reply
I see a lot of comments on sequential IDs as the issue. Is that really the issue?

Not the fact that John Doe can get to John Doe 2's stuff without authenticating? WTF

Sequential or not, if there's no auth I can run a scanner and get it all, so what does that have to do with the price of tea in China?

[+] zeroDivisible|6 years ago|reply
I like how this news was posted on Friday afternoon before the Memorial Day weekend.
[+] jammygit|6 years ago|reply
Where does one go to learn how to not cause this one day?
[+] JimmyDugan|6 years ago|reply
Lol, everyone fights security and it is way underfunded, so they can only get like 1 out of 100 risks fixed, but it must be security's fault.
[+] gesman|6 years ago|reply
"First American has learned of a design defect in an application that made possible unauthorized access to customer data. At First American, security, privacy and confidentiality are of the highest priority and we are committed to protecting our customers’ information..."

Who is coming up with these statements?

If you kept royally screwing something up for years that you claimed to be your "highest priority", then what can one expect from your normal lines of business?

[+] Nicksil|6 years ago|reply
>At First American, security, privacy and confidentiality are of the highest priority and we are committed to protecting our customers’ information.

is such a meme.

Things will continue this way until there are serious repercussions for entities carelessly handling data.

[+] saagarjha|6 years ago|reply
The next sentence too:

> We are currently evaluating what effect, if any, this had on the security of customer information.

It's downright dishonest to even say "if any": they were presented with concrete examples of leaking customer information; they don't get to wonder whether it had an effect on their security anymore.

[+] snovv_crash|6 years ago|reply
At some point people will realise that holding large quantities of sensitive information is a liability, not an asset. Mindsets are slowly changing in this direction already.

The chickens will continue to come home to roost until people treat digital security as seriously as physical security.

[+] munk-a|6 years ago|reply
While everyone here says "Oh that's terrible!" the market says "Oh that's terr-SQUIRREL" and then forgets it ever happened. Additionally, no appropriate fines have been levied nor jail time handed out for this sort of thing; right now the sane approach (money-wise) is to just occasionally have a breach and offer up an apology.
[+] OldHand2018|6 years ago|reply
> The chickens will continue to come home to roost until people treat digital security as seriously as physical security

Do people take physical security seriously? It doesn't seem like it.

Anyway, when I was an undergrad in the 1990s and took a computer security class our professor (Gene Spafford) talked about security being primarily an economic question. And that is generally how security, both physical and digital, has been treated since forever. And how it will always be.

The economic and physical damage caused by poor digital security is a rounding error compared to everything that happens in the real world.

As long as you understand that the following link is at least partly tongue-in-cheek, you may find this to be an entertaining read:

Cybersecurity is not very important http://www.dtc.umn.edu/~odlyzko/doc/cyberinsecurity.pdf

[+] guitarbill|6 years ago|reply
Only if there are laws making it a liability, since investors don't seem to care much in the long term.
[+] PatrolX|6 years ago|reply
I wonder if we should take mandatory breach reporting a step further too and require them to list all security vendor products and services that were in place at the time of the breach.

Should security solution vendors be held to account for failing to live up to the bold claims they make?