
British Airways faces record £183M fine for data breach

232 points | adzicg | 6 years ago | bbc.com

116 comments

[+] K0nserv|6 years ago|reply
> At the time, BA said hackers had carried out a "sophisticated, malicious criminal attack" on its website.

Compromising a single JS resource that was being carelessly loaded on a payment page doesn’t qualify as sophisticated in my mind. It might not be uncommon in the industry, but tools like SRI and CSP stop these attacks dead in their tracks.

I believe we are about one huge attack[0] of this kind away from realising how dire the situation truly is.

As a victim of the earlier Ticketmaster attack, I’m curious as to whether the ICO is investigating that too.

0: https://hugotunius.se/2018/11/29/how-to-hack-half-the-web.ht...

[+] pjc50|6 years ago|reply
So: now what?

This is an absolutely vast fine. There have to be companies around the world that are all of a sudden going to take the security of Javascript seriously, who already have a big web app that pulls in hundreds of scripts compiled from tens of thousands of NPM modules. Is security band-aiding going to be applied, or are we going to have to see a radical re-architecting that acknowledges that every dependency is a liability as well as a benefit?

[+] zemnmez|6 years ago|reply
> but tools like SRI and CSP stop these attacks dead in their tracks.

This isn't... usually true. In the case of this attack, the JavaScript was appended directly to a 'known good' library resource[1]. Typically CSP whitelists based on origin, which would make CSP ineffectual in this case. Also, websites like BA have advertising, and it's nearly impossible to run CSP with ads unless you use the `strict-dynamic` directive, which whitelists any JavaScript loaded by your JavaScript recursively... including this JavaScript[2].

There are other, more uncommon modes for CSP which you might be referring to:

1. Nonce, which provides protection by ensuring each script tag carries a server-generated one-time-use token. This would have had no effect, as the compromised script was one the server itself intended to load, so it would have been served with a valid nonce.

2. Hash[2], which provides protection by checking the script's content against a predetermined hash. This could possibly be effective, but in practice rarely is. If the attackers could edit this script, there's no reason to believe that (1) the deployment process wouldn't simply regenerate the hash, or (2) the attackers couldn't just update the hash as well (this is essentially the same situation as with SRI). Hash is potentially effective when loading from a third-party CDN with known content that should not change, but by the look of the URL: http://www.britishairways.com/cms/global/scripts/lib/moderni... the resource was not on a CDN.

[1]: https://medium.com/asecuritysite-when-bob-met-alice/the-brit... [2]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Co...

[+] CuriousSkeptic|6 years ago|reply
How would SRI and CSP help in this case? Wasn’t the compromised script served from their own infrastructure? And if hackers had access to that, why wouldn’t they be able to change the SRI and CSP too?
[+] planetjones|6 years ago|reply
Exactly, it was a crude and unsophisticated attack. I am pleased the ICO looked at the facts, not the spin, and recognised the negligence of BA.
[+] Silhouette|6 years ago|reply
> It might not be uncommon in the industry, but tools like SRI and CSP stop these attacks dead in their tracks.

It depends on what you're using them for.

If you're talking about static resources loaded from something like a CDN -- in other words, a situation where you also control the intended content to be downloaded -- then SRI is potentially useful. But how could that sort of system work if you rely on a payment service to supply the client-side scripting that runs behind the credit card form on your site and turns the user-provided card details into a token of some sort?

For example, if you use a service like Stripe, you're almost certainly loading a script from their servers that they control to run the card details form on your site. Between PCI-DSS rules and the obvious need for Stripe to be able to deploy changes quickly in the event of discovering a vulnerability, you can neither self-host the equivalent script nor provide any useful checksum or similar that is guaranteed to remain correct.

I suppose in theory Stripe could provide an API that you can access from your servers to fetch the current checksum for the current version of a resource like https://js.stripe.com/v3/, and then you could render your link to that script with the integrity attribute included when you serve your page with the card details form, and as long as no-one breaks anything with a cache or the like then that might work. I'm not quite sure what attack vector it would be guarding against though, since any scripts like this will surely already be served over HTTPS and therefore trigger warnings if anyone without Stripe's cert tries to serve them via some sort of MITM attack.

So SRI and the like take care of some aspects of this problem, but given you might be relying on a lot more external services than just (presumably relatively careful) ones like Stripe, it doesn't solve the whole problem.
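A purely hypothetical sketch of the checksum-API idea above. Stripe offers no such endpoint today; `fetchChecksum` and its response shape are invented for illustration:

```javascript
// Hypothetical: render a payment provider's script tag with an integrity
// attribute fetched at render time. fetchChecksum stands in for a call like
// GET https://js.stripe.com/v3/checksum -> { integrity: 'sha384-...' },
// an endpoint that does not actually exist.
async function renderPaymentScriptTag(fetchChecksum) {
  const { integrity } = await fetchChecksum();
  return `<script src="https://js.stripe.com/v3/" integrity="${integrity}" crossorigin="anonymous"></script>`;
}

// Stand-in for the imagined API call:
renderPaymentScriptTag(async () => ({ integrity: 'sha384-EXAMPLE' }))
  .then((tag) => console.log(tag));
```

Even then, as the comment notes, caching anywhere between the checksum fetch and the script fetch could make the two fall out of sync.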

[+] philjohn|6 years ago|reply
Similar to the way Ticketmaster was attacked... that wasn't fun.
[+] EnderMB|6 years ago|reply
I'm glad to see a solid fine given for a data breach.

I've worked on projects in this sector before, and it's a common story to others - client cuts cost as much as possible, until the risk of an inferior product has grown too high to handle. It's a race to the bottom, and security rarely comes into consideration outside of a basic pen test being mentioned (if it happens).

Still, I'm quite annoyed at the lack of follow-up against what is blatant bullshit from BA. When your business is so heavily reliant on taking payments online, their security procedures should be airtight. I can understand that it's quite a clever hack, but it's security 101 to know what third-party code is doing on your server.

The fine is good, but it would be nice to enforce rules where a company caught in a data breach has to accept liability and not contest the severity.

[+] jrpt|6 years ago|reply
For those who are wondering what happened, British Airways’ website had malicious JavaScript included in some files it was using. A compromised third-party library (in this case, the Modernizr library) was the attack vector. The malware would surreptitiously take sensitive data off the webpage and send it back to the hackers.

Most sites still would have no idea if this were to happen to them today.

That’s why I’ve developed Enchanted Security (https://enchantedsecurity.com/) - a virtual content security policy that tracks the network requests and even blocks malicious ones. It’s like a firewall but running on your users’ browsers. This would’ve prevented what happened to British Airways. Get in touch if you’re interested in learning more.

[+] donaltroddyn|6 years ago|reply
Interesting approach, but doesn't using your product add another attack vector? Surely Enchanted's CDN will be a rich target for attackers.

Full disclosure: I'm in private beta with a product that periodically loads production sites and compares all executed JS to the last-known-good profile (usually from CI prior to deployment), and raises warnings if anything changes. We don't run in actual users' browsers, so we won't see malicious code as early as Enchanted, but you don't have to trust us.

[+] buro9|6 years ago|reply
3 things I don't understand about your product (but I like the idea in general of helping people get over the initial cliff of enabling CSP):

1. How does including a bit of JavaScript apply a CSP and block things if the CSP is not sent via HTTP headers? Are you installing a service worker and proxying every request through that so that you can apply the blocking or something? (I have not looked into this area for a while)

2. How is adding another 3rd party resource not expanding the attack surface rather than reducing it?

3. How do you know what is good/bad and should be allowed/denied?

On #3: I run a few hundred forums that have user-generated content and whitelist a small number of third-party embeds such as YouTube, Google Maps, Strava, etc. I want those to keep working but don't wish anything outside my whitelist to work. But if I expanded the list in future to allow embedding Twitch... how would your system know that this is a "good" action and allow it?
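On question 1, one speculative mechanism is a service worker that intercepts every fetch and checks the request origin against an allowlist; a rough sketch (a guess at how such a product might work, not the vendor's actual code):

```javascript
// Speculative sketch: a service worker that blocks fetches to origins outside
// an allowlist. The allowlist entries are examples, not a real configuration.
const ALLOWED_ORIGINS = new Set([
  'https://www.youtube.com',
  'https://maps.google.com',
]);

function isAllowed(url, allowed = ALLOWED_ORIGINS) {
  try {
    return allowed.has(new URL(url).origin);
  } catch (e) {
    return false; // unparsable URLs are blocked outright
  }
}

// In a browser, this handler would intercept every request the page makes;
// the guard lets the file also load outside a worker context.
if (typeof self !== 'undefined' && typeof self.addEventListener === 'function') {
  self.addEventListener('fetch', (event) => {
    if (!isAllowed(event.request.url)) {
      event.respondWith(new Response('blocked', { status: 403 }));
    }
  });
}
```

Note that a service worker can only intercept requests after it has installed, and question 2 still applies: the worker script itself becomes another third-party resource to trust.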

[+] mardoz|6 years ago|reply
I think this needs some clarification. From my reading of the issue the Modernizr library and its NPM entry were never compromised, instead the version hosted by BA on their website was overwritten by the hackers with one that exfiltrated the sensitive data from the payments pages.

Not only is the version of Modernizr used on the BA website extremely old (2.6.2 was released in 2012) but for hackers to be able to modify hosted scripts demonstrates an extreme lack of care; I wouldn't be entirely surprised if they just hadn't updated the CMS hosting the script.

Some people have been saying that anyone could be caught by this, but I do think that the lack of process that allowed this sort of thing to happen warrants this sort of fine.

[+] planetjones|6 years ago|reply
Seems very just. BA clearly had no controls to understand the code running in production was the code they had deployed. I hope this serves as a wake-up call to other companies that are blatantly negligent about infosec.
[+] buro9|6 years ago|reply
> clearly had no controls to understand the code running in production

This applies to everyone who has advertising or third party anything on their page, no?

[+] ddalex|6 years ago|reply
At my company, we pin dependency versions for external libraries. The libraries are tested, and we take sample traffic snapshots from browsers using automation to see what our users see.

But this approach only takes care of simple, entry-level attacks. A highly targeted attack that lies dormant in a compromised library for years and is engineered to avoid detection, e.g. hiding from certain IPs or removing itself when the debug console is open - this is impossible to defend against, to my knowledge. How would you defend yourself?
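The pinning practice described above can at least be audited mechanically; a small, illustrative check that flags dependencies declared with a semver range instead of an exact version:

```javascript
// Flag dependencies declared with a semver range (^, ~, etc.) rather than an
// exact x.y.z version, as pinned installs use.
function unpinnedDeps(dependencies) {
  return Object.entries(dependencies)
    .filter(([, version]) => !/^\d+\.\d+\.\d+$/.test(version))
    .map(([name]) => name);
}

// lodash uses a range here, so it gets flagged; modernizr is pinned.
console.log(unpinnedDeps({ modernizr: '2.6.2', lodash: '^4.17.0' }));
```

Pinning doesn't address the dormant-attack scenario, though: if the pinned version itself was compromised before you adopted it, the check passes anyway.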

[+] topogios|6 years ago|reply
"The information included names, email addresses, credit card information such as credit card numbers, expiration dates and the three-digit CVV code found on the back of credit cards, although BA has said it did not store CVV numbers."

Is it standard for airlines to handle storing payment card details themselves and hence having to be PCI certified instead of delegating to a PSP?

[+] robjan|6 years ago|reply
Someone injected JavaScript into their pages which collected this information. But, yes, it's standard practice for airlines to store card information (excluding the 3/4 digit code) in the customers' PNR (Passenger Name Record) in the airline's GDS (Global Distribution System). The details are on this page: https://servicehub.amadeus.com/c/portal/view-solution/965353...
[+] ajdlinux|6 years ago|reply
Given that the airline industry actually runs its own payment card network (UATP, which has been around since 1936 apparently) it does not surprise me at all that airlines do much of their payment card stuff in house.
[+] jajag|6 years ago|reply
Statement on the Information Commissioner's website: https://ico.org.uk/about-the-ico/news-and-events/news-and-bl...
[+] Anthony-G|6 years ago|reply
The statement shows that the ICO investigated this data breach on behalf of the data protection regulators of other EU states. As an Irish citizen, I wish our Data Protection Commission was as serious about protecting the personal data of EU citizens.
[+] OliverJones|6 years ago|reply
UK's ICO and other data security enforcers are acting. That's good. They're changing companies' calculus about putting resources into infosec. That's even better.

The public and press perceive that "justice is served," so we're tempted to think the problem is solved. I don't think that's helpful. These fines don't address root causes of the problem. They don't make our systems more resilient.

They're drawing a significant amount of money from the system and transferring it to their governments' general accounts. Is that the best use of that money? Should some of that money be used to help address infosec problems? To fund training for citizens, legislators, and governments? To step up law enforcement efforts against cybercreeps? To publicly fund independent security researchers (white-hat hackers) to help detect this stuff and nip it in the bud? To help subsidize the significant expense of comprehensive infosec audits for municipal governments, NGOs, and small firms?

Here in USA, the National Security Agency has, by hoarding zero-day exploits and inadequately protecting them, done major infosec damage to civil institutions worldwide (UK's NHS, the Baltimore city government, you name it). I suspect similar things have happened in other governments. To what extent is it their responsibility to help clean up the mess? Can other governments use their resources to backfill where the US government can't or won't act?

Do governments now join identity thieves as enemies of people doing infosec? That cannot be good. We have to get this right and we can't do it if we're fighting each other rather than the criminals causing the trouble.

[+] M2Ys4U|6 years ago|reply
>They're drawing a significant amount of money from the system and transferring it to their governments' general accounts. Is that the best use of that money? Should some of that money be used to help address infosec problems?

Well the point of these fines is to say to organisations "put your money into infosec, or when you have a breach it'll probably be your fault and we'll take that money as a punishment".

>Should some of that money be used to help address infosec problems? To fund training for citizens, legislators, and governments? To step up law enforcement efforts against cybercreeps? To publicly fund independent security researchers (white-hat hackers) to help detect this stuff and nip it in the bud? To help subsidize the significant expense of comprehensive infosec audits for municipal governments, ngos, and small firms?

The NCSC[0], GCHQ's defensive side, publishes advice for the public, companies, charities, schools, government departments etc.

[0] https://www.ncsc.gov.uk/

[+] sugarpile|6 years ago|reply
FWIW the NSA has been behind many coordinated 0day disclosure efforts. It’s a balancing act.
[+] jfk13|6 years ago|reply
Just FTR, note that BA might appeal against this, so it may be subject to revision before it's all over...

“BA has 28 days to appeal. Willie Walsh, chief executive of IAG, said British Airways would be making representations to the ICO. "We intend to take all appropriate steps to defend the airline's position vigorously, including making any necessary appeals," he said.”

[+] xhgdvjky|6 years ago|reply
This may be the beginning of the end of hiring front end devs in house. Suddenly they are a serious liability... much nicer if you can pass on the fine to a third party!
[+] barking|6 years ago|reply
I'd say it's likely to be a game changer when it comes to skimping on IT in general, whether in-house or outsourced. It's good news.
[+] olliej|6 years ago|reply
So this is ~$366 per person whose data was compromised. That seems fairly cheap all things considered.

It's a far sight better than the "credit protection" they normally provide (from our point of view, at least, rather than that of companies used to facing no penalties for abusing their customers). Remember, of course, that the typical cost to companies when they settle with "credit protection" is much lower than the already low $30 individuals would have to pay.

I'm also tired of newspapers parroting press releases that say things like "sophisticated, malicious criminal attack". Just like a few years ago, every publicly exposed, default-password service was compromised by "Nation state attackers", and before that by "Advanced Persistent Threats". If you make a claim like this, you should be required to provide the full details of the attack:

- what level of employee account was compromised, and if none was needed, why not? Otherwise, did the targeted employee need the level of access that the attackers used? If not, why did they have it? Simply being a C-level executive does not imply requiring access.

- Did it make use of any software exploits? If it did, were those exploits fixed in released versions? If they were, why was that out-of-date software being used?

- Is your company using established best practices (2FA for all accounts, TLS for all networking, service isolation)?

- Did the compromise come about due to loading content from a third party? If so, how was that code authenticated (multiple browsers support SRI)? Was that code used to support the site functionality, or was it for tracking or advertising?

This seems like a perfectly reasonable bare minimum if you want to support a claim that the compromise was unavoidable.

[+] biddlesby|6 years ago|reply
Do the regulators take into account whether the firm is actually at fault?

Without considering what happened in this specific scenario, surely there are cases where companies take the utmost care, follow standard security principles and still get hacked; or the issue was not with the company operating the website but rather with, say, a hardware manufacturer?

[+] daveoflynn|6 years ago|reply
> Do the regulators take into account whether the firm is actually at fault?

To echo others: yes, a lot. To quote the Information Commissioner:

> "I have no intention of changing the ICO’s proportionate and pragmatic approach after 25 May [the GDPR intro date] ... Hefty fines will be reserved for those organisations that persistently, deliberately or negligently flout the law."

A good overview of the ICO's approach: https://www.pinsentmasons.com/out-law/news/gdpr-uk-watchdog-...

The whole draft policy for how the ICO applies its powers is here. It's a good read, but not short: https://ico.org.uk/media/2258810/ico-draft-regulatory-action...

[+] dmitriid|6 years ago|reply
Yes, the regulators do take all things into consideration. A fine is the final measure.

In BA's case, they ran third-party scripts on account and payment pages without users’ consent, did not remove them even after being alerted to it, and then succumbed to a data breach because of that.

[+] dalbasal|6 years ago|reply
They do in some form. Largely, though, "regulator" action tends to be outcome-based. Relying on "standards" can be difficult. In some cases, standards exist and ignoring them can point to negligence. Conversely, though, standards don't exist for a lot of things, and when they do, they're not a full solution. I.e., it's possible to follow "standard security practices" while still being insecure. If regulators make that a "get-out"... you may as well just have legislation instead of a regulator.

In recent times, regulators and legislators don't understand the problems (maybe no one does) sufficiently to be specific with rules. They demand general things, outcomes (you will not lose data) and general operating principles (you will secure your users' data , have good policies, and enforce them).

Both data protection (eg gdpr) and anti money laundering rules are examples of recent areas that work this way. If a bank's customer has been depositing stolen money, financing terrorism or something... the bank is at risk. Their policies will be examined and circumstances do get taken into account, but the "standards" they're judged against aren't absolute and standards compliance doesn't totally protect them. OTOH, if they don't adhere to their own policies or the policies are bad... it is enough to get them in trouble.

Lawyers, btw, hate this emerging system.

In short, modern "regulator enforcement" is a lot less legible and "letter of the law" oriented than the legal environments we have grown used to.

[+] alkonaut|6 years ago|reply
Excellent. Now I wish they pick another big corporation (just pick one) and hand them a similar fine for using a standard GDPR opt-in-by-default popup.

They need to make it clear through action, not just vague wording, that having a default of allowing all tracking is not ok.

Pop-ups should say “hi and welcome to site X. Click the yellow button to enter with tracking/personalization and the blue button to enter without”.

[+] TomAnthony|6 years ago|reply
Is there a good reason for them not to launch a bug bounty program?

The cost of doing so would be significantly cheaper than any future fines, and would reduce the chances of future breaches.

[+] jeffail|6 years ago|reply
"amounts to 1.5% of its worldwide turnover in 2017"

I imagine that's a significant sum but I'm struggling to get my head around it. If so then good for the ICO I suppose. I remember reading endless comments a few years back speculating GDPR would never have any bite.

[+] dtf|6 years ago|reply
The maximum fine of 4% of parent company IAG's turnover would have been almost €1bn.
[+] Semaphor|6 years ago|reply
> I remember reading endless comments a few years back speculating GDPR would never have any bite.

I remember those. And those about how the GDPR will be bankrupting every company ever.

[+] simion314|6 years ago|reply
It's good that we see more GDPR fines for non-US companies on the front page of HN, because I've seen a lot of US users claiming that only US companies are targeted (there are other non-US examples, but those did not appear or stay on the first page here on HN).
[+] pjc50|6 years ago|reply
Yes, there was always that weird nationalist claim that GDPR was just a protectionist measure rather than a sincere attempt to achieve its stated aims.
[+] sbhn|6 years ago|reply
So who gets the money? The people who had their data stolen?

Who gets the money are the people who create laws. The more crimes committed, the safer their jobs are. The people who had their data stolen are now on a register sold to the insurance industry, and the insurance industry decides they are a greater risk to insure, so the costs to the consumer go up. Strange how crime really drives the economy.

[+] claudius|6 years ago|reply
> Who gets the money are the people who create laws. The more crimes committed, the safer their jobs are.

Huh? Of course the jobs of people working at the ICO etc. are slightly safer if more criminal activity happens, but office workers at the ICO do not get that money. It goes to the Treasury and hence, by extension, the British public.

> The people who had their data stolen are now on a register sold to the insurance industry, and the insurance industry decides they are a greater risk to insure, so the costs to the consumer go up. Strange how crime really drives the economy.

A fine punishing criminal activity almost never goes to the actual victim of the crime; the victim is instead compensated in a separate payment. Of course, it would be nice if, in addition to this fine, some kind of blanket compensation mechanism (e.g. 1000€ per datum per person) were installed.

[+] dtf|6 years ago|reply
The Treasury. Fines are sent to the Consolidated Fund (the government's general account at the Bank of England).
[+] mirekrusin|6 years ago|reply
Yeah, people got a "we're sorry" paragraph and that's about it. They won't get a "we're sorry" paragraph when their bank accounts get wiped, though.
[+] dijit|6 years ago|reply
This could be a death knell for BA; my friend’s father is a high-level manager there, and if he’s to be believed, they are running on very thin margins.

Mostly due to compensating employees fairly in the ’90s and early 2000s. Now they’re desperately trying to remove those compensation packages.

Although it could just be cost aversion masquerading as a hard requirement.

[+] sneak|6 years ago|reply
Too bad this will all go to the state and not to any of the people who were actually damaged in the breach. :/