Hi all, Alex here, CTO at Covve. We've just been alerted to incident db8151dd. We're investigating, as a top priority and together with our security experts, what relation it may have to Covve. We're monitoring the feedback in this thread and would really appreciate any additional information you may have as we investigate (alex@covve.com).
service_bus|5 years ago
You're either going to have logs pointing to an IP that the individual used to siphon your data, or nothing.
With an exposed Elasticsearch database, the data may already have been siphoned by many parties, and you're only aware of it now because of this particular incident.
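If you do have access logs, the first practical step toward that "IP that siphoned your data" is to rank client IPs by bytes served. A minimal sketch, assuming a combined-format access log; the sample lines and field positions are made up, so adapt the regex to your own log layout:

```python
import re
from collections import defaultdict

# Combined-log-style line: client IP first, response bytes after the status code.
LOG_LINE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "[^"]*" \d{3} (\d+)')

def bytes_by_ip(log_lines):
    """Sum response bytes per client IP across access-log lines."""
    totals = defaultdict(int)
    for line in log_lines:
        m = LOG_LINE.match(line)
        if m:
            totals[m.group(1)] += int(m.group(2))
    return totals

# Invented sample lines for illustration only.
sample = [
    '203.0.113.7 - - [10/May/2020:03:14:07 +0000] "GET /_search?size=10000 HTTP/1.1" 200 52428800',
    '198.51.100.2 - - [10/May/2020:03:14:09 +0000] "GET /index.html HTTP/1.1" 200 1024',
    '203.0.113.7 - - [10/May/2020:03:15:01 +0000] "GET /_search?size=10000 HTTP/1.1" 200 52428800',
]
totals = bytes_by_ip(sample)
top = max(totals, key=totals.get)  # the IP that pulled the most data
```

An IP pulling orders of magnitude more than the rest, especially against `_search` endpoints, is the one to chase; if the instance was wide open with no logging, you're in the "nothing" case above.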
If you have any operations involving customers in Europe, you need to notify your relevant Data Protection Authority:
https://edpb.europa.eu/about-edpb/board/members_en
You should also sign your engineers up for this course:
https://www.elastic.co/training/specializations/elastic-stac...
outworlder|5 years ago
sigh
Why is everything being deployed publicly accessible? If you're relying on your database configuration as your only protection, you're one fuckup away from disaster.
Layers, people, layers. If this is on a cloud provider, put it in a private VPC/subnet. Add a load balancer or similar, serving traffic only to the instances you need traffic routed to (which are unlikely to be the databases themselves; more likely web servers). Configure firewalls accordingly. And, of course, configure the servers properly.
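One cheap layer is auditing your firewall/security-group rules for exactly this mistake: a database port open to the world. A sketch over a simplified rule format of my own invention (not any cloud provider's real API), which you'd adapt to whatever your provider's rule listing returns:

```python
# Common database ports: Postgres, MySQL, MongoDB, Elasticsearch, Redis.
DB_PORTS = {5432, 3306, 27017, 9200, 6379}

def world_open_db_rules(rules):
    """Return rules that allow 0.0.0.0/0 into a known database port."""
    return [
        r for r in rules
        if r["cidr"] == "0.0.0.0/0" and r["port"] in DB_PORTS
    ]

# Invented example rules.
rules = [
    {"name": "web", "port": 443,  "cidr": "0.0.0.0/0"},    # fine: public HTTPS
    {"name": "es",  "port": 9200, "cidr": "0.0.0.0/0"},    # the classic mistake
    {"name": "pg",  "port": 5432, "cidr": "10.0.1.0/24"},  # fine: private subnet only
]
bad = world_open_db_rules(rules)
```

Run something like this on a schedule; an Elasticsearch port open to 0.0.0.0/0 should page someone, not sit there until a researcher finds it.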
the_mitsuhiko|5 years ago
The entire company is in the EU. They need to reach out to their DPA ASAP.
michaelcampbell|5 years ago
mdip|5 years ago
Kudos for reaching out to the greater HN community as a channel for information. A lot of companies worry that such a public request gives the impression that they don't have a handle on things. Let's be honest: there isn't a company on the planet that, immediately following a breach, has a handle on things. Honesty is a prerequisite to re-establishing trust in the (seemingly likely) event that this is a breach of your customers' data.
I don't envy the position you're in. By now, you've hopefully downloaded the data dump[0] and compared it against your own data to confirm whether or not it's a breach of your own operations. If you confirm it's your data, please put out a communication as soon as possible: immediately after closing off access to the data (and I'd consider taking the whole thing offline[1]), and before you take the additional steps to protect your environment from breach.
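The comparison itself can be done without laying raw dump emails alongside production data: fingerprint the identifiers on both sides and intersect. A minimal sketch with invented sample data; field names are assumptions:

```python
import hashlib

def fingerprints(emails):
    """Normalised SHA-256 fingerprints of a collection of email addresses."""
    return {hashlib.sha256(e.strip().lower().encode()).hexdigest() for e in emails}

# Invented sample data for illustration.
dump_emails = ["Alice@example.com", "bob@example.org"]
our_emails = ["alice@example.com", "carol@example.net"]

overlap = fingerprints(dump_emails) & fingerprints(our_emails)
match_rate = len(overlap) / len(fingerprints(our_emails))
```

A high match rate against your customer table is about as close to confirmation as you'll get without row-level comparison; a near-zero rate suggests the dump came from someone else's operations.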
The next step is to lock it all down, everywhere. Rank the risks associated with your data and bubble that up to the components that touch it. Encrypt data and protect your private keys (HSM/virtual HSM). To the extent possible, segregate your data by risk, assign separate accounts to different risk categories, and ensure lower-risk accounts lack permissions to the data and cannot acquire the key to decrypt it. Your "Staging", "Development" and "Test" databases: any chance they hold a snapshot of production from some point in the past[2]?
Reduce the public exposure of your infrastructure: create multiple private networks and ensure data can be accessed only by the thing or things that need to access it, at both the permissions and network layers. Depending on how you're set up, isolate management interfaces to a private network requiring separate authentication in addition to device authentication. Grant permissions to staff on a "minimum required to work" policy. For staff who require day-to-day permissions to high-risk assets, at minimum give them a separate (individually assigned) administrative account to avoid accidental changes. Generally, stick with "this person, and this alternate (bus factor), only, can alter permissions related to accounts used in production infrastructure"; ideally, requiring both for permission changes would be awesome, but I'm not aware of broad adoption anywhere.
Audit the roles assigned to everything. If this is AWS, you're going to be spending some time in IAM-related tools. Look at every account, every permission and everything it's assigned to, and challenge it: does it need this much access? Can I make the access more specific (device-narrowed)? Can I assign less access and achieve the same result? Can I separate out these two services with different risk profiles so permissions can be assigned more carefully?
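That "challenge everything" pass can be partially automated. A sketch that walks policy statements, represented here as plain dicts mimicking an IAM-style policy document rather than any real SDK call, and flags the wildcard grants worth challenging first:

```python
def overly_broad(policy):
    """Return Allow statements that grant '*' (or service-wide ':*') actions, or '*' resources."""
    findings = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        if isinstance(actions, str):
            actions = [actions]
        if isinstance(resources, str):
            resources = [resources]
        if any(a == "*" or a.endswith(":*") for a in actions) or "*" in resources:
            findings.append(stmt)
    return findings

# Invented example policy.
policy = {
    "Statement": [
        {"Effect": "Allow", "Action": "s3:GetObject", "Resource": "arn:aws:s3:::app-assets/*"},
        {"Effect": "Allow", "Action": "s3:*", "Resource": "*"},  # should be flagged
    ],
}
flagged = overly_broad(policy)
```

It won't answer "does this service need this access?" for you, but it turns a vague audit into a worklist, and the wildcard grants are almost always where the worst surprises live.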
All the best -- not a fun situation to be in.
[0] Someone posted one in the comments; it might be gone by now. Dig around the usual places and find a link from a "direct download" site if it's been taken down (aim for mega.co.nz links, which are less costly, or google awesome-piracy for workarounds).
[1] I co-authored large parts of the internal security policy at Global Crossing (carried forward to Level 3) about a decade ago - we had a "Critical" category -- when triggered, a situation call started and didn't end until the issue was stable and root causes/solutions were identified. It also meant "if a device was categorized as being able to be infected (we were often dealing with aggressive malware), it was allowed to be taken offline regardless of impacts to the business" - i.e. the cost of failing to contain this is higher than the cost of turning off customers' service. We threw the switch a handful of times. It was hell.
[2] I used to lose it when I saw people doing this with live customer data... except that I've encountered it on 80% of the projects I've worked on, so I'm numb to it. You can roll fake data pretty easily with various tools (online and CLI); nobody protects staging/test/dev the way they protect prod, and since you've determined you must protect this data in production, you don't want to have to protect dev/staging/test that same way.
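Rolling your own fake fixtures doesn't even need a tool; the standard library gets you most of the way. A sketch with invented names and reserved example domains; seed the RNG so your fixtures are reproducible across environments:

```python
import random
import string
import uuid

FIRST = ["alex", "sam", "jo", "kim", "ravi", "mia"]
DOMAINS = ["example.com", "example.org", "example.net"]  # RFC 2606 reserved domains

def fake_customer(rng):
    """One synthetic customer record; nothing here is derived from real data."""
    name = rng.choice(FIRST)
    return {
        "id": str(uuid.UUID(int=rng.getrandbits(128))),  # deterministic with a seeded rng
        "email": f"{name}.{''.join(rng.choices(string.ascii_lowercase, k=5))}@{rng.choice(DOMAINS)}",
        "phone": "+1" + "".join(rng.choices(string.digits, k=10)),
    }

rng = random.Random(42)  # fixed seed: same fixtures every run
customers = [fake_customer(rng) for _ in range(1000)]
```

A thousand records like this exercise the same code paths as a production snapshot, and if the staging database leaks, the headline is "company leaks fake data" instead of this thread.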
unknown|5 years ago
[deleted]
bob33212|5 years ago
ItsPrivacyFool|5 years ago
[deleted]
dirtydroog|5 years ago
[deleted]