It's been quite a few years since I did this kind of stuff for a living, so this may be an antiquated notion...
"In my day," desktop computers saved their files to a server. That server would get backed up daily. The backup tapes/drives would be stored offline and rotated to an offsite location. (Back then you were more concerned about the building burning down than a ransomware attack.) The same would be true for any apps running on servers; their data/databases would be backed up daily and the tapes/drives used for backup would be stored elsewhere.
What is this old guy missing? If a process like this were in place, nearly all of their data would be intact. Yes, it will take some time to do a full restore and you will be missing some amount of data that was created since the last backup. But it's survivable in many cases. And you're not negotiating with criminals.
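The rotation scheme described above can be sketched in a few lines. Here is an illustrative grandfather-father-son schedule in Python; the tape names and cadence are my own assumptions, not anything the comment prescribes:

```python
from datetime import date

def tape_for(day: date) -> str:
    """Pick which tape set a daily backup goes to under a simple
    grandfather-father-son rotation (illustrative cadence only)."""
    if day.day == 1:
        return f"monthly-{day:%Y-%m}"   # "grandfather": kept offsite long-term
    if day.weekday() == 4:              # Friday
        return f"weekly-{day:%Y-W%W}"   # "father": rotated offsite weekly
    return f"daily-{day:%a}".lower()    # "son": tape reused every week

print(tape_for(date(2023, 2, 1)))  # monthly-2023-02
print(tape_for(date(2023, 2, 3)))  # weekly-2023-W05 (a Friday)
print(tape_for(date(2023, 2, 7)))  # daily-tue
```

The point of the scheme is exactly what the comment describes: the monthly and weekly sets sit offline and offsite, so an attacker on the network can't touch them.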
The big change is that many places now send data to an offsite location (or cloud!) through a network instead of physically moving tapes, and the attacker can often use the same network connection to destroy backups.
The article does not say anything about Oakland negotiating. They may just be in the "it takes some time" phase at the moment. Tapes are not exactly the fastest medium.
Plus, you may want to determine the exact time at which you were compromised, or else you'll be restoring potentially tainted backups. Depending on how well you're organized that alone will take quite some time, especially considering that your logs may be encrypted as well. Sometimes you don't even know how to contact everyone, because your comms are down, too.
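Once you do have an estimated compromise time, selecting the restore point is the easy part; establishing that timestamp is what takes the time. A minimal Python sketch, assuming you already hold a list of backup timestamps:

```python
from datetime import datetime

def last_clean_backup(backups, compromised_at):
    """Return the newest backup taken strictly before the (estimated)
    compromise time, or None if every retained backup may be tainted.
    Illustrative only: real incident response also verifies the
    candidate backup before trusting it."""
    clean = [b for b in backups if b < compromised_at]
    return max(clean) if clean else None

backups = [datetime(2023, 2, d) for d in range(1, 11)]  # ten daily backups
print(last_clean_backup(backups, datetime(2023, 2, 8, 12)))  # 2023-02-08 00:00:00
print(last_clean_backup(backups, datetime(2023, 1, 1)))      # None
```

The `None` case is the nightmare scenario the surrounding comments describe: an intrusion older than your oldest retained backup.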
Sure, if you do everything right and adhere to all the best practices, it won't be that big of an issue. Just don't forget about the amount of legacy crap and budget constraints many orgs have to deal with. That comes with many pitfalls and a lot of opportunities to make a mistake.
You also have organizations that have certain retention periods - say for example, keep all data for 6 months.
If your ransomware stays resident in your systems for 6 months, any backup you recover from ends up being infected and can potentially be considered useless to restore from unless you're very careful in how and what you restore from.
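The arithmetic behind that trap is worth making explicit: if the malware's dwell time meets or exceeds the retention window, zero clean backups remain. A toy Python illustration (the parameter names are mine):

```python
def clean_backups_remaining(retention_days, dwell_days, backups_per_day=1):
    """Count retained backups that predate the intrusion. Once dwell
    time reaches the retention window, every backup still held was
    taken on an already-compromised system."""
    return max(0, (retention_days - dwell_days) * backups_per_day)

print(clean_backups_remaining(retention_days=180, dwell_days=30))   # 150
print(clean_backups_remaining(retention_days=180, dwell_days=180))  # 0
```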
A lot of organizations also don't have the money or processes in place to manage backups. It's a huge cost outlay, and in cash-strapped SLGs (state and local governments) it simply ain't happening - especially when any half-decent talent can make far more money working remotely for companies that respect engineering.
There is no longer any discipline around file size and data. People regularly attach huge files and keep terabytes of useless data in databases. This makes daily backups prohibitively expensive for many orgs.
This sort of stuff doesn’t surprise me any more. I’ve been on a number of “desktop support” sessions over the last few years and seen some shit. The common denominator seems to be entirely unpatched obsolete stuff (stock RTM windows 7 with stock IE in 2021 was my favourite) where either someone turned the updates off because they knew better or stopped paying their MSP for service immediately after they had been set up and assumed it’d just work forever.
People like that and the associated competence level are rolling out the red carpet.
If it's really important: air-gap it. Or wrap it in a VM with restore points.
I completely understand that somebody does not want to upgrade into the warp-abyss-abomination of modern Windows, especially if expensive software was written once and needs backwards compatibility or handles sensitive data. You cannot use modern Windows if you work with anything sensitive.
In today's world, the legacy is the good stuff. It just needs protection.
I love people who believe there exists a version of Windows that could be deemed secure.
I was there once.
Install the latest update to fix the security problems. Don't worry, our software becomes 300 MB larger due to the 500 other security problems we're rolling out today, but we managed to close off this one tiny hole over here.
Why does it matter anyway? With both Intel and AMD shipping management coprocessors that run independently of your machine (ME and PSP), there's really no way to keep anything secure unless you use a machine that's over 20 years old.
So I work in this space, and I am honestly quite surprised by the users here who think a Linux deployment would do any better. It wouldn't.
This isn't a Windows vs Linux vs Solaris vs BSD issue, this is a "did I manage and configure ACLs, RBAC, GPO, and other security features correctly" issue.
For example, I've had customers with RHEL 6.x environments that still got hit because they wrote a security group that allows all traffic on all ports from 0.0.0.0/0 (i.e., everywhere).
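That kind of rule is easy to catch mechanically. A toy Python audit over a made-up rule format (not any vendor's actual API) that flags a source of 0.0.0.0/0 combined with an unrestricted port range:

```python
import ipaddress

def overly_permissive(rules):
    """Flag rules that expose everything: source 0.0.0.0/0 plus an
    unrestricted port range. A sketch over a hypothetical rule schema,
    not any real cloud provider's security-group API."""
    flagged = []
    for r in rules:
        world = ipaddress.ip_network(r["source"]) == ipaddress.ip_network("0.0.0.0/0")
        all_ports = r["ports"] == (0, 65535)
        if world and all_ports:
            flagged.append(r["name"])
    return flagged

rules = [
    {"name": "ssh-from-vpn", "source": "10.8.0.0/16", "ports": (22, 22)},
    {"name": "allow-all",    "source": "0.0.0.0/0",   "ports": (0, 65535)},
]
print(overly_permissive(rules))  # ['allow-all']
```

Real cloud providers ship equivalent checks (config scanners, policy-as-code tools); the failure mode is that nobody runs them.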
Security issues always come down to misconfigurations and a lack of best practices, in my experience. In that regard, the MS suite is actually superior to Linux, because if you need a Security Solution Partner, Microsoft Professional Services is infinitely more competent than the largest Linux solution partner right now (IBM).
I'm with you right up to the "infinitely more competent" line.
The big thing that Microsoft and Windows have against them is the crapshow of everything they include in a standard installation. That said, from what I'm seeing, this is not really unique to Windows anymore. Seems everyone wants everything on the machine.
So, yes, it is theoretically possible to set up all access rules correctly. But it is essentially a lines-of-code problem at this point: given a mountain of things to set up, you will make a mistake somewhere.
The emergency declaration will assist with equipment and materials and the activation of emergency workers as the city seeks to safely restore its systems.
It's important to remember that 'state of emergency' is less of a 'everybody stop and listen to this' than a legal circuit breaker that allows the signing of checks and assignment of tasks without being bound by the normal web of procedure and contractual obligation. We tend to imagine (in popular culture) the executive aspects of government as being somewhat by fiat, but much of the time it's more like incremental product development, with most of the job being workarounds, excuse-making, bullshitting, and tedious social obligations.
I don’t get why any single user has the ability to cause so much damage. Sure, they can lock their own files and need a restore from backup, but how can that knock out other departments, let alone things like email?
When ransomware attacks began, it was more typical to see the blast radius centered around a single user who did something stupid, like run an exe or enable macros.
But that’s not how it’s done on these large enterprise networks. Ransomware gangs will still use single user entry points, but the hackers will work quietly inside the network to escalate privileges and determine key servers that should be targeted first.
I'm a penetration tester. When the client gives me a Windows laptop with low-privilege credentials, I'm typically domain admin by lunchtime. Sometimes even before I finish my first cup of coffee. As a domain admin I could encrypt almost any computer, often including the backups.
Privilege escalation in Windows Active directory domains is really easy. Securing a large corporate network is really hard. Especially on a tight budget.
It's not any user; it's a ransomware attack, so the damage was done intentionally to limit their ability to work. Also, don't assume they had backups, or that those backups weren't also targeted.
At this point it's too late, and before that they didn't really need advice or some fancy technology, they needed to dedicate enough resources/people/effort to simply do proper maintenance of their IT infrastructure. It's also plausible they simply couldn't afford the required resources, but that's not something fixable by CISA or other federal agencies.
Hardly anyone is interested in defensive security, because if you do it well your job looks unnecessary. This holds at the national-security level and at the level of the individual organisation.
When an extremely high profile attack like this happens, CISA ends up taking over the organization and revamping the entire organization's IT team. This happened to Atlanta back in 2018-19. It doesn't mitigate the current incident, but helps prevent the next one.
I don’t think there’s much to be done retroactively. I’m sure there’s an option for proactive help (trainings, advice) but it is a big country, some attacks will slip through.
What crypto are ransomware gangs asking for these days? After all, the Bitcoin mixers seem to have been taken offline (have they?). Sorry, I'm kind of out of the loop and was wondering how these thugs were cashing in on their attacks.
imho we have to look at what limited set of tools and functionality we really use. The days when we didn't know what computers would be used for are long gone, and the justification for doing everything in software went with them. You want to exchange strings of text with video and images: not much more than what Morse code offered. Direction of data flow can easily be enforced in hardware. The backup drive takes input that you can't read back; you break off part of the print and it becomes permanently read-only. It can easily be made an insane amount of work to regain write access.
A completely finished OS can be stored on a read-only device.
We just have to start from scratch :) that is all it takes :)
> A completely finished OS can be stored on a read-only device.
ChromeOS has entered the chat
Seriously, if it's good enough for school children, it surely is good enough for government. I love my Chromebook, and while I cannot yet do my day job on it, I did interview at a crypto company whose employees did do their day jobs on it, so I believe it's possible.
In the modern threat environment it's no longer viable for small and medium enterprises to maintain their own IT infrastructure. This includes city governments. They should outsource infrastructure to one of the major cloud vendors with the scale and technical competence necessary to counter advanced persistent threats. It's a shame that we all have to pay this "tax" and give more control to a few big tech companies, but that is our reality.
This doesn't help. You still need people to configure the group policies and firewalls. You will also need a local installation on various PCs running on-prem to connect a lot of hardware.
You might get away with Azure AD instead of a local domain controller and exchange but you won't get much farther than that. And if there isn't a backup strategy in place already, this won't change with cloud.
How many of these systems would be safe if they were running Linux? Just saying, because Linux is a smaller target, and it will be a long time until we reach the "year of the Linux desktop".
The same number as if they were using macOS or Windows 11. This isn't an OS issue; it's an "I didn't manage and configure my ACLs and RBAC correctly to minimize lateral movement in my environment" problem. Linux isn't any more secure than Windows in that regard, as can be seen with ransomware such as Elbie. I can also say with extremely high confidence that a number of ransomware victim orgs are running Linux deployments for their servers (usually CentOS 6.x-7.x or RHEL 6-7).
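As one small, concrete example of the kind of misconfiguration audit this implies on the Linux side: scanning for world-writable files, which ease lateral movement once an attacker has any local account. A minimal Python sketch, not a substitute for a real hardening audit:

```python
import os
import stat

def world_writable(root):
    """List regular files under `root` that any local user can modify
    (the 'other' write bit is set). One narrow check among the many a
    real hardening audit would perform."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            mode = os.stat(path).st_mode
            if mode & stat.S_IWOTH:  # world-writable
                hits.append(path)
    return hits
```

In practice you would point this at something like /etc or an application's install tree and fix whatever it reports.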
Are they ever going to hold the leadership accountable for sleeping on the job?