
Windows 10 will use protected folders to thwart crypto ransomware

185 points | Errorcod3 | 8 years ago | helpnetsecurity.com

169 comments

[+] ChuckMcM|8 years ago|reply
One of the "features" (back in the day) of running a diskless system was that you could set the change policy on the server hosting the files, completely out of reach of the "client" machine that was running the program. For nearly all of the system files there was no reason for them to change. NetApp turned this into a huge win when they could use snapshots to support multiple VM images with just the small configuration changes.

Given the well-known benefit there, and that the processor on your hard drive is about as powerful as your phone's, why not have the drive set up files that are 'read only' unless allowed to change out of band? Here is how it would work.

Your disk works like a regular SATA drive, except that there is a new SATA write option which can write a block as 'frozen'. Once written that way, the block can be read but not written. You add an out-of-band logic signal and wire it up to a switch/button that you can put on the front and/or back panel. When the button is pressed the disk lets you 'unfreeze' or write frozen blocks; when it isn't pressed they can't be changed.

Now your hard drive, in conjunction with a locally operated physical switch, protects sensitive files from being damaged or modified.
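
The frozen-block scheme above can be sketched in software. This is a toy model (the SATA option and the panel switch are hypothetical, not a real interface) showing the intended semantics:

```python
class FrozenBlockDisk:
    """Toy block device: a block written as 'frozen' can be read but not
    rewritten unless the physical write-enable switch is held down."""

    def __init__(self, num_blocks):
        self.blocks = [b""] * num_blocks
        self.frozen = [False] * num_blocks
        self.switch_pressed = False  # models the out-of-band panel switch

    def write(self, lba, data, freeze=False):
        if self.frozen[lba] and not self.switch_pressed:
            raise PermissionError(f"block {lba} is frozen")
        self.blocks[lba] = bytes(data)
        if freeze:
            self.frozen[lba] = True

    def read(self, lba):
        return self.blocks[lba]
```

With the switch released, a frozen block behaves as read-only; holding the switch restores normal write semantics out of reach of any software on the host.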

[+] klodolph|8 years ago|reply
So basically, there's a switch on my computer which I have to flip every so often or things stop working? Or maybe I can just leave it in R/W mode because I'm tired of flipping a switch every time I ctrl+S...
[+] WalterBright|8 years ago|reply
I would like to have a physical write-protect switch on drives I connect via USB ports. It would be great for backup drives, so you wouldn't inadvertently goof them up when restoring from them. (Like getting the arguments reversed in an rsync.)

I used such switches a lot with floppy disks back in the 1980s.

[+] LoSboccacc|8 years ago|reply
Eh, that's a load of work for something that'll require constant manual intervention. Besides, block-level protection is the wrong level of abstraction and will get in the way of getting anything done, unless you rewrite the whole operating system to be aware of it (just run lsof / openfiles).

A software-defined version would be: make an opt-in sandbox for processes that ties a folder and its contents to a single executable, with the executable pinned by the operating system, and let the whole thing be mediated by the kernel.

Of course that's only as tight as the kernel's security, but if you're worried about that, offsite incremental backups are a cheaper answer.
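
A minimal sketch of that opt-in sandbox, assuming the kernel identifies executables by hash (the names and hash strings here are illustrative):

```python
class FolderPinPolicy:
    """Kernel-mediated policy sketch: a folder opted into the sandbox is
    writable only by the single executable pinned to it."""

    def __init__(self):
        self.pins = {}  # folder -> pinned executable hash

    def pin(self, folder, exe_hash):
        self.pins[folder] = exe_hash

    def may_write(self, folder, exe_hash):
        pinned = self.pins.get(folder)
        # Folders that never opted in stay writable by anyone.
        return pinned is None or pinned == exe_hash
```

Every write would pass through `may_write` in the kernel, so replacing the pinned binary on disk changes its hash and voids its access.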

[+] eli|8 years ago|reply
Seems like it'd be a lot simpler and more reliable to just rely on the server to create frequent snapshot backups of user data in a place where malware couldn't touch it.
[+] readams|8 years ago|reply
The drive could just not overwrite blocks and expose an interface to access old copies. Flash drives already do this except for the interface to access old blocks.
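
The no-overwrite idea can be sketched as a versioned block store (a toy model, not how any real flash translation layer exposes this):

```python
class VersionedDisk:
    """Never overwrites in place: every write of a block appends a new
    version, and older copies stay reachable for recovery."""

    def __init__(self):
        self.history = {}  # lba -> list of versions, oldest first

    def write(self, lba, data):
        self.history.setdefault(lba, []).append(bytes(data))

    def read(self, lba):
        return self.history[lba][-1]

    def read_version(self, lba, n):
        return self.history[lba][n]
```

Ransomware that "overwrites" a block merely adds a version; recovery is a read of the older copy.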
[+] kdbg|8 years ago|reply
Similar to this, there is a program I use called Faronics Deep Freeze. It allows you to freeze a drive (the OS drive is what I use it for).

The difference is that it allows writes, but any modifications are removed on reboot. I use it to lock down public-access machines: users get a network drive they can write to, but without being able to modify the OS they can't do much damage.

Not the solution you're presenting, but it works pretty well.

[+] nickpsecurity|8 years ago|reply
I agree with others that append-only is the best way to accomplish this. Maybe with an additional feature that specific files won't be overwritten when it starts running out of space. As far as doing stuff on the HD goes, there were Australian products in the high-assurance sector that had user profiles with access controls on partitions. Most products like this disappeared since even the military wasn't buying enough of them. Here's one you could build that into that retains lots of good capabilities:

http://securesystems.com.au/index.php/high-assurance-silicon...

The other reason these products didn't take off is that it's really the operating or file system's job to do this. That's where it's easiest to enforce access control, whether using labels or just crypto. There were and are systems that can do that with small attack surfaces (i.e. TCBs). So, the integrators offer a combination of stronger OSes (e.g. trusted OSes, separation kernels) for data in use and encrypted drives for data at rest. Two examples: one of the first security kernels (GEMSOS), enforcing MLS at the FS level; and a modern, crypto-oriented filesystem with a small TCB usable in a variety of settings.

http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=048...

https://www.usenix.org/legacy/event/atc11/tech/slides/weinho...

The first one was deployed in the field for a variety of applications including controlled access of files. Similar kernels were used in databases. The other one could be modified to do access control (i.e. write-protect) on files that had been labeled as such by the operating system when it was in a clean state after trusted boot. It would be a configuration sent over IPC to an isolated app w/ privileged access to secure filesystem.

So, there's how I see it happening. The hard disk could also be used as an accelerator by offloading interrupt handling, some file access, and the crypto parts. The filesystem would then be mainly doing startup and handling issues reported by hardware. They'd have to be designed compatibly, though.

[+] mycall|8 years ago|reply
I feel Copy on Write can give you all the benefits you seek, easier. In case of fire, rollback changes. Daily snapshots when verified as OK.
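
The rollback workflow could be sketched like this, with plain dict copies standing in for real copy-on-write references:

```python
class SnapshotStore:
    """Copy-on-write sketch: take cheap snapshots, roll back after 'fire'."""

    def __init__(self):
        self.files = {}
        self.snapshots = []

    def snapshot(self):
        # A real CoW filesystem shares blocks; a dict copy models that cheaply.
        self.snapshots.append(dict(self.files))
        return len(self.snapshots) - 1

    def rollback(self, snap_id):
        self.files = dict(self.snapshots[snap_id])
```

The daily "verified as OK" snapshot becomes the rollback target after an infection.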
[+] PhilWright|8 years ago|reply
The problem is that someone will then work out a way of hacking the drive so that it ignores the physical switch and allows them to change the contents. Then you have lots of inconvenienced users having to use a switch that ends up not protecting them anyway.
[+] zeta0134|8 years ago|reply
Okay, so I know Windows probably doesn't actually work this way, but from a user-interface perspective... what's the rationale for giving an app permanent access to the user's home folder directories? Don't most well-behaved apps have a file-open / folder-open dialog, which should be able to grant access to files at runtime? If the file-open dialog is provided and controlled by the operating system (I realize many, many legacy apps work differently in Windows), then the OS can silently grant permissions at the time of open, rather than letting apps have either free rein or no access at all.

I feel like this is the expected behavior anyway; Power Users may run utilities that need to touch the whole system, but most regular users are doing pretty good to juggle more than a handful of open files in their mental model of the machine while they're using it. The idea of file permissions is already pretty foreign to the average end user. Applications already have a designated area (%APPDATA%) where they can store their temporary files and things, so perhaps the documents folders should be more locked down by default.
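
A toy broker illustrating that model: the app holds no standing rights, and the OS-controlled open dialog hands out a per-file grant at the moment the user picks a file (all names and paths here are made up):

```python
class FileAccessBroker:
    """Sketch of dialog-mediated access: grants exist only for files the
    user explicitly picked in the OS-owned open dialog."""

    def __init__(self):
        self.grants = set()  # (app, path) pairs

    def user_picks_in_dialog(self, app, path):
        # The OS draws the dialog, so the pick itself is the user's consent.
        self.grants.add((app, path))

    def open_file(self, app, path):
        if (app, path) not in self.grants:
            raise PermissionError(f"{app} has no grant for {path}")
        return f"handle:{path}"
```

Apps never see a permissions prompt; the grant rides along with the file pick.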

[+] cube00|8 years ago|reply
I've always wondered why Windows and other OSes don't offer a 'cold storage' area where you need to thaw out files before editing. Files not modified within a selected time freeze against further modification. I've got plenty of archived files that I'd never want to change, but it's a hassle to unmount/remount just to add a new file to an existing directory.
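
One user-space approximation of that 'cold storage' idea: periodically scan an archive tree and strip the write bit from files untouched for a set period. This is only a sketch (thawing would just be a chmod back to writable):

```python
import os
import stat
import time

def freeze_stale(root, max_age_days=365):
    """Mark files not modified in max_age_days as read-only ('frozen')."""
    cutoff = time.time() - max_age_days * 86400
    frozen = []
    for dirpath, _dirs, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) < cutoff:
                os.chmod(path, stat.S_IRUSR)  # read-only for the owner
                frozen.append(path)
    return frozen
```

New files can still be added to the directory, which is exactly the convenience the unmount/remount workflow lacks.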
[+] megamindbrian|8 years ago|reply
How about just enabling Shadow Copies by default! I don't understand why Windows has great Time Machine-like features, but every fucking time I right-click, go to Properties, and look at the "Previous versions" tab, it is completely empty.
[+] mtgx|8 years ago|reply
I'd also really like Microsoft to develop the Application Guard (app in a VM) feature faster and make it widely available to almost any app, or at least any browser, and of course to everyone, not just enterprise users.

Microsoft has some interesting new security features on its roadmap. Unfortunately, 90% of them are for enterprise users only, and some only for its own applications.

It also wouldn't hurt to overhaul/replace UAC with something better, but I imagine that would require deeper architectural changes (which I think would be worth the pain).

Microsoft should also push users towards creating a Standard account when installing Windows, and setting up an Admin password, too. It shouldn't be too difficult/disruptive. They just need to create an easy process for it at installation.

The vast majority of Windows malware infections happen because users are also Admins. This alone would give Windows a huge security boost on average.

https://www.avecto.com/news-and-events/news/94-of-critical-m...

Once they do this, they could also start encrypting Windows devices by default with the Admin key, similar to how Android does default encryption.

Windows is pretty much the last major operating system not to encrypt by default. Hopefully, if they do this, they at least give users the option to keep the key locally, and not automatically upload it to Microsoft's servers, as they do now if you login to your Microsoft account.

[+] rbanffy|8 years ago|reply
> don't offer a 'cold storage'

If the malware gets privileged access, it's game over. If it can't, good file system permissioning fixes the problem.

[+] VectorLock|8 years ago|reply
How is 'thawing' your files more/less of a hassle than mounting a drive?
[+] balls187|8 years ago|reply
Interesting idea.

Can you explain the thawing procedure, and how a normal everyday user would experience it?

[+] Meph504|8 years ago|reply
My first concern is that this seems like it is going to break a massive number of applications. It also seems that they are pushing a layer of access management that doesn't have proper support on any platform but UWP.

I see this as Microsoft taking yet another step to force people to move to their new app store model by choking off access to the operating system from any other platform, which I find really amusing because their own top-tier applications aren't built on these platforms (Office, Visual Studio, etc.).

[+] pjmlp|8 years ago|reply
Better update yourself.

The next version of Office and Note for Windows 10 are going to be store only.

At Build they also had people from Adobe, Cakewalk and Kodi showing their desktop apps ported to the UWP via the Desktop Bridge.

Like they did with WPF and Visual Studio, they are pushing everyone onto the train by dragging their own devs into it.

[+] ctrlaltdestroy|8 years ago|reply
I imagine Office, VS, etc. are too big to "port" to the app store model. Also, people still use these applications on Windows 7, so that would mean having two parallel versions of the same app and releasing features and support for both.
[+] hippich|8 years ago|reply
So the last ransomware we saw in the news actually tried to reboot the system and encrypt files before the OS loaded. So unless this new tech is going to protect the MBR (which should be protected anyway), I'm not sure how it's going to stop encryption.
[+] jakobdabo|8 years ago|reply
Completely unrelated, but am I the only one with the impression that MS has switched Windows to a rolling-release OS (like Gentoo or Arch) with infinite updates of Windows 10? This would be a genius move to solve the issue of users remaining on an old, unmaintained release, like it was with XP and like it is now with 7.
[+] ComodoHacker|8 years ago|reply
I always thought protecting users from malicious code they willingly download and run themselves is futile and a waste of developers' resources.

Do I miss something and this is actually a viable security approach?

[+] pfg|8 years ago|reply
It's not going to do much for targeted attacks, but there are definitely ways to limit the damage from large-scale ransomware attacks. As it is right now, ransomware doesn't even need to bother with privilege escalation, because files valuable to users are most likely owned by them. Not to say that all ransomware sticks to just user privileges, but it's usually enough to get the job done.

Having a sort of firewall for file systems that's enforced by the system means that in addition to getting code to run with user privileges, the malware authors need to trick the victims into giving the software root (which might be impossible on enterprise networks), or use a privilege escalation vulnerability to do that.

Of course, people could still click through prompts, allow access to all apps due to warning fatigue, etc., but it's an improvement - if done correctly.

[+] akerl_|8 years ago|reply
It's one of the few non-futile uses of developer resources, when it comes to security.

It's a virtual certainty that users will download malicious code, so as a security person you're left trying to mitigate the impact when they do.

[+] callumjones|8 years ago|reply
Given that malicious code is hidden in applications that appear safe or appealing to users, I don't think they are usually willingly downloading malicious code.
[+] floatboth|8 years ago|reply
> If an app attempts to make a change to these files, and the app is blacklisted by the feature, you’ll get a notification about the attempt

So it's allow by default? That sounds useless.

We need a deny-by-default thing. Like Little Snitch, but for disk. Every time an app accesses a directory it hasn't accessed before, ask. (Skip asking when files are opened using the system "Open file" dialog, for a bit less annoyance.)
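
A sketch of that deny-by-default model, with the open-dialog exemption included (the prompt is a callback standing in for a real UI):

```python
class DiskFirewall:
    """Little-Snitch-for-disk sketch: first access by an app to a directory
    triggers a prompt; the decision is remembered per (app, directory)."""

    def __init__(self, prompt_user):
        self.prompt_user = prompt_user  # callback: (app, directory) -> bool
        self.decisions = {}

    def allow(self, app, directory, via_open_dialog=False):
        if via_open_dialog:
            return True  # the system dialog already expressed user intent
        key = (app, directory)
        if key not in self.decisions:
            self.decisions[key] = self.prompt_user(app, directory)
        return self.decisions[key]
```

Caching the answer per (app, directory) pair keeps the prompt count down, which matters for the warning-fatigue problem.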

[+] vxNsr|8 years ago|reply
I think that the most recent attack in Ukraine already overcame this obstacle. They were able to use an in-place update system by a trusted software vendor to install their malicious code on the victim's computer. That software would almost certainly have had permissions even under this list, so it's not that effective.
[+] sitkack|8 years ago|reply
How about using ML to detect profiles of access and disallowing uncommon access patterns? If I only use VS Code to access my source, prevent win-malwr.sys from accessing that folder.
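
Real access-pattern learning would need far more signal, but even a crude frequency baseline shows the shape of the idea (a sketch; the program names and paths are invented):

```python
from collections import Counter

class AccessProfile:
    """Learn which programs normally touch which folders; flag anything
    not seen during the observation window as anomalous."""

    def __init__(self, min_count=1):
        self.counts = Counter()
        self.min_count = min_count

    def observe(self, program, folder):
        self.counts[(program, folder)] += 1

    def is_anomalous(self, program, folder):
        return self.counts[(program, folder)] < self.min_count
```

The obvious cost is false positives whenever a legitimate new tool touches the folder for the first time.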
[+] yjftsjthsd-h|8 years ago|reply
And then one day you want to zip up the project to send to a friend, run an external linter on it, or make backups. ML depends on an adequate training set, and real life uses change quickly enough to break it.
[+] d8421l01vv4r|8 years ago|reply
Having an OS that arbitrarily denies applications access to files would drive me mad very quickly. I'm guessing that seemingly unpredictable behaviour would annoy the average user as well.
[+] bpodgursky|8 years ago|reply
I'm surprised Google hasn't run a Chromebook advertising campaign which just says "use a Chromebook and never care about ransomware again"
[+] d--b|8 years ago|reply
This sounds like a feature that will be painful to work with for regular apps, but that malware will easily work around.

I mean, I am no security expert at all, but you kind of need administrative privileges to install malware, so why not just keep them to access all the folders you need?

[+] ocdtrekkie|8 years ago|reply
This seems like a good idea, and I'm pretty excited to see this step. Though I suspect if certain apps are whitelisted to edit in those folders, ransomware will simply turn to finding exploits in those apps. And most of your document and photo editing apps out there may not have been designed with security in mind, as they never expected to be gatekeepers of file access.

This will also probably be a UAC-level nightmare for getting old software to work on newer PCs, as today's software generally just assumes it can have file access to document folders.

[+] Santosh83|8 years ago|reply
Many of the ideas seem good for a corporate/enterprise setup where you lock the system down to run a few business/tech apps, but not so pain-free for desktop users. I mean, nearly every app on my system needs access to the usual folders. Unless MS bundles a good whitelist of approved apps, granting permissions is going to get really annoying.
[+] bsder|8 years ago|reply
How about we just have "copy-on-write" filesystems by default?

Something which then tries to "encrypt" your hard drive merely winds up creating another layer on top which you wipe out to get back the original files. You only have to flip a "hardware switch" when your disk fills up or you get a catastrophe.

I cry every time I see something that IBM or DEC got right 40 years ago that we STILL haven't adopted.

[+] lucb1e|8 years ago|reply
What are "end-to-end security features"? They mention it once but then never again.

As far as I know, the term end to end is about communications: an exchange between two or more parties, or endpoints, which can be encrypted "end to end". I'm afraid they just dropped it as another term nobody knows the meaning of, so we'll have to find a new term to describe why Signal and Wire are better than (non-PGP) email.

[+] Meph504|8 years ago|reply
"End to end" has been a term in common use since the 1800s, meaning complete coverage. Look it up in the Oxford English Dictionary for more details.
[+] Kenji|8 years ago|reply
I'm skeptical. The cost of managing these permissions might outweigh the benefit. But hey, why not try it. As long as I can disable it when it ends up getting in my way...
[+] MichaelBurge|8 years ago|reply
Linux has had the same issue for the longest time: You need root or a capability to set the time, but any program you run can wipe your entire home directory.
[+] dboreham|8 years ago|reply
Perhaps the place to implement countermeasures is in the disk drive (SSD these days)?

e.g. arrange for the drive to never delete anything unless some key exchange has recently been done that depends on user input (biometric parameters, or a password).

From a user perspective you'd see this as:

All deletes (and file version changes) go to a recycle bin. Emptying the bin can only be done upon presentation of the secret.
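
That could be sketched as a recycle bin whose empty operation demands proof of a secret (the hashing scheme here is illustrative, not a real drive protocol):

```python
import hashlib
import hmac

class GuardedBin:
    """Soft-delete bin: deletes always succeed, but purging requires the
    user's secret, compared in constant time."""

    def __init__(self, secret: bytes):
        self._digest = hashlib.sha256(secret).digest()
        self.bin = []

    def delete(self, name):
        self.bin.append(name)  # nothing is ever destroyed outright

    def empty(self, secret: bytes):
        if not hmac.compare_digest(hashlib.sha256(secret).digest(), self._digest):
            raise PermissionError("wrong secret; bin left intact")
        self.bin.clear()
```

Ransomware can fill the bin but cannot purge the originals without the user-held secret.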

[+] ksk|8 years ago|reply
I wonder if MS has given any thought to 'sealing' executable regions so no new instructions can leak into memory. IOW, once executed, a process can only reference instructions present in the binary itself. Basically, make running JIT-ed code, self-modifying code, etc., a special process privilege, which can then have a limited process context for I/O.
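
The sealing policy described above might look like this as a toy model (real enforcement would live in the kernel's memory manager, not user code):

```python
class SealedProcess:
    """After seal(), no new executable pages may be mapped, so JIT-ed or
    self-modifying code needs an explicit special privilege."""

    def __init__(self, can_jit=False):
        self.can_jit = can_jit  # the special privilege proposed above
        self.exec_pages = set()
        self.sealed = False

    def map_executable(self, page):
        if self.sealed and not self.can_jit:
            raise PermissionError("text is sealed; no new executable pages")
        self.exec_pages.add(page)

    def seal(self):
        self.sealed = True
```

A loader would map the binary's text, call `seal()`, and from then on injected shellcode pages would be refused unless the process carries the JIT privilege.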
[+] muricula|8 years ago|reply
A lot of code in the Windows ecosystem uses UPX-style packers: the code unpacks itself before executing the actual application. This is common for certain installers.

Windows does a pretty good job of enforcing Data Execution Prevention for code which opts in.

[+] topkeker|8 years ago|reply
This seems like another strange workaround. We need to change the way the operating system behaves going forward. The problem is allowing untrusted code to execute by default. Everyone recognises this as the problem; no one wants to step forward and implement the change.

We do it for mobile, mostly; the desktop needs the same shift.

[+] muricula|8 years ago|reply
That basically means forcing everyone to sign their code and offer it through the App Store. You'll see developers complaining about that upthread.

Windows 10 does make code signing mandatory for new drivers, and the drivers must pass a suite of acceptance tests.