Microsoft's %Appdata% directory is a security nightmare in my opinion. Ideally, applications should only have access to their own directories in %Appdata% by default. I recently came across a Python script on GitHub that decrypts the passwords browsers store locally in their %Appdata% directories. Many attacks could be prevented if access to %Appdata% were more restricted.
I also found a post by an admin a few days ago asking whether there was a Windows setting to disallow any access to %Appdata%. The response was that if access to %Appdata% is completely blocked, Windows won't work anymore.
"AppData" is where user specific application data is supposed to be stored.
"The Registry" is where application configuration is supposed to be stored.
"ProgramData" is where application specific data is supposed to be stored.
"Program Files" is where read-only application binaries and code is supposed to be stored.
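As a rough illustration of that layout, here is a hedged sketch covering the file-system locations (the helper name and fallback paths are my own inventions, not anything Windows-official, and the registry piece is omitted since it isn't a file path):

```python
import ntpath  # Windows path semantics, usable on any OS


# Illustrative helper: compose the conventional per-app storage
# locations from the usual Windows environment variables.
def app_storage_paths(app_name, env):
    return {
        # user-specific application data
        "appdata": ntpath.join(env.get("APPDATA", r"C:\Users\Default\AppData\Roaming"), app_name),
        # machine-wide application data
        "programdata": ntpath.join(env.get("PROGRAMDATA", r"C:\ProgramData"), app_name),
        # read-only application binaries and code
        "programfiles": ntpath.join(env.get("PROGRAMFILES", r"C:\Program Files"), app_name),
    }


env = {"APPDATA": r"C:\Users\alice\AppData\Roaming",
       "PROGRAMDATA": r"C:\ProgramData",
       "PROGRAMFILES": r"C:\Program Files"}
paths = app_storage_paths("MyApp", env)
# paths["appdata"] → C:\Users\alice\AppData\Roaming\MyApp
```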
It really is a simple concept from a Windows perspective. What ruins everything is overzealous and/or ignorant programmers who don't take any pride in their work, or who lack all respect for the user's environment. For example, an .ini file should not be a thing on Windows; that is what the registry is for. But the programmer writes the code for Linux, half-heartedly ports it to Windows, and leaves the .ini file because his code is more important to him than the end-user's operating system.
There is nothing wrong with AppData permissions. The problem is with the user's understanding of what it is for, and the developer's understanding of how it should be used.
Microsoft is trying to do that with MSIX and a new filesystem driver that transparently restricts file system access per app. It should land in Windows 11 this year. See https://youtu.be/8T6ClX-y2AE for an explanation of the functionality.
There's probably nothing I hate more in programming than having full access to the file system. Any time I write a program that has to delete a file, I just move it into a trash folder instead, in case I mess up somewhere and accidentally delete the entire file system.
Is there even a way to opt in to having a secret be accessible only for your process?
Like, a way to maybe sign your executable and then use a Windows API that goes "oh, this process is made by the same vendor that created this secret, so it'll be allowed access".
It’s just ridiculous that the most trivial, unprivileged process can just steal any file and any secret accessible by the user it’s run as. Unless that secret is protected with a key derived from a separate password the user has to put in.
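The "key derived from a separate password" protection mentioned above can be sketched with a standard KDF (the parameters here are illustrative; real applications should tune the iteration count and use a vetted crypto library for the encryption step):

```python
import hashlib
import os


def derive_key(password, salt, iterations=200_000):
    """Derive a 32-byte key from a user-supplied password via PBKDF2-HMAC-SHA256.

    A secret encrypted under this key is useless to a process that
    merely runs as the same user: it also needs the password.
    """
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)


salt = os.urandom(16)  # stored alongside the ciphertext; need not be secret
key = derive_key("correct horse battery staple", salt)
```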
> The response was that if access to %Appdata% is completely blocked Windows won't work anymore.
Yikes. I really wish that instead of Microsoft wasting resources on telemetry nonsense, they would focus on optimizing their OS and modernizing some of these blatant security issues.
I guess it won't happen until we have another wave of ransomware or something of the sort.
I'm glad they made some improvements to security as a result of this finding. This "attack" is still very specialized though and requires local access which (as mentioned) could've exposed the user to keyloggers and other malware.
Yes, it requires an attacker in a powerful position with local access. However, it does not require special privileges or techniques that may trigger endpoint security (such as keyloggers or memory dumping). The only requirements are reading a JSON file and making a single Windows API call to retrieve the key.
OK, but it assumes the domain is compromised, as stated in the article, and if the domain controller is compromised, it's game over for connected machines; hence these attacks usually focus on domain admins or schema admins.
Edit: it seems the second, non-biometric method doesn't need the domain; it still needs that local access, though.
> S-1-5-21-505269936…
Kind of off topic, but around 20 years ago, when I had my first portable hard disk, I used this method: I created these kinds of folders and remembered the number sequences in a creative way to hide my files when traveling/crossing borders, while putting some decoy files in plain sight. This was before I knew about or used data encryption, and it worked: I remember the agent taking my HDD, going through the decoy files, and then returning it normally.
> "We recently conducted a penetration test with the goal of compromising the internal network of a client in a Windows environment. As usual, we managed to get administrative access to the domain controller"
This article feels like click-bait; they buried the lede.
I've always considered password vaults as a single point of failure that will compromise all of your passwords. I've had lots of intelligent, well-informed programmers argue that my concern is groundless.
Everything is a tradeoff - but the basic balance is very strongly in favor of password managers:
1. without a password manager that is shared on all your devices, you WILL re-use passwords out of frustration.
2. without a password manager, if you do any sort of regular password sharing with an engineering team, friends & family, you'll resort to pretty insecure channels.
3. true E2E encryption, while still providing some surface area, has proven in the field through multiple pretty bad breaches[1], that it's a security model that holds up under real-world circumstances.
On the flip side, you are right: you are one compromised browser extension / binary away from having your local vault decrypted, and ALL your passwords compromised. But think about this: if someone has this much local access, chances are they can install a keylogger anyway, or read your clipboard, so the real difference is you've conveniently pre-loaded all your sensitive information in one go for the bad actor.
[1] For example: https://blog.lastpass.com/2022/12/notice-of-recent-security-...
You can use password vaults without creating a single point of failure by enabling 2FA for the accounts in the vault, without storing the keys there. Of course, it would still be bad if the vault was compromised, but it would be unlikely that anyone could access those accounts without accessing your 2FA.
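The kind of second factor described above boils down to a shared secret plus a clock. A minimal RFC 6238 TOTP sketch (real apps should use a vetted library, and the secret below is the RFC's published test secret, not a real one):

```python
import hashlib
import hmac
import struct
import time


def totp(secret, for_time=None, digits=6, step=30):
    """Minimal RFC 6238 TOTP (HMAC-SHA1, 30-second time steps)."""
    t = time.time() if for_time is None else for_time
    counter = int(t // step)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation per RFC 4226
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


# RFC 6238 Appendix B test vector: at Unix time 59, the 8-digit code is 94287082
code = totp(b"12345678901234567890", for_time=59, digits=8)
```

The point of keeping this secret out of the vault is exactly the independence argument above: compromising the vault alone yields passwords but not valid codes.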
That's because it is a SPOF. However, a password manager seems to me the best compromise along the security / convenience axes.
I memorise good passwords for a handful of my most critical stuff (and have MFA). They don't go in my password manager.
If my password manager gets compromised then I probably could lose some cash, maybe get embarrassed by being impersonated on social media - it could get very inconvenient but not catastrophic.
The way I look at it is: a password vault is a single point of failure with a very, VERY tiny attack surface; an attacker would need to target you directly, with a sniper rifle, to actually hit you (assuming you are not using things like LastPass; I personally use KeePass and synchronize the local vault across devices using Syncthing). Suffice to say, unless your last name is Snowden, it should not be a concern for you.
Compared to the common way of "managing" passwords (i.e. reusing one password everywhere), that is still a single point of failure. The difference is that the attack surface balloons in proportion to the number of websites you sign up to. And just like a balloon, all it needs is one poke, one website storing your password in plaintext, to blow it all up.
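That ballooning can be put in toy numbers (the per-site breach probability here is invented purely for illustration):

```python
# If each of n sites independently leaks your reused password with
# probability p over some period, the chance that at least one leaks it is:
def exposure(p, n):
    return 1 - (1 - p) ** n


one_site = exposure(0.01, 1)         # 0.01
hundred_sites = exposure(0.01, 100)  # ≈ 0.63 — close to coin-flip odds
```

With unique per-site passwords, each leak stays contained at its own p instead of compounding across every account.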
Before password managers people used the same password on every site. Vaults being a SPOF is true but not really relevant. They're still an improvement over what people did before.
I don't think anybody is arguing that password managers are the be-all and end-all of secure user authentication.
But what would you use instead for services that support only password authentication? And even for services with 2FA: If one of the factors is a password, where do you store it?
Sounds like the bigger issue in this case is that it’s not clear to developers in which cases they can rely on DPAPI to be entirely local, which I assume is what’s needed for password manager style applications.
This means that any process that runs as the low-privileged user can simply ask DPAPI for the credentials to unlock the vault: no questions asked, no PIN or fingerprint prompt required, and Windows Hello is not involved at all. The only caveat is that this does not work across other user accounts.
Yikes
Bitwarden has since made changes to their codebase to mitigate this particular scenario, which we will quickly summarize in the next section. They have also changed the default setting when using Windows Hello as a login feature to require entering the master password at least once when Bitwarden is started.
Phew
Props to the security researchers for finding this bug! It's great that we have the infosec community to help protect us. Feels like one of the few industries whose monetary incentive is to help the public.
The complexity of deployed identification/auth chains/secrets management/etc. is pretty terrifying; even if you can somehow understand it for one OS and hardware platform, if your service needs to support multiple OSes plus web plus multiple auth technologies plus a recovery path and everything else: dragons.
This is one of the few things cryptocurrency gets right in one specific way better than most other applications -- in most cases, everything is explicitly about operations with a key, and you build up protections on both sides of that. Unfortunately those protections themselves are often inadequate (hence billions of dollars in losses), but it's at least conceptually simpler and potentially could be fixed.
I'm not convinced crypto is inherently less secure; I'd argue it's more secure on average. Data breaches happen every day, whether in financial services or not. The difference is that a breach is catastrophic for crypto, but just bad for most businesses.
Tangential, what is the state of security on Linux desktop nowadays? Say out-of-the-box Debian 12 using Wayland. Is it still just that nobody is attacking Linux so it's safe?
> As usual, we managed to get administrative access to the domain controller
As usual? Is that the state of Windows Server security these days? I never managed a Windows-based network so I have no idea. I heard about these things back in the 2000's but I'm surprised this is "usual".
Yes. If you have LLMNR and NTLM enabled, unsigned SMB allowed, and unencrypted LDAP binds, then your domain controller can be popped with zero effort using Metasploit.
Legacy protocols can be very sticky, and on most repeat pentest engagements I am able to use the same exact method every time, because the findings never get addressed. Modern Windows (since around the Vista era) will use better defaults out of the box, but will also allow downgrade attacks in the name of compatibility.
Hell, I still find SMBv1 in a lot of places.
Interestingly, the latest versions of Bitwarden for macOS available for download from GitHub no longer work with biometric authentication, requiring the user to download the app from the App Store to use that functionality.
There are a few convenient scapegoats here, but ultimately, in this case, it is not biometric unlock that enabled this but rather a characteristic of Active Directory's design (I'm not sure I would call it a weakness).
For Android and iOS if you forget your PIN code I believe you are screwed, as in no one can decrypt your device for you.
Probably not, given that Android's security model sandboxes apps and accordingly can identify which one is trying to access a given keychain credential.
I worked in managing bug bounty programs at a previous job. If there is one thing I have learned it's that blog posts like this are heavily skewed towards making the problem seem much larger than it is. It's what gets the clicks, so it's not a surprise. It makes dealing with penetration testers and bug bounty participants really stressful and frankly, annoying.
Our policy was that we would be happy if someone were to discuss bounties we paid out, but we wanted the discussion to be fair and accurate. It never really felt like a mutually beneficial relationship. I don't miss that work at all, really, lol.
TL;DR: It's definitely interesting, but this is about attacking vaults with biometric unlock enabled (and are thus stored on disk) on Windows, and requires workstation access and a Bitwarden design flaw that was fixed in April.
> the attack already assumes access to the workstation of the victim and the Windows domain
> The underlying issue has been corrected in Bitwarden v2023.4.0 in April 2023
> As it turns out, we were not the first to discover this in March 2023, it had already been reported to Bitwarden through HackerOne.[1]
I could have sworn [1] had a dedicated post here on HN, but I couldn't find it; it's worth a read too.
[1]: https://hackerone.com/reports/1874155
>the attack already assumes access to the workstation of the victim
I seldom can take "vulnerabilities" that require physical access seriously, because if a hostile is physically next to my computer I have more pressing concerns than some passwords.
I've always thought the trust placed in password managers was deeply misplaced. Like any company, it's only a question of time and circumstance until one of them is massively breached, but right here on HN, a whole bunch of people who should know better recommending them as if they were flowers from heaven. Because of course hey, "it's just convenient".
And why would this be downvoted? Is there some specific holy aspect to password management services that makes them immune to being the victims of the same massive data leaks that frequently affect a whole broad range of tech companies?
Yes, otherwise known as "if you run code on your computer, it can run code on your computer".
If a random Python program can "decrypt" the passwords, that's not encryption. And browser password management isn't about security, but convenience.
> Ideally applications should only have access to their own directories
This happens for Windows Store apps, which are sandboxed similarly to mobile phone apps.
You could use a local vault and sync yourself, use a piece of paper in a safe, or use your brain to store them.
All of these come with tradeoffs and their own risks. Pick your poison.
So, I read this to be "as usual for us during our engagements", not "as usual for everyone all the time".
Why isn’t being signed enough for an application to store secrets only it can access in the keychain?
Every time a user logs in, Microsoft should be obliged by law to show: "Your computer will get cancer if you proceed to log in."
Really feel that should've made it into the title; otherwise it feels like click-bait.