[EDIT: See the response by a Cursor dev below — looks like it was not authorized by them]
Sounds to me like Cursor internally has a private NPM registry with those packages. Because of how NPM works, it's quite easy to trick it to fetch the packages from the public registry instead, which could be used by an attacker [0].
Presumably, this Snyk employee either found or suspected that some part of Cursor's build is misconfigured as above, and uploaded those packages as a POC. (Given the package description "for Cursor", I'd think they were hired for this purpose.)
If that's the case, then there's not much to see here. The security researcher couldn't have used a private NPM registry to perform the POC if the point is to demonstrate a misconfiguration which skips the private registry.
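To make the failure mode concrete, here is a toy sketch of why a "highest version wins" merge across registries is dangerous. This is not npm's actual resolution algorithm, and the package versions and registry names are invented for illustration:

```javascript
// Toy model of a proxy that merges a private and a public registry and
// naively prefers the highest version. NOT npm's real resolver; just an
// illustration of the dependency-confusion failure mode.
function parseVersion(v) {
  return v.split(".").map(Number);
}

function newerThan(a, b) {
  const [a1, a2, a3] = parseVersion(a);
  const [b1, b2, b3] = parseVersion(b);
  if (a1 !== b1) return a1 > b1;
  if (a2 !== b2) return a2 > b2;
  return a3 > b3;
}

// Pick whichever registry advertises the highest version of the package.
function resolve(candidates) {
  return candidates.reduce((best, c) =>
    newerThan(c.version, best.version) ? c : best
  );
}

const candidates = [
  { registry: "private", version: "1.4.2" },  // the real internal package
  { registry: "public",  version: "99.9.9" }, // attacker-published squat
];

console.log(resolve(candidates).registry); // "public" -- the attacker wins
```

An attacker only needs to publish a same-named public package with an absurdly high version number; any build whose proxy resolves this way pulls the malicious copy.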
cursor dev here. reasonable assumptions, but not quite the case. the snyk packages are just the names of our bundled extensions, which we never package nor upload to any registry. (we do it just like how VS Code does it: https://github.com/microsoft/vscode/tree/main/extensions)
we did not hire snyk, but we reached out to them after seeing this and they apologized. we did not get any confirmation of what exactly they were trying to do here (but i think your explanation that someone there suspected a dependency confusion vulnerability is plausible. though it's pretty irresponsible imo to do that on public npm and actually sending up the env variables)
> If that's the case, then there's not much to see here
They could have demonstrated the POC without sending data about the installing host, including all your environment variables, upstream. That seems like crossing the line
Wasn't this supposed to be fixed in NPM? I remember a talk by the researcher behind PortSwigger (sorry, blanking on his name) doing this a while back, with great success (Apple, MS, Meta, basically all FAANG were vulnerable at the time).
I need to get serious about doing all development inside a virtual machine. One project per VM. There are just too many insidious ways in which I can ignorantly slip up such that I compromise my security. My only solace is that I am a nobody without secrets or a fortune to steal.
IDEs, plugins, development utilities, language libraries, OS packages, etc. So much code that I take on blind faith.
Vagrant’s popularity seems to have died down with Docker containers but it’s by far my favorite way to make dev environments.
Several years ago I worked somewhere that prohibited web browsers and development tools on laptops. If you needed to use a browser, you’d have to use one over Citrix. If you needed to code, you’d use a VDI or run the tools in a VM.
At the time I thought their approach was clinically insane, but I’m slowly starting to appreciate it.
The real problem is video performance in VMs. It still just... kind of sucks. It's just about impossible to get GL acceleration working properly when running Cinnamon in a VM.
Nvidia gates its virtualized GPU offerings behind their enterprise cards, so we're left with ineffective command translation.
IMO: I can tolerate just about every other type of VM overhead, but choppy/unresponsive GUIs have a surprisingly bad ergonomic effect (and somehow leak into the performance of everything else).
If we could get that fixed, at least amongst Linux-on-Linux virtualization, I think virtualizing everything would be a much more tenable option.
It's horrible that trust is being eroded so much, and seeing monthly GB updates to my OS doesn't reassure me at all. I like the idea of having a stable isolated VM for each project. Are there standard open-source tools to do this?
Specifically, I'm transitioning my Go and Zig development environments from an old Mac to an M1 with Asahi Linux and getting a bit lost even finding replacements for TrueCrypt and Little Snitch. Do these VM tools support encrypted VMs with firewall rules? I saw Vagrant mentioned here and that sounds like it might cover the network isolation, but what else would you suggest?
I know where you are coming from, and I have considered this myself again and again. For me, for now, it is not something I want to do, and not primarily because of the effort.
The VM might protect me, but it will not protect the users of the software I am producing. How can I ship a product to the customer and expect them to safely use it without protection when I myself only touch it when in a hazmat suit?
No, that is not the environment I want.
My current solution is to be super picky with my dependencies. More specifically, I hold the opinion that we should trust neither projects nor companies but only people. This is not easy to do, but I do not see a better alternative for now.
i develop on linux, on various projects. i'm mostly concerned with all the tools, build scripts and tests that may read sensitive data, or accidentally destroy data. so i'm limiting access to files when working on a project with linux namespaces, using bubblewrap.
i've got a simple per-project dot file that describes the file system binds. while i'm working on a project, new terminals i open are automatically isolated to that project based on that dot file. it has very low (cognitive) overhead and integrates pretty much seamlessly. i suspect many developers have similar scripts. i looked for projects that did this some time ago, but couldn't find it. either because it's too simple to make a project about, or because i don't know how others would describe it. if anyone has pointers...
i don't limit network access (though i did experiment with logging all traffic, and automatically setting up a mitm proxy for all traffic; it wasn't convenient enough to use as regular user).
there is still a whole kernel attack surface of course. though i'm mostly concerned about files being read/destroyed.
I think a lot of the issue in this particular example is that API keys, once leaked, act as single-factor passwords.
If you ran a key logger on my machine you would never get into any major site with mfa. You couldn't watch me log on to the azure console with passkey and do much with it. But if you scrape a saved key with publish abilities bad things happen.
I wonder how this is mitigated by my current workflow of running jupyter and vscode from a docker container.
I did not start doing this because of security, but just to get something more or less self managed without any possibility to break different projects. I am tired of my team spending too much time on extensions, versions, packages, ...
Docker compose files have saved our team many hours, even if it's extremely wasteful to have multiple vscode instances running alongside each other
I started doing development under a separate non-admin user on my MacBook. I switch to another user for personal stuff, or the admin user to install stuff with Homebrew. Doesn't protect from zero days but it's better than nothing.
The security and overall application-stability attack vector is why I now vouch for processes with OS IPC instead of shared libraries, even if it requires more resources.
It doesn't fully sort out the trust issue though, even if everything is sandboxed in some fashion.
I wonder how long until this is standard, and PFYs coming into the industry look at our current practices much like people now look at non-encrypted credentials being sent over the network.
The only part of the article I disagree with is this line:
> But in general, it’s a good idea not to install NPM packages blindly. If you know what to look for, there are definite signals that these packages are dodgy. All of these packages have just two files: package.json and index.js (or main.js). This is one of several flags that you can use to determine if a package is legit or not.
This works maybe OK for top-level packages, but it's nearly impossible to vet every transitive dependency.
This is really where SELinux had the right idea overall: preclassifying files with data about their sensitivity, and denying access based on that, does adequately solve this problem (i.e. keeping npm installations away from id_rsa).
They also mark projects as "abandoned" if they move to any other forge that isn't github. And they stay abandoned even if new releases appear on npm/pypi :D
Their competence isn't as big as their fame, in my opinion.
Also one of their sales people insulted me over email, because apparently not being interested in buying their product means you're an incompetent developer who can only write software filled with vulnerabilities.
That's extremely unfortunate, especially about the "abandoned" labelling. I've been looking to move off GitHub recently as well, it feels like it's got a bit too much control.
Codeberg looks interesting, and there are self-hosted ones like Forgejo that also look great if you're okay with the maintenance.
> They also mark projects as "abandoned" if they move to any other forge that isn't github. And they stay abandoned even if new releases appear on npm/pypi :D
Well, there's a sign of a good team... /s
That's actually an interesting take, I haven't heard too much about them except that they do have an ego.
Without more context, this doesn't look great for Snyk either way: either they have an employee using NPM to live test their own services, or they have insufficient controls/processes for performing a legitimate audit of Cursor without using public resources.
Why not? NPM behaves oddly when there is a public package named the same as one on a private repo, in some cases it’ll fetch the public one instead. I believe it’s called package squatting or something. They might have just been showing that this is possible during an assessment. No harm no foul here imo
It's not white hat because they actively extract data; if it was just to prove it worked they could've done a console.log, caused npm install to fail, or not extracted a payload.
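For contrast, a proof of execution doesn't need a network call at all. A hypothetical harmless payload (the package name and marker string here are invented) could just log a unique marker:

```javascript
// A non-exfiltrating PoC payload: proves the package was installed and
// executed without stealing anything. Package name and marker are invented
// placeholders, not the actual packages involved in this incident.
function pocMessage(pkgName) {
  // A unique marker string is enough for a researcher to find in build
  // logs as evidence of execution -- no env vars, no HTTP request.
  return `[dependency-confusion-poc] package "${pkgName}" was executed`;
}

console.log(pocMessage("example-internal-package"));
```

Anyone reviewing their install logs would see the marker and know the confusion occurred, with nothing sent upstream.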
Looks like NPM is generating jobs for those in the security field. It's an unfixable mess; I really hope some competition like JSR will put enough pressure on the organization.
It's not just NPM, it's the trust in third-party libraries in general. Even though it's much rarer, you'll see exploits on platforms like NuGet. You're also going to see them on JSR. You have more security because they are immutable, but you're not protected from downloading a malicious package before it's outed.
I think what we're more likely to see is that legislation like DORA and NIS2 will increasingly require you to audit third-party packages, enforcing a different way of doing development in critical industries. I also think you're going to see a lot less usage of external packages in the age of LLMs. Why would you pull in an external package to generate something like your OpenAPI specification when any LLM can write a CLI script that does it for you in an hour or two of configuring it to your needs? Similarly, you don't need to use LLMs directly to auto-generate the "boring" parts of your code; you can have them build CLI tools which do it. That way you're not relying on outside factors, and while I can almost guarantee that these CLI tools will be horrible cowboy code, their output will be what you refine the tools to make.
With languages like Go pushing everything you need in their standard packages, you're looking at a world where you can do a lot of things with nothing but the standard library very easily.
OT: Has anyone ever gotten (proper) SBOMs for Snyk's own tools and services? Asking because they want to sell my employer their solution (which does SBOMs).
Snyk is founded by people from the Israeli Army's Unit 8200.
I wouldn't install it if you paid me to, because it feels a lot like Unit 8200 pumps out entrepreneurs and funds them so that (like the NSA) they have their foot already in the door.
Snyk Research Labs regularly contributes back to the community with testing and research of common software packages. This particular research into Cursor was not intended to be malicious and included Snyk Research Labs and the contact information of the researcher. We were very specifically looking at dependency confusion in some VS Code extensions. The packages would not be installed directly by a developer.
Snyk does follow a responsible disclosure policy and while no one picked this package up, had anyone done so, we would have immediately followed up with them.
Spraying your attack into the public with hopes of hitting your target is the polar opposite of responsible. The only "good" part of this is that you were caught in the act before anyone else got hit in the crossfire.
In response, you suggest that you'll send a letter of apology to the funeral home of anyone that got hit. Compromising their credentials, even if you have "good intentions", still puts them into a compromised position and they have to react the same as they would for any other malevolent attacker.
This is so close to "malicious" that it's hard to perceive a difference.
edit: Let's also remind everyone that a Snyk stakeholder is currently attempting to launch a Cursor competitor, so assuming good intentions is even MORE of a stretch.
This is grey-hat at best. Intent may have been good, but the fact is that this team created and distributed software to access and exfiltrate data without permission which is very illegal. You may want to consult with the legal department before posting about this on a public forum fyi.
Seems reasonable enough, but why would it (allegedly) send environment variables back via a POST? Even if it's entirely in good faith, I'd rather some random package not have my `env` output..
Upvoting this since presumably you're actually the CTO at Snyk and people should see your official response, but wow this feels wildly irresponsible. You could have proved the PoC without actually stealing innocent developer credentials. Furthermore, additional caution should have been taken given the conflict of interest with the competitor product to Cursor. Terrible decision making and terrible response.
Why, after all these years, are we still doing this stupid thing of using a global namespace for packages? If you are a company with an internal package registry just publish all your packages as @companyname/mylib and then no one can squat the name on a public registry. I thought we collectively learned this 4 years ago when dependency confusion attacks were first disclosed.
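For scoped packages, npm can pin the whole scope to a registry in `.npmrc`. A minimal sketch of that config (the company name and registry URL are placeholders):

```ini
; .npmrc -- route everything under @companyname to the internal registry,
; so a public package can never shadow it (placeholder URL)
@companyname:registry=https://npm.internal.example.com/
; the default registry still serves unscoped third-party packages
registry=https://registry.npmjs.org/
```

On the public registry, publishing under `@companyname` additionally requires owning that npm organization, which blocks the squat in the first place.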
The usual reasons: laziness, ignorance, poor design. Most package managers suck at letting you add 3rd party repos. Most package managers don't have namespaces of any kind. The ones that do have terrible design. Most of them lack a verification system or curation. Most of them have terrible search. None of them seem to have been exposed to hierarchical naming or package inheritance. And a very small number of people understand security in general, many fewer are educated about all the attack classes.
But all of that is why they get popular. Lazy, crappy, easy things are more popular than intentional, complex, harder things. Shitty popular tech wins.
In the Java world, you need to prove ownership of a given namespace (group id), e.g. via a TXT record for that domain. Isn't there a similar concept for NPM? The package is named sn4k-s3c/call-home, how will a victim be tricked into referencing that namespace sn4k-s3c (which I suppose is owned by the attacker, not Cursor)? I feel like I'm missing part of the picture here.
You're not really missing anything so much as adding a misguided assumption of competence to NPM.
Npm doesn't really do namespaces. There's just no ownership to prove, as most packages are published like "call-home" with no namespace required. This gives exciting opportunities for you to register cal-home to trap users who mistype, or caII-home to innocuously add to your own or open source projects, or whatever. Fun, isn't it?
In this case the call-home package is namespaced, but the real attack is packages like "cursor-always-local", which has no namespace and can sometimes (?) take precedence over a private package with the same name.
It's not a pretty picture, you were better off missing it really.
NPM packages are the most bloated and unreadable pieces of code I've encountered. The creator of Node apparently hates all software and yet Google gave him the captain's hat and we're left with the absolute crap shoot that is web development. I feel guilty with an additional 1KB of code or 500 bytes of RAM but this is seen as an outsider opinion. I hope big tech rots and this is just a symptom.
https://news.ycombinator.com/item?id=3055154
Hopefully this makes the Cursor team reconsider security (which doesn't seem very good really).
Stopped using it for serious stuff after I noticed their LLMs grab your whole .env files and send them to their server... even after you add them to the .cursorignore file. Bizarre stuff.
Now imagine a bad actor exploiting this... recipe for disaster.
Security often means the opposite of scalability and growth, so why should they? The business goal is to make sure Cursor grows large enough that they have the economies of scale to be a viable business.
If you want secure LLM you can use Mistral, which comes with all the EU limitations, good and bad.
> All of these packages have just two files: package.json and index.js (or main.js). This is one of several flags that you can use to determine if a package is legit or not.
Wouldn't a lot of small packages consist of just these two files, meaning seeing just these two files in a package may raise an eyebrow but hardly be a smoking gun?
It's not a smoking gun. It is just one of a number of signals you look for when identifying potentially malicious packages. Other things you look for are the number of collaborators, how long the package has existed, the domains it talks to, and the artifacts it pulls in.
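Those signals could be combined into a rough triage score. This is a toy sketch with invented weights and thresholds, nothing like the scoring any real tool uses:

```javascript
// Toy triage heuristic combining the signals mentioned above. All field
// names, weights, and thresholds are invented for illustration; real
// supply-chain tooling is far subtler.
function suspicionScore(pkg) {
  let score = 0;
  if (pkg.fileCount <= 2) score += 2;             // just package.json + index.js
  if (pkg.maintainers <= 1) score += 1;           // single collaborator
  if (pkg.ageDays < 30) score += 2;               // very new package
  if (pkg.outboundDomains.length > 0) score += 3; // phones home at install time
  if (pkg.hasInstallScript) score += 2;           // install hooks run code

  return score;
}

const sample = {
  fileCount: 2,
  maintainers: 1,
  ageDays: 3,
  outboundDomains: ["evil.example.com"],
  hasInstallScript: true,
};

console.log(suspicionScore(sample)); // 10 -- worth a manual look
```

No single signal condemns a package; it's the combination (tiny, brand new, one maintainer, phoning home from an install script) that should trigger a manual review.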
Across hundreds of builds, Snyk has been a challenging integration that ultimately creates very little value. I recommend using a decent team that goes after flow weaknesses, as those are responsible for the most significant findings...
The TL;DR is that our security research team routinely hunts for various vulnerabilities in tools developers use. In this particular case, we looked at a potential dependency confusion attack in Cursor, but found no vulnerabilities.
There's no malicious intent or action here, but I can certainly understand how it appears when there's not a ton of information and things like this occur! As a sidenote, I use Cursor all the time and love it <3
> The packages performed HTTP requests back to our researchers containing username, hostname, current directory and (in later versions) environmental variables.
And exfiltration was needed to confirm a vulnerability why exactly? I love how completely unaware you guys are.
Sorry, but you screwed up royally. Scary to see that Snyk still does not see this.
Ethically, your work was even lower than that of those who test their AI tools on FOSS code, send in bogus reports and thus waste maintainer's time. Experimenting on unwitting humans and ecosystems is not okay.
As much as I don't like NPM, these issues aren't limited to NPM. It's just that NPM is getting so much attention that we're more likely to find and hear about these issues when they target NPM.
I'm fairly concerned about the state of Python packages. It's not every week, but I frequently stumble upon packages that are not what they appear to be. Sometimes not maliciously, sometimes the author just got overly ambitious and failed to deliver, other times, someone is clearly typo-squatting or attempting to get you to install the wrong thing.
I don't have a dog in this hunt. I've never worked with Snyk, I've never been a customer, and I don't think I even know anyone who works there. That said, they've built their whole company around being trustworthy and I doubt they'd knowingly do anything to risk their entire business. Also, I can hardly imagine someone better positioned to protect against supply chain attacks.
Your criticism sounds to me like "just a reminder that this armed bodyguard service comprises Navy SEALs and Army Rangers". Uh, great!
So every Israeli is now a Mossad agent and every customer is an enemy of Israel like Hezbollah? You won't buy from Snyk because you want to boycott Israel; just own up to it, it's a popular position to take.
As the pager / radio terrorist attack showed, it's the Mossad involvement you don't know about that you should be worried about. Same with the CIA, they were behind "secure" radio communication in Europe for decades (https://en.wikipedia.org/wiki/Crypto_AG) and nobody had any idea.
Just a reminder that Unit 8200 is staffed mostly by conscripts who are serving out their mandatory military service and chose to accept an invitation to serve in the cyberwarfare arm of the IDF instead of choosing to shoot guns.
In other words, it's staffed by Israeli kids who made the choice most of us would have made under the circumstances. It seems a bit unfair to hold that against them more than 10 years later, no?
Wrong forum perhaps? Or wrong story? But you clicked the story. Read it. And then even bothered to comment on it. You are putting a lot of effort into something that does not interest you.
If it was really a test, then why would it be sending environment variables via HTTP POST? There are many better ways to do this if you're legitimately deploying code remotely.
[0] In particular, many proxies will choose the public over the private registry if the latest package version is higher: https://snyk.io/blog/detect-prevent-dependency-confusion-att...
Allowing someone full access to the contents of your environment (i.e. output of env command) is a big deal to most, I suspect.
I hope there is no foul play.
For work use I use a work machine, and if it gets compromised it's not really my own problem.
If you're pulling in a package that has 400 dependencies, how the heck would you even competently check 10% of that surface area? https://gist.github.com/anvaka/8e8fa57c7ee1350e3491#top-1000...
This would be where different security advice would apply: don't pull in a package that has 400 dependencies.
At my place of work we use this great security tool called Snyk. Definitely check it out /s
Completely backwards software that corpos only seem to buy because their insurers force them to check off some security list box.
They should be running a private npm repo for tests (not difficult to override locally) and also their own collaborator server.
[1] https://www.githax.com/
Seems to be either a tool that isn't out yet, or perhaps not available for free or to the public.
Think xz-utils, but with much less sophisticated exploits.
I don't see any systematic protection against this?
Snyk
https://en.wikipedia.org/wiki/Snyk
Did Cursor make claims to this effect and invite the public to hack them? Or are you equating someone saying they "take security seriously" with "it's open season, please attack our systems"?
I was dismayed to learn about their choice of brand, and think it might cause confusion. :(