(no title)
binsquare | 1 day ago
The author had copilot read a "prompt injection" inside a readme while copilot is enabled to execute code or run bash commands (which user had to explicitly agree to).
I highly suspect this account is astro-turfing for the site too... look at their sidebar:
``` Claude Cowork Exfiltrates Files
HN #1
Superhuman AI Exfiltrates Emails
HN #12
IBM AI ('Bob') Downloads and Executes Malware
HN #1
Notion AI: Data Exfiltration
HN #4
HuggingFace Chat Exfiltrates Data
Screen takeover attack in vLex (legal AI acquired for $1B)
Google Antigravity Exfiltrates Data
HN #1
CellShock: Claude AI is Excel-lent at Stealing Data
Hijacking Claude Code via Injected Marketplace Plugins
Data Exfiltration from Slack AI via Indirect Prompt Injection
HN #1
Data Exfiltration from Writer.com via Indirect Prompt Injection
HN #5 ```
crummy|1 day ago
binsquare|1 day ago
But is it a security issue on copilot that the user explicitly giving AI permission and instructed it to curl a url?
Regardless of the coding agent, I suspect eventually all of the coding agents will behave the same with enough prompting regardless if it's a curl command to a malicious or legitimate site.
roywiggins|1 day ago
If 2) is fine then why bother with 1)? In yolo mode such an injection would be "working as designed", but it's not in yolo mode. It shouldn't be able to just do `env sh` and run whatever it wants without approval.
fulafel|1 day ago
"The env command is part of a hard-coded read-only command list stored in the source code. This means that when Copilot requests to run it, the command is automatically approved for execution without user approval."
politelemon|1 day ago
altairprime|1 day ago