cnvogel | 1 year ago
Once the first stage, a shell script extracted from one of the "test" data blobs, had been found, it was clear to everybody that something fishy was going on.
It's inconceivable that I would have found the first stage and then just given up; after that it was "only" a matter of tedious shell reversing…
They could easily have done without the "striping" or the "awk RC4", but those must have complicated their internal testing and development quite a bit.
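For context on the "awk RC4" mentioned above: the payload decrypted one of its stages with an RC4-style stream cipher implemented in awk. The attacker's actual script isn't reproduced here; below is a generic, illustrative sketch of the standard RC4 algorithm in Python, showing how little code such a decryption layer needs.

```python
def rc4(key: bytes, data: bytes) -> bytes:
    """Standard RC4: encrypting and decrypting are the same operation."""
    # Key-scheduling algorithm (KSA): permute S based on the key.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA): XOR data with keystream.
    i = j = 0
    out = bytearray()
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)
```

Because RC4 is a symmetric XOR stream cipher, running `rc4(key, ciphertext)` with the same key recovers the plaintext; the only primitives needed are arrays, modular addition, and XOR, all of which awk provides (XOR via a lookup table or `xor()` in gawk), which is what made hiding it inside build-time scripting practical.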
cesarb | 1 year ago
But what you were looking at might not be the first stage.
You might be looking at the modified Makefile. You might be looking at the object files generated during the build. You might be looking at the build logs. You might be investigating a linking failure. The reason for so many layers of obfuscation is that the attacker had no idea at which layer the good guys would start looking; at each point, they tried to hide in the noise of the corresponding build system step.
In the end, this was caught not at the build steps, but at the runtime injection steps; in a bit of poetic justice, all this obfuscation work caused so much slowdown that the obfuscation itself made it more visible. As tvtropes would say, this was a "Revealing Cover-Up" (https://tvtropes.org/pmwiki/pmwiki.php/Main/RevealingCoverup) (warning: tvtropes can be addictive).
makomk | 1 year ago
almostnormal | 1 year ago
For a one-of-its-kind deployment it would probably not matter. However, deploying to multiple targets using the same basic approach would allow all of them to be found once one was discovered. With mildly confusing but different scripting for each target, systematic detection of the others becomes more difficult.