That's a common mistake made by people who don't understand how AVs work and how VirusTotal works. From their own FAQ:
At VirusTotal we are tired of repeating that the service was not designed as a tool to perform antivirus comparative analyses, but as a tool that checks suspicious samples with several antivirus solutions and helps antivirus labs by forwarding them the malware they fail to detect. Those who use VirusTotal to perform antivirus comparative analyses should know that they are making many implicit errors in their methodology, the most obvious being:
-VirusTotal's antivirus engines are commandline versions, so depending on the product, they will not behave exactly the same as the desktop versions: for instance, desktop solutions may use techniques based on behavioural analysis and count with personal firewalls that may decrease entry points and mitigate propagation, etc.
-In VirusTotal desktop-oriented solutions coexist with perimeter-oriented solutions; heuristics in this latter group may be more aggressive and paranoid, since the impact of false positives is less visible in the perimeter. It is simply not fair to compare both groups.
-Some of the solutions included in VirusTotal are parametrized (in coherence with the developer company's desire) with a different heuristic/agressiveness level than the official end-user default configuration.
This is interesting for signature-based AVs. More interestingly, bypassing dynamic AV engines that execute code in a sandbox seems to be fairly trivial as well. For example, allocating 100 MB of memory and running a few million loop iterations during startup will cause most AV engines to stop executing the code due to resource constraints. This paper is a really interesting read on the topic[0]
I once wrote a kernel extension that intercepted any and all file open()'s on OS X. If the application in question was opening a file that it was not whitelisted to open, it would bring up a modal dialog box asking whether or not this application should be allowed to open this file.
It was basically a firewall on the kernel level.
It worked splendidly; however, I was never able to gain any traction marketing it. That was back around OS X 10.4. I've been waiting for another company to come along with something similar, since it really does seem a comprehensive way of blocking viruses (albeit more suited to more advanced users). I'm still waiting for something like that.
That is an example of a mandatory access control (MAC) framework[1]. SELinux[2] is a MAC implementation for Linux systems and is very effective if the user doesn't disable it out of frustration over false positives, or over true positives that the user views as false.
OSX has discretionary access control, which can be configured to be a full MAC[3].
Starting in OS X v10.5, the kernel includes an implementation of the TrustedBSD Mandatory Access Control (MAC) framework. A formal requirements language suitable for third-party developer use was added in OS X v10.7. Mandatory access control, also known as sandboxing, is discussed in Sandboxing and the Mandatory Access Control Framework.[4]
It was based on a port of OpenBSD's PF firewall and let you set fine-grained permissions on file, network, and registry access. It's a painful training process for newly-installed software (lots and lots of prompts) but I haven't seen anything else come close to what it offered. I wonder if that pain is why they seem to have abandoned it; at some point the average user would end up just uninstalling it or clicking "Allow" for every prompt.
Once up and running, however, you could do some really cool things such as giving a process read-only access to its installed directory plus the ability to read/write to a specific folder you store that program's documents in. Attempts by the program to read outside those directories would be rejected, with mixed results (from gracefully handling it, to endless alert dialog looping, to crashing) depending on how well the software was written.
When I read this, I thought, "Hey, that sounds a lot like Little Snitch but for disks." Then I read below that LS' main competitor, Hands Off!, seems to have that functionality. Very interesting...
I did something similar for a Computer Security class back in the day, but I did it from userland using dylib injection, and did it as a PoC of the malicious things you could do without getting root.
Once you've intercepted read() and write(), you control almost everything. One of the demos I did was injecting content into HTTP responses. Fun project, very glad I didn't ever share the code for it :)
You've basically written a HIPS (Host Intrusion Prevention System), or at least a base layer of one. Antivirus products on Windows commonly include this kind of protection; I don't know of commercial solutions for other systems.
It's initially worrying, but it makes a certain amount of sense when you think that an antivirus is a blacklist (of known vulnerabilities), not a whitelist.
- This wouldn't work for larger payloads. AVs flag binary looking data that is larger than a certain size and that is later processed or assigned to a variable.
- Their veil project has some problems. Py2EXE gets marked as malware by some AVs in many cases just because it is Py2EXE. Same thing with non-commonly used obfuscators. Basically, they just pick up on the fact that something is obfuscated. If that obfuscator is not commonly used in goodware programs, it is marked as malware. This is kind of a dumb strategy on the part of AV engines, but it works okay.
You're not going to catch new malware with static (or dynamic for that matter) analysis anyway. Thing is, the problem is ill-defined.
What is malware?
Is it a program that does something a user doesn't want? If users knew what regular programs do, they wouldn't be okay with most of it either.
Is it a program that does some obfuscating tricks and exploits undocumented functionality in the system? Plenty of legit programs including a lot of AV engines do that as well.
The only usable definition in my opinion is that it is a program that makes the user unhappy with no easily accessible way of removing it completely.
This is why the only solution seems to be to only allow installation from a trusted repository. I am still not sure why Windows/Apple OSX haven't adopted such a strategy (with a developer mode override option for some advanced users).
> What is malware? Is it a program that does something a user doesn't want?
OS X Sandboxing seems to have the right idea: Instead of worrying about what the user doesn't want, do only what the user WANTS.
Basically, sandboxed apps don't have access to files and folders other than the ones that the user explicitly chooses in an Open/Save dialog. It's a surprisingly nag-free opt-in mechanism that "just works."
After that, automatic backups will let users revert any undesirable changes to their data, whether they were made by their own selves or by malware.
I think operating systems should just do a better job of making the user more aware of all recently-modified files, especially if a process has been modifying a large number of them in a short time (the recent ransomware comes to mind) or if a third-party background process has been generating an uncanny amount of network traffic.
Seeing something like "1,590 files modified" on log-on or in a notification, is way more alarming and would make users take immediate action, compared to all the usual OS or antivirus nags that we are all accustomed to subconsciously agreeing to.
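For reference, opting a Mac app into the Open/Save-dialog behavior described above comes down to a couple of entitlement keys. This is a minimal sketch of an entitlements plist, assuming the standard App Sandbox keys; real apps usually declare more.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- opt the app into the sandbox -->
    <key>com.apple.security.app-sandbox</key>
    <true/>
    <!-- grant access only to files the user picks in an Open/Save dialog -->
    <key>com.apple.security.files.user-selected.read-write</key>
    <true/>
</dict>
</plist>
```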
"the only solution seems to be to only allow installation from a trusted repository. I am still not sure why Windows/Apple OSX haven't adopted such a strategy (with a developer mode override option for some advanced users)."
One weird thing w.r.t. Gatekeeper is that it seems to depend on everybody who downloads signed executables not via the App Store to blacklist them (by adding some extended attribute to the file).
Is this an oversight of the AV software companies? Did no one come up with this before? Could it be that if people did come up with this before that a lot of Windows computers have viruses without them knowing? Is their virus detection scheme fundamentally flawed?
Should I be shocked? Shouldn't I be? I'm currently shocked but I don't know if it's justified, not an expert in the field.
Antivirus software is as useful for security as monster HDMI cables are useful for improving digital picture quality, so this is not surprising.
The only thing that antivirus software does semi-decently is identify known software binaries. Antivirus software cannot reliably identify unknown binaries through heuristics because writing software to understand unknown software binaries is impossible in general. There are potentially an infinite number of ways of proving that, but the easiest way that occurs to me is that one of the many things necessary for understanding unknown software in general is solving the halting problem, which was proven to be impossible in general by Alan Turing.
Furthermore, the utility for a database of known malicious binaries is practically non-existent. Malicious software is always designed to exploit some vulnerability and once the vulnerability is fixed by the vendor, there is nothing for the antivirus software to do. If you could apply the definition update that the antivirus software needed to catch malicious software, you could have applied the vendor patch that fixed the vulnerability the malicious software used in the first place. That not only makes the definition update unnecessary, but handles the unknown things that the definition update would never have caught.
In the cases of a vendor being slow to patch, refusing to patch (e.g. the exploits used by the hot potato proof of concept code for all current Windows versions) or the user not applying the patch in time (e.g. lack of scheduled downtime), the inability of antivirus software to catch unknown software using those vulnerabilities provides a false sense of security. If a system is specifically targeted by a malicious hacker, the hacker would use something that antivirus software would not catch, such as a script kiddie tool against which there are no known definitions or custom code. Being unfortunate enough to be attacked by a virus, trojan, etcetera before they get definitions also means there is no protection.
Real security requires doing things like minimizing attack area and configuring things competently (e.g. not using your username as your password). That is something that you cannot get from an antivirus vendor.
You're not secure until you whitelist. And even that's not a guarantee; it's a necessary but sufficient condition. But systems which do not run signed, whitelisted code from boot time forward are as good as pwnt.
No, a whitelist isn't good enough. You can't anticipate an exhaustive list of the programs the user will want to run.
What you can do, however, is enforce a policy by which programs are required to provide machine-checkable evidence, also known as proof-carrying code [https://en.wikipedia.org/wiki/Proof-carrying_code], that they respect the system's safety policy.
Laughable. Ages ago I realised you could just take an exploit, base64 the contents of the binary code, save it in a string. You could then unbase64 it and execute the binary in memory. Nothing seemed to catch it.
You'd still need a decoding stub which can be fingerprinted. An XOR "decoder" is far smaller in shellcode and can be custom written in asm to reduce time-to-first-signature.
We just need a better sandboxing environment and individual permissions per executable, i.e. "Can this executable connect to such-and-such IP?", "Can this program read outside of its sandbox folder?"
When you don't root your Android or iPhone, they handle this a lot better than desktop operating systems.
I used to work at a large, well known AV company. While much of what other commenters have said is true, I will note that VT is less authoritative than some realize; the version of our product that VirusTotal was given access to was substantively different than our normal product; features such as sandboxing were removed.
I wouldn't be surprised if this was a common practice; we considered our product's detection capabilities to be proprietary.
So after the program is actually compiled into binary code, do the resulting instructions become so simple (and so fundamental to the operation of programs) that any attempt to write a heuristics rule to stop this technique would break thousands of programs? Or are heuristics just so inherently shitty that this technique works? Because I would still think that this line here:

    void *exec = VirtualAlloc(0, sizeof c, MEM_COMMIT, PAGE_EXECUTE_READWRITE);

would be a red flag. I don't know how many programs use that on the Windows side of things, but I have almost never seen programs call this function that weren't "crypters."
I am honestly impressed how simple this program is (it's the most elegant crypter I've probably ever seen) but I am still wondering whether heuristics could be made to detect this (without also falsely detecting thousands of other programs?) What do you think OP? Not really my field but curious all the same.
I believe this is how JIT compilation works. So this would trigger a false positive on the Java VM, the V8 JavaScript engine, SpiderMonkey, and the C# runtime.
Why would that be a red flag? From what I can glean from the documentation, both MEM_COMMIT and PAGE_EXECUTE_READWRITE are perfectly reasonable flags to pass to VirtualAlloc, and -again, according to the documentation- VirtualAlloc appears to be a way to (de|re)allocate memory within the region allocated to your process.
What's more, every program that uses Boost on Windows calls VirtualAlloc, in several places.
VirtualAlloc is roughly equivalent in intent to POSIX's mmap. So anywhere that you'd use mmap in a Linux program, you'd probably use VirtualAlloc in Windows.
Without reading the article: it looks like it casts exec to a function pointer with void return type and no arguments, and calls it.
(Disclaimer: I'm paid for writing Java :( )
Almost. It's casting exec to a pointer to a function that "returns" void and takes no arguments and then calling the function. Search for "C right-left rule" (without quotes) to see some hints on how to read complex declarations (mostly applies to casting too). I like http://ieng9.ucsd.edu/~cs30x/rt_lt.rule.html
AV is a small and often useless supplement to security, nothing more. It has always been trivially easy for a script kiddie to write malicious software that passes all of VirusTotal. In the decade I've been using AV, I had fewer true positives than false positives (yes, please auto-delete my patches, hacktools, and software I've written myself, idiot AV program) and of course some false negatives that wrecked me because I executed them. Since I stopped using AV and became more careful (e.g. using a VM for suspicious files), I was never infected again.
Let's not forget that this isn't actually all that practical.
It's not executing any old binary: the 'shellcode' has to be position-independent and can't rely on any normal PE features; it has to do all its library loading on its own, etc.
Yes, very possible to create. But if the malware/tool you want to run is large, it will take a fair amount of time to convert.
I've said it before, but as hard as it might be to believe, I think Smalltalk was 30 years ahead of its time when every program was packaged inside its own OS image.
Anti-virus bypasses and even exploits are extremely common. My current line of thinking is that the best way to take control of your computer is to use virtualization to run many separate OS images for different sets of uses.
Interesting, but would have obvious drawbacks when it comes to addressing any OS vulnerabilities or improvements - suddenly you have to reinstall ALL your applications!
This reminds me of Nintendo's approach to emulation for their Virtual Console, actually. Rather than having a standalone emulator that you download images for, they package the emulator with the game. This way they never have to worry about inadvertently breaking anything with a future release, but on the other hand any subsequent emulator improvements do not retroactively apply.
Try running the binary instead of just scanning it for signatures. It will be detected by the heuristics engines of most AVs. I'm not saying that AVs are good, but articles like this that focus only on defeating signatures while leaving out heuristics (emulation, behavior detection, and even whitelisting to some extent) don't do the subject justice.
0: http://www.sevagas.com/IMG/pdf/BypassAVDynamics.pdf
[1] https://en.wikipedia.org/wiki/Mandatory_access_control
[2] https://en.wikipedia.org/wiki/Security-Enhanced_Linux
[3] http://sysdev.me/trusted-bsd-in-osx/
[4] https://developer.apple.com/library/mac/documentation/Securi...
ams6110 | 10 years ago:
See Windows Vista.
――――――
¹ — http://www.oneperiodic.com/products/handsoff/
fao_ | 10 years ago:
Somewhat related: http://security.stackexchange.com/a/117312
Someone | 10 years ago:
Isn't that what Gatekeeper (https://support.apple.com/en-us/HT202491) is supposed to be?
I think a whitelist would be more reliable.
wolfgke | 10 years ago:
No, this is a principal limitation of any AV software that is based on blacklisting.
> Did no one come up with this before?
Of course other people came up with similar ideas before.
> Should I be shocked?
If and only if you had trust in your AV software before.
devit | 10 years ago:
It's impossible to determine whether software is malicious or not (Rice's theorem).
Antivirus software only reliably detects code that is identical to known malicious software.
> it's a necessary but sufficient condition
Perhaps you mean “not sufficient”?
Uptrenda | 10 years ago:
((void(*)())exec)();
Type cast a void pointer to a void pointer, execute the result, then execute the result of that? Or ... Can anyone explain what's going on here?
kondbg | 10 years ago:
Exec (before the cast) points to memory containing the shellcode data.
To actually start executing the shellcode, you just need to somehow cause the program counter to point to the address of the shellcode.
An easy way to change the program counter is by calling a function ... which is what this line does.
Read this as "cast exec to a pointer to a function that takes zero arguments and returns void and call the function with no arguments."
This is the same as:

    void (*f)(void) = (void (*)())exec;
    f();

To familiarize yourself with C syntax regarding pointers, read about the "right-left rule"[1].

[1] http://ieng9.ucsd.edu/~cs30x/rt_lt.rule.html
gus_massa | 10 years ago:
> Are reposts ok?
> If a story has had significant attention in the last year or so, we kill reposts as duplicates. If not, a small number of reposts is ok.
> Please don't delete and repost the same story, though. Accounts that do that eventually lose submission privileges.
Looking at the publicly available data on the 3 submitters, they look like real independent users (not sockpuppets or anything).