That's a common mistake made by people who don't understand how AVs work and how VirusTotal works. From their own FAQ:
At VirusTotal we are tired of repeating that the service was not designed as a tool to perform antivirus comparative analyses, but as a tool that checks suspicious samples with several antivirus solutions and helps antivirus labs by forwarding them the malware they fail to detect. Those who use VirusTotal to perform antivirus comparative analyses should know that they are making many implicit errors in their methodology, the most obvious being:
-VirusTotal's antivirus engines are commandline versions, so depending on the product, they will not behave exactly the same as the desktop versions: for instance, desktop solutions may use techniques based on behavioural analysis and count with personal firewalls that may decrease entry points and mitigate propagation, etc.
-In VirusTotal desktop-oriented solutions coexist with perimeter-oriented solutions; heuristics in this latter group may be more aggressive and paranoid, since the impact of false positives is less visible in the perimeter. It is simply not fair to compare both groups.
-Some of the solutions included in VirusTotal are parametrized (in coherence with the developer company's desire) with a different heuristic/agressiveness level than the official end-user default configuration.
> -VirusTotal's antivirus engines are commandline versions, so depending on the product, they will not behave exactly the same as the desktop versions: for instance, desktop solutions may use techniques based on behavioural analysis and count with personal firewalls that may decrease entry points and mitigate propagation, etc.
It says right in the parent comment that the VT results may not actually reflect how the product performs in the real world.
If you just read the reasons why you shouldn't use it for comparisons, you should also understand why it's pointless to use VT to test whether or not you've bypassed an AV. To quote:
-VirusTotal's antivirus engines are commandline versions, so depending on the product, they will not behave exactly the same as the desktop versions: for instance, desktop solutions may use techniques based on behavioural analysis and count with personal firewalls that may decrease entry points and mitigate propagation, etc.
-In VirusTotal desktop-oriented solutions coexist with perimeter-oriented solutions; heuristics in this latter group may be more aggressive and paranoid, since the impact of false positives is less visible in the perimeter. It is simply not fair to compare both groups.
-Some of the solutions included in VirusTotal are parametrized (in coherence with the developer company's desire) with a different heuristic/agressiveness level than the official end-user default configuration.
This is interesting for signature-based AVs. More interestingly, bypassing dynamic AV engines that execute code in a sandbox seems to be fairly trivial as well. For example, allocating 100 MB of memory and running a few million iterations in a loop during startup will cause most AV engines to stop executing the code due to resource constraints. This paper is a really interesting read on the topic.[0]
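To make the idea concrete, here's a minimal sketch in C (my own illustration, not taken from the paper) of the stalling technique: burn memory and CPU up front so a time- and resource-bounded emulator gives up before the interesting code ever runs.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        /* Allocate ~100 MB and touch every byte so an emulator must
           actually back the allocation. */
        size_t sz = 100u * 1024 * 1024;
        unsigned char *buf = malloc(sz);
        if (!buf)
            return 1;
        memset(buf, 0xAA, sz);

        /* Spin through a few million iterations of pointless work.
           'volatile' keeps the compiler from optimizing the loop away. */
        volatile unsigned long long acc = 0;
        for (unsigned long long i = 0; i < 50000000ULL; i++)
            acc += buf[i % sz] ^ i;

        /* A real payload would only start here, after most emulators
           have hit their time or memory budget and moved on. */
        printf("done: %llu\n", acc);
        free(buf);
        return 0;
    }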
Antivirus software is by definition a blacklist approach. A whitelist approach means code signing. iOS does that. It works fairly well in conjunction with certificate revocation for anything bad that manages to get signed by a trusted certificate. That means you have both a whitelist (that is append-only) and a blacklist (to fix mistakes in the append-only whitelist).
A whitelist might not even work. In practice whitelists end up with everything in the world on them. It might not be that hard to find a whitelisted program that would let you do the thing in the article just by passing it the right arguments. Certainly anything with a buffer overflow in it would work and there are probably a hundred other ways to do it too.
Yep, one place where I used to work, the owning company's policy had "Linux" on its whitelist, but not "Wireshark". Even though the company was developing software which communicated over the network in various protocols.
I once wrote a kernel extension that intercepted any and all file open()'s on OS X. If the application in question was opening a file that it was not whitelisted to open, it would bring up a modal dialog box asking whether or not this application should be allowed to open this file.
It was basically a firewall on the kernel level.
It worked splendidly; however, I was never able to gain any traction in marketing it. That was back around OS X 10.4. I've been waiting for another company to come along with something similar, since it really does seem like a comprehensive way of blocking viruses (albeit one suited to more advanced users). I'm still waiting for something like that.
That is an example of a mandatory access control (MAC) framework[1]. SELinux[2] is a MAC for Linux systems and is very effective, provided the user doesn't disable it out of frustration over false positives, or over true positives that the user views as false.
OS X has discretionary access control, which can be configured to act as a full MAC[3].
Starting in OS X v10.5, the kernel includes an implementation of the TrustedBSD Mandatory Access Control (MAC) framework. A formal requirements language suitable for third-party developer use was added in OS X v10.7. Mandatory access control, also known as sandboxing, is discussed in Sandboxing and the Mandatory Access Control Framework.[4]
There is a MAC framework inside the Mac OS X kernel, but it's only for Apple's own use. Developers aren't supposed to use it, because Apple states that it has no ABI stability and can change unexpectedly from version to version. That is probably why it's rarely (if ever) used by commercial software.
Instead, Apple designed the KAUTH framework, which is far more limited than MAC but can be used to implement some features that remain stable across kernel versions (it has ABI stability). There are already some AVs using KAUTH.
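For reference, a KAUTH-based file-access hook looks roughly like this. A minimal sketch of a kext using the public kauth_listen_scope() KPI; the logging and the decision logic are placeholders of mine, not anyone's actual product:

    #include <mach/kmod.h>
    #include <sys/param.h>
    #include <sys/kauth.h>
    #include <sys/proc.h>
    #include <sys/vnode.h>
    #include <libkern/libkern.h>

    static kauth_listener_t g_listener;

    /* Called by the kernel for every vnode access in the system. */
    static int
    vnode_listener(kauth_cred_t cred, void *idata, kauth_action_t action,
                   uintptr_t arg0, uintptr_t arg1, uintptr_t arg2,
                   uintptr_t arg3)
    {
        vnode_t vp = (vnode_t)arg1;
        char    path[MAXPATHLEN];
        int     len = sizeof(path);

        if ((action & KAUTH_VNODE_READ_DATA) &&
            vn_getpath(vp, path, &len) == 0) {
            /* A real product would consult its whitelist and possibly
               ask userland here; this sketch only logs. */
            printf("open: pid %d -> %s\n", proc_selfpid(), path);
        }
        return KAUTH_RESULT_DEFER;   /* KAUTH_RESULT_DENY would block */
    }

    kern_return_t mykext_start(kmod_info_t *ki, void *d)
    {
        g_listener = kauth_listen_scope(KAUTH_SCOPE_VNODE,
                                        vnode_listener, NULL);
        return g_listener ? KERN_SUCCESS : KERN_FAILURE;
    }

    kern_return_t mykext_stop(kmod_info_t *ki, void *d)
    {
        if (g_listener)
            kauth_unlisten_scope(g_listener);
        return KERN_SUCCESS;
    }

One catch the comments below allude to: the listener runs in the path of every file access, so it must not block in ways that deadlock the very process it is interrogating.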
Yes, the file-access part of this product is exactly what I wrote too. They even built it from the same codebase; inside the bundle, "HandsOff.kext" is where all the action is. It's commendable that these guys have made it, kept it going, and have it running as a product. It is so much more difficult than it initially appears. One of the first things you come up against is that, at the kernel level, there are thousands of open()'s occurring system-wide every second. Managing that in linked lists in the kernel without creating a speed or memory hit is the initial challenge. The other thing you come up against, which is a much greater challenge, is avoiding deadlocks caused by blocking open()'s from the wrong process.
However, if a technical user is using the product, it makes security 100% solid. You can literally intentionally download and run any virus, with full confidence that you can easily stop it from doing anything you don't want it to. Since a dialog box is created before it can read or write any file, there is literally nothing it can do without your permission.
It's also useful for monitoring what an installer or app is doing to your filesystem, exactly what files it is touching as it goes along, and also how to thoroughly uninstall it if you want to.
I have been meaning to make a cut-down version just for the filesystem monitoring, and as an uninstaller that works 100%. You can even make an uninstaller that not only removes all files that were created alongside the file you choose, but also any files that were created by any of those files (by logging the paths of the open()'s with O_CREAT by any of those files).
> You can literally intentionally download and run any virus, with full confidence that you can easily stop it from doing anything you don't want it to. Since a dialog box is created before it can read or write any file, there is literally nothing it can do without your permission.
What about a virus that reads your keystrokes or screenshots your screen and sends them over the Internet? Or a virus that spams or launches DDoS attacks?
That's the beauty of it. Take the keystroke example you gave. Run it. Allow it to monitor your keystrokes (by clicking "Allow" when it does something related to that). Allow it to create the file logging your keystrokes, if you want (granting it write-only access when the dialog box comes up). But after you have toyed with it, you might stop it at the point where it attempts to read from that file in order to transmit it over the internet, or whatever it's going to do with it.
Same with the screenshots. You'd allow it to do whatever you feel like, but you might stop it when it tries to actually create the screenshot file, but allow it to do everything else in order to monitor its behavior. And since it's all in real time, with dialog boxes coming up for each of its actions, it makes it quite interesting to do so.
Isn't it dangerous to assume all malicious programs will use scratch files before communicating across the network? Won't you miss programs that use purely in-memory structures?
Yes, it is. And that was just a simplified example. In practice, if you were running something you were very distrustful of, you would block almost all of its file access. You also wouldn't leave it running long enough to feed it enough keystrokes to get you into trouble. But even if you did, you would catch it with all the file opens (and network connection opens) before it could transmit your keystrokes and get you into trouble. In practice, many file open()'s are required to perform any function.
I’ve been using Hands Off! for this functionality for the past few years, but if I had known that the functionality was based on your kernel extension, I would’ve switched away from Hands Off! in a heartbeat (I already get the firewall features of Hands Off! from Little Snitch, so the only reason I use Hands Off! is its disk access control feature).
Out of curiosity, could you provide a link to the website advertising your kernel extension (if it still exists)? As an OS X user, I feel pretty bad that I wasn’t aware of its existence (I would’ve certainly recommended it to my friends).
Sorry for being unclear; I didn't mean to say that it was based on my kernel extension. I meant that we both based ours on the same sample kernel extension. The extension in question is called kAuthORama and is provided by Apple. (Note: I'm not 100% sure that they used it; that's just an educated guess.)
It was based on a port of OpenBSD's PF firewall and let you set fine-grained permissions on file, network, and registry access. It's a painful training process for newly-installed software (lots and lots of prompts) but I haven't seen anything else come close to what it offered. I wonder if that pain is why they seem to have abandoned it; at some point the average user would end up just uninstalling it or clicking "Allow" for every prompt.
Once up and running, however, you could do some really cool things such as giving a process read-only access to its installed directory plus the ability to read/write to a specific folder you store that program's documents in. Attempts by the program to read outside those directories would be rejected, with mixed results (from gracefully handling it, to endless alert dialog looping, to crashing) depending on how well the software was written.
When I read this, I thought, "Hey, that sounds a lot like Little Snitch but for disks." Then I read below that LS' main competitor, Hands Off!, seems to have that functionality. Very interesting...
I did something similar for a Computer Security class back in the day, but I did it from userland using dylib injection, and did it as a PoC of the malicious things you could do without getting root.
Once you've intercepted read() and write(), you control almost everything. One of the demos I did was injecting content into HTTP responses. Fun project, very glad I didn't ever share the code for it :)
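For anyone curious, the userland version of this can be done with dyld's interposing mechanism. A minimal sketch (my own, not the original class project) that wraps read() follows; build it as a dylib with "clang -dynamiclib" and load it via DYLD_INSERT_LIBRARIES (note that SIP restricts this against system binaries on modern macOS):

    #include <stdio.h>
    #include <unistd.h>

    static ssize_t my_read(int fd, void *buf, size_t count)
    {
        ssize_t n = read(fd, buf, count);   /* call the real read() */
        if (n > 0) {
            /* A malicious version would rewrite buf here, e.g. to
               inject content into HTTP responses as described above. */
            fprintf(stderr, "read(%d) -> %zd bytes\n", fd, n);
        }
        return n;
    }

    /* dyld scans this section for (replacement, original) pairs and
       rebinds callers of the original to the replacement. */
    __attribute__((used, section("__DATA,__interpose")))
    static struct { const void *replacement; const void *original; }
    interposers[] = {
        { (const void *)my_read, (const void *)read },
    };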
You've basically written a HIPS (Host Intrusion Prevention System), or at least the base layer of one. Windows antiviruses commonly include this kind of protection; I don't know of commercial solutions for other systems.
It's initially worrying, but it makes a certain amount of sense when you think that an antivirus is a blacklist (of known vulnerabilities), not a whitelist.
- This wouldn't work for larger payloads. AVs flag binary-looking data that is larger than a certain size and that is later processed or assigned to a variable.
- Their Veil project has some problems. Py2EXE output gets marked as malware by some AVs in many cases just because it is Py2EXE. Same thing with uncommonly used obfuscators. Basically, they just pick up on the fact that something is obfuscated; if that obfuscator is not commonly used in goodware, the program is marked as malware. This is kind of a dumb strategy on the part of AV engines, but it works okay.
You're not going to catch new malware with static (or dynamic for that matter) analysis anyway. Thing is, the problem is ill-defined.
What is malware?
Is it a program that does something a user doesn't want? If users knew what regular programs do, they wouldn't be okay with most of it either.
Is it a program that does some obfuscating tricks and exploits undocumented functionality in the system? Plenty of legit programs including a lot of AV engines do that as well.
The only usable definition in my opinion is that it is a program that makes the user unhappy with no easily accessible way of removing it completely.
This is why the only solution seems to be to only allow installation from a trusted repository. I am still not sure why Windows/Apple OSX haven't adopted such a strategy (with a developer mode override option for some advanced users).
> What is malware? Is it a program that does something a user doesn't want?
OS X Sandboxing seems to have the right idea: Instead of worrying about what the user doesn't want, do only what the user WANTS.
Basically, sandboxed apps don't have access to files and folders other than the ones that the user explicitly chooses in an Open/Save dialog. It's a surprisingly nag-free opt-in mechanism that "just works."
After that, automatic backups will let users revert any undesirable changes to their data, whether those changes were made by the users themselves or by malware.
I think operating systems should just do a better job of making the user more aware of all recently-modified files, especially if a process has been modifying a large number of them in a short time (the recent ransomware comes to mind) or if a third-party background process has been generating an uncanny amount of network traffic.
Seeing something like "1,590 files modified" at log-on or in a notification is way more alarming and would make users take immediate action, compared to all the usual OS or antivirus nags that we are all accustomed to subconsciously agreeing to.
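The data for that kind of notification is cheap to gather. A minimal sketch (the directory and the one-hour window are arbitrary choices of mine):

    #define _XOPEN_SOURCE 500
    #include <ftw.h>
    #include <stdio.h>
    #include <time.h>

    static time_t cutoff;
    static long   changed;

    /* Called by nftw() for every entry under the root directory. */
    static int visit(const char *path, const struct stat *sb,
                     int type, struct FTW *ftwbuf)
    {
        if (type == FTW_F && sb->st_mtime >= cutoff)
            changed++;
        return 0;   /* keep walking */
    }

    int main(void)
    {
        cutoff = time(NULL) - 3600;           /* last hour */
        nftw("/home", visit, 16, FTW_PHYS);   /* don't follow symlinks */
        printf("%ld files modified in the last hour\n", changed);
        return 0;
    }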
Re: OS nags... Windows is the worst IMO, since UAC basically just asks "it looks like you're trying to do a thing, are you sure you want to do that thing?" The "open this program FROM THE INTERNET????" nag on OS X and Windows is also pretty pointless. I've never not clicked yes, but there's no easy way to disable it. It looks like it CAN be disabled with registry tweaks or GPEdit: http://www.sevenforums.com/tutorials/182353-open-file-securi... but it's kind of ridiculous that it's not just an option, given how little it increases security (how many people don't just blindly click "open"?)
What if it's a browser exploit that basically got the browser to "download and open"? Then this would be the second line of defense: something is about to run even though you didn't expect it to run.
The "execute downloaded executable" protection predates UAC. It was introduced in Windows XP SP2 and UAC implemented its own version of it, which created double prompts:
That being said, the "execute downloaded executable" protection only applies when the binaries are marked as being from the internet, which is easy to bypass: you just need to download the software in a way that doesn't apply the mark (e.g. not through a software mechanism designed to apply it). Furthermore, attacks need not rely on "execute downloaded executable" at all; they just need to achieve code injection, and any injected code could then download and open files without triggering UAC simply by not marking them as having been downloaded from the internet. Such file marking is entirely voluntary, and malicious code would likely never volunteer to do it.
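For context on how thin that mark is: on NTFS it's just an alternate data stream named Zone.Identifier attached to the downloaded file. A sketch that checks for it (the filename "payload.exe" is hypothetical):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* The mark-of-the-web lives in an NTFS alternate data stream,
           addressable as "file:streamname". */
        HANDLE h = CreateFileA("payload.exe:Zone.Identifier",
                               GENERIC_READ, FILE_SHARE_READ, NULL,
                               OPEN_EXISTING, 0, NULL);
        if (h == INVALID_HANDLE_VALUE) {
            puts("no Zone.Identifier stream: no 'downloaded file' prompt");
            return 0;
        }

        char  buf[256];
        DWORD n = 0;
        if (ReadFile(h, buf, sizeof(buf) - 1, &n, NULL)) {
            buf[n] = '\0';
            printf("%s", buf);   /* typically "[ZoneTransfer]\nZoneId=3" */
        }
        CloseHandle(h);
        return 0;
    }

Anything that writes the file without also writing that stream never triggers the prompt.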
Protection against inadvertent execution of software downloaded from the internet is the only useful function in UAC. The other functions are designed to operate on already-executing software. Since already-executing software can gain system privileges (above administrator privileges) via vulnerabilities that Microsoft refused to fix, it can do basically anything it wants. UAC is fairly useless against it, because "anything it wants" includes turning off UAC. I know enough about Windows security that I stopped using Windows years ago, so I do not know whether UAC would require malicious software that has gained system privileges to turn it off; if it does, that should be a simple matter. Anyway, the Hot Potato proof-of-concept code demonstrates gaining system privileges by exploiting such vulnerabilities:
The video of it running on Windows 7 uses the system privileges to give a regular user administrator privileges. There is a Windows firewall prompt that appears in the video, but it does not stop the exploit. The appearance of the prompt ought to be avoidable because whatever triggered the Windows firewall prompt was not necessary for the exploit and could be removed.
"the only solution seems to be to only allow installation from a trusted repository. I am still not sure why Windows/Apple OSX haven't adopted such a strategy (with a developer mode override option for some advanced users)."
One weird thing w.r.t. Gatekeeper is that it seems to depend on every program that downloads signed executables outside the App Store flagging them (by adding an extended attribute to the file).
Is this an oversight by the AV software companies? Did no one come up with this before? Could it be that, if people did come up with this before, a lot of Windows computers have viruses without their owners knowing? Is their virus detection scheme fundamentally flawed?
Should I be shocked? Shouldn't I be? I'm currently shocked but I don't know if it's justified, not an expert in the field.
Well, if you really want to sandbox some internet behavior (for example, porn, which I imagine has the highest percentage of sites delivering a malicious payload), use a virtual machine manager (VirtualBox) and set up a virtual machine (some Linux variant may serve best) for that specific type of access. You can do this multiple times, once for each type of access, such as a dedicated VM for accessing your bank's website.
If you're really paranoid, you can save the state of the virtual machine before use, and restore the prior state every time you use it, preventing any changes to the VM. You would occasionally want to start it up, install all the recommended updates, and then save the state again, though.
Use an ad blocker (uBlock Origin) and keep your OS/browser up to date. If you really want to get paranoid you can use NoScript or something like that (but you'll give up some convenience).
The main thing is to make sure you trust the things you're clicking on.
If you have to visit websites or try programs you don't trust, some people have virtual machines specifically for those situations. They'll visit the site/open the program inside the VM, and if something sketchy does happen, it'll be contained within the VM and not infect the host OS (unless it's incredibly sophisticated malware that can break out of VMs--but very unlikely you'd be targeted by something like that).
3. UAC prompts that annoy users to the point where the user either turns them off or automatically clicks yes. This is in part because of the even weirder situation of legitimate software often being written to touch things that it has no business touching.
4. End users trained to execute software obtained from random internet sites.
5. File names used to identify files as executable.
There are probably other backward things with regard to security too, although I cannot think of them offhand.
Your best choices would be installing a Linux distribution or buying an Apple machine running Mac OS X. If you must use some sort of Windows, check out ReactOS:
That likely helps by virtue of not having the same bugs and not yet having implemented the legacy things that exploits often target. It is not as good for security as Linux or Mac OS X, though.
> It's impossible to determine whether software is malicious or not (Rice's theorem).
Which is why proof-carrying code [https://en.wikipedia.org/wiki/Proof-carrying_code] is a good idea: the onus ought to be on the programmer to provide (machine-checkable) evidence that their program is safe to use, for whatever notion of “safe” might make sense in your system.
That's still in its infancy; future results might counter what we've figured out so far. The certified compilers are nice for subversion-resistant development but not for running untrustworthy code. There's a significant difference between their models and the software/hardware combination actually running.
A better route was started with the Burroughs machines, where you pick a language good at correct programs and carefully design a safe machine around it. Same with System/38, SAFE (crash-safe.org) for functional languages, and Cambridge's CHERI for the C language. The fundamentals work as advertised, along with the ability to enforce arbitrary security policies. Design and security features are then built on that. It's the only thing known to work consistently to any degree of success.
On COTS hardware, separation kernels and compiler transforms on legacy code are about the best that we can do.
> That's not true - there's heuristic analysis techniques, generic signature detection etc. Of course they may not meet your definition of "reliably"
The only reliable things in security are the things that an attacker cannot bypass even when knowing that they are in use (e.g. RSA). The premise of the article being discussed is that heuristics are trivial to bypass.
Would you say that Rice's theorem is a generalization of the halting problem?
It looks like "the halting problem being unsolvable" -> "Rice's theorem" by a subset relationship. Consequently, if Rice's theorem were false, you could solve the halting problem by modus tollens.
That being said, I had been using the halting problem as my way of saying that identification of malicious software is impossible (because infinite loops can be malicious), and I had been unaware of Rice's theorem. I will use it in my explanations in the future.
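A toy example (mine, not from the thread) of why the reduction works: whether the program below ever reaches its payload depends on whether an arbitrary loop terminates, which is exactly the kind of question Rice's theorem says no general analyzer can answer for arbitrary code.

    #include <stdint.h>
    #include <stdio.h>

    static void payload(void)
    {
        puts("anything malicious could happen here");
    }

    int main(void)
    {
        /* Search for a number whose Collatz sequence takes more than
           500 steps. Whether the inner loop always terminates is an
           open problem; a scanner that decides payload() is reachable
           is implicitly answering such questions. */
        for (uint64_t n = 2; ; n++) {
            uint64_t x = n, steps = 0;
            while (x != 1) {
                x = (x % 2 == 0) ? x / 2 : 3 * x + 1;
                steps++;
            }
            if (steps > 500) {
                payload();
                break;
            }
        }
        return 0;
    }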
Antivirus software is as useful for security as monster HDMI cables are useful for improving digital picture quality, so this is not surprising.
The only thing that antivirus software does semi-decently is identify known software binaries. Antivirus software cannot reliably identify unknown binaries through heuristics because writing software to understand unknown software binaries is impossible in general. There are potentially an infinite number of ways of proving that, but the easiest way that occurs to me is that one of the many things necessary for understanding unknown software in general is solving the halting problem, which was proven to be impossible in general by Alan Turing.
Furthermore, the utility of a database of known malicious binaries is practically non-existent. Malicious software is always designed to exploit some vulnerability, and once the vulnerability is fixed by the vendor, there is nothing for the antivirus software to do. If you could apply the definition update that the antivirus software needed to catch the malicious software, you could have applied the vendor patch that fixed the vulnerability the malicious software used in the first place. That not only makes the definition update unnecessary, but also handles the unknown things that the definition update would never have caught.
In the case of a vendor being slow to patch, refusing to patch (e.g. the exploits used by the Hot Potato proof-of-concept code on all current Windows versions), or the user not applying the patch in time (e.g. for lack of scheduled downtime), the inability of antivirus software to catch unknown software exploiting those vulnerabilities provides a false sense of security. If a system is specifically targeted by a malicious hacker, the hacker will use something that antivirus software will not catch, such as a script kiddie tool against which there are no known definitions, or custom code. Being unfortunate enough to be attacked by a virus, trojan, etc. before definitions exist also means there is no protection.
Real security requires doing things like minimizing attack surface and configuring things competently (e.g. not using your username as your password). That is something that you cannot get from an antivirus vendor.
You're not secure until you whitelist. And even that's not a guarantee; it's a necessary but not sufficient condition. But systems which do not run signed, whitelisted code from boot time forward are as good as pwnt.
No, a whitelist isn't good enough. You can't anticipate an exhaustive list of the programs the user will want to run.
What you can do, however, is enforce a policy by which programs are required to provide machine-checkable evidence, also known as proof-carrying code [https://en.wikipedia.org/wiki/Proof-carrying_code], that they respect the system's safety policy.
Laughable. Ages ago I realised you could just take an exploit, base64 the contents of the binary code, and save it in a string. You could then unbase64 it and execute the binary in memory. Nothing seemed to catch it.
You'd still need a decoding stub, which can be fingerprinted. An XOR "decoder" is far smaller in shellcode and can be custom-written in asm to reduce time-to-first-signature.
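The XOR variant is about as small as a decoder gets. A sketch (the key and payload bytes are placeholders of mine):

    #include <stddef.h>
    #include <stdio.h>

    /* Payload stored XOR-ed with a one-byte key, so the bytes on disk
       never match a plain signature of the original. */
    static unsigned char payload[] = { 0x12, 0x34, 0x56, 0x78 };

    int main(void)
    {
        const unsigned char key = 0x5A;

        for (size_t i = 0; i < sizeof(payload); i++)
            payload[i] ^= key;   /* decode in place just before use */

        /* The decoded bytes would normally be copied into executable
           memory and run; here we just print them. */
        for (size_t i = 0; i < sizeof(payload); i++)
            printf("%02x ", payload[i]);
        putchar('\n');
        return 0;
    }

The decode loop itself compiles to only a handful of instructions, so there's little for a signature to latch onto and it's trivial to rewrite.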
We just need a better sandboxing environment and individual permissions per executable, i.e. "Can this executable connect to such-and-such IP?", "Can this program read outside of its sandbox folder?"
When you don't root your Android or iPhone, they handle this a lot better than desktop operating systems do.
I used to work at a large, well-known AV company. While much of what other commenters have said is true, I will note that VT is less authoritative than some realize; the version of our product that VirusTotal was given access to was substantially different from our normal product, with features such as sandboxing removed.
I wouldn't be surprised if this was a common practice; we considered our product's detection capabilities to be proprietary.
So after the program is actually compiled into binary code, do the resulting instructions become so simple (and so fundamental to the operation of programs) that any attempt to write a heuristic rule to stop this technique would break thousands of programs, or are heuristics just so inherently shitty that this technique works? Because I would still think that this line here:
Would be a red flag. I don't know how many programs use it on the Windows side of things, but I have almost never seen programs that weren't "crypters" call this function.
I am honestly impressed by how simple this program is (it's probably the most elegant crypter I've ever seen), but I am still wondering whether heuristics could be made to detect this (without also falsely detecting thousands of other programs). What do you think, OP? Not really my field, but curious all the same.
I believe this is how JIT compilation works. So this would trigger a false positive on the Java VM, the V8 JavaScript engine, SpiderMonkey, and the C# runtime.
Why would that be a red flag? From what I can glean from the documentation, both MEM_COMMIT and PAGE_EXECUTE_READWRITE are perfectly reasonable flags to pass to VirtualAlloc, and (again, according to the documentation) VirtualAlloc appears to be a way to (de|re)allocate memory within the region allocated to your process.
What's more, every program that uses Boost on Windows calls VirtualAlloc, in several places.
VirtualAlloc is roughly equivalent in intent to POSIX's mmap. So anywhere that you'd use mmap in a Linux program, you'd probably use VirtualAlloc in Windows.
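To underline the equivalence, here are the two calls side by side (a sketch; the RWX flags shown are what both a crypter and a JIT would ask for, though hardened platforms with W^X policies may refuse them):

    #include <stddef.h>

    #ifdef _WIN32
    #include <windows.h>

    void *alloc_rwx(size_t len)
    {
        /* Commit fresh pages that are readable, writable, executable. */
        return VirtualAlloc(NULL, len, MEM_COMMIT | MEM_RESERVE,
                            PAGE_EXECUTE_READWRITE);
    }
    #else
    #include <sys/mman.h>

    void *alloc_rwx(size_t len)
    {
        /* The same request expressed through mmap. */
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE | PROT_EXEC,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        return (p == MAP_FAILED) ? NULL : p;
    }
    #endif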
How? That's the principle that everyone's missing! Sure, you can change the code a bit, but the millions of copies of your worm out there already can't magically change themselves after AV vendors update their blacklist. Sure, you can make a polymorphic worm, but even after mutation, your worm still probably has common patterns that blacklist makers can catch.
I'm a fan of proactive, not reactive security --- but let's not pretend that what AV vendors are doing is completely bogus.
You also have to keep in mind that blacklists and behavior heuristics are more suited to a bygone age, one of low-bandwidth, sporadic connections and floppy disks that could be infected with trojans. Nowadays, we're more worried about drive-by 0day flash exploits from ad networks than about infected, self-propagating executables.
I've always found it fascinating that the human immune system has both proactive and reactive security, just like our computers should. The innate immune system [1] is analogous to mandatory access control, OS file permissions, buffer hardening, and other non-specific security mechanisms. The adaptive immune system, on the other hand, works like a blacklist updated throughout your life (and propagated from mother to child!).
Both systems catch threats the other does not. There's still a lot to learn from biology.
Without reading the article: it looks like it casts exec to a function pointer with void return type and no arguments, and calls it.
(Disclaimer: I'm paid for writing Java :( )
Almost. It's casting exec to a pointer to a function that "returns" void and takes no arguments and then calling the function. Search for "C right-left rule" (without quotes) to see some hints on how to read complex declarations (mostly applies to casting too). I like http://ieng9.ucsd.edu/~cs30x/rt_lt.rule.html
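A self-contained toy version of the idiom (my own; x86-only, since the single byte 0xC3 is an x86 "ret" instruction, and under DEP the bytes must sit in executable memory, hence the mmap):

    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        unsigned char code[] = { 0xC3 };   /* x86 "ret" */

        /* Data pages aren't executable under DEP/NX, so copy the bytes
           into a page mapped with PROT_EXEC first. */
        void *p = mmap(NULL, sizeof(code),
                       PROT_READ | PROT_WRITE | PROT_EXEC,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED)
            return 1;
        memcpy(p, code, sizeof(code));

        /* Right-left rule: cast p to "pointer to function taking no
           arguments and returning void", then call through it. */
        ((void (*)(void))p)();
        return 0;
    }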
AV is a small and often useless supplement to security, nothing more. It has always been trivially easy for a script kiddie to write malicious software that passes all of VirusTotal. In the decade I used AV, I had fewer true positives than false positives (yes, please auto-delete my patches, hacktools, and software I've written myself, idiot AV program) and of course some false negatives that wrecked me because I executed them. Since I stopped using AV and became more careful (e.g. using a VM for suspicious files), I have never been infected again.
Let's not forget that this isn't actually all that practical.
It's not executing any old binary: the 'shellcode' has to be position-independent and can't rely on any normal PE features; it has to do all its library loading on its own, etc.
Yes, it's very possible to create, but if the malware/tool you want to run is large, it will take a fair amount of time to convert.
I've said it before, but as hard as it might be to believe, I think Smalltalk was 30 years ahead of its time in packaging every program inside its own OS image.
Anti-virus bypasses and even exploits are extremely common. My current line of thinking is that the best way to take control of your computer is to use virtualization to run many separate OS images for different sets of uses.
Interesting, but would have obvious drawbacks when it comes to addressing any OS vulnerabilities or improvements - suddenly you have to reinstall ALL your applications!
This reminds me of Nintendo's approach to emulation for their Virtual Console, actually. Rather than having a standalone emulator that you download images for, they package the emulator with the game. This way they never have to worry about inadvertently breaking anything with a future release, but on the other hand any subsequent emulator improvements do not retroactively apply.
I think you could save and fork OS images. At the same time I would think vulnerabilities wouldn't matter as much, but you are right that it adds complexity to managing a system.
Try running the binary instead of just scanning it for signatures. It will be detected by the heuristics engines of most AVs. I'm not saying that AVs are good, but articles like this that only focus on defeating signatures while leaving out heuristics (emulation, behavior detection, and even whitelisting to some extent) don't do the subject justice.
I suppose the value in AV is that you are like one of the proverbial guys being chased by the lion - you don't need to outrun the lion, only the other guy. In this case, you just hope any malware hits enough other poor suckers first that your AV of choice is updated in time to save you.
Yes, this shows that antivirus is trivial to bypass. However, antivirus is not the last word in endpoint protection. While this method can be used to get otherwise ordinary payloads past antivirus, behavior-based detection and application whitelisting can be used to prevent many of these attacks.
Proper security hygiene can prevent most attacks, not better antivirus software. Proper security hygiene means:
* applying security updates
* enforcing least privilege
* reducing attack surface (e.g. Does your desktop really need open ports?)
* using decent passwords and two factor authentication when possible
* not reusing passwords in case a place where you used a password is compromised
* not executing code from untrusted sources
* checking whether code from a trusted source is vulnerable to a MITM attack before executing it
* saying no to prompts for elevated privileges unless you can prove to yourself that there is a good reason for them and finding out what caused a prompt for elevated privileges when you see no legitimate reason for it
* wiping a system should you think it might have been compromised, and maybe also discarding the hardware in case the firmware was altered, which is what the US government tells US CEOs to do with devices they bring to China
* not providing confidential information (e.g. your password) just because someone claiming to be a trusted party such as IT called asking for it
That last one is how the NSA red team hacked the Pentagon's Joint Staff intelligence directorate when doing penetration testing as part of a "war game" in 1997:
That said, there are likely more when thinking about confidentiality (the other half of security), but these are the ones that occur to me when I think about ensuring system integrity.
Anyway, antivirus does not save you if you fail to do any of those things. Anything that could get by all of that would be a zero-day attack, where antivirus software is likely to be similarly useless. Not all zero-day attacks can get past all of that (minimal attack surface is awesome). If you are the principal target (as the Pentagon was for the NSA red team), antivirus software has no chance of saving you against a zero-day attack.
It appears that not much has changed in the last ~20 years, when looking at mainstream antivirus software... I used to be able to move around a few assembly instructions inside a virus to avoid detection while still maintaining 100% of the virus's features.