Hacker News
Security Update for Microsoft Malware Protection Engine (microsoft.com)
159 points by tooba on May 9, 2017 | hide | past | favorite | 83 comments



> Mr Cluley did add, however, that he thought the Project Zero protocol for announcing the vulnerability - which had included information that malicious hackers might have found useful - had been risky.

> "That can help the bad guys," he said.

This is just plain wrong, isn't it? I was under the impression that all of the details on PZ are hidden until either a fix is released, or 90 days have passed. I don't see how this could have 'helped the bad guys'.


Yeah, I don't get this. The announcement on twitter contained zero information apart from "there's a remote code exec vulnerability on windows". Which I think you could confidently say at ANY point in time (about ANY system).

But Graham and others (https://twitter.com/taviso/status/861575086632968192) continue to attack Tavis for announcing the fact that there is a known vulnerability. As if this somehow makes users more insecure.

Surely it is better to alert people (and especially organisations) that a major security issue has been found so that they can be prepared to patch their systems as soon as a fix is released?


He basically announced that he knows a secret worth millions. Criminals and state actors might do everything they can to get this secret, starting with trying to hack him, moving on to bribing him, blackmailing him, serving him secret court orders, or even physically assaulting him with the famous wrench.

Even if you are a seasoned security researcher, saying "I know how to get into any Windows PC by sending someone to a website" paints a huge target on you.

Although the danger is kind of abstract, there is no security gain from tweeting about this, so I'd say he shouldn't have done it.


I'd maybe disagree that there is "no security gain from tweeting about this". I think there is a security gain from warning people that an urgent security patch is on the way so they should prepare themselves/their organisation to deploy the patch as soon as it lands.

Now you could argue that a personal Twitter account is not the best medium for this warning (maybe it should come directly from Google Project Zero or Microsoft), but Tavis does have a large sphere of influence, and I invariably hear about these things through him rather than via other sources, so it is an effective medium.

Edit: Note, the argument here is whether his initial tweet (https://twitter.com/taviso/status/860679110728622080) constitutes disclosure - I don't think it does. The actual disclosure was handled according to Google Project Zero's policies.


At the end of the day you have to ask yourself if it's worth it. Security is a touchy topic, and that's probably how you want it. Yes, you can argue that you're right, even objectively so, but there are going to be people who disagree. There's only so much karma you can burn through before people, and often the people who are the most serious about it, leave for some other route. And then you sit there ten years later and wonder why "people don't care".

You want to announce upcoming vulnerabilities, fine, do that. Be up front about it and make an argument for it so people see your motivations. Don't leave people guessing what's going to happen based on the mood of the day.


It's not worth millions if it's already been reported to Microsoft.


Tell that to the millions of people who won't have the patch before next week.


Why would they not have it until next week? Per the advisory, the engine update is part of the normal Windows Defender updates, which happen up to three times daily.

I just checked this morning, and I have the updated version already, and took no action.

Those millions would have had to have disabled windows defender updates in order to not get this update before next week.


It turns out the update is part of the Defender update, not Windows Update.

That's a much more frequent schedule.


Not sure about millions, but many do have Windows updates turned off, due to Microsoft's security-trust-destroying habit of deploying invasive and undesired non-security updates automatically.

Windows 10 especially has a nasty streak with updates, and while security updates are smart, forcing new content updates, advertisements, and spyware into the Tuesday fast track teaches users that the only way to be safe from Microsoft is to not take software from them automatically.


File this one under "play stupid games - win stupid prizes".

Idiots like this are why Windows updates are completely forced in the first place. A couple of generations of "experts" knew better and refused to let Windows update. Then billions of dollars were lost cleaning up worms that had already been patched, because people kept canceling the update dialog. See SQL Slammer, Code Red, etc.

You had your chance to handle your own updates and proved that you cannot be trusted to do so. This is the next logical step.


"You had your chance to handle your own updates and proved that you cannot be trusted to do so. This is the next logical step."

Personally, I would describe the user voluntarily allowing their computer to become infested with first party adware and spyware as winning the stupid prize, but your mileage may vary.


Yes, if one wants safety, one has to be unsafe regarding the company that provides the infrastructure one paid for. One either has internal ads or Internet AIDS. Or both.

One could of course get tarred and feathered until one looks like a penguin - but hey, who is that desperate? Let's ask that question again the day such an exploit is rolled out worldwide with a patch on its back.

Funny times.

Microsoft recommends for all HN readers: Spam musubi - buy the delicious Hawaiian delicacy now. In a store near you! Get a free sample with this coupon code: 0xDEADBEEF


He basically announced that he knows a secret worth millions. Criminals and state actors might do everything they can to get this secret, starting with trying to hack him, moving on to bribing him, blackmailing him, serving him secret court orders, or even physically assaulting him with the famous wrench.

It's not a secret worth millions and nobody does any of that stuff for Windows RCE vulnerabilities.


That's nonsense, and nobody would pay millions for this bug. What he did was really of no consequence at all.


It's more of "I knew how to get into any Windows PC by sending someone to a website, but it's already been patched"

Much less desirable.


4 days ago when he posted about its existence on twitter, it hadn't been patched.


But the vendor was aware of the issue, and was already getting their ducks in a row. So it was largely useless to any adversary.


I know comparatively little about this stuff, so please excuse the question if it's dumb...

If being alerted allows organisations to prepare to patch, surely it also allows malicious actors to prepare to exploit?

My lay gut feeling is this seems more a trade-off in publicity between the announcing party and the software provider, both wanting to be seen with the initiative so that they look good.


This is a tangent, but: Isn't it strange that in security, it feels ok to give an uninformed opinion?

I'm not calling you out -- quite the opposite. I like your comment because it admits to being uninformed. But for every comment like yours, there are dozens of tweets and HN comments that conceal their lay status while also having strong opinions.

In the tech world, this seems unique to security. For example, none of us would feel like we should have a say in how Rust rolls forward unless we're experts in Rust, or at least involved in Rust in some way. Yet there are many who feel they should have a say in whether Project Zero ought to do X or Y even without any experience. I wonder why?


I'm surprised you feel that way. Who here hasn't commented on an aspect of UI design, or UX, or Apple's roadmap or whether product X should be open source or comply with standard Z or whatever?


In case anyone here hasn't heard of it, this supposed tendency of engineers toward overconfidence on other topics is sometimes referred to as engineer "woo".

http://rationalwiki.org/wiki/Engineers_and_woo


True, but as engineers, most of us have been at least peripherally involved in an aspect of UI design, or UX, or a product's roadmap, or choosing whether something should be open source or comply with a standard. But it seems like comparatively few of us have been involved with security in any way except perhaps doing our best not to write insecure code.


I'd guess that security feels pretty personal to a lot of people. They may not be experts in the area, but maybe they run servers, deal with sensitive data, or just don't want their personal machines compromised. A poor approach to vulnerability disclosure for a zero-day could cause them real issues, whereas perhaps a language design decision is less critical.


I don't think there's a difference in how willing people are to do it (e.g. I've definitely seen people who haven't touched Rust have strong opinions about it). However with security it's very easy to accidentally say something someone can show is clearly and unequivocally wrong, with decades of research in tow. A lot of other technical opinions on HN are ultimately up for debate.

You see the same thing with physics and mathematics here fairly often: every time quantum computing comes up some people ask some good questions, and then a bunch of people give them confident and embarrassingly wrong answers. Finally others (including me sometimes) shout them down. It's just less noticeable because those subjects are rare here compared to information security; tptacek and others are constantly fighting the good fight.


Because we all have potatoes in the fire and are afraid of the guy with the gasoline canister dancing in circles.

It's just something you learn real fast if you develop software. Producing something, like a Ming vase, is really hard. Shooting a Ming vase to smithereens is really easy (ask your son/daughter after indoor soccer practice).

So yeah, you are allowed to have an opinion on things you don't know about but that your life/livelihood depends on. Everyone gets to vote on police work without ever doing a degree in CSI.


Have you paid attention to the exploits they produce? They're often not "easy". Also, if they are, that's the dev's fault, not theirs. Shooting the messenger to avoid responsibility - it never gets old...


Malicious actors are far more proactive already about watching for security patches and reverse engineering them to work out how to exploit unpatched systems. Once a patch is released the cat is out the bag, and the only solution is to patch as quickly as possible.


> surely it also allows malicious actors to prepare to exploit?

"There is an RCE in Windows" is not helping anyone.


A lot of people in IT (a surprisingly high portion of programmers, even) don't understand the value of full disclosure in security research. For some reason, they decided to export their usual arguments to decry Tavis's tweet:

https://twitter.com/taviso/status/860679110728622080

The responses to his tweet calling him irresponsible are consistent with the tone of this remark: "This can help the bad guys." Never mind the fact that there are no details in the tweet relating to the actual vulnerability or exploit.

To be clear: I don't know what relationship (if any) Graham Cluley has to the people being jerks to Tavis, and it's possible that this quote was taken out of context. However, given the backlash Tavis's tweet summoned from some Twitter users with inflexible opinions about disclosure ethics, and this alien remark in the article, I'd hedge on the two being related.


Graham's a longtime critic of Tavis. Think he used to work for an AV provider. Here's the history anyway https://www.google.co.uk/search?q=Graham+Cluley&oq=Graham+Cl...


It seems that he is the Sophos guy.


I think Tavis's tweet is completely fine. Even if someone managed to work out the general location of the code from the tweet [Tavis has been looking at AV and Natalie does research on scripting VMs -- a big stretch], it's still an enormous effort to find the problem code and further effort to create an exploit. I doubt it would be possible to do by the time Microsoft patched it.

But as a researcher you do have to be careful with details sometimes. I've been able to reverse engineer Java exploits from Security Explorations' full disclosure posts in the past, but these contained significantly more details than Tavis's tweet.


I suppose the issue that I have with that tweet is exactly what its purpose is. I don't think it poses a risk, but the tone (excited? self-important?) doesn't sit well with the idea of a professional security bod soberly reporting a serious problem.

I just think the tone rubbed people up the wrong way.


If people don't know that a vulnerability exists (for example: the Intel Active Management Technology one), it can be very easy for the company to just ignore it (especially if it is one that reflects badly on the company). However, if people know that a vulnerability exists, it puts the ball in the company's court to do something about it.

However, that is just my personal opinion about the reason for Tavis' tweets (which happen every time a large vulnerability is discovered), and I have no security background. I trust people like Tavis, who have done a lot of work to improve security, to know how to minimize the damage from the vulnerabilities.


> I don't think it poses a risk, but the tone - excited?, self-important? doesn't sit well with the idea of a professional security bod soberly reporting a serious problem.

What a surprise! Self-importance in a security industry that relies on reputation for consulting gigs?[1] You might have missed the ominous, grandiose vulnerability names, fancy logos and the PR blitz now associated with any vulnerability worth a damn.

I'm an outsider, but even I know the NetSec twittersphere is the last place to expect 'sober' communication.

1. I don't agree with your assessment that there was self-importance in Tavis' tweet. To my knowledge Project Zero doesn't consult for anyone; he was probably excited and very surprised by what he saw, and he needed to get it off his chest.


There are many sides to this, and generalizing it to people not understanding the full value of security disclosure is misleading. I can assure you a lot of those people fully understand the value of security disclosures and are for it.

What many people have a problem with is Tavis' tone and his approach to announcing his findings. No reasonable security researcher finds a bug, announces it first on Twitter or in other mass public postings, and only then informs the affected vendor(s) with the disclosure. That's not how it should be done and it is not a responsible disclosure policy; this is what people have a problem with.

Tavis is doing amazing work, work that we need, but he has to be careful with how he announces his findings to the public.


What's wrong with an announcement like this? With literally no details, it's not helping bad guys (or, for that matter, good guys).

I can see an argument that it's unprofessional to call out a company when you need them on your side. But is anyone making that argument? All the negative replies I saw to that tweet, for example, are along the lines of "omg you ruined my weekend why couldn't you wait until Monday?"


Not this tweet specifically; look at the tweets he did in the past where he did point out names such as LastPass and 1Password. It caused some folks to contact these vendors for more information that the vendors didn't yet have at the time of the tweets.

Like this one: https://mobile.twitter.com/taviso/status/760231214812844032

Or https://mobile.twitter.com/taviso/status/845717082717114368


How is that different from pointing out a name like Microsoft? Either way, it might be uncouth, but I don't see the connection to responsible disclosure.


It's not me that has a problem with this; I'm just pointing out why some people have problems with Tavis in general.

You don't think it is reasonable to at least tell a vendor there is a security problem first before telling the rest of the world?

Maybe responsible disclosure is the wrong name for this, I like the coordinated disclosure idea better.

Maybe I am using the wrong terms but I cannot edit my post anymore.


I think it's reasonable to tell the vendor first. I don't think it's reasonable to freak out about a general, detail-free announcement, and especially not with "omg there goes my weekend" and "you're helping the bad guys" nonsense.

The mere announcement of the existence of a bug, with little enough detail that it won't help anyone find it (i.e. "RCE in Windows" is useless), does no practical harm. It might be a bit rude.

It's the announcement of details that help people find the bug that hurts. If the original announcement was "RCE in Windows due to type error in malware protection JavaScript interpreter" then that would potentially help bad guys put together an exploit before good guys can release a patch.

Stuff like responsible disclosure (coordinated disclosure would be a fine term too) is about the second one, only, as far as I understand it. It's about mitigating the practical effects of the vulnerability as much as possible, not about protecting the reputation of the company or avoiding rudeness.


As a supporter of Full Disclosure, I believe it is irresponsible to follow the so-called "Responsible" disclosure model.

I don't believe it is "responsible" to leave people exposed for 90+ days while the vendor attempts to whitewash and cover up their vulnerabilities, as is so often the case.

While some software vendors might respond to vulnerabilities properly, most do not, often wanting to shift blame or even file legal action against anyone discovering vulnerabilities.


The term "responsible disclosure" is actually a bad term to use, even if you support it: https://adamcaudill.com/2015/11/19/responsible-disclosure-is...

TL;DR "coordinated disclosure" is preferred.


It depends on the situation but I agree with the better name for this, coordinated disclosure.

If the vendor refuses to do anything, then yes, the 90 days should be waived.

I'm not saying we shouldn't disclose at all, I'm saying the vendors have the right to have the info first and react before the said announcements start.


I'm still wondering if the best possible plan (when done over a long time) is immediate and full release of all known information to the public.

The immediate effect is worse. You will have people making use of the vulnerability as soon as the information is out the door. But what about the secondary (and tertiary, etc.) impacts? Will companies be more likely to spend more on security because they will have lost the chance of having 90 days to fix an issue before it goes public? Will consumers who see the damage done in the immediate end up searching for more secure options?

It seems weird (and very very beneficial to the corporations making these security vulnerabilities) that we blame the researcher for releasing the details more than the entity who made the insecure software, sometimes even more than we blame the ones exploiting the vulnerability.

Think of it this way: we already have a given window before we go public. 90 days, which you mention in your post. Why do we have 90 days? Why not 180? If you get to the 90th day with no fix in sight, going public exposes all users to the same damage. If it were 180 days, or something much longer like 10 years, is there a chance that the entities behind the software in question would just ignore the bug, because patching bugs doesn't generate income like new features do? Does the reasoning we have for a 90-day clock instead of a longer one maybe justify a shorter-than-90-day clock?


Tavis also got some blowback on Twitter simply for announcing that he'd found a vulnerability. It's baffling to me why people think it's a problem.

If mere knowledge of the existence of a vulnerability in a particular product is enough for the 'bad guys' to find it, well, they were going to find it anyway.


Are we taking Twitter as a legitimate medium? 90% of what I see there is contrarians riding the coattails of others. Of course Tavis has all these crazy replies. It's a bit like how the paparazzi get celebs to notice them: they 'neg' the famous person to get the desired response and attention they want.

People not playing these games don't see his tweet as being controversial. It almost had no details; what exactly is there to argue here? Your average copy of Windows probably has tens of thousands of unfound zero days. It's rational to expect that they will continue to be found.

I think the narrative of "but people on twitter are talking" is fairly bullshitty. Twitter is not reputable, anyone can reply to anyone, and unless you start naming the names of respected security researchers, these replies are just from kids and trolls looking for attention.


It's not that the bad guys will find it now by looking for it in Windows. It's more like they could grab a gun (thugs) or creatively worded court order (government) and pay him a visit...

A remote zero day in Windows is worth millions on the black market, and in skilled hands the amount of damage or money you can make is nearly limitless.


Literally nonsense. But, in fact, totally representative of the crazy lengths people go to to justify why Tavis's tweet that he found ~a vulnerability~ is somehow dangerous.


Please do not spread baseless FUD. None of this is true.


Oh come on. They're going to tie him up and torture him before he gets a chance to press "Send" on his report email, then?


I tried the proof of concept zip (https://bugs.chromium.org/p/project-zero/issues/detail?id=12...) on my machine this morning. It crashed msmpeng. Had to manually update.

Perhaps they should wait for a few days to allow it to roll out organically before releasing the details?


Anyone confused about what parent's comment is quoting, this HN link was originally pointing to http://www.bbc.co.uk/news/technology-39856391


This is going to sound glib but it just sounds like some people in security research are butthurt. And not for any valid reason.


They are commenting on the disclosure of the exploit and how to exploit it. Obviously everyone will have the patch installed.


The disclosure is irresponsible.

The post published today contains information on how to exploit the bug, with working proof-of-concept code, confirmed to work.

The Windows patch was published today. It's gonna take weeks to propagate to the Windows computers around the world.


Windows Defender updates (malware definitions and engine updates) don't run on the same schedule as other Windows updates: they're downloaded at least three times daily and installed immediately once they're downloaded.

IIRC, they're also not disabled by the UI switch that disables other Windows updates. A user would have to go pretty far out of their way in mucking around with things that shouldn't be mucked around with in order for this update to take "weeks" to propagate to them.
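For anyone who wants to verify that this automatic channel is actually working on their machine, a sketch in PowerShell (assuming the built-in Defender cmdlet `Get-MpComputerStatus`, which ships with Windows 10/Server 2016; untested here):

    # Show when Defender last pulled signature/engine updates,
    # and which engine version is currently active.
    $s = Get-MpComputerStatus
    "Signatures last updated: $($s.AntivirusSignatureLastUpdated)"
    "Engine version:          $($s.AmEngineVersion)"

If the last-updated timestamp is recent, the automatic updates haven't been mucked with.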


The Windows patch contains mitigation techniques for the vulnerability, likely enough for an attacker to reverse engineer. It's better to have the details out in the open so that users can take action to mitigate in the meantime.


Serious props to Microsoft for getting the fix out the door so quickly. I'm glad that they took this seriously because this is a major vulnerability.


What is less impressive is that this exploit was even possible. Executing all incoming JavaScript as the SYSTEM user, what the heck?

Even if they patched this one bug in the interpreter, how many more are there that are not yet discovered / only known by dark market exploit vendors?


Very true. I wouldn't be surprised if we see more patches after this one. Based on their initial response time they seem to be taking this as seriously as it warrants. Hopefully they move away from this model eventually - it won't be easy to lock this down properly.


Any suggestions for a good-quality virus scanner that I can have some confidence makes reasonable choices in how it operates?

If I'm understanding correctly Defender runs with high privilege and has a very large security footprint; as such I don't think it's something I want to run.


>>> Any suggestions for a good-quality virus scanner that I can have some confidence makes reasonable choices in how it operates?

All antivirus products operate like rootkits. It's basically a rootkit trying to block other rootkits from installing.

Microsoft has the advantage of having access to all the Windows APIs, and they put a ton of effort into testing/compatibility. It is the least bad of all the evils.


Everything on the market has had something like this bug. I'd stick with Defender simply because Microsoft will put the resources into dragging their coding practices out of the 90s and they're better about general QA testing.


I'd recommend Avira.


Other thread is up at: https://news.ycombinator.com/item?id=14296959 (although doesn't mention that MSFT have released a fix already).



Hrm. Microsoft article says:

> For more information on how to verify the version number for the Microsoft Malware Protection Engine that your software is currently using, see the section, "Verifying Update Installation", in Microsoft Knowledge Base Article 2510781.

But the link points to https://technet.microsoft.com/en-us/library/security/4022344 which doesn't include Windows 10.

Edit: guessed and found it: Start -> Windows Defender Security Centre -> (cog icon in bottom left) -> About -> Engine Version


And if your engine version is 1.1.13701.0 or earlier, then you still have the vulnerability and need to update ASAP.


From PowerShell (on Windows 10/Server 2016, possibly others):

    (Get-MpComputerStatus).AmEngineVersion

Also, from PowerShell:

    Update-MpSignature

to just go ahead and run the update process.
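Combining the two into a check-then-update step might look like this (a sketch, untested here; it assumes the Windows 10/Server 2016 Defender cmdlets, and the 1.1.13701.0 vulnerable-version threshold comes from the discussion above):

    # Update only if the engine is still a vulnerable build
    # (1.1.13701.0 or earlier, per the advisory).
    $engine = [version](Get-MpComputerStatus).AmEngineVersion
    if ($engine -le [version]'1.1.13701.0') {
        Update-MpSignature    # pulls the latest definitions and engine
        $engine = [version](Get-MpComputerStatus).AmEngineVersion
    }
    "Engine version: $engine"

Casting to `[version]` makes the comparison numeric per component rather than a string compare, which matters once version parts reach five digits.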


Thanks! I wish Microsoft would throw the PowerShell one-liner into their document; most people interested in this stuff would rather do:

    Get-MpComputerStatus | select 'AmEngineVersion'

than read a long-winded set of commands. PS. Cool technique with the parens.


And for Microsoft Security Essentials: click the down arrow next to the Help link, and select About.


"An attacker who successfully exploited this vulnerability could execute arbitrary code in the security context of the LocalSystem account and take control of the system." Why would software that is written to scan potentially dangerous files be configured to run under the LocalSystem account? Shouldn't it run under a least privilege account?


Did they only fix the type confusion, or did they do something about the unsandboxed JavaScript interpreter running as SYSTEM?


They got this out incredibly quickly so it's likely that either they just fixed the type confusion or that they already had a sandboxing modification ready which they were saving for a major update but had to rush out. I don't know the age of the defender code, but my money is on the former.


Shame that the instructions for verifying the update don't apply to Windows 10.


My Windows 10 system showed 1.1.13701.0, but Windows Update indicated everything was up to date. I clicked the usual Windows Update "Check for Updates" button anyway, and now Defender shows 1.1.13704.0.


It'll update on definition update, too, it seems (without going through Windows Update).


They do? Just check the version: Open Defender, go to Help => About, check "Engine Version". Should be 1.1.13704.0 or higher.


If you assume the instructions for Windows 8 apply to Windows 10, then that is what you find. Also, the security advisory itself links to this page with the statement "For more information on how to verify the engine version number that your software is currently using, see the section, "Verifying Update Installation", in Microsoft Knowledge Base Article 2510781.", but the actual section title is "Verification of the update installation", so when I searched for the stated section name, I did not find it. These are small mistakes, but it is sloppy work. A person might be left wondering if there is some Windows 10-specific information somewhere in the rabbit-warren of links leading from the security advisory.


I also find the version under Settings -> Updates & Security -> Windows Defender.

That screen gives you the version numbers in the place you're most likely to want to update.


On my Win10 machine, I had to open Windows Defender Security Center, click the cog in the lower left, and then click 'About' in the upper right.



