> Microsoft has addressed an authorization misconfiguration for multi-tenant applications that use Azure AD, initially discovered by Wiz, and reported to Microsoft, that impacted a small number of our internal applications.
Sure, a small number of internal applications. Just the ones that gave them arbitrary JavaScript execution on every visitor to bing.com and full access to every signed-in user's emails and cloud office documents.
I see these sorts of things and always chuckle to myself. It's reassuring to see even the giants screwing up too.
It's way too easy to assume devs at some fancy whiz-bang company are all 10X'ers and produce perfect elegant code every time.
Then you realize developers at all companies are nearly universally bad.
It's a kind of miracle any of this stuff works at all, despite all the intelligence that's gone into it.
Because massive vulns like this are existential threats for crypto protocols. If Equifax gets breached for 150M people's personal information, they just get a slap on the wrist, pay you $15 (if you're lucky), and continue as a going concern.
If messing around with your personal information were an existential threat to these trillion-dollar companies' business, they would pay bounties comparable to crypto companies'.
Most large crypto bounty programs pay in things that are instantly convertible to dollars (mostly USDC, some DAI/USDT, or occasionally ETH). There are a few exceptions out there, but in general the dollar numbers are real.
I assume these are paid in crypto too. Is it possible to easily exchange such a large amount into regular currency? Then again, even 10% of that looks enormous.
It doesn't seem out of line with the published award scales of Microsoft's bug bounty program[0]. If the researchers weren't ok with that bounty, they shouldn't (and wouldn't!) have done the research with that motive.
> It doesn't seem out of line with the published award scales
And yet, it still seems out of line with the scope of the vulnerability.
"seems" implies a subjective measurement.
> If the researchers weren't ok with that bounty, they shouldn't (and wouldn't!)
Researchers are going to research. Please don't carry water for trillion-dollar companies; the reward was paltry. Don't blame the researchers for Microsoft's cheapness.
I'm not blaming the researchers for anything. They did the research, found a major issue, and got a reward in line with the published expectations, and have not complained about it being too little. Why would I blame them for any of that?
It is only uninformed HN posters imagining that vulnerabilities are worth millions of dollars on some imaginary black market that are complaining about it. Just like they do with every vulnerability disclosure that mentions the size of the bounty.
And by repeating this low-value conversation for the hundredth time, you (yes, the specific you) are just hijacking the conversation and diverting attention from what is unique and interesting about this case to your fantasy grievances about the bounty being too small.
> you (yes, the specific you) are just hijacking the conversation and diverting attention from what is unique and interesting about this case to your fantasy grievances about the bounty being too small
Well, good morning to you as well.
Anyway, the conversation had turned to the topic of remuneration. You didn't reply with THIS comment originally; you joined right in and defended Microsoft's payment terms. If you thought this thread of conversation was "taking attention away", why did you wade into the mud with the rest of us?
No need for the ad hominem attacks about “uninformed HN posters”. Zerodium and the like are ready to enter discussions about paying ~$X million for a sufficiently high-impact vulnerability, which I strongly believe this is.
> It is only uninformed HN posters imagining that vulnerabilities are worth millions of dollars on some imaginary black market that are complaining about it. Just like they do with every vulnerability disclosure that mentions the size of the bounty.
Sure, normally it's a joke, but this one in particular does look like something that would actually be worth something to the right buyer.
I don't follow your argument. We are discussing whether the bounty is high enough to provide a good incentive for researchers to pick up and report issues like this timeously.
It's impossible to know if anybody else has been abusing this for some time or not, and it's a valid concern for people here to argue the bounty is low enough that this kind of valuable exploit may have ended up on an exploit marketplace in the interim.
Forget darkweb criminals, for something that juicy you can bet that nation states would be very interested. I don't know what China's equivalent of the NSA is, but if they're half competent they'd provide serious wonga for a bug like that one.
It's almost as if HN is a site where we discuss things instead of just passively consuming information.
In either case, paying only $40,000 for disclosing an exploit like this sends a clear message from Microsoft: they don't take their users' security seriously. And it also incentivizes certain outcomes: they're cheap, and less moral actors who are motivated only by the financial reward won't bother to engage with Microsoft.
So. What. $40,000 is something that a senior SDE could pay out themselves without flinching. A trillion dollar company like MSFT should do better than that for something so severe.
Put another way: just because the researchers didn't (publicly) complain doesn't put MSFT in the right here. There are more than a few kernel developers who didn't raise hell about Linus' abusive behavior. Yet even Linus himself realized his behavior had to change.
Wait wtf. If I understand this correctly anyone with an AD tenant (i.e. anyone) could just log in to bingtrivia.azurewebsites.net to modify the Bing page for everyone and insert JS that retrieved e-mails from logged-in visitors?! W T F
I have worked on a project where we integrated with AAD for auth. One of our developers did things that looked fine - used the common ASP.NET middleware for AAD, configured it, used the latest version, set authorization policies on sensitive routes, etc.
Then I did my own code review, and saw that they had passed "verify: false" in the options. When asked why, they said "I just copied this from StackOverflow and it worked...". This guy is not dumb BTW. It's just that security is easy to get wrong, especially if you don't know where the dragons are. We later had a full suite of positive and negative automatic tests, and had an independent pen test verify that things were done properly - but most small teams will never get those resources...
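The commenter's stack was ASP.NET, but the same foot-gun exists almost verbatim in other stacks. A minimal sketch of how plausible the broken version looks, using Node's passport-azure-ad (the option names are that library's; the client and tenant IDs are placeholders):

    import { BearerStrategy } from "passport-azure-ad";

    // The copied-from-somewhere version. It compiles, it runs, your own
    // login works... and any valid AAD token from ANY tenant sails through.
    const copied = new BearerStrategy(
      {
        identityMetadata:
          "https://login.microsoftonline.com/common/v2.0/.well-known/openid-configuration",
        clientID: "00000000-0000-0000-0000-000000000000", // placeholder app ID
        validateIssuer: false, // <-- the "verify: false" moment
      },
      (token, done) => done(null, token)
    );

    // What it should have been: pin the issuer(s) you actually trust.
    const pinned = new BearerStrategy(
      {
        identityMetadata:
          "https://login.microsoftonline.com/common/v2.0/.well-known/openid-configuration",
        clientID: "00000000-0000-0000-0000-000000000000", // placeholder app ID
        validateIssuer: true,
        issuer: ["https://login.microsoftonline.com/<your-tenant-id>/v2.0"], // placeholder tenant
      },
      (token, done) => done(null, token)
    );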
We're celebrating that we've moved away from insecure languages with manual memory management to secure high-level languages, sandboxes and whatnot, and the first thing we do is "yay coding got easier let's increase complexity by 1000x". At least, sometimes I really feel like this, but maybe I'm just getting old.
I don't think it's as simple as that - we have higher level languages that protect us from trivial security issues like buffer overruns, but that's a net good. There is no advantage we would gain in security from rolling that back.
Instead, it gives us exposure to a new set of higher-level security problems that, over time, we will need to develop higher-level primitives to navigate. We would still have these problems with lower-level languages; we'd just be too overwhelmed with smaller issues to properly address them.
> "I just copied this from StackOverflow and it worked...". This guy is not dumb BTW.
These seem like quite contradictory statements. Copying something from StackOverflow and using it in production code without making sure you fully understand what it does? And something like "verify: false" should be an instant red flag telling you to triple-check that it's really the correct thing to do for your use case.
Forget small teams. As a business owner buying solutions that integrate with AAD, how could you possibly be confident that the team you're buying from has done their due diligence wrt security?
A quite common error, actually. Lots and lots of services are configured to accept logins from arbitrary auth providers, but forget to verify that it's a provider they actually trust! You just check that it's a valid OAuth token, and your code works for your login page, so you consider your login feature done and move on to something else, leaving a wide-open door.
Edit: As the article itself stated, around 25% of all systems with this setup are vulnerable!
> The results surprised us: 25% of all the multi-tenant apps we scanned were vulnerable to authentication bypass.
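The shape of that bug, sketched with the jose library (the audience value is a placeholder): everything below proves the token is genuine, and nothing checks who issued it.

    import { createRemoteJWKSet, jwtVerify } from "jose";

    // Signing keys for ALL of Azure AD -- every tenant's tokens verify
    // against the "common" key set.
    const aadKeys = createRemoteJWKSet(
      new URL("https://login.microsoftonline.com/common/discovery/v2.0/keys")
    );

    async function authenticate(token: string) {
      // This proves the token is a genuine, unexpired AAD token for our app...
      const { payload } = await jwtVerify(token, aadKeys, {
        audience: "api://my-app", // placeholder audience
      });
      // ...but nothing above asked WHICH tenant issued it. Anyone with an
      // AAD tenant (i.e. anyone) gets in. The missing line is roughly:
      //   if (payload.tid !== OUR_TENANT_ID) throw new Error("unknown tenant");
      return payload;
    }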
If only. How about: just download the backup of the server log from https://example.com/logfile.txt? Oh, and it contains everything. Including internal application logs.
Btw, even single-tenant can be a bit scary in this regard: by default, guest users can log in. For instance, if someone has joined a Teams meeting. So unless you explicitly check permissions or disable this (for instance if you think "everyone in our org should have access to this"), you could inadvertently have external people logging into your apps as well.
Tutorial on how to deny Guest users in Azure AD[1]
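If tenant-level settings aren't an option, a belt-and-suspenders check in the app itself is possible. A hedged sketch: it assumes the optional acct claim (0 = member, 1 = guest) has been enabled on the app registration, and the #EXT# UPN marker is only a heuristic, not a guarantee.

    import type { JWTPayload } from "jose";

    // Reject guest accounts at the application layer, on top of (not instead
    // of) whatever the tenant configuration allows.
    function assertMember(payload: JWTPayload) {
      // "acct" is an optional claim: 0 = tenant member, 1 = guest.
      if (payload.acct === 1) {
        throw new Error("guest accounts are not allowed");
      }
      // Guest UPNs typically carry an #EXT# marker -- heuristic fallback only.
      const upn = typeof payload.upn === "string" ? payload.upn : "";
      if (upn.includes("#EXT#")) {
        throw new Error("external (guest) account detected");
      }
    }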
In general, one should always use roles in Azure. Even with a flaw like this, your endpoint would be safe if you required a role to access it.
For multi-tenant apps, I completely understand this misconfiguration; there are no real warnings when configuring it. In order to lock down to specific tenants, I recommend having a list of issuers that you check the token against.[2]
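A sketch of that recommendation with the jose library (the tenant IDs, audience, and role name are all placeholders):

    import { createRemoteJWKSet, jwtVerify } from "jose";

    // The tenants we actually trust -- placeholder GUIDs.
    const TRUSTED_TENANTS = [
      "11111111-1111-1111-1111-111111111111",
      "22222222-2222-2222-2222-222222222222",
    ];

    const aadKeys = createRemoteJWKSet(
      new URL("https://login.microsoftonline.com/common/discovery/v2.0/keys")
    );

    async function authorize(token: string) {
      const { payload } = await jwtVerify(token, aadKeys, {
        audience: "api://my-app", // placeholder
        // v2.0 tokens carry issuer https://login.microsoftonline.com/{tid}/v2.0,
        // so an issuer allowlist doubles as a tenant allowlist.
        issuer: TRUSTED_TENANTS.map(
          (tid) => `https://login.microsoftonline.com/${tid}/v2.0`
        ),
      });
      // And per the parent: require a role on top, so a mere valid login
      // still can't reach sensitive endpoints.
      const roles = (payload.roles as string[] | undefined) ?? [];
      if (!roles.includes("ContentEditor")) { // placeholder role name
        throw new Error("missing required role");
      }
      return payload;
    }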
S3 bucket misconfigurations alone probably account for a sizeable fraction of all data leaks over time, with AD misconfigurations a close second.
Things I still regularly come across:
- developers with copies of the production database on their laptop
- said laptop doesn't have an encrypted hard drive
- every developer having access to production databases, including people hired yesterday
- default userids / passwords hardcoded in firmware
- the marketing department having access to the production database in bulk
- datalakes with zero access controls that perfectly mirror the production db
- SharePoint sites without proper authentication storing mountains of customer data
Anyway, I could go on like this for a while. And usually the company employees are aware of these; they just haven't gotten around to plugging the holes, or they were never going to unless someone told them to, because it is convenient. Occasionally there is serious pushback: for instance, developers having a recent copy of the production database on their laptop is in some places considered perfectly normal.
> every developer having access to production databases, including people hired yesterday
Do you expect new developers to have a wait period before being able to access sensitive materials? That seems like a recipe for disaster: you're openly saying “I don't trust you” at the start of your relationship with someone, when it's at its most volatile.
I’m almost positive that doing it will just cause your new devs to resent their new bosses from day 1.
Then why was the “including people hired yesterday” necessary?
Your original comment makes it seem like you feel someone should not have access simply because they’re new. Not that nobody should have access at all.
> - every developer having access to production databases, including people hired yesterday
I wonder how that correlates with companies bragging that it only takes a day or even a few hours to get a new hire's code landing in production because "they're so agile".
I now feel better about adding an additional check to disallow logins to our apps from outside our domain, even though technically the app setup in Azure wouldn't allow that anyway... a fuckup can happen at any level.
JFC. This was an absolutely massive screwup by Microsoft. In addition to Bing and other MS services, it allowed access to "COSMOS: A file management system, managing over 4 exabytes (!) of Microsoft’s internal files."
I would have added quite a few more exclamation marks myself. That bug should have gotten way more than a $40k bounty.
> Staying on Azure is on the border of being considered a firing offense...
If there were a competitor to AD, that is. Right now developers and HN peeps have loads of options for their work; enterprises do not. I have Windows clients to authenticate, so I need AD. I need forwards/backwards compatibility on my clients, therefore I need Windows. I have applications that must run for several decades, therefore I need Windows.
This is a bug in AAD, which is a very different thing than AD. There is some interoperability between them (ok more than some) but needing AD for legacy auth doesn’t necessarily mean you need AAD for Oauth.
If this is the class of problems we have now, I'm sure introducing a fuzzy logic interface taking natural language input, able to connect to all sorts of APIs and enabling users to generate code they can't fully understand or debug properly will not cause any major problems...
> I’m sure introducing a fuzzy logic interface taking natural language input
Do LLMs use fuzzy logic at all? I know fuzzy + neural network was a big thing in the 1990s or so, but I thought that LLM NNs (and most current NNs in AI) use Bayesian logic.
The Wiz article does mention that (specifically for their XSS attack at least) Microsoft can't necessarily determine if anyone was affected.
Not that you're wrong of course, but with an attack like that it wouldn't be accurate to say that every user is affected. At the same time, if you can't figure out if anyone was affected, what do you tell regulators?
They could definitely review historical or backup copies of their CMS databases for Bing, or database transaction logs, or whatever they have, to determine if this type of attack was ever launched. They'd have to correlate that to Bing visitor/usage records and hopefully be able to connect the dots. Even if they have the data to do this, I don't see them doing it.
What's going on here really? What "misconfiguration"? I don't get it... misconfiguration is just "single-tenant" vs "multi-tenant"? That can't be right; that would mean that anyone with an account in the original tenant would still have admin access. It doesn't matter if you have single or multi-tenant logins if you don't check account permissions in any way. The fact that the user is coming from some other tenant only increases (massively) the surface area of who can screw you over... but like, if that's the fix... then anyone in the org can still screw you over.
It's a little more complicated - in AAD, the app developer has a single app registration, which is the app side of AAD. The app can ask for certain permissions, specify trusted sign-in URLs, manage client secrets, etc.
Then, on each AAD tenant that wants to use this app, an admin for that tenant would create their instance of the app, called an enterprise application or service principal.
That second object can choose whether any user in that tenant can sign in to the app or not, and can assign AAD roles to users/groups.
It is very common for apps to have a single tenant, and therefore the app registration and enterprise application are both in the same tenant, but if the app registration allows it, any tenant can create its own instance and assign users from that tenant any role they desire.
This means that it is possible to have an app managed really well within your tenant, yet make the mistake of allowing sign-ins from other tenants, thereby letting outsiders have full access...
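The surprising part is how little that second step takes: for a multi-tenant app registration, an admin of any tenant can create that enterprise application themselves with one Microsoft Graph call. Roughly (the Graph token and appId here are placeholders):

    // Any tenant admin can instantiate someone ELSE's multi-tenant app,
    // creating a service principal for it in THEIR tenant -- after which
    // they can assign their own users whatever app roles they like.
    async function instantiateApp(graphToken: string, appId: string) {
      const res = await fetch("https://graph.microsoft.com/v1.0/servicePrincipals", {
        method: "POST",
        headers: {
          Authorization: `Bearer ${graphToken}`,
          "Content-Type": "application/json",
        },
        body: JSON.stringify({ appId }), // the target app registration's appId
      });
      return res.json();
    }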
I always loved the AAD documentation whenever I had to work with these things. Whenever I have to implement a service the things I care about are of this nature:
- how do I validate a token to confirm that a call has permissions to do a thing
- what secrets do I need to safeguard in order to be able to validate said tokens
- does the caller need to store any secrets
- when do I need to call external services in order to validate tokens
Instead, AAD hides all these behind abstract terms like: service principal, app registration, enterprise application... No wonder people fuck up.
I can't even comprehend what "creating their own instance of an app" means! This is way more abstract than it needs to be. Does the code get served from somewhere else? Do they get db copies? Probably not... But it hurts to think in these terms when all we want is to make sure calls are authorized.
And yes, I have successfully implemented this kind of stuff before. I used to keep sections of the RFCs on speed dial. I've never had to deal with cloning app registrations.
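For what it's worth, here is roughly how those four bullets resolve for an AAD bearer token; a sketch, not official guidance, with the tenant ID as a placeholder:

    // 1) Validate: check the signature against Microsoft's published public
    //    keys, plus the aud/iss/exp claims.
    // 2) Secrets to safeguard for validation: none -- the signing keys are public.
    // 3) Caller secrets: the caller holds its own client secret or certificate.
    // 4) External calls: one cacheable fetch of the metadata and JWKS documents.
    async function discover(tenantId: string) {
      // tenantId is a placeholder; "common" also works for multi-tenant apps.
      const url = `https://login.microsoftonline.com/${tenantId}/v2.0/.well-known/openid-configuration`;
      const metadata = await (await fetch(url)).json();
      console.log(metadata.jwks_uri); // where the signing keys live
      console.log(metadata.issuer);   // what the iss claim must match
    }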
> and potentially enable the Office 365 credential theft of millions of Bing users. These credentials in turn granted access to users’ private emails and documents.
It's amazing to see how the same Microsoft culture that created Windows' insecure nature still persists in all the traits of its cloud. I bet behind the scenes it's all a ginormous SharePoint server...
Did they perhaps force a password reset when this was reported? I was prompted to change my password when I logged in (a work account I don't use much) yesterday, which slightly surprised me in that there wasn't a clearly explained reason, and I didn't think the password could have been old enough for a rotation policy to be kicking in.
That's not a misconfiguration: they completely skipped the authorization checks in the applications. Even when you limit Azure's SSO to your own tenant, you should check if the account is permitted to change a minor thing such as Bing's frequent searches page.
The "Am I affected" section downplays the effect. If I understood it properly, this could already have been exploited, and nobody would ever be the wiser.
That's the crux of this. There are no surprises in AAD behavior here, it's just that Microsoft put developers or security people on these products that missed the mark.
Everyone makes mistakes in their careers, but these are some awfully big mistakes with awfully big stakes.
Since around 2021, I've reported privately (with MSRC on copy) a number of Bing infrastructure issues where their internal webpages were misconfigured and exposed to the public. I had access at some point to a bunch of their dashboards, content curation tools, build farms, stuff that exposed user search query strings, user logs, etc. I had no idea a bounty program existed and MSRC never responded. The team did fix the issues and seemed appreciative. To see such a different response (and a large cash bounty) is frustrating (to me). Happy it got fixed though!
Would this apply to applications that don't have web pages as well?
I notice, for example, that the Windows Defender ATP network scan agent application that is automatically created is set to 'AzureADandPersonalMicrosoftAccount' for signInAudience, but I'm not sure if you could access it using the Object ID somehow through the Graph or something.
No, they're right: 1.7 billion people, many of whom don't have a choice in the matter. Of course there are plenty left who do, but if you don't actually have to use it, security-wise you are better off elsewhere.
https://msrc.microsoft.com/blog/2023/03/guidance-on-potentia...