It seems a lot of infosec folks have these shallow "X = bad" mappings in their brains. Like in that Caddy issue, "out of bounds read = bad" even though realistically you can't do anything bad with it.

I see similar thinking all the time with bug bounties at work. We had an XSS report once, but it was under an odd domain that doesn't host any authenticated resources. Yet "XSS = bad", so the report got a way higher urgency score than it should've. Sure, we wanted to fix it - and we did - but it wasn't a credential-stealing nightmare scenario XSS.




> "out of bounds read = bad" even though realistically you can't do anything bad with it

Schneier's law [1] is at play here. Especially in situations where the person who wrote the bad code in the first place is later tasked to fix it. If they couldn't see the problem the first time it is going to be tough for them the second time around.

I've gazed longingly at a promising bug for weeks with no idea how to weaponize it, only to finally give up and ask someone smarter who tends to point out a trick I had never seen before. Even with a career in security old enough to buy beer, I am still amazed by the clever shit I miss.

1. "Any person can invent a security system so clever that she or he can't think of how to break it."


> If they couldn't see the problem the first time it is going to be tough for them the second time around.

No. It's easy to overlook a bug like an overflow or a use-after-free, but once you see it, you understand it just fine.


This is a fair point. I actually hesitated a bit before posting my comment because I don't know 100% that the XSS was harmless. But the reporter didn't demonstrate anything other than an alert('owned').


I don't understand, is arbitrary js execution not enough?


You may have noticed many websites host user-uploaded content on a different domain to their main site. GitHub delivers some things from githubusercontent.com, Google some things from googleusercontent.com, Reddit some things from redditmedia.com, and so on.

The reason they do this is to give a big layer of protection against the harms of XSS - even if user-uploaded content manages to execute arbitrary JavaScript on googleusercontent.com, that JavaScript can't access cookies for google.com, as it's hosted on a different domain.
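
A rough sketch of what that isolation buys you, from the injected script's point of view - the domains below are made-up stand-ins, not anyone's real setup:

    // running in the browser console on https://uploads.usercontent-example.com
    // after an XSS there (both domains are hypothetical)
    console.log(document.cookie);
    // -> only cookies scoped to usercontent-example.com; the session cookie
    //    for www.example-main-site.com is simply not visible from this origin

    // and the injected script can't read responses from the main site either,
    // unless the main site explicitly opts in via CORS:
    fetch("https://www.example-main-site.com/account", { credentials: "include" })
      .then(r => r.text())
      .then(body => console.log("readable only with a CORS grant:", body.length))
      .catch(() => console.log("blocked by the same-origin policy"));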

Some scoring guides rate XSS as very high risk, assuming you don't have this mitigation in place. resonious had the mitigation already ("it was under an odd domain that doesn't host any authenticated resources") so the XSS wasn't a very high risk.

With that said - some people will thank you for demonstrating how a small security problem can be escalated into an account takeover, but other people will call the cops on you for hacking their website or threaten to sue you. So I would say if you're reporting XSS it's safest to stick with an alert box, unless you know the person receiving the reports is reasonable.


>Google some things from googleusercontent.com, Reddit delivers some things from redditmedia.com and so on.

Exactly:

>If you are injecting script in subdomains of (sandbox) domains such as: [...] ...we won't file a bug based on your report, unless you can come up with an attack scenario where the injected code gains access to sensitive user data.

https://bughunters.google.com/learn/invalid-reports/web-plat...

e.g.

https://www-tutorialrepublic-com.translate.goog/codelab.php?...


It depends. Where is that JS executed? On the login page? The payment details form? In a restricted IFrame serving a tracking pixel? On a static page handling public document downloads with a different domain to the logged in contexts?

With a good security report, you want to include example impact. "I can run alert." - meh, but should be checked/patched just in case. "I can run arbitrary JS on a page collecting payment details, without CSP restriction." - now that's immediately bad.
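
For example, a proof of concept along these lines (the field id is hypothetical) already demonstrates far more impact than an alert box, without actually exfiltrating anything:

    // hypothetical payment field id - a real report would use the page's actual markup
    const card = document.querySelector<HTMLInputElement>("#card-number");
    if (card) {
      card.addEventListener("change", () => {
        // an attacker would POST this to a server they control; for a report,
        // logging it locally is enough to prove the point
        console.log("injected script can read payment input:", card.value);
      });
    }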


It can definitely impact the company’s reputation negatively. If any of our sites allowed alert(“owned”) it would be top priority to fix.


It's not. If it is, then why not go file a report at jsfiddle.net?


There is a medium-to-weak argument that enforcing these minimum standards even in clearly benign places raises standards everywhere, which makes it much less likely that the same class of bug shows up in the really bad places.

I'm just playing devil's advocate, though - I don't think most people are playing 3D chess when they ask for these changes, they just want the line item on their report cleared up.


> It seems a lot of infosec folks have these shallow "X = bad" mappings in their brains. Like in that Caddy issue, "out of bounds read = bad" even though realistically you can't do anything bad with it.

As others have pointed out, more than a few "unexploitable" issues have turned out to be entirely exploitable in the hands of the right person. In a world where innocuous vulnerabilities can be chained together into very dangerous ones, this gets much worse. As a colleague of mine described it to me, CVE math means 1+1+1=10.

More subtly, this interacts with one of the weirder ideas in security. Vulnerabilities exist before they're known. This means that there's likely a series of vulnerabilities lurking in every bit of software you use. It's hard to do much about those with certainty, but you can do something about the bug in front of you to prevent it from contributing to CVE math.

To put it another way - risk analysis has room for error. Don't be too certain of yours.


In my experience, many corporate infosec guys are basically just beancounters without a deeper understanding of information security.


I have personally rejected a candidate who claimed to know professional security tools but maintained that it is not his job to filter out false positives ("because they don't exist") before presenting the results to the developer team. The same candidate would say "you need a firewall" but could not, in an interview setting, explain how to protect the database server using a firewall - i.e., what exactly to allow.


Yes, unfortunately any kind of staff position that does not deliver product attracts these types who just want to hide and never be accountable for delivering value to the business. I'm not saying the positions aren't needed or valuable, but just that it is appealing to the wrong kind of people.


And unfortunately, their value is often directly proportional to the amount of workload they add to the productive segments. People wonder why security teams are the first to be cut during hard times, but this is basically why. That said, I can see both sides of it, security is obviously of great importance. But there just has to be a better way, perhaps some categorization of threat models cross referenced against the CVEs/etc.


> security is obviously of great importance.

It is. But the thing is, those corporate infosec folks I'm talking about don't actually improve security. It's the same as with many audits.


Shoving out endless amounts of broken trash has negative value to society, even if it makes the company money hand over fist.


It seems a lot of non-infosec but technical folks have a pattern of shallow first-order thinking. Like in that XSS issue, "no authenticated resources = not a big deal" even though realistically you could redirect end-users to a phishing domain (I can assure you that a much larger percentage than you'd like would fall for this), or create and delete invisible DOM elements in such a manner as to exploit a UAF vulnerability in their browser's rendering engine, perform a sandbox escape, and get code execution in your users' userland, where the attacker will proceed to dump all of your users' saved credentials and start emptying bank accounts - all because you couldn't imagine anything bad happening from an XSS on a site with no authenticated resources and therefore chose not to prioritize fixing it. Even CPU-level speculative execution vulnerabilities can be invoked through sandboxed JS running in a browser.

Deprioritizing an XSS vuln in an end-user-facing website you built because there isn't sensitive, authenticated data on that domain is like being the owner of a construction company that built a hydroelectric dam incorrectly, who notices visible cracks in the dam that aren't supposed to be there and decides not to tell anyone and "maybe fix it later" because the hydroelectric generator is still working fine and cracks in the dam don't cause generators to stop working.


An OOB read allows an attacker to measure internal state such as stack layout and memory allocator state; this widget, along with another flaw, makes exploits much more reliable.

You -can- definitely do bad things with the data gained from it.

For the memory allocator data, read up on the "house of" attacks against glibc's allocator.
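
To make the "more reliable" part concrete, here's a toy example of the arithmetic once an OOB read has leaked a single code pointer - all numbers invented, but it shows how ASLR turns from a guessing game into subtraction and addition:

    // every value here is made up, just to show the shape of the math
    const leakedReturnAddr = 0x7f3a2c41d83dn; // code pointer read past the end of the buffer
    const offsetInBinary   = 0x1d83dn;        // where that code sits inside the binary (e.g. from objdump)
    const imageBase        = leakedReturnAddr - offsetInBinary;
    const targetOffset     = 0x4f550n;        // offset of whatever the attacker actually wants to reach
    console.log("image base:", "0x" + imageBase.toString(16));
    console.log("target:    ", "0x" + (imageBase + targetOffset).toString(16));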


> "out of bounds read = bad" even though realistically you can't do anything bad with it.

I have absolutely seen more than enough exploits for "obviously unexploitable" vulnerabilities that I think the "out of bounds read = bad" mindset is the only reasonable mindset to have.


I certainly find there are many people who take that naive view.

The right thing to do is to triage them in your context and make your own risk assessment. The CVSS numbers are a guide to which ones you should prioritise looking at, but not a risk assessment.

However, in the security and risk world, you have to take a worst case view in the absence of further reliable information. This can lead to fixing things that maybe don't really need fixing, even when a triage has been done. It's often hard to discover enough reliable information.



