The truth is that when reputable information security specialists are engaged to perform a no-holds-barred internal network penetration test or red-teaming exercise for a client, they will gain full administrative access to the network in more than 9 out of 10 cases. There are well-known and documented techniques for escalating privileges and traversing a network. This is just the reality if you operate a typical Windows corporate network of sufficient size.
In the past, companies mostly just accepted this risk and focused on protecting their network perimeter. Over time, this attitude has shifted and organisations now recognise the insider threat (e.g. a rogue employee/contractor or an external attacker who has already breached the perimeter).
Although it's not a good read (in terms of being engaging or interesting), you'll find that a lot of security professionals use something like the Center for Internet Security (CIS) benchmark when doing a formal audit or configuration review of RHEL (or any major Linux distribution, for that matter). They will run a command-line tool that checks the system's configuration against every item in the benchmark and generates a report with pass/fail outcomes for each item plus hardening advice. It's not perfect, but it can be a decent starting point before you do further manual analysis of your system.
It says in the About section on the home page "Unless someone can intercept your local traffic and our traffic to a site, you'll be able to spot MITM attacks". I'd argue that this is not entirely true. If an attacker operating as a MITM can intercept all local traffic (e.g. via some form of DNS attack), they do not need to control the traffic from hash-archive.org to 3rd party sites. They simply need to control how hash-archive.org is presented to the victim. In theory, the attacker could serve up a bogus version of hash-archive.org that appears to be legitimate but is returning falsified hashes that match the malicious downloads they have intercepted elsewhere.
You might claim this is not possible because hash-archive.org runs over HTTPS so an attacker would also have to somehow generate a valid SSL certificate signed by a trusted CA. This is true but if someone types hash-archive.org into their browser URL bar, the initial request is made over HTTP. The legitimate hash-archive.org redirects the client to HTTPS seamlessly but a fraudulent hash-archive.org could just keep the victim on HTTP.
To provide some mitigation against this type of attack, you could do a couple of things:
* Only allow hash-archive.org to be accessed over HTTPS (port 443). Close port 80. [EDIT: in fact, this doesn't really help all that much because the MITM can still try to serve their bogus version of hash-archive.org over HTTP]
* Set the HTTP Strict Transport Security (HSTS) header [1]. After the first visit to the legitimate hash-archive.org, compliant browsers will only ever allow future visits to be made over HTTPS.
For good measure, you could also set up HTTP Public Key Pinning (HPKP). HPKP is a 'security feature that tells a web client to associate a specific cryptographic public key with a certain web server to prevent MITM attacks with forged certificates.' [2]
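A rough sketch of setting both response headers from application code, assuming a Python/Flask app behind the site (the max-age values and pin hashes are placeholders, not hash-archive.org's actual configuration):

```python
from flask import Flask

app = Flask(__name__)

@app.after_request
def add_security_headers(response):
    # HSTS: after the first HTTPS visit, compliant browsers refuse to
    # load the site over plain HTTP for the duration of max-age.
    response.headers["Strict-Transport-Security"] = (
        "max-age=31536000; includeSubDomains"
    )
    # HPKP: pin the server's public key (plus a backup pin) so a forged
    # certificate issued by another CA is rejected. Pin values here are
    # placeholders only.
    response.headers["Public-Key-Pins"] = (
        'pin-sha256="PRIMARY+PIN+PLACEHOLDER="; '
        'pin-sha256="BACKUP+PIN+PLACEHOLDER="; '
        "max-age=5184000"
    )
    return response
```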
"NoSQL, or rather NoAuthentication, has been a huge gift to the hacker community. Just when I was worried that they'd finally patched all of the authentication bypass bugs in MySQL, new databases came into style that lack authentication by design"
"We are currently utilizing advanced protocols including double salted hashes"
Shudder. Whenever someone starts talking about double salting, triple salting or even just salting, it's usually a sign that they are doing password storage all wrong.
Salting only thwarts attacks based on pre-computed lookup (i.e. rainbow) tables, and most attackers don't use rainbow tables nowadays to reverse hashes. Increases in GPU power have made it more practical to enumerate all password permutations on the fly than to do a lookup in an enormous file.
If a company is using a modern hashing algorithm purposefully designed for password storage (e.g. PBKDF2, bcrypt or scrypt), they need not even think about salts: they are automatically incorporated into the algorithm and are transparent to the implementor.
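As a quick illustration of how transparent this is, here's a minimal sketch using the third-party bcrypt package for Python (not taken from any particular company's code):

```python
import bcrypt

password = b"correct horse battery staple"

# gensalt() produces a fresh random salt; hashpw() embeds it (along with
# the cost factor) in the output, so only one value needs to be stored.
stored = bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))

# checkpw() extracts the salt and cost from the stored value itself,
# so the implementor never handles the salt directly.
assert bcrypt.checkpw(password, stored)
assert not bcrypt.checkpw(b"wrong password", stored)
```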
In my opinion, the best article describing the current state of play with respect to password storage is the following:
When an application residing at one.example.com sets a cookie, the browser by default resubmits the cookie in all subsequent requests to one.example.com and also to any subdomains, such as sub.one.example.com. It does not submit the cookie to any other domains, including the parent domain (example.com) and any other subdomains of the parent, such as two.example.com.
A server can override this default behaviour by including a domain attribute in the Set-Cookie instruction, but this is pretty uncommon. Cookie scoping (and therefore cross-domain protection) can be managed differently if the default behaviour is not what's intended, and HttpOnly is not relevant here. HttpOnly is really only a simple mitigation against the most obvious and trivial Cross-Site Scripting (XSS) exploitation technique (i.e. stealing a session token).
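To make the scoping concrete, here's a hypothetical Flask sketch that emits the Set-Cookie header both ways (the host names mirror the example above; set_cookie is just a convenient way to produce the header):

```python
from flask import Flask, make_response

app = Flask(__name__)

@app.route("/default")
def default_scope():
    resp = make_response("ok")
    # No Domain attribute: the cookie stays scoped to the issuing host
    # (one.example.com) as described above, and is never sent to
    # example.com or to two.example.com.
    resp.set_cookie("session", "abc123", httponly=True)
    return resp

@app.route("/widened")
def widened_scope():
    resp = make_response("ok")
    # Explicit Domain attribute: the cookie is now resubmitted to
    # example.com and every one of its subdomains, including
    # two.example.com.
    resp.set_cookie("session", "abc123", domain=".example.com")
    return resp
```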
"The passwords are stored as SHA1 hashes of the first 10 characters of the password converted to lowercase. That's right, truncated and case insensitive passwords stored without a salt"
I'm surprised this fact is not getting more attention. In theory, this means that a MySpace account with a password of Welcome1234567 could be logged into with a password attempt of welcome123, WELCOME123, Welcome123ThisPartIsIgnored, or anything else whose first 10 characters match case-insensitively.
In essence, case sensitivity is ignored and everything from the 11th character onward is discarded. This vastly reduces the total key space. To compound the problem, SHA-1 has been used, which is not suitable for password storage (salted or otherwise) because it's an intentionally fast algorithm. This means an attacker can more efficiently run all permutations through the hash function to find a matching hash and hence the password. In fact, as I've described above, the attacker doesn't even need to recover the exact password to gain access to the account. They just need an input that produces an identical SHA-1 hash (i.e. an input whose first 10 characters match the original password's, ignoring case).
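To illustrate, here's a small Python reconstruction of the scheme as reported (my own approximation, not MySpace's actual code):

```python
import hashlib

def myspace_style_hash(password: str) -> str:
    # As reported: lowercase, keep only the first 10 characters,
    # then take an unsalted SHA-1 hash.
    return hashlib.sha1(password.lower()[:10].encode()).hexdigest()

original = myspace_style_hash("Welcome1234567")

# Any attempt whose first 10 characters match case-insensitively
# produces the same hash, and would therefore be accepted.
for attempt in ["welcome123", "WELCOME123", "Welcome123ThisPartIsIgnored"]:
    assert myspace_style_hash(attempt) == original
```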
Based on the work I've done reversing password hashes in bulk (legitimately for clients in penetration testing engagements), I'd suggest that at least 80% of the reported ~360 million hashes could be reversed within a few days with access to the full data set and $5k worth of commodity GPU hardware. And you can guarantee that these passwords will be used in future attacks against other web sites because of how common password reuse is. Frightening.
My previous bank (ASB) used to truncate passwords. I found out because one day I was trying to enter my password and it kept refusing it until I left off the last two characters. It turns out that they had stopped truncating, or perhaps just increased the truncation length, and so my 10-character password had been an 8-character one all along. It kind of boggles my mind that a bank would do that.
Not quite as frightening as the schemes some financial institutions use... one that immediately comes to mind is exactly 6 digits, no more or less, and probably stored in plaintext. Then again, brute-force attempts are usually very easily noticed and kept from succeeding on such systems.
Sure. But it would be a stretch to find any financial institution with as many as 360 million customer records, with one of the state-owned commercial banks in China perhaps being the exception.
And more to the point, the corresponding email addresses and/or usernames in the MySpace breach are leaked along with the password hashes. The same email address and password combinations will be tried on other web sites (e.g. Amazon, Facebook) with a reasonable chance of success. No brute force necessary.
This is one of those cases where it's the responsibility of the bug bounty platform operator (HackerOne) to ensure that its customer (PornHub) deals appropriately with bug bounty participants. If PornHub doesn't offer a clear scope and a fair reward for effort, penetration testers may also become disillusioned with the HackerOne brand and choose not to partake in other bug bounty programs it oversees. And of course the platform cannot thrive without a large number of skilled and active testers.
This is so true. I work for a large social network and we recently got an email from an employee of a particular porn streaming company. They wanted to implement this new web compression protocol/algorithm into their systems and they had heard that we were doing the same.
Our solution involved writing Apache Traffic Server plugins and achieving high throughput. Their solution involved using PHP to execute the demo cli tool that came with the library and pass it the content they wanted to encode.
We interviewed a candidate who previously worked for sugardaddie.com. It was definitely an interesting conversation but I think yours would take the cake. :)
Yeah, that's too hard to top. I could only tweak it with PornHub penetration "expert" or "professional." I imagine the phrase protection would be avoided over security in that company given it has dual-meaning there. Wouldn't want people to think I dispensed... commodities... all day long. ;)
Exactly. What's more, it would be great publicity, at least among pentesters, if HackerOne investigated one of these reports and publicly reprimanded or even disqualified a site like PornHub.
But pentesters are not the ones paying for a HackerOne listing; the companies are, and perhaps the companies might not be so happy if HackerOne publicly shamed some of them.
A similar anti-CSRF measure is implemented in some application frameworks by default. For example, when performing XHR requests in AngularJS, "the $http service reads a token from a cookie (by default, XSRF-TOKEN) and sets it as an HTTP header (X-XSRF-TOKEN). Since only JavaScript that runs on your domain could read the cookie, your server can be assured that the XHR came from JavaScript running on your domain. The header will not be set for cross-domain requests."
This is an effective approach because unless an attacker has already compromised the relevant cookie, they will be unable to spoof the X-XSRF-TOKEN header in a cross-origin request. On the server side, for each HTTP request received, you just need to validate that (a) the X-XSRF-TOKEN header was sent and (b) it contains the expected value.
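As a rough sketch of the server-side half, assuming a Python/Flask backend (the cookie and header names follow the AngularJS defaults quoted above):

```python
import hmac
from flask import Flask, abort, request

app = Flask(__name__)

@app.before_request
def verify_xsrf_token():
    # Only state-changing requests need the check.
    if request.method in ("POST", "PUT", "PATCH", "DELETE"):
        cookie_token = request.cookies.get("XSRF-TOKEN", "")
        header_token = request.headers.get("X-XSRF-TOKEN", "")
        # (a) the header must be present, and (b) it must match the cookie
        # value; a constant-time comparison avoids timing side channels.
        if not header_token or not hmac.compare_digest(cookie_token, header_token):
            abort(403)
```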
I'm a few chapters into Silence on the Wire and enjoying it. Zalewski is brilliant. It's very theoretical however. I'd only recommend it if your motive for reading it is pure interest rather than a desire to pick up practical skills that are immediately actionable.