Great story, but I wouldn't call it "failed" because it showed that the company has really good security procedures. I don't know many companies that could have resisted this sort of internal threat.
I've had pentests that found nothing, even though my logs were full of attempts to compromise the app, including some techniques I'd never even heard of before. I didn't consider those failures either.
The author did find a weakness with Pam giving out her MFA token over the phone. It didn't yield anything beneficial in this scenario, but it was still a point of weakness.
Also, while it's good that they busted him based on the PowerShell use, I'm surprised he didn't install the listener on an IT system, since he had a short window in their office. If the author had disguised himself as a cleaner, he most likely could have talked his way out if caught.
Years ago an office I was working in was 'broken into' by a former locksmith that had since taken a job with our main competitor. I knew something was fishy based on the specific towers that were stolen (yes, they stole complete hardware...) -- one from payroll, another from sales, etc. The systems were spread out across the office, so it wasn't a simple snatch and grab.
At first glance it looked like the server room door had been picked or kicked, but on closer inspection the damage was completely superficial, if not lazy. I even Hollywood-kicked the door myself to see if it would open, and nothing.
After checking some security footage, someone recognized the locksmith; it later came out that he had quit a few weeks earlier and gone to work for the competitor.
Let's be honest: no company would have resisted this sort of internal threat. This is either fiction (which it sounds like it is) or just a lack of time. The difference between pentesters and real hackers is that hackers have as much time as they want.
The author is a well-known (in the Twittersphere) pen-tester. Despite the childish gif memes and dramatic trimming, I don't think he made it up.
But I agree, it seems strange that he didn't succeed.
When corporations hire pen-testers, the set-up is that their CIO/CSO has full knowledge of what is going down and when so that they're prepared to intervene in the unlikely scenario of the pen-tester getting caught.
Knowing the mentality that is common amongst C-level suits, however, it would not be surprising if the suit tipped off his IT guy in advance. After all, wouldn't catching the well-paid pen-tester look better to the other suits than being utterly pwned and paying for the privilege?
The CIO might have successfully "Kobayashi Maru'd" the whole exercise. Not saying that happened here, but if large corporations are capable of horrific security breaches, I think they're also capable of trying to game the system to avoid things going sideways.
I can't tell you for sure that this is truth or fiction, but I've worked in information security for seven years (three as a consultant), and the story is 100% believable. The writing style is also consistent with the way about half of the pen testers I know write when they're not writing a formal report.
I specialize in the remote version of the kind of test the author describes. I'd be happy to do it in-person too, but usually the clients were less interested in that.
I agree this reads like fiction. Strange capitalization, and no technical person I know or know of would write this way; it all seems very cavalier and not the careful writing of someone who actually does this for a living. Not that I know better.
You might find the DC listed in the registry if you look for the key "DCName". Then again, that's another thing you'd use app whitelisting for: catch every invocation of regedit. A lot of unattended drive-by attacks attempt to do bad things there, so you catch/trap them.
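For what it's worth, alongside that registry key, the LOGONSERVER environment variable that Windows sets at logon is another quiet local hint; a minimal sketch (the helper name is mine):

```python
import os

def find_logon_server(env=None):
    """Return the name of the server that handled logon (often a DC),
    read from the LOGONSERVER environment variable that Windows sets
    at logon (e.g. two backslashes followed by the server name),
    or None if it isn't set."""
    if env is None:
        env = os.environ
    value = env.get("LOGONSERVER")
    return value.lstrip("\\") if value else None
```

Reading an environment variable spawns no process and touches no remote service, so it's quieter than invoking regedit or a command-line tool.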
Again, dead end.
So, I'd love to see a good walkthrough of how to find the DC if these avenues don't work.
When you get to larger orgs, the DC doesn't always have to be the logon server. In fact, you're liable to see a slew of Kerberos logon redirectors talking to a few LDAP backends running as primary and secondary AD.
I have to admit I'm not familiar with anything in your comment. Is this different terminology for RODCs? I was also under the impression that AD doesn't have a primary DC; the only vestige of that is the PDC Emulator FSMO role.
It seems like fiction. I run a small but specialized computer security consultancy, and most customers are not interested in throwing money away on "pentests" that don't involve breaking code.
When we gain access to a customer's internal network, we do try lateral movement on the network and give recommendations for how to isolate different parts.
From the perspective of the pentester, if you didn't get the goods, it failed. There is little more frustrating than getting a goose egg in a pentest. For the blue team, not so much.
Most security cameras aren't monitored 24/7. It's just not practical. I'm sure they would have gone back and checked the tapes eventually, but as it happens they caught him before reaching that point.
They don’t need to be monitored by a person 24/7. All you need is decent video analytics; it’s even built into most IP cameras these days. A simple but effective approach just detects motion within a zone during certain hours.
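As a sketch of how simple that kind of analytic can be (frame representation and thresholds here are made up for illustration):

```python
def motion_in_zone(prev, curr, zone, threshold=30, min_pixels=5):
    """Flag motion if enough pixels inside `zone` changed brightness
    by more than `threshold` between two grayscale frames.
    Frames are nested lists of pixel values; `zone` is
    (row_start, row_end, col_start, col_end)."""
    r0, r1, c0, c1 = zone
    changed = sum(
        1
        for r in range(r0, r1)
        for c in range(c0, c1)
        if abs(curr[r][c] - prev[r][c]) > threshold
    )
    return changed >= min_pixels
```

A camera would run something like this per frame pair, only during the configured hours, and raise an alert, so nobody has to watch the feed live.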
This is starting to sound more fiddly than is reasonable for inside security on a corporate office. The real failure here is just that someone left the door propped open. That's easily addressable, and as it turned out, they caught the attacker anyway.
Meanwhile, 4 out of 5 webshops we check have critical vulnerabilities, and half the owners don’t care because “it just costs money to fix it and things have been fine so we’d rather spend more on AdWords”
Security, no scratch that - the human psyche works in mysterious ways :)
Frankly, that's an entirely appropriate way to deal with security. You can make the most secure product in the universe, but no one will use it if it takes forever to develop or is unnecessarily difficult to work with. Of course, how much security is enough is a multi-variable problem, but in general I find that "security people", who for obvious reasons want to sell you on security, way over-estimate its value.
> no one will use it if it [...] is unnecessarily difficult to work with
In some aspects of security, you can trade off inconvenience for cost, instead of trading off against security.
For example, imagine an office with a security gate, but it takes a few days for new employees to be issued with a working gate pass.
One way to reduce that inconvenience would be to trade off security, by having staff members buzz strangers in if they know the magic words "I just started, my pass hasn't arrived yet".
An alternative way would be to trade off cost, by having while-you-wait gate pass printing, and every gate having a guard who can check an online employee directory before buzzing someone in without a pass.
I feel a lot of people say "Security is a trade-off with convenience" when they actually mean "Security is a trade-off with convenience and spending, but we've already taken spending off the table"
There is another option here: have security passes issued promptly. It's possible; I've worked at a bank where it took all of five minutes. (Not that I disagree with the general point, just the example.)
I had a meeting at a major UK broadcaster in London the other day. Arriving at reception, I had my photo taken and a visitor pass printed with a guest RFID-enabled card, and to top it all off, a voucher for a free cup of coffee from the canteen.
No reason that such a system can't be done for the first week or so, even if you don't want to have the facility to print permanent cards in every office.
That's what I meant by "having while-you-wait gate pass printing" :)
A guard can still be useful to let in people who've forgotten their pass or left it at their desk, or whose pass has broken, or who have come from a different office, or who've got their hands full carrying things; and to deal with people without passes like contractors, interviewees, and other visitors; and to tell off people who tailgate now there aren't any good reasons to do so.
You’re getting downvoted for being flippant, but you are right: by definition there exists a trade-off between cost and any feature. It’s the least interesting trade-off: if you want more X, it will cost more. There is also a trade-off between any two features mediated by cost: if I spend more on X, then I can’t spend as much on Y. What’s more interesting are inherent trade-offs. The famous trade-off between security and liberty, for instance, seems not to be mediated just by cost.
Oh it absolutely is, and the most humbling thing was realising that I would probably do exactly the same thing in their situation ;-)
However as a developer and business owner myself (as opposed to a security person), I genuinely thought more people would be more concerned about the security of their website. I had one person explain to me their XSS injection vuln was intentional because patching it would break some JS plugin....
You see, not knowing is one thing, but knowing and being complacent is quite another. But we see that in more areas of life, I guess :)
> I had one person explain to me their XSS injection vuln was intentional because patching it would break some JS plugin
Changing the "password" field on an HTTP site to a "textarea" with a hidden CSS rule is another great one, to get rid of the browser warning. As it was explained to me: because TLS is too much hassle.
Really refreshing to see someone say they might do the same thing if the tables were turned. It's easy to judge others' shortcomings in areas we know well, but there are probably plenty of other areas where we have shortcomings of our own.
Thanks! There’s a reason large enterprises spend a ton of resources on security while SMBs are vulnerable on just about every public attack surface: priorities.
Well, that and expectations. Most small business owners see security as part of the developer’s job, but don’t realise it’s something that needs non-stop attention, as websites and technologies in general are subject to erosion. What’s safe today might not be tomorrow. The developer isn’t at fault for that, and yet the website owner refuses to foot the patch bill or pay for continuous monitoring.
I could write all day about this and we’ve only been in the trenches for a few weeks now :p
> Frankly, that's an entirely appropriate way to deal with security.
No, it’s not. You don’t have to be perfect, but “critical vulnerabilities and half the owners don’t care” sounds a long way from approaching perfect. This is why we have botnets made of webcams and blogs, and constant leaks of people’s private information and passwords.
Let's qualify that - it's an entirely appropriate way to deal with security with the current climate of not actually having any repercussions (or none beyond some tepid market response) for security breaches that damage third parties.
Honestly, I feel this position is morally bankrupt.
When someone pays for your product, gives you sensitive information, or interacts with you in any way, they have a set of expectations. When purchasing medicine at a pharmacy, they expect to receive something that has a chance of making them better. When they deposit money into a bank, they expect that the bank will try to have the money there when they want it later. When they drop their car off at a mechanic, they expect the mechanic will try not to lose the car. Anything less is called a scam.
So when someone purchases a product, they expect that appropriate measures have been taken to ensure that they will not be hurt by that product. The average consumer doesn't know about proper electrical grounding, the interactions of their prescription medications, or XSS. They shouldn't have to, they're paying to not have to.
"security people" overvalue security in relation to _your_ bottom line. In truth, they aren't just there for you, they're also there to advocate for the people who will actually be hurt when your garbage security gets circumvented.
He said it's overvalued, not that it's irrelevant.
In your example you expect the mechanic to lock your car and keep the keys safe, but you don't expect it to be put in a walled in area with armed guards and full-time surveillance.
The latter is what IT security can quickly lead to.
That's absolutely true, but the position he was endorsing was
>Meanwhile 4 out of 5 webshops we check have critical vulnerabilities and half the owners don’t care because “it just costs money to fix it and things have been fine so we’d rather spend more on Adwords”
Waiting until you get owned to actually patch up critical vulnerabilities in order to save money is negligence.
I'd agree with that if these people would also suffer financial consequences if it goes wrong, and not offload that on their customers whose data got compromised.
Let's say their customers were aware of and understood all the risks. Do you think they'd really go elsewhere or give up the service? If so, then you're right, but my guess is most of them wouldn't because the risk is acceptable.
That's fine as long as your customers have the same expectations of the level of security you're providing as you do, and you're complying with appropriate legislation around handling of customer personal and business data :)
The company running a tight ship front-loads risk costs. The firm that doesn't is arbitraging present vs. potential future costs. The apparent lower costs are illusory.
A lot of the time the costs aren't actually paid by the breached party, though. Most customers don't meaningfully change behaviour due to security breaches, and meaningful fines are the exception, so your biggest risk is a breach benefitting your competition. If most of the costs can be externalized, there is little incentive to care about them.
We're a SaaS shop in Europe targeting larger enterprise customers. While there is some funny business in dealing with that kind of customer, security actually matters to them. We're currently being paid to implement TLS encryption for our offsite database replications, and we might be able to create a customer project to set up tripwire systems. That's fun :)
The problem I have with pen tests is that they're not systematic and rely on the cleverness and knowledge of the tester. Even if they identify an issue, it's often hard or impossible to ensure it doesn't regress, and if it's in-house or custom software they've never seen before, they likely won't be of much help without a lot of effort.
One step forward would be to also approach security the way epidemiologists track down the causes of disease: they take patient data and trace back the factors that caused it, except instead of patients we're talking about security vulnerabilities and breaches. With a corpus of causal diagrams, we could then develop software to analyze risk factors and systematically test for them.
I agree and from a purely rational standpoint you are correct. However in my experience the main benefit of a successful pentest is to achieve a change in culture and the perception of security of staff across the entire workforce where before it was deemed “taken care of” or non-important.
In short, people won’t change before shit has hit the fan, and a pentest is the closest you can get to a controlled shit-hit-fan situation without it being a meaningless drill :) How, and what, is uncovered is beside the point and merely secondary when viewed from that perspective.
I did a website for Visa a few years ago, and it required a pentest before launch. We tried to find a loophole to justify it not needing a pentest (because that would give us 3 more weeks to develop the site), but no luck. It was such a simple site with no database, but they required it to go through pentest anyway.
The pentest came back with some recommendations. Mostly to do with the use of HTTP headers. Absolutely we fixed them, and made damn sure that the next time we had a site to be pentested those unforced errors were not repeated.
So on a small scale, yes. Pentesting improved the way we developed websites. I don't know about how it affected the "culture". Visa has a really strong security culture already.
>> Pentesting improved the way we developed websites. I don't know about how it affected the "culture". Visa has a really strong security culture already.
So if the security culture is strong, the pentesters' reports are read and implemented; if the security culture is weak to completely non-existent, they'll likely be ignored?
I think you already answered your own question when emphasizing ‘public’ there.
Security isn’t fun, and at best a relief (when nothing is found). When a pentest was successful, as in, the tester got in, you can be sure it’s kept under wraps.
So no, I don’t think there are many, if any public records. The fact that there is shame and status involved in not being completely air tight is a big driver of the persistent insecurity of the world at large.
Anonymized records would go a long way toward achieving a shift to safety and awareness, but as you can read here, they are easily construed as fiction.
Everyone likes talking about that growth hack that drove a 1000% revenue increase. No one wants to talk about the database hack that spilled thousands of client records out in the open.
So basically, tracking "business-as-usual" attacks (probably 99% low-tier, low-effort attacks that didn't go anywhere) or serious attacks on other companies isn't going to change the culture. But a full-blown, highly skilled attack by a fully dedicated adversary, specifically targeted at your business and business value, with potentially devastating consequences, will do a better job of waking people up?
Yes. It seems stupid, but what you outline is a 100% match with what I’ve seen in practice. Until it happened to them, the other company was stupid and careless - not them.
At risk of repeating myself, I’m chalking this up to basic human behaviour in all fields of life and the lack of taking responsibility by 80% of all people.
Security is very lopsided in that you just need 1 person to be careless for the attacker to get in, while the defender needs to be 100% secure across all vectors.
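That lopsidedness is easy to put numbers on. A toy model, assuming each person independently falls for a given attack with the same probability:

```python
def breach_probability(p_per_person, n_people):
    """Chance that at least one of n independent people slips up,
    if each falls for a given attack with probability p_per_person."""
    return 1 - (1 - p_per_person) ** n_people
```

With a 5% per-person failure rate, one target gives the attacker a 5% chance; a hundred targets push it past 99%. The defender's odds collapse as the organization grows.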
I could discuss this all day, and you know the importance of the topic, I know it, but the fact of the matter is that most non-tech people think of security as an annoyance. The solution? No idea yet, other than finding the right chord to strike and “fix” this psychological problem. We’ve made significant strides the last few months but getting companies more security conscious has been a tougher nut to crack than I first anticipated.
Feel free to email me at stan@site.security if you want to exchange thoughts on the topic. I’d love to take a deeper dive into the matter with anyone that’s passionate about solving the security problem in any way shape or form :)
Infosec in practice (not imaginary scenarios) is also about good hygiene by the regular plebs and investing in proper QA: hiring some people who are naturally paranoid, with enough clout to push back on lazy/bad ideas.
That plus regular fixed check-ups, where deeper dives are done.
Like you said, and as the article points out, it seems to be as much a cultural day-to-day thing as it is about technical searches for vulnerabilities. Or worse, installing noisy monitoring systems with a bunch of false positives and pointless investigative rabbit holes.
Certainly one could imagine better than the typical pen test. However, the typical case is not even that, the typical case is more or less "nothing", and an occasional pen test is loads better than nothing. It's kind of like how a mediocre accounting audit is better than an organization that never bothers to balance the books.
Pen tests are a good way of knowing if you have messed something up, but they do not guarantee that you haven’t. A pen test is not a substitute for other good security practices.
> The problem I have with pen tests is that they're not systematic and rely on the cleverness and knowledge of the tester.
It really does take skill to accomplish some things, and breaching defenses (and protecting from breaches) is one of those. Be it a pen tester vs the blue team or a foreign spy vs the intelligence agencies thwarting them, when it's one side against another it really does come down to the skills, persistence, and resources of the people on both ends, and a bit of luck. There are just so many variables, and as long as people are involved there's a vulnerability somewhere that can be exploited, and it takes someone waking up that day and using their mind and body to defend it.
I could've run "net accounts" on my workstation to query Active Directory directly and see their password policy, but decided to look elsewhere first. I didn't want to set off any alerts or logging.
I know nothing about Windows, but I'd have thought checking password policies far less likely to alert than plugging in your own device on the network.
Anyway, my favourite bit was that they didn't stop the people in Accounts running Powershell, they just raised an alert. I much prefer that approach to blocking people most likely just doing their job.
If PowerShell and cmd logging is turned on (and I'm sure it is), then seeing net * commands run from a marketing machine is hella bad. It's similar to shouting "HEY LOOK AT ME, I'M HACKED!"
These logging things do get in the way of devs, who run PowerShell session after session. It's not uncommon to see MBs of logs a day per dev. So if you want to run crap and get away with it, hack a dev machine and bury your commands in there.
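The alerting described above can be as simple as matching command logs against a host allowlist; a minimal sketch (hostnames and patterns are invented for illustration):

```python
# Substrings that look like recon/admin activity (illustrative only).
SUSPICIOUS = ("net ", "powershell", "whoami", "nltest")
# Hosts where such commands are expected (hypothetical allowlist).
IT_HOSTS = {"it-admin-01", "it-admin-02"}

def flag_events(events):
    """Given (hostname, command) log events, return those where an
    admin-style command ran on a host outside the IT allowlist."""
    return [
        (host, cmd)
        for host, cmd in events
        if host not in IT_HOSTS
        and any(s in cmd.lower() for s in SUSPICIOUS)
    ]
```

It also illustrates the commenter's point: a dev machine that legitimately runs PowerShell all day would need allowlisting, which is exactly the blind spot an attacker would bury commands in.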
So true. Just write an innocuous automation script that fails on privileges, generating massive logs, then use debugging as an excuse while you systematically audit capabilities. Often helpful IT staff will open up vulns for you just to put an end to the noise. Of course, in many orgs it's necessary to do this in order to get legitimate work done too...
> I woke up, bloody, in an ergonomic office chair, my hands zipped tied behind me with the same zip ties they used to manage the server ethernet cables.
I didn't realize this story was fiction until I got to this sentence.
Update: The author confirmed on Twitter that other than the dramatization, this story is in fact true.
> And, aside from the beating up and tying down, it was true![0]
I had a programming job in the military where the desktops were in a vault secured with a combination lock. When opening up in the morning you got 2 cracks to enter the combination to the vault correctly. After that people with M16s showed up. That actually happened at least once that I know of, fortunately not to me.
I was hoping to find out what the blue team should have done in a real incident.
Does the blue team tech guy tackle the red team guy and restrain him? Give chase and hope the guy exits through a door with a security guard? Let him leave and hope the cops will care?
They're supposed to call in physical security. Many companies have security crews that are qualified and authorized to use force (and promptly call law enforcement.) Sometimes these crews are armed.
Storytelling as a knowledge share is fundamental to human culture. Drama and story refinement are required to make the knowledge easy to remember and spread.
As for you being convinced, my personal experience is that this story is entirely believable. Many of us in security have stories we cannot share that would make this one look like a Saturday morning cartoon.
Dramatization has nothing to do with credibility. It is a memory facilitation technique. Because you the reader can remove the drama and distill the critical story elements for further inspection of credibility.
Credibility is found in the citations, which here is only the story teller. As that is only one data point, I totally understand doubting its credibility because one needs more citations and voices for proof.
Further, I never stated I found the story credible. I was operating from a believability standpoint, inferring from one's experience to weigh whether the story could possibly be believed. You shared that you found it hard to believe based on its dramatization, whereas I shared that I found it completely plausible based on my experience.
And that is my main argument, that in this equation drama shouldn't be used as a weight. Positive or negative.
Edit: For the folks downvoting this, please don't conflate dramatization with persuasion, propaganda, or fake news. Dramatization is a tool used in those techniques.
I agree, quite a bit of this reads like fiction. The scenario itself suggests an uncommon level of competence, but assuming that's true, then the rest is pretty believable for the same reason, albeit with some dramatic license.
Depends on the culture. I interned at a place that fell under the Critical Infrastructure Protection program. It was so strict you'd get a visit from security if you tried to badge through a door you didn't have access to.
Interesting. I wonder if, in a “mind your own business,” curiosity-discouraged sort of environment, people might also be less likely to notice and report threats.
The first thing I do with credentials is find out what they open. Seems like if you have the resources to follow up on access attempts, you have the resources to set ACLs correctly so you’re not scared of them.
I had an interview at a relatively secure place. Premises are walled and gated, the first thing you come to is a checkpoint with armed guards and security cameras. If you don't have an access card or your access card isn't permitted in the area you're going to, you don't go in without escort. And that escort must have been agreed to in advance; the security team must have a document about your planned trip. Of course they check your id before calling the escort. People don't do work on internet connected computers; if you need to browse, there's a separate computer for that. That's about as much I'm comfortable revealing.. it goes deeper :)
I guess it's a place where "getting work done" doesn't involve downloading 1000 deps with npm and what have you from random sources on the internet.
Nice story and a good illustration that a lot of good IT Security isn't buying fancy "next gen" products, it's doing the basics of managing your systems well.
It costs more to run IT well, but there are good payoffs, like this.
I would imagine that it was just one of these “next gen” products that identified the fact that Powershell was running on a machine that normally would not run Powershell.
I love reading factual hacker stories that read like fiction. Very entertaining. A brutal 5-7 year on-ramp of learning what computers actually do on the inside... but understanding what the story is about is worth it.
During pentests most testers run the usual route of attacking the domain. In my opinion it's not realistic, because most attackers don't attack domains. They attack applications.
True. Information security criticality migrated to applications a decade ago and never looked back. Internal networks are almost always easy to compromise; we have never failed on a 1000+ entity network. This implies, rather strongly, that you need to silo critical data and protect it at the application layer, because the network is very hard for most organizations to secure.
That is true but if you can get Domain Admin access, generally getting application access is pretty simple after that, and as getting DA is pretty simple if the domain hasn't been locked down (like the one in the post had), it's a reasonable place to go...
IT Services mandated a pentest when we wanted to switch our corporate website to Drupal (as part of a battery of exercises intended to convince us to stay with Interwoven Teamsite, which they paid a million dollars a year for).
When we agreed, they supplied an external penetration tester with the Drupal superuser password and said, "See? Unsafe!" when the pentester was able to log in and put the site into maintenance mode.
I suppose in the strictest sense they were correct: trusting IT with credentials did turn out to be a security issue.
You are missing "having read the article": #1 MFA; IT computers never left unattended/accessible; even IT accounts had minimal permissions; "IT-like" behavior on a non-IT computer raised an alert (so even if they got past all the above, they would have been caught when they used it, like the OA was).
Finally, placing and retrieving physical devices or physically messing with machines is risky and luck-based. It would be one of the last things you tried, as it was in the OA.
I had to endure the horror of reading it that way yesterday. Twitter loads incredibly slowly and keeps showing full screen popups to sign up which if you aren't very careful will redirect you away from the page.
Why on earth would anyone think Twitter is an appropriate blogging platform?
It's actually the original format in which the author published this content. It was auto-converted to the blog-style format linked here on HN by https://twitter.com/threader_app.
> Want to see some magic? See a thread. Mention @threader_app with the word "compile". Get a reply from our bot with the link to the thread[0]
No, Twitter is a trash medium for sharing stories like this. That said, a link to the Twitter stream in this thread will be useful to future visitors to this HN thread if Threader is no longer a thing and/or down.
Disable Javascript, you won't even notice it's not a blog post. (After reading your comment I had to whitelist the website to realize it actually was a bunch of tweets)
> On most Internal Pentests, I generally get Domain Admin within a day or two. Enterprise Admin shortly thereafter.
Sounds realistic, from how most Windows shops are run.
Would it help to stop using AD to manage the IT infra, or have tiny domains (say, max 10 computers) without centralised control, and no company-internal workstation networks? Maybe throw in a rule that devices are recycled (to be wiped) frequently, say every 6 months.
I’ve seen it where it was EXPECTED that it normally took THREE WEEKS or so for a developer to get the setup right to be able to develop the application they were hired for.
It was a Java web app. Not some weird special embedded hardware or something.
The app was just that (pointlessly) complicated to set up. Even in production it took quite a while, despite being automated by an installer and people having lots of experience installing it.
Edit: sorry about the duplicated text. I’ve fixed it.
I’m not a Java hater; I am a Java developer, and I actually like the language. But the way this particular app is designed and configured, along with the external systems that had to be reconfigured to run it (which could have been eliminated), meant that setting up the system for development just took a ton of trial and error.
In a previous job I was asked to use a configuration management tool to automate setup and installation of a development environment. (Never did it though.)
This would have been easy on a Linux desktop: installing Java, Maven, an IDE, and changing some settings files for the corporate proxy and artifact repository etc is very basic for Ansible, Puppet or Chef. It's probably achievable on Windows or OS X, although I'm not sufficiently familiar with them.
Heavy customization of the IDE could be trickier; the configuration system doesn't necessarily stay very stable between releases. Maybe something that's part of a larger, well-designed system, like KDevelop, would be easiest.
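For the basic part described above (a JDK, Maven, and proxy-aware settings), a hypothetical Ansible tasks sketch; package names, paths, and variables here are illustrative, not from the comment:

```yaml
- name: Install JDK and Maven
  ansible.builtin.apt:
    name:
      - openjdk-17-jdk
      - maven
    state: present
  become: true

- name: Point Maven at the corporate proxy and artifact repository
  ansible.builtin.template:
    src: settings.xml.j2       # templated with proxy/repo details
    dest: "/home/{{ dev_user }}/.m2/settings.xml"
```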
The first choice strategy obviously would be to automate this stuff.
Because it's hard to mandate a flag day where everything must be automated, a transitional strategy might be: document each dev setup step by step, so thoroughly that you can outsource it to the IT dept or another internal support org. Maybe the dev team can provide a screencast of it, for example. The support org would then have an incentive to automate it to replace the manual work, along with a measurable payoff.
NO! Now the automation has to be maintained and set up. The first-choice strategy is to simplify the tools and the application!
Always with software people there is this tendency to add more abstraction and machinery when encountering complex abstractions and machinery. You only exacerbate the problem by doing this!
The thing is that things like Active Directory are very useful in managing large numbers of systems and users. Ditching them would make IT much more expensive to run and, as many companies view operational IT as an expense to be minimized, that's a hard argument to carry.
That's the conventional wisdom, but it really depends on what the alternative is. Many things done by an IT-dept provided environment often have negative value for many/most users and most of them can be dropped, or replaced with something less centrally managed.
Less central management == more costs, right? You need more people to administer disparate systems.
Most corporations view IT operations as an overhead. What do you do with overheads.... you minimize them :)
That said, from a security standpoint, all those little disparate systems aren't really any easier to secure; in fact, they're likely harder. It's just that the consequences of compromise may be lower.
Less automation, more sloppiness. I don’t think anyone particularly needs all the endpoints, either. Just enough of a foothold to go after networked applications.
Probably standard BitLocker. You get the decrypted drive automatically and Windows can use it, but you only get access to your files, and you can't decrypt it without booting Windows. You'd have to hack local admin for that.
Excellent reading. In my experience, internal politics is the greatest threat to companies.
Most of us here could walk into most companies and engineer an end-to-end encrypted, least-access, zero-trust, MFA-authenticated network using strictly FOSS tools and methodologies. Question: who will let you?
No joke, OP wasn't exaggerating about how easy most of his pentests are. Most companies throw money at it, do risk analysis, and say "hmm, this is enough, a compromise is tolerable".
IMO, when it rains, it pours. Risk analysis only tells you what the risk is based on known data. Unknown unknowns will be your doom. Best to build things right even without an incentive.
> hauling armloads of old laptops from the IT shack to my cubicle, a small Leaning Tower of Pisa forming under my desk
That sounds more unrealistic than properly protected Windows systems, tbh. A new marketing employee hauling lots of laptops, and no one noticed? People working nearby might have noticed that.