They failed to implement multi-factor authentication on their VPNs. This is one of the most common attack vectors. VPNs by themselves are not enough. Over two-thirds of corporate attacks via VPNs have been through compromised static credentials.
There are numerous ways of integrating VPNs with MFA, ranging from a RADIUS integration to a SAML one. At SAASPASS the admin can choose and implement up to 15 different methods of MFA for VPNs, ranging from FIDO keys to push approval to scanning encrypted barcodes to SMS (not recommended).
Disclaimer: I worked on the design of some of the MFA interfaces for the SAML-based VPNs and the access control policies.
Personally I think RADIUS is a terrible idea. If that thing is compromised, or even the network or name service used to connect to it, the attacker gets everything. Cert-based auth gives every client the ability to independently authorize every login without any ability to learn or compromise credentials. And there are lots of cert-based MFAs that require nothing more than a few lines in an Apache config, or can even be handled through the operating system to set up a VPN handshake.
One of my biggest contributions at one place I worked was to dismantle their radius auth and replace it with token cert auth. It’s hard to write about that on a resume, because people tend to be more impressed with complexity than simplicity.
"Upgraded legacy, obsolete, AAA service with modern and standards compliant solution"
And if it is ever asked about, boy do you have some great stories about upgrading from a system whose protocol name referred to dial-in modems and which used MD5 to obfuscate passwords.
One issue in the critical infrastructure space is that many of these MFA solutions are less robust and resilient than no MFA. Anything based on calling out to an external service (Duo, for example) is likely to be less resilient [1] than the static, no-external-dependencies system they use for plain password auth.
Clearly, moving towards complex systems like RADIUS and SAML introduces even more moving parts that sit at the interconnects between trusted and untrusted, internal and external components. That becomes a point of failure in the eyes of infrastructure companies.
The challenge here seems to be getting the IT world to understand the focus on availability in critical infrastructure - historically, the focus has been on availability first, integrity second, confidentiality third.
You could argue this shouldn't apply to VPN interfaces into management systems, but given the external dependencies of these systems, they feel far less likely to remain available than the kind of solutions the sector expects.
On a separate note though, critical infrastructure operators need to rethink their budgets to pay for their increased availability needs, given most of the commercial solutions are third-party hosted "as a service" offerings.
Unavailability of Duo’s MFA product is extremely rare. But when it does happen, IT admins can switch to “bypass” mode to only require the first factor. So even though Duo doesn’t have 100% uptime, you’re no worse off than with no MFA.
Scrolling through their incident feed, it seems to be a bit more frequent than "extremely rare".
Remember that critical industry is a sector where patching software is viewed as risky, because changing anything when it works is seen as unnecessary, and would often have weeks or months of pre-deployment testing. They work on a different time-base to the IT world.
The ability to do a "bypass" is nice in theory, but the need for this option will resonate with the "old guard", who will be able to point to this external dependency as a reason to keep MFA out.
Personally I prefer smartcard-based solutions for MFA, since you can do all verification offline without requiring any third party to inject themselves into the chain. I found a good example of a vulnerability this year in Duo that just shows their validation logic is complex enough that one user was able to pass 2FA as another user [1].
That's probably your second worst case scenario, with the worst being an outsider being able to pass 2FA. A well-implemented smartcard scheme shouldn't fail in this way, as you're distributing validation logic to each device, rather than centralising it and relying on a "trusted outsider" to give a "yes or no".
Personally I prefer TOTP. It can be phone- or card-based, but it's very easy to use, deploy, and rotate. The logistics of shipping and rotating physical smartcards to employees, not to mention having said employees carry that smartcard around with them, seem like too much for basically no gain.
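Part of why TOTP is so easy to deploy is that the whole algorithm fits in a few lines of standard library code. A minimal RFC 6238 sketch (checked against the RFC's published SHA-1 test vector; the secret here is the RFC's ASCII test key, not anything you'd use in production):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(at if at is not None else time.time()) // step
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation, per RFC 4226
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", T = 59, 8 digits
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, at=59, digits=8))  # → 94287082
```

Rotation then amounts to issuing a new base32 secret, which is exactly why the logistics beat shipping physical cards.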
This. Have a solid offboarding checklist that includes VPN credentials or any access to systems that employee might have. At this point I don’t have any sympathy for the hacked when it’s as simple as this.
It’s nearly always as simple as this. I got flamed on here for suggesting that Twitter just needs to follow NIST or similar standards to avoid the hack they had last year, instead of hiring expert security researchers.
The offboarding checklist is not enough. You should have regular cleanup procedures which synchronize accesses with the main source of truth as well. There are too many exceptions and just human errors for a time constrained checklist execution to be enough.
A checklist is a 90% solution which might help here. Or it might have been the case that the checklist failed.
Invalidating credentials should be a one-stop-shop action. There shouldn’t be “invalidate their email, invalidate their VPN, invalidate their corporate login...”
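A sketch of what that one-stop shop could look like: one entry point that fans out to every system, so no individual credential can be forgotten. All the backend class names here are invented for illustration:

```python
class Revocable:
    """Interface every credential-holding system implements."""
    def revoke(self, username):
        raise NotImplementedError

class VPNBackend(Revocable):
    def __init__(self):
        self.active = {"jdoe", "asmith"}
    def revoke(self, username):
        self.active.discard(username)

class EmailBackend(Revocable):
    def __init__(self):
        self.active = {"jdoe", "asmith"}
    def revoke(self, username):
        self.active.discard(username)

BACKENDS = [VPNBackend(), EmailBackend()]

def offboard(username):
    """The single action: revoke everywhere, in one call."""
    for backend in BACKENDS:
        backend.revoke(username)

offboard("jdoe")
print(all("jdoe" not in b.active for b in BACKENDS))  # → True
```

The point is that new systems are forced to register as a backend, rather than relying on someone remembering to add a line to a checklist.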
Can you explain this? I thought ISO 27002 was implementation guidance for ISO 27001. There are no mandatory requirements; even the Annex A controls themselves aren’t mandatory.
Good point. At the company where I work, we have an ISO compliance team that converts that guidance into mandatory requirements. So the authority is coming from company policies rather than ISO itself.
MFA doesn’t solve targeted phishing and that’s something people keep forgetting. Sprays are just the best ROI, but if everything had MFA it would quickly create a phishing as a service business model.
While mitigating a number of attacks, MFA with these types of physical tokens is still only as strong as their setup/enrollment process, which in many cases can be compromised via phishing.
From my experience, there’s a difference between trying to compromise someone with good opsec (many readers of hacker news) and compromising regular non technical people
How are you envisioning practically attacking deployment of FIDO tokens via phishing? Compared to conventional phishing or spear phishing attacks this seems very difficult to execute.
Suppose BigCorp users are supposed to enroll the new tokens they all received at mfa.bigcorp.example which I dunno maybe they're reaching via a link from blog.thebigcorp.example because of course these organisations have a dozen different domains used interchangeably.
I can see how you could try to redirect some or all employees to mfa.b1gc0rp.example which you control, and that's an opportunity to steal their non-token credentials, but now their token doesn't actually work.
Even though they've enrolled with mfa.b1gc0rp.example you don't directly gain working token credentials for bigcorp.example this way, and almost as importantly for this attack, nor do they. So they're going to call the company IT desk.
I guess if you own a suitable token, you could conduct this as a spear phishing attack where the victim tries to enroll at your bogus site, then you replay the non-token credentials they used for that to enroll your real token on the real site, but again the victim doesn't end up with a working token, so it seems like you're up against the clock.
And while during the pandemic I'm sure new employees were routinely enrolled off-campus, I suspect that's just not the case in normal times, even at organisations which have a very broad work-from-home policy.
I was specifically thinking of banking, and it’s exactly this type of spear phishing attack that happens (although with other types of tokens than FIDO). In these scenarios, you only need to move the money once.
You definitely have a point with regard to non-transaction usage that requires long term access
Most MFA solutions are vulnerable to attack because there is a real challenge handling enrollment and lost tokens. It requires verification of the user, which, guess what, is hard to do, especially with remote users.
So, for any sort of FIDO token (a Yubico Security Key, Google's Titan, numerous cheaper products), the browser works together with the physical token to authenticate you with U2F (on sites that haven't upgraded yet) or WebAuthn (the standard replacement).
During enrollment the site gets a unique random-looking identifier, a public key and a signed message that proves your token knows the associated private key. It stashes the identifier and public key. They aren't secret.
During authentication the site gives back one or more identifiers to ask you to prove you've still got one of these tokens you enrolled, and if your token recognises the identifier for this DNS name, it can sign a new message with the corresponding private key proving you still have the token.
Now suppose I'm a scammer, I am trying to phish users of the site realsite.example with my phishing scam site fake.example but they all use FIDO tokens. I can get realsite.example to give me the IDs for tokens (perhaps I guessed the user name and password or got it earlier by phishing), but then I'm blocked. I could try a few things:
1. I give them the realsite.example ID and I pretend my site is realsite.example. The user is never bothered by this: their web browser knows this website isn't realsite.example, so I'm clearly a scammer and my attempt is ignored.
2. I give them the real IDs from realsite.example but I admit to the web browser that this is fake.example. This doesn't work because those IDs are for realsite.example and my site is fake.example. There isn't any "Wrong name, override? Y/N" type pop-up, there's no way for any component to guess what happened, it just doesn't work. Maybe the user will retry a dozen times. Maybe they'll eventually spot that it's a scam. They can't give me working credentials for realsite.example because they aren't at realsite.example.
3. I give them a nonsense ID and I admit this is fake.example, this doesn't work, their token doesn't recognise the nonsense ID.
4. I have them enroll their token on fake.example. This "works" fine, but now all I can do is authenticate this user on my site, the resulting credentials are completely useless on realsite.example, these are credentials for fake.example and nothing about them is the same.
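The common thread in all four scenarios is that credentials are bound to the RP ID (the site's DNS name) at enrollment. Here's a toy model showing why the token simply goes silent in scenarios 2 and 3. Real tokens use asymmetric signatures; the HMAC construction here is a deliberate simplification:

```python
import hashlib
import hmac
import secrets

class Token:
    """Toy FIDO token: credential IDs are deterministically bound to an RP ID."""
    def __init__(self):
        self.master = secrets.token_bytes(32)

    def enroll(self, rp_id):
        # The credential ID only "belongs" to this exact DNS name.
        return hmac.new(self.master, rp_id.encode(), hashlib.sha256).hexdigest()

    def sign(self, rp_id, cred_id, challenge):
        # The token answers only if the ID matches the rp_id the browser reports.
        if cred_id != self.enroll(rp_id):
            return None  # "doesn't recognise the ID" — scenarios 2 and 3
        return hmac.new(self.master, rp_id.encode() + challenge, hashlib.sha256).digest()

token = Token()
real_id = token.enroll("realsite.example")

# Scenario 2: phisher replays realsite's credential ID from fake.example.
print(token.sign("fake.example", real_id, b"challenge") is None)          # → True
# A legitimate login on the real site still works.
print(token.sign("realsite.example", real_id, b"challenge") is not None)  # → True
```

Because the browser, not the user, supplies the RP ID, there's no dialog the scammer can talk the victim through; the mismatch is decided below the level of human judgement.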
While not U2F, it's pretty close for the issue at hand (preventing phishing): any VPN that supports RADIUS as a backend can use Windows NPS, which can be integrated with Azure AD MFA to send a notification to the phone.
You may still want to train the users not to accept any random request on the app, though. But this works (fairly) well and is cheap (no hardware token to buy).
Phone confirmation doesn't evade phishing. The user doesn't think the request they just got on the app is "random" because they think they're doing the right thing. The whole idea of phishing is to trick the human user.
It gets to be difficult to constantly come back to management for money at a small company. The MFA/identity services want $3-12/user. EDR wants $4-12/user. Phishing training, same. VPN (Fortinet in my case), same.
The $57/user of MS E5 is starting to look like a steal.
Seriously though - in your comment, you described the landscape of knowledge whereby it is EASY for any given corp to mis-step without a 'you' on staff or contract.
Now, the NSA has released best practices around security in the past (like how to lock down a Linux box, for example).
Wouldn't it be prudent for the .gov to present a salient security posture recommendation in such a format as you present above?
Meaning - WHY THE FUCK DON'T WE HAVE A SECURITY.GOV WEBSITE THAT STATES EXACTLY HOW TO SECURE YOUR [THING] and CLICK HERE FOR EXPERTISE ON HOW TO DO SO???
We are in the Digital Nuke age. We should act as such.
I guess as long as IT engineers are kept outside of the “adult room” and treated like gifted children coaxed with shiny toys.
And besides, I only recently discovered - after proudly presenting myself as having a Masters in Engineering - that the title Engineer is poorly regarded, a synonym for machine operator.
> And besides, I only recently discovered - after proudly presenting myself as having a Masters in Engineering - that the title Engineer is poorly regarded, a synonym for machine operator.
And to me, the Masters is the damning part of the title =)
Well, it’s relative: when discussing IT we’re still groping in the dark like in the 1800s, when collapsing bridges and buildings or exploding locomotives were the norm and competence was viewed with contempt. So you might be arguing that disasters are the price of “move fast and break things” and I might be the one that disagrees.
Is this the first instance in history of "old professors" causing progress via any method other than dying? Who is meant by "competing professionals"? Is this a procedure that dentists do that e.g. hairdressers also do? (This would be more plausible if you named the procedure.) Hopefully you're not just lamenting the steady improvement in materials: it's not a problem that no one uses amalgam anymore. Also, there are several different types of dental schools operating in USA. The brand-new private schools can't really be held to the same standards as state schools and older private schools. They're just trying to pay off the bonds.
This sounds possibly reasonable... If the price is low, the supply is high enough already, adding more supply might not be helping anyone, dentists nor patients.
That depends, maybe the procedure is cheap because it's faster. Dentists would then be able to do it more often and still profit as individuals (higher volume). The ability of customers to save money, though, would result in less overall revenue for dentists as a whole. Hence the organised opposition.
Why? Dental work is not covered by the government here in Canada, and even private insurance usually caps coverage at a very low amount ($1-2k per year no matter how bad your teeth are, and a single root canal treatment can put you over that limit very quickly). So it really has nothing to do with it being American, and I think dentistry in general can have a problem when it comes to treatment necessity, over-diagnosis, and fixing things that may not need fixing.
Now that I think about it, I've actually rarely heard anything about single-payer dental coverage, which is weird considering just how important dental health is to quality of life. Is dental work free in countries like Germany, the Netherlands, Sweden, etc.?
In Norway, most health services are funded by the authorities.
The notable exception being dentistry. The dentists (mostly) fight tooth and nail not to be publicly funded; the reason, of course being that the state holds way more leverage than individuals - and rates are sure to plummet once the state foots the bill.
If I go to see my doctor, am admitted to hospital or whatever, I pay a small deductible - $20-40 - each time; once my expenses in a calendar year exceed approx. $300, I get a waiver valid for the rest of the year, capping my direct medical expenses (excluding dentistry) at $300 a year.
For comparison, last time I had a tooth filled a few years ago, my dentist charged me $300 for a 20 minute job.
Dentists outside the USA do this, too. I once had a dentist outside the US even admit this to me. She seemed embarrassed by her former classmates from dental school who were operating this way.
We do. The entire security industry is built around the NIST framework.
If you want more explicit guidance, IRS Pub 1075, DoD STIGs and other guides will literally tell you exactly what to do.
Egregious failure is always driven by lack of funding and/or motivation. Other failures are usually a result of a lack of defense in depth or compensating controls that missed something.
So we are in territory where a script kiddie could download a year-old data leak from some random torrent tracker and use it to log in and shut down vital national infrastructure, without even completing a hacking course on Udemy.
Large corps with thousands of employees and legacy infrastructure are usually not that hard to breach.
While I haven't kept up with cybersecurity in a while, I think the problem is largely the same. Security is not baked in; it's an afterthought at best, and because of that there's a shortage of talent.
Sure, penalties for negligent companies might help, but if someone leaves the front door open and gets robbed, you still go after the thieves too. Although the idea that the east coast pipeline infrastructure could be taken down by the rough equivalent of keeping the door open is pretty terrifying in its own right.
The door was closed and locked, but they a) didn't revoke an old key that was tied to it and b) didn't attempt to verify it was the expected person using it when it was tried on the door. I'm making the distinction because I don't believe they were explicitly inviting people to rob them so much as they didn't take sufficient precautions in a bad neighborhood.
Honestly, we can't let basic security turn into an arms race.
It's getting harder and harder to maintain good password practice. Part of good security also needs to involve proactively going after the criminals.
Anyone willing to break a window can break into my home. Internet security needs to be the same way: Secure enough to keep the honest people honest, and government agencies need to track down the hackers and imprison them. We can't expect normal people to perfectly implement complicated security procedures all the time, nor can we have pervasive technology and only run it with the world's smartest and most perfect people.
Of course there needs to be consequences for thieves but that doesn't absolve people (particularly in critical infrastructure) from the responsibility to implement reasonable protections.
If everyone keeps their doors unlocked in town and hundreds of burglaries occur as a result, the police should do something about it, but they may not be staffed to go after everyone. At some point they're going to be telling people to lock their houses please.
"Password practice" is thankfully becoming less and less relevant as hardware security tokens become more widespread, affordable and reliable. IMO there's zero excuse for a critical infrastructure operator not to use 2FA at this point. Whatever the password policy is, people (consumers too) should really not use passwords as their last line of defense.
I seldom lock my house (or car). So I'm in the same boat. But, I live in a town (technically city) of 20 000. I have 30 neighbors. When I'm in New York City, I lock my car. The same goes for the Internet. It's Huge. I have billions of neighbors. So I lock my door. Even though my house is not national security infrastructure.
> Honestly, we can't let basic security turn into an arms race.
Are you implying that expecting an organization that manages critical infrastructure to turn on 2FA and set secure passwords is an arms race? Hell, if I ran a local coffee shop, I'd still expect my employees to use 2FA and a password manager to log into company computer systems.
Harder than in the past. Look at it another way, some time in the past using just a password would have been considered 'good'. Now, using just a password is not considered 'good' in many domains, my mum's computer aside. In the future any password of a complication that could be remembered might be considered not 'good' enough. Someone in the future will probably look back and say what we use now is not good. It was only a short time ago people considered a password and an SMS good enough for banking.
> Look at it another way, some time in the past using just a password would have been considered 'good'.
Maybe, if I'm generous, the 1960s?
> In the future any password of a complication that could be remembered might be considered not 'good' enough.
Human memorable secrets weren't "good enough" before and they aren't "good enough" now and they won't become "good enough" in the future. This is not a novel insight even if it's new to you.
> It was only a short time ago people considered a password and an SMS good enough for banking.
Not "people", banks. Go back and look, the people who care about security would have told you SMS "second factor" isn't a good idea back then too, but the banks weren't looking for actual security, they wanted to reassure regulators and customers that they were on top of this.
It's that emotional support versus engineering viewpoint. The banks offered emotional support. Don't fret, we care about security and we'll make everything OK. Outfits like Google took the engineering viewpoint. Understand problem, identify solution, deploy it. U2F => WebAuthn. Emotional support is great when your dog died, while engineering is a poor substitute. But if a bridge fell down the emotional support rings a bit hollow, engineers can ensure the next bridge doesn't do that. I say user authentication is a "Bridge fell down" problem not a "My dog died" problem.
"Anyone willing to break a window can break into my home."
You secure your house to your taste at your own risk - you are free to leave the door open.
What we had happen here is the equivalent of a bank leaving the vault door unlocked and losing other people's money. In that case you would be able to go after the bank for negligence.
If your budget is tight you might consider the following for MFA for your VPNs:
pfSense VM at work with a single interface and an OpenVPN server listening, with a server cert minted from your AD CA - this is your VPN concentrator. You get your OpenVPN server to hand off auth to RADIUS, which is a DC with the NPS service installed. Mint SSL certs for all your domain-joined users and computers via your AD CA. Investigate this parameter on your OpenVPN clients: cryptoapicert "SUBJ:"
All of this stuff is documented in various places and I've dropped most of the clues!
You can actually fold in more factors with this. So the PC/user must have an SSL cert minted by the AD CA, and a password, and an OTP via something like Google Authenticator.
I'd start with short lived (say one month) AD minted SSL certs and then decide whether you need more factors.
Remember that a domain joined PC doesn't work very well at password change time if encryption and VPN connections rely on AD passwords.
Now, once you allow your folks on site - what do you allow them to access? Get your firewall rules sorted.
Compared to WireGuard with MFA? As far as I can tell, WireGuard doesn't support MFA at all. Pure key auth is not acceptable in these contexts because of stolen keys and devices.
Rotate keys regularly through a secure channel. At some point these discussions devolve into a variant of "Which ammunition is least likely to kill me when I inevitably shoot myself?"
I think instead of firewall rules, it's better to use WireGuard pre-shared keys. WireGuard supports per-peer 256-bit pre-shared keys that get hashed with the ECDH result when setting up the session keys. If you've set up WireGuard without pre-shared keys, then behind the scenes the all-zero pre-shared key is used.
These pre-shared keys are set in the WireGuard server from userspace via ioctls. If the peer doesn't know its assigned pre-shared key, it's unable to complete the WireGuard key negotiation. So, you can use this mechanism to have a userspace daemon enable and disable WireGuard peers.
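In practice such a daemon doesn't need raw ioctls; the stock `wg` CLI exposes the same add/remove operations, so a userspace daemon can shell out to it. A sketch (the interface name, peer key, and key path are placeholders; the commands need root to actually run):

```python
def wg_set_psk_cmd(interface, peer_pubkey, psk_path):
    """Build the `wg set` command that installs a preshared key for one peer.
    Pass psk_path="/dev/stdin" and feed the key via stdin to avoid an on-disk key."""
    return ["wg", "set", interface, "peer", peer_pubkey, "preshared-key", psk_path]

def wg_remove_peer_cmd(interface, peer_pubkey):
    """Build the command that removes the peer entirely (e.g. when its 24h window ends)."""
    return ["wg", "set", interface, "peer", peer_pubkey, "remove"]

cmd = wg_set_psk_cmd("wg0", "PeErPuBkEyBase64=", "/etc/wireguard/peer1.psk")
print(" ".join(cmd))
# In a real daemon: subprocess.run(cmd, check=True)
```

Enabling and disabling peers on a schedule is then just running these commands from a timer, with the preshared key material never leaving root-owned storage.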
This morning, I was looking into a daemon to use a post-quantum algorithm and 2FA to set up a new WireGuard peer for 24 hours. Grover's quantum search algorithm effectively cuts the 256-bit pre-shared key to 128 bits, but if you use post-quantum algorithms to secure the pre-shared key generation, even a powerful quantum attacker has to do 2**128 work for every 24-hour session they want to break. The post-quantum algorithms are a bit computationally expensive, but they only need to be done every 24 hours to still give you forward secrecy against a quantum attacker.
The idea is the daemon exposes long-term Curve25519, NTRU, and Classic McEliece public keys. The client generates a new Curve25519 keypair to use as the 24-hour WireGuard peer's identity. The client sets up an encrypted session using the hash of all 3 key exchange mechanisms as the session's symmetric key. The username and a timestamp are sent to the auth server, which responds with the user's password salt, along with ephemeral Curve25519, NTRU, and McEliece values. The pre-shared key for the 24-hour WireGuard peer is the hash of all three ephemeral key agreements (using the same client-side ephemeral keys used in setting up the encrypted channel) concatenated with the Argon2 hash of the user's password and the RFC 6238 TOTP (Google Authenticator) time-based value. If the client responds with a correct Poly1305 MAC (using the auth session key) on the 24-hour pre-shared key, then the daemon will set up the 24-hour peer using that pre-shared key (using ioctl syscalls to communicate with the WireGuard kernel module). After 24 hours, the daemon removes the new peer.
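Stripped of the key-exchange machinery, the final derivation step might look like the following. The three shared secrets are placeholder bytes standing in for the real X25519/NTRU/McEliece agreements, and scrypt stands in for Argon2 (which isn't in the Python standard library):

```python
import hashlib

def derive_psk(ss_x25519, ss_ntru, ss_mceliece, password, salt, totp_code):
    """Sketch of the derivation described above: hash the three ephemeral key
    agreements together with a memory-hard hash of password + TOTP code.
    scrypt is a stand-in for Argon2 here; WireGuard PSKs are 32 bytes."""
    pw_hash = hashlib.scrypt(password + totp_code.encode(), salt=salt,
                             n=2**14, r=8, p=1, dklen=32)
    return hashlib.sha256(ss_x25519 + ss_ntru + ss_mceliece + pw_hash).digest()

psk = derive_psk(b"\x01" * 32, b"\x02" * 32, b"\x03" * 32,
                 b"correct horse battery staple", b"per-user-salt", "492039")
print(len(psk))  # → 32
```

Because the password and TOTP code only enter through the hash, an attacker who breaks all three key exchanges still has to brute-force the memory-hard function per guess.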
Note that Grover's quantum search algorithm will cut the 256-bit pre-shared key and the ChaCha20-Poly1305 256-bit keys down to effectively 128 bits. Even if Curve25519 and one of the two post-quantum algorithms used are completely broken, you still get a secure channel. You lose perfect forward secrecy, keeping forward secrecy only in 24-hour chunks.
If Curve25519, NTRU, and McEliece are all broken, then you lose forward secrecy (and the attacker can read the usernames, but presumably traffic pattern analysis is pretty good at identifying the user anyway) and the password is the weak point, vulnerable to Grover's quantum search algorithm. The strongest password I've memorized and used in production was a wordlist generated from 128 bits from /dev/urandom. I'm weird; 99.9% of users would rebel at memorizing 160-bit to 256-bit passwords.
In order to prevent user enumeration, if a user doesn't exist, instead of returning an Argon2 password salt, return some concatenated SipHash values (with secret keys) of the username (so repeated queries for non-existing users give consistent results). I've also worked out how to use key rotation to get non-existing users changing their password salts every 60 days (as would happen when real users change their passwords), but with the offsets uniformly distributed, so all of the fake users don't appear to change their passwords the same day.
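The consistent fake-salt trick is only a few lines. HMAC-SHA256 stands in here for the SipHash construction mentioned above (SipHash isn't exposed in the Python stdlib), truncated to a plausible 16-byte salt length; the server key and sample salt are invented for illustration:

```python
import hashlib
import hmac

SERVER_SECRET = b"rotate-me-every-60-days"  # hypothetical server-side key

def salt_for(username, real_salts):
    """Return the real password salt if the user exists; otherwise a keyed hash
    of the username, so repeated probes for a nonexistent user always see the
    same stable, plausible-looking salt."""
    if username in real_salts:
        return real_salts[username]
    return hmac.new(SERVER_SECRET, username.encode(), hashlib.sha256).digest()[:16]

salts = {"alice": b"\xaa" * 16}
print(salt_for("mallory-probe", salts) == salt_for("mallory-probe", salts))  # → True
print(len(salt_for("mallory-probe", salts)))  # → 16
```

An attacker querying random usernames gets back salts indistinguishable in shape and stability from real ones, so the response alone leaks nothing about account existence.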
This user experience still sucks, and it doesn't have to.
Every company I've ever worked for hands me a corporate laptop and says "you must use this". The laptop always sucks. But that's not the point, the point is: the laptop is something I have. Why in the heck are we not doing remote attestation with TPMs?
Stealing the laptop would be too easy a method for gaining access though, wouldn’t it? With a key- or code-based 2FA method I would not have to carry the laptop everywhere to keep my account safe.
Wireguard, OpenVPN, IPSEC etc are just ways of shuffling packets of data with encryption.
What I'm describing is actually a really easy or at least really cheap way for an organisation to grab an extra factor when authenticating their VPNs.
There are loads of howtos (most are completely wank but there are some decent ones) on minting AD SSL certs for users and computers. That's something you "have".
You could simply use SSL certs or you could also insist on a username and maybe a password ("something you know").
You use RADIUS because that is a long, long-standing way of mediating authentication/authorization to a facility.
A password is the cause like a gun is the cause of a shooting.
Networks are targets - both because their assets make them attractive to attack, and because a network is inherently vulnerable (since it is a set of connected nodes).
So the cause is the network and the solution is to eliminate the network. Securely identified, properly authenticated, least privileged access, app-specific, ephemeral connections. Can such a connection still be breached? Of course. But it will be more difficult and it will be isolated, micro-segmented by definition, and unable to be leveraged to attack laterally.
That said, networks make things simple. And complexity is insecure. As soon as we can make the paradigm listed above as simple as networking, we will see a massive shift. We need it.
Password, allowing access to something with a very large 'blast radius', horrible.
Systematic failure of compartmentalization and roles/access control within the organization's network.
I do not agree that eliminating the network is the solution. Properly securing the network is. I could go into a 30 page essay on what might be considered within the category of 'proper', but suffice to say that organizations of this importance should have a serious infosec/netsec department.
Compromised password of an inactive account. Do regular audits and deactivate unused accounts, people! At the very least that takes care of the "disgruntled former employee" scenario. There's a lot of virtual ink in this article devoted to the "how were the credentials compromised" question. I understand the curiosity, but it's mostly unknowable and therefore not too interesting. It's like asking "Why did it rain on the day I didn't bring my umbrella?" Is that the question? "Why did it rain?" The interesting part is the point at which Colonial still had control and gave it up, which was when that employee left or whatever happened, and they didn't revoke those credentials.
I keep having a thought about how this account belonged to a fired employee who was not only bad at passwords, but bad at this job too. I have no idea if that's the case, as that's generally confidential info. Kinda funny to think of though.
The headline is only to drive home this bullshit narrative:
> It was the first time Colonial had shut down the entirety of its gasoline pipeline system in its 57-year history, Blount said. “We had no choice at that point,” he said. “It was absolutely the right thing to do. At that time, we had no idea who was attacking us or what their motives were.”
So they saw a Windows ransomware screen on their Windows office computers. Even a known ransomware, with a known decrypter they could have found with a simple search on their Android/iPhone.
So they decided it's a high probability that the ransomware will jump from the office PCs to the non-Windows internal pipeline control system. Foreign terrorists. Bullshit.
As we later found out, they decided to shut down the east coast infrastructure only because they could not calculate the exact gallons delivered to each customer via their office PCs. Rounding would have led to real money loss, apparently. FYI, in the real world everybody else does rounding and estimation. The sensor data is for verification and control.
> As we later found out, they decided to shut down the east coast infrastructure only because they could not calculate the exact gallons delivered to each customer via their office PCs.
What is the source for this? As far as I understood, they shut down all their computer systems to prevent the malware from spreading to them, which in most cases is the correct response.
Yes, MFA would have helped here, but I think it's beside the point. I've worked on a lot of projects, including one for a major stock exchange, and I've never seen perfect security. It's too slow. Even if you had MFA for the VPN, it doesn't stop local malware from using the foothold to gain access to the network once the MFA is cleared. I'm not saying that MFA is useless, just that these types of things are very hard to get right, and we only hear about the times the attackers got through. "Oh, they should have done X", where X is any one of an increasingly long list of possible things they should have done, ranging from security headers on their website, to MFA, to auditing their third-party libraries, to locking down their development machines and networks (during Covid, ha!). It's never ending. The very best at it get by without total calamity, but the long tail of organizations just don't have the right staff, focus, or resources to do it.
Edit: I'm not saying it's right! I don't think it is right, and I've worked with governments to bring in regulations in certain industries, but to characterize this all on one failure kinda misses the point. There is a broader, harder problem that I'm not sure is even solvable.
MFA and addressing password reuse is table stakes in 2021. Like this is literally the first item I would do as a CISO in any company.
MFA is among the core security items like running endpoint protection, updating your software or filtering spam. Bucketing it with things like CORS or auditing third party libraries is either disingenuous or extremely naive.
Look, MFA for access to code repositories or a cloud is one thing, but if you think that MFA is "table stakes" for a VPN I don't know what to tell you other than "then very few orgs are doing table stakes with their VPNs."
These types of arguments are so hard to have over HN, because people with actual experience across all sorts of organizations can't explicitly say what they have seen and where because of NDAs and we all have our own viewpoints and individual experience.
Bucketing MFA into one broad concept is as disingenuous as calling out security headers as being as benign as CORS. Does your org have MFA on SSH? On email? On the code repo? On the Slack login? On the VPN(s)? Etc. Etc.
I have yet to see a single org that had MFA on everything, and blaming a lack of MFA on the VPN misses the point. If they'd had it there, the attackers would have hit elsewhere. Most developers install random software and browser toolbars that they'll never have time to check, and connect directly to their companies' servers, repos, and communication tools. Pretending that MFA on a VPN would have magically solved everything is a pipe dream. There are too many ways in, and every time someone gets in, the response from the software community is always "oh, why didn't they do X," while ignoring the larger truth: there are too many threat vectors, and getting them all right, at all times, is nigh on impossible.
Yup. Every major tech company I’ve seen uses MFA + SSO for everything (including VPN and ssh). Many are even abandoning VPN because it doesn’t really give you what you want for security (vs something like BeyondCorp which is rapidly becoming the “right way” to do these things).
There’s no silver bullet that magically stops all attacks so your position feels a bit extreme. MFA would have stopped this particular attack (so would have disabling inactive accounts or scanning for password existence on leak databases).
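As an aside, scanning for passwords on leak databases doesn't require sending the password anywhere: services such as Have I Been Pwned's Pwned Passwords range API accept only the first five hex characters of the password's SHA-1 and return candidate hash suffixes, so the match happens client-side (k-anonymity). A rough sketch of the client side, assuming the standard `SUFFIX:COUNT` response format; the fake response below is illustrative, not real API output:

```python
import hashlib

def hibp_range_query_parts(password: str):
    """Split a password's SHA-1 into the 5-char prefix sent to the
    server and the 35-char suffix matched locally (k-anonymity)."""
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    return digest[:5], digest[5:]

def found_in_response(suffix: str, response_text: str) -> bool:
    """Scan the server's 'SUFFIX:COUNT' lines for our suffix;
    the full hash never leaves the client."""
    for line in response_text.splitlines():
        candidate, _, _count = line.partition(":")
        if candidate.strip() == suffix:
            return True
    return False

prefix, suffix = hibp_range_query_parts("password")
# SHA-1("password") = 5BAA61E4C9B93F3F0682250B6CF8331B7EE68FD8
assert prefix == "5BAA6"
# A real client would now GET https://api.pwnedpasswords.com/range/<prefix>
fake_response = f"{suffix}:3730471\n0018A45C4D1DEF81644B54AB7F969B88D65:1"
assert found_in_response(suffix, fake_response)
```

A hook like this in the password-change path would have flagged the reused credential without ever exposing it.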
Expect future attacks to focus on unrevoked lost/stolen yubikeys.
> Expect future attacks to focus on unrevoked lost/stolen yubikeys.
If the Yubikey is serving as a FIDO Security Key (rather than in one of its other modes) then this is much trickier than it might look.
Suppose you find my Security Key in the street, or left behind on public transit. It has no idea who it belongs to! All it is capable of doing is proving that it's still the same Security Key as before, which it is. I use that Security Key to sign into Google, Facebook, GitHub, and a dozen more, but the Security Key itself has no memory of any of that, it's a blank slate. The Security Key works perfectly well, maybe you can eBay it, but you can't realistically break into accounts with it.
The alternative is stealing keys to order. But this involves connecting up two very different skill sets, one of which requires physical proximity. That's a tall order for everybody but nation state adversaries. If you're an Indian organised crime syndicate, hacking some IT systems in Texas is likely practical for you via the Internet, but flying a member out to Dallas to try to steal a physical object from the right person is both unduly risky (local cops might be bought off, but the cops in Dallas don't know you from Adam) and surprisingly expensive.
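A toy model of why the found key is useless, assuming the common non-resident-credential design: the device stores a single master secret and re-derives each site's key material on demand from the relying party's ID, so there is no stored list of accounts to extract. This is a simplified illustration in Python, not the actual FIDO key-wrapping algorithm:

```python
import hmac, hashlib, os

class SecurityKeyModel:
    """Toy model of a stateless (non-resident) security key.

    The device keeps ONE master secret and no per-site records:
    each site's key is re-derived on demand from the relying
    party ID, so nothing on the device names past accounts.
    """
    def __init__(self):
        self.master_secret = os.urandom(32)  # the only stored state

    def derive_site_key(self, rp_id: str) -> bytes:
        # Per-site key = HMAC(master_secret, rp_id); nothing is written back.
        return hmac.new(self.master_secret, rp_id.encode(), hashlib.sha256).digest()

key = SecurityKeyModel()
a = key.derive_site_key("github.com")
b = key.derive_site_key("github.com")
c = key.derive_site_key("facebook.com")
assert a == b   # deterministic per site
assert a != c   # unrelated keys for different sites
# A finder cannot enumerate rp_ids: there is no record of past use.
```

The practical upshot is that possession of the device tells an attacker nothing about where it has been registered, which is exactly the point made above.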
You don’t think there’s a possibility of timing attacks that can be used (e.g. use the yubikey for an account under your control on the same service)? Might be hard for corporate keys, obviously…
Also there’s a baked in assumption that the HW for these is secure. They’re probably pretty good but I don’t know how resistant they would be to a physical attack to figure out the domains.
I agree that these attacks will get more expensive, but it’s still too early in the adoption cycle for me to conclude this is an unmitigated hard stop against these kinds of attacks vs. just a speed bump.
Another area of weakness one can expect is the use of automated system accounts. Those are still in the Wild West age in terms of being constructed willy-nilly, having lots of privileged access to internal systems without real access or auditing controls, no expiry, and sometimes even offering login access.
> You don’t think there’s a possibility of timing attacks that can be used (eg use the yubikey for an account under your control on the same service)? Might be hard for corporate keys obviously…
Did this feel like a fully formed, coherent thought when you wrote it down? Because it didn't come across that way. A Security Key has no idea about other accounts on this service, some other service, anything like that; it's intentionally oblivious.
> They’re probably pretty good but I don’t know how resistant they would be to a physical attack to figure out the domains.
I think you still don't get it. The Security Key has no idea what domains it has been used with. It isn't trying to hide them from you or from bad guys, they aren't secret, it literally doesn't know.
Uhh... yeah, we do (and I wouldn't even bucket my organization as one that is good at engineering/systems). VPN, email, Slack, and SSH are all behind multi-factor.
> Bucketing MFA into one broad concept is as disingenuous as calling out security headers as being as benign as CORS. Does your org have MFA on SSH? On email? On the code repo? On the Slack login? On the VPN(s)? Etc. Etc.
I have been doing security for 15+ years, and your mindset of "omg, it's so hard, so we can't do anything" is the exact sort of thing that got the Colonial Pipeline into the situation they are in.
> I have yet to see a single org that had MFA on everything and blaming a lack of MFA on the VPN misses the point. If they'd had it there, the attackers would have hit elsewhere. Most developers install random software and browser toolbars that they'll never have time to check and connect directly to their companies servers, repos, and communication tools. Pretending like MFA on a VPN would have magically solved everything is a pipedream. There are too many ways in, and every time someone gets in the response from the software community is always "oh why didn't they do X" while ignoring the larger truth: There are too many threat vectors and getting them all right, at all times, is nigh on impossible.
Well, we should just give up and do nothing, right?
You probably need to see more organizations. I am really not trying to be a dick, but I truly hope you aren't in a position of authority or in charge of security for any sizable organization.
I have seen about 30-40 medium-large organizations breached by account takeovers (ATOs) over the last 5 years or so. At this point if you don't have SSO backed by MFA on everything (email, vpn, slack, git/vcs, salesforce, marketing sites, etc.) you are playing with fire. With things like Okta, Duo and Yubikey, there are just no excuses.
No, I did not say we should do nothing. What I said was that people keep focusing on the one thing the attackers used to get in, but that one thing is beside the point.
I explicitly said that MFA raises the stakes.
The point I was trying to make was that whatever the attack was, the thing that gets reported on, is so very rarely an attack that engineers side with the defender. It's almost always something like "oh why didn't they do X" where X could be thousands of things.
I know you say you're "not trying to be a dick" and all, but this is what I wrote in my original comment:
> the long tail of organizations just don't have the right staff, focus, or resources to do it.
If you reread my original comment you'll see what I'm trying to get software developers to understand. Securing these systems is nigh on impossible. Orgs lack incentives to do it to the fullest degree. If it weren't, 0days for iOS or Windows would cost billions.
I'm not excusing these large organizations—though actually working with the people involved gives me some perspective as to why they don't always have MFA on their VPNs—I'm trying to get the broader point across that we shouldn't trust such critical infrastructure to vulnerable computers. That there should be regulations and audits with fines.
And honestly, the way you worded your final paragraph is kinda confusing. You've seen a couple dozen orgs with account takeovers and your conclusion is that they're all playing with fire? Maybe what I said in my original comment on the article has some merit. Regardless, I've seen enough to know that "the problem" isn't a random VPN lacking MFA.
The standard setup I’ve seen is MFA on VPN, with critical systems available only via VPN (like SSH), then all other services (email, GitHub, Slack, etc) are available only through SSO with its own MFA.
MFA for both VPN (code repos, servers) and MFA/SSO for web application access at my employer. My wife's employer is similar. I work in software; she's in finance.
I'm amazed this isn't the norm across all industries.
The room users, as in, of a meeting room? I think those are defined to be in the "meatspace private network", so you don't need SSO/MF so long as the device identifies itself as being a member of the room.
I haven't seen video conferencing hardware at co-working spaces, it seems everyone there just plugs in their laptop if need be.
I agree with you in the sense that those are literally the first thing I would do as a CISO anywhere too.
But I've long accepted that this is why you or I will never be CISO of an organisation like this. I've worked alongside such people, and they'd be horrified that the thread isn't all about a logon disclaimer on that VPN.
> Soon after the attack, Colonial embarked on an exhaustive examination of the pipeline, tracking 29,000 miles on the ground and through the air to look for visible damage.
Is that an extra zero? That's more than the Earth's circumference...
'In the wake of the attack on his company, Blount said he would like the U.S. government to go after hackers who have found safe haven in Russia. “Ultimately the government needs to focus on the actors themselves. As a private company, we don’t have a political capability of shutting down the host countries that have these bad actors in them.”'
To paraphrase Clausewitz, economic war is war by other means.
> "In order to disrupt the Soviet gas supply, its hard currency earnings from the West, and the internal Russian economy, the pipeline software that was to run the pumps, turbines and valves was programmed to go haywire, after a decent interval, to reset pump speeds and valve settings to produce pressures far beyond those acceptable to pipeline joints and welds," Reed writes.
"The result was the most monumental non-nuclear explosion and fire ever seen from space," he recalls, adding that US satellites picked up the explosion. Reed said that the blast occurred in the summer of 1982.
"While there were no physical casualties from the pipeline explosion, there was significant damage to the Soviet economy," he writes. "Its ultimate bankruptcy, not a bloody battle or nuclear exchange, is what brought the Cold War to an end. In time the Soviets came to understand that they had been stealing bogus technology, but now what were they to do? By implication, every cell of the Soviet leviathan might be infected. They had no way of knowing which equipment was sound, which was bogus. All was suspect, which was the intended endgame for the entire operation."
I feel like I should point out that this claim comes from an autobiographical book that has never been corroborated by any other source. When the claim involves "the most monumental non-nuclear explosion and fire ever seen from space," the lack of other sources is especially suspicious.
I personally find the claim that the Soviets were using SCADA control systems in 1982 somewhat suspicious as well.
> The hack that took down the largest fuel pipeline in the U.S.
This needs to be remembered: the pipeline was okay the whole time. It was the accounting software that was taken down. The conspiracy theorist in me thinks that most of the PR activity around the case was (and is) intended to paper over this fact.
The big issue here is that a single access could so easily be spun up into complete access. In a large company you have to assume that even the highest wall will eventually be breached. A current employee will eventually provide access to someone up to no good. Or that current employee will themselves get up to no good.
Corporate messaging is a good example of this sort of thing. Completely unencrypted inside the wall. Tons of login information just left laying around in old plaintext messages.
The account didn't have any kind of 2fa - even SMS would have stopped this breach. Obviously hardware keys are the gold standard but I wouldn't trivialize the amount of work it takes to roll that out to an organization.
For any sysadmins or future sysadmins reading this though, do NOT put your service's 2FA on SMS. Do not use SMS for anything other than sending your grandparents a happy birthday message. It is trivially broken with a sim card hijack and we have seen many high profile incidents of people being hacked precisely through this vector.
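For comparison, app-based TOTP (what Google Authenticator and friends implement) has none of SMS's SIM-hijack exposure; it needs only a shared secret and a clock. A minimal RFC 6238 sketch in Python, checked against the RFCs' own test vectors (for production, use a maintained library rather than rolling your own):

```python
import hmac, hashlib, struct, time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over an 8-byte counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, for_time: float = None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HOTP keyed on the current 30-second time step."""
    t = time.time() if for_time is None else for_time
    return hotp(secret, int(t // step), digits)

# RFC test vectors (20-byte ASCII secret "12345678901234567890"):
assert hotp(b"12345678901234567890", 0) == "755224"                        # RFC 4226
assert totp(b"12345678901234567890", for_time=59, digits=8) == "94287082"  # RFC 6238
```

Because both sides compute the code independently, nothing travels over the phone network at login time, which removes the SIM-swap vector entirely.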
I would, it's pretty trivial (I have done it for multiple organizations of different sizes/technical competencies). Especially for something like a VPN; at this point if you use a VPN solution which doesn't support MFA it almost certainly has other security issues as well.
These days cheapskate companies will insist on an authenticator app on an employee-provided phone rather than take the hit on capital expenditure for a proper key. It would be nice to have legislation that bans that practice.
I have a family member who had a very junior role at a big finance company 15 years ago and they used hardware keys. Seems like national infrastructure should require similar levels of security.
As others have said, MFA would be a decent tool but not all attacks are against VPN so it wouldn't always be so useful. As everyone always teaches, security has to be in-depth and the one largely simple control that is very often missing is segmentation of the system.
Being able to attack a single department is annoying but unlikely to be considered fatal. However, I think that even savvy IT ops people do not really understand VLANs, using switches and air gaps to keep things apart, and really concentrating on any single points of ingress like SharePoint sites or whatever, although most problems seem to spread across the network layer rather than bridging machines over HTTP.
Also, even some simple controls like basic local backups, even to a crappy SAN box in your server room, would at least enable you to stick your fingers up at ransomware.
I wonder if a useful countermeasure would be slower internet connectivity in such places. It would be very hard to exfiltrate 100 GB at 8 kbit speeds, while 8 kbit is plenty for the simple few-byte commands one would need to manage a pipeline.
From what I understand, the hackers never got anywhere near the control systems. The systems they compromised were related to billing, and probably the company's SharePoint and emails (it's hard to think what else could make up the balance of the 100 GB outside of the monitoring of the control systems).
You could still slow down attacks like this by limiting bandwidth (though 8 kbit/s might be painful for document sharing). However, if you limited users to a _slightly_ more reasonable 1 megabit per second, they'd need a little over 9 days to complete their exfiltration. Keep in mind, Colonial only learned of the attack after the ransom demand was made!
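The back-of-envelope math behind both figures, taking 100 GB as 10^11 bytes and ignoring protocol overhead:

```python
def transfer_days(bytes_total: float, bits_per_second: float) -> float:
    """Time to move bytes_total over a sustained link rate, in days."""
    seconds = bytes_total * 8 / bits_per_second
    return seconds / 86_400  # seconds per day

GB = 10**9
# 100 GB over a 1 Mbit/s cap: a bit over nine days.
assert 9 < transfer_days(100 * GB, 1_000_000) < 10
# Over 8 kbit/s it becomes wholly impractical: roughly three years.
assert transfer_days(100 * GB, 8_000) / 365 > 3
```

Either cap turns a quiet weekend exfiltration into something long enough for monitoring to notice.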
The article says the ransomware message appeared on a computer in the control room. That seems strange: do the pipeline operators adjust fuel flow based on how much cash is coming in? Is this a common practice? Or maybe it was not the billing system that was compromised? I guess it is possible billing would be used to predict required flow rate in some way, but that means any hack of the internet-connected billing system has a direct route to controlling fuel flow in the pipeline, which seems rather dangerous. Post some fake orders and suddenly there are millions of gallons of fuel wherever you say they should be. Does the pipeline also run in reverse? If not, then that fuel could be stranded.
> The article says the ransomware message appeared on a computer in the control room. That seems strange
That computer was probably their "office" computer, which they would use for things like email. From what I understood from the article ("[...] an employee in Colonial’s control room saw a ransom note demanding cryptocurrency appear on a computer just before 5 a.m. The employee notified an operations supervisor who immediately began to start the process of shutting down the pipeline, [...]. By 6:10 a.m., the entire pipeline had been shut down"), the supervisor made the sensible decision of immediately starting an orderly shutdown of the pipeline, before the thing propagated to their control computers; as can be seen by the timing, doing an orderly shutdown takes a while.
The hack that took down the largest fuel pipeline in the U.S. and led to shortages across the East Coast was the result of a single compromised password, according to a cybersecurity consultant who responded to the attack.
This is kind of like blaming unrecoverable data on the employee who accidentally deleted it: the problems are much bigger than a single compromised password (with the implied single-employee blaming), and the credibility of reporters or security consultants proposing such causes risks being undermined by these kinds of simplifications.
Depends how you read it. When I see “was the result of a compromised password” I think “they should’ve used something mitigates the risk of compromised passwords”. I don’t think it’s blaming the employee but rather blaming the company’s fragile security model. Everyone should use MFA on as many things as possible
Not everything needs to be connected to the internet. They own right of ways for a pipeline. Go bury some fiber along the pipe. All critical company communications can now be handled internally.
O&G was temporarily brought to its knees by a single password compromise. This is sad. All of their money and they can’t even implement basic edge security protocols.
At this point I either use saved passwords or need to rotate the passwords on every request via password reset. Saved passwords are also an attack vector and as such my corporate laptop disables all mechanisms for saving passwords.
I think it's time to make PIN + second factor the standard over passwords.