NSA Network Infrastructure Security Guidance [pdf] (defense.gov)
130 points by slacka on March 6, 2022 | 55 comments



I have some experience here, though not with the NSA. BYOD is forbidden, and checked when you enter the building. There is only wired hardware - and wireless is, well, jammed. Any software introduced into such networks is vetted before introduction/deployment. It takes time. 3rd party apps don't auto-update - Microsoft, for example, provides updates that can be vetted before being allowed onto the network.

And this is only some bleedingly obvious stuff...

The document does not describe SECRET or TOP SECRET environments. Not even RESTRICTED. R, S and TS policies are themselves marked with protective markings, which this PDF lacks.

Governments have a lower level of protection called PROTECTED or similar that is closer to what the document describes, but even that would be protectively marked...

Looks to me like the NSA is sharing some of its less sensitive stuff, possibly to help its vendors, business partners and the public at large. Kind of like "we recommend Joe Public do it like so..."


To clarify a bit, you are talking about classified network design. That isn't what this document is about.

The author of this document, the Information Assurance Directorate (www.iad.gov), is focused on helping domestic government, its contractors, and private industry secure networks and devices against intrusion by hostile actors.


I'm talking about op-sec. Network design is but a single aspect of that, and often doesn't feature at all. I mean this more in the sense described by Wikipedia¹, and not the likes of SecurityStudio²/Fortinet³ (first two search results).

¹ https://en.wikipedia.org/wiki/Operations_security ² https://securitystudio.com/operational-security/ ³ https://www.fortinet.com/resources/cyberglossary/operational...


I’m not sure how that’s relevant?


It's relevant because your GP asked specifically whether I was talking about network design.


The point of the document is for industry and public consumption, not spooks. The GCHQ, ACSC and other agencies release similar publications.


> and wireless is, well, jammed

Uhm... wouldn't this, like, kill people with a pacemaker?

I was always told that jammers are dangerous for people with a pacemaker.


The way it's usually implemented is two sides of a wall have a mesh in them, similar to chicken wire.

Jams all signals, doesn't require a beacon to be constantly transmitting.


I'd assumed "jamming" is when something actively disrupts communication, not passively prohibits the signal from reaching its intended recipient, which sounds more like "shielding".

(Maybe this is unnecessary nitpicking, and everyone else understands that it was informal; English is not my first language.)


Native English speaker: you're correct (i.e., I agree with you).


So it's a faraday cage?


Yes, but part of the infrastructure of the building.


Are there guidelines on how to secure S and TS environments?


There’s heaps of content in the NIST 800 series, which is all available online


It’s covered in the NIST 800 series, which is available online


Yes. I assume you're asking whether they're publicly available, which is a no. It's not an easy world to penetrate, for good reason.


Your website has great structure to communicate your values effectively; it must’ve helped you land some great opportunities.


Thank you. The only value I set out with when building the more recent version of the site was to share what I know. Too many people learn a thing and keep it to themselves.


Use of firewalls in series from multiple vendors sounds like a good idea in theory, but in practice doesn't it make it easier for an attacker to exploit the network? Instead of the attacker having to find a vulnerability in a single product, they can instead find a vulnerability in one of multiple products (a much easier task). Given that every network device communicates with a central authentication/authorisation system, a central logging system and likely central patching, configuration deployment, etc. systems, all it takes is finding a vulnerability in one of multiple firewall products, then finding a vulnerability in one of the central systems used to manage the network.

I'm also perplexed by the mention of a "traffic inspector" and "full-packet capture device" given that almost all traffic traversing a network nowadays is encrypted. Perhaps more useful today would be building a good understanding of the normal traffic flows so that alarms can be configured for abnormal traffic. For example, perhaps no more than 100 requests to an authentication server occur per device per day. Or patches for a system are no more than 1GB, so seeing 1.1GB or more transferred across the management network per device per day would be abnormal.
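For illustration, here's a rough sketch of the kind of per-device threshold check I have in mind (the thresholds and record fields are invented for the example, not taken from the document):

    # Toy flow-baseline check: flag devices whose daily auth-request count
    # or management-network byte count exceeds an assumed baseline.
    from collections import defaultdict

    AUTH_REQS_PER_DEVICE_PER_DAY = 100              # assumed baseline
    MGMT_BYTES_PER_DEVICE_PER_DAY = 1_000_000_000   # ~1 GB, assumed baseline

    def find_anomalies(flow_records):
        """flow_records: iterable of dicts like
        {"device": "fw-01", "kind": "auth" or "mgmt", "bytes": 4096}."""
        auth_counts = defaultdict(int)
        mgmt_bytes = defaultdict(int)
        for rec in flow_records:
            if rec["kind"] == "auth":
                auth_counts[rec["device"]] += 1
            elif rec["kind"] == "mgmt":
                mgmt_bytes[rec["device"]] += rec["bytes"]
        alerts = [(dev, "auth requests", n)
                  for dev, n in auth_counts.items()
                  if n > AUTH_REQS_PER_DEVICE_PER_DAY]
        alerts += [(dev, "mgmt bytes", b)
                   for dev, b in mgmt_bytes.items()
                   if b > MGMT_BYTES_PER_DEVICE_PER_DAY]
        return alerts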


> Instead of the attacker having to find a vulnerability in a single product, they can instead find a vulnerability in one of multiple products (a much easier task)

I’m not sure I understand the argument you’re making. If you have them in series then you have to find vulnerabilities in all of them before you can talk to any other system. A vulnerability in one of them only lets you punch through that one layer. In theory, of course. I think this advice may not be well fleshed out, though. For example, if the firewalls are running on the same appliance and have poor isolation/misconfigured settings, an escape in one can bypass the entire set of firewalls (which is maybe what you’re implying?). Having totally isolated firewall machines is inefficient from a capex perspective (more machines than needed for your load), and in general multiple vendors increase your opex costs too (you now need employees who are proficient with multiple vendors, which ends up meaning teams with dedicated experts per firewall, automation built for multiple non-standard APIs, etc.).

The bigger problem, though, is that you’ve sacrificed availability: with N layers, every layer you’ve added has to be functioning perfectly or the entire system is down (and if it’s not, you’ve implemented this advice incorrectly).

I think not mentioning this security/availability/cost trade off is a disservice.


I guess he meant that each product has its own attack surface, so using n different products multiplies your attack surface by n. That would be true if you mixed them at the same layer. Even if you deploy them at different layers, you still have n different types of devices to maintain instead of one. And it would provide little benefit if the different layers of the network were not segregated correctly, as the chance of breaching the very first layer remains the same.

On top of the points you made, there is not much point in deploying such a strategy unless you have more than enough budget.


The scenario I had in mind is Firewall A and Firewall B in series, where they're different products from two vendors. Firewall B is found to contain a vulnerability triggered by a specially crafted IPv6 TCP packet (which firewall A is happy to pass along), giving the attacker a level of control over Firewall B that lets them access the centralised authentication/authorisation system that would otherwise have been off limits. The attacker communicates with firewall B over the same protocol that firewalls A and B are configured to allow, and firewall B communicates with the authentication/authorisation system using the accepted protocol for doing so. Nothing suspicious appears to be going on unless you look at the patterns of traffic (throughput, duration of connections, number of connections per interval, etc.), particularly to the authentication/authorisation system.

I don't think in-band attacks on routers, switches, and firewalls doing simple ACL checks are a risk worth spending much concern on, because parsing IPv4/IPv6/TCP/UDP headers is not hard to implement securely. A riskier architecture would be intrusion detection systems that perform deep packet inspection in-band (i.e. not out-of-band via a beam splitter to a standalone IDS), where there are millions of lines of potentially buggy code exposed to attackers.

I agree with your point that a complex network is harder to secure because you need more skill and expertise spread across more people. In such a scenario, it is easier for human mistakes to occur because of the difficulty of communicating, and the difficulty of seeing the broader implications of what may appear to be a simple configuration change.


In your scenario though, if the company just used firewall B you’d have the same issue. It can’t be less secure.

I’m not familiar enough with how enterprise firewalls work to evaluate things from that perspective. I just know operationally, the more moving pieces, the more issues you have to deal with.


The company would be less secure because it would also be carrying the risk that, independent of the vulnerability in firewall B, there is potentially an unknown vulnerability in firewall A being exploited by a different attacker.


As you increase the number of systems in series, you increase the likelihood of failure. In this case failure is "there's an exploit." When you have A->B->C, it's possible that B, the middle layer, can be exploited by going through A and then used to pass traffic through C, letting you launch an attack or exfiltrate from the network.

In practice it may be different, but it seems like security theater to just put a bunch of firewalls in series.


The security failure rate here cannot drop below the weakest link in the chain. In other words in your example A -> B -> C is no different than just B.

The likelihood of failure of independent systems in series is 1 - product(1 - p(failure of one component)) [1]. It only degrades to the probability of a single link if the systems are correlated, which they're not here.

I don’t think regular failure analysis applies here (more components = more failure) because the security failure rate gets better with more components and the worst case is that of your weakest link. In the traditional failure analysis you’re arguing for, the best case is the failure rate of the weakest component and it gets worse from there.

[1] https://www.sciencedirect.com/topics/engineering/failure-pro...
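A toy calculation of the two notions of "failure" in play here (the 0.1 figures are invented for the example):

    # Independent firewalls in series: p_i is the probability an attacker
    # can get through firewall i.
    from math import prod

    def p_breach_series(probs):
        # Security failure: the attacker must get through every layer,
        # so the probabilities multiply.
        return prod(probs)

    def p_any_layer_fails(probs):
        # Classical series-reliability failure: the system is down if
        # any one layer stops working.
        return 1 - prod(1 - p for p in probs)

    # Two independent firewalls, each breachable with probability 0.1:
    print(p_breach_series([0.1, 0.1]))    # ~0.01 -> harder to breach than one
    print(p_any_layer_fails([0.1, 0.1]))  # ~0.19 -> but more likely to break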


> because the security failure rate gets better with more components

[citation needed]

This isn't obvious to me, and as I explained (in a way that agrees with your first paragraph) more components could just mean more opportunities to get hacked. You'd have to go out of your way to prove that the security does overlap, and you can't trivially go around one component if the other is hacked.


If hacking one component gives you full access, then yes, more components is worse. If hacking one component means you need to hack the next as well, then security gets better. This is the very definition of defense in depth.


It's security through obscurity more than defense in depth, because the failure mode of the two firewalls in series is the same as observed by other systems on the network. Defense in depth would better describe a situation where, even if a firewall were compromised, standalone[1] web servers on the network would only allow communication if first presented with a client TLS certificate that the firewall, in all practical circumstances, can never obtain[2] (a rough sketch follows after the footnotes).

There is a school of thought that by making the system ridiculously complex with dozens of ICT security products from multiple vendors, an attacker will have nightmares planning how to traverse the network without raising alarms. But an unnecessarily complex ICT system is also a nightmare to manage, so the chance of getting caught traversing it is possibly lower: no one truly understands whether an alarm is significant, monitoring in general becomes much harder, and real alarms get lost amongst the noise and confusion.

[1] (meaning: not accessible to or managed by the same central systems as the firewalls)

[2] (example: clients cannot accept inbound connections from the firewall and certificate management is isolated from the network)
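A minimal sketch of that client-certificate gate, using Python's standard ssl module for illustration (file paths are hypothetical):

    # The server refuses any peer that cannot present a certificate signed
    # by the internal client CA - a compromised firewall without the client
    # key never gets past the handshake.
    import socket, ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("server-cert.pem", "server-key.pem")  # hypothetical paths
    ctx.load_verify_locations("internal-client-ca.pem")       # hypothetical path
    ctx.verify_mode = ssl.CERT_REQUIRED  # no valid client cert -> handshake fails

    with socket.create_server(("0.0.0.0", 8443)) as srv:
        with ctx.wrap_socket(srv, server_side=True) as tls_srv:
            conn, addr = tls_srv.accept()
            print(addr, conn.getpeercert())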


Firewalls are good spoils: your payload can then monitor, exfiltrate, or meddle with traffic, or use the device as a stepping stone to network infrastructure/administrators.


They are describing a setup like this: https://imgur.com/a/R2jbfzj


In that example network, the Chinese, US, Russian, Israeli and other firewall vendors with their products in series could all have access to the network as each of them has implemented a backdoor that triggers on a hidden condition such as HMAC(secret_backdoor_key, CONCAT(ip.source_address, tcp.source_port, ip.destination_address, tcp.destination_port)) == tcp.sequence_number.

Obviously if such a scenario were to play out, a backdoor would be designed to look like a mistaken bug in the software (also making it very hard to detect) rather than the simple example.

Fault-tolerant architectures such as N-modular redundancy[1] (typically used in the aerospace sector) may be a more appropriate model here. Making protocols much more deterministic and only using nothing-up-my-sleeve numbers[2] would be a good line of investigation.

[1] https://en.wikipedia.org/wiki/Triple_modular_redundancy

[2] https://en.wikipedia.org/wiki/Nothing-up-my-sleeve_number
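To make the hidden-condition example concrete, here's a toy sketch of such a trigger check (the key, and the truncation of the MAC to 32 bits, are invented purely for illustration):

    # Fire the backdoor only when the packet's TCP sequence number equals
    # an HMAC over its 4-tuple - indistinguishable from noise to anyone
    # without the key.
    import hmac, hashlib, socket, struct

    SECRET_BACKDOOR_KEY = b"vendor-planted-secret"  # hypothetical

    def trigger_fires(src_ip, src_port, dst_ip, dst_port, seq_number):
        msg = (socket.inet_aton(src_ip) + struct.pack("!H", src_port) +
               socket.inet_aton(dst_ip) + struct.pack("!H", dst_port))
        mac = hmac.new(SECRET_BACKDOOR_KEY, msg, hashlib.sha256).digest()
        expected_seq = struct.unpack("!I", mac[:4])[0]  # fold MAC into 32 bits
        return seq_number == expected_seq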


When we are implementing this are there diminishing returns on the luck granted by the pfSense firewall?

If not, wouldn’t it be better to stack pfSense and rely on luck alone. This would also save us money as we could run them all on the same server in VMs.

Sincerely, your customer


Many big companies have what is basically a MITM attack on SSL. They re-sign everything between you and the network endpoint using an enterprise certificate. It's still encrypted, but at some point it's decrypted, inspected, and then re-encrypted. It's mostly seamless, but sometimes you have to get a copy of the certificate to add it to things like pip.
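For example, "adding the certificate" ends up looking something like this (the CA bundle path is hypothetical, and the requests library is just one client that needs it):

    # Point certificate verification at the exported corporate root so the
    # proxy's re-signed certificates validate instead of failing.
    import requests

    CORP_CA_BUNDLE = "/etc/ssl/certs/corp-root-ca.pem"  # hypothetical path

    resp = requests.get("https://pypi.org/simple/", verify=CORP_CA_BUNDLE)
    print(resp.status_code)

    # pip accepts the same bundle, e.g.:
    #   pip install --cert /etc/ssl/certs/corp-root-ca.pem requests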


> Many big companies have what is basically a MITM attack on SSL. They re-sign everything between you and the network endpoint using an enterprise certificate. It's still encrypted, but at some point it's decrypted, inspected, and then re-encrypted. It's mostly seamless, but sometimes you have to get a copy of the certificate to add it to things like pip.

Yes. And this is a good thing (in an enterprise environment) for a couple of reasons (a few others too, but these should make the point):

- Proxying connections is inherently more secure than packet filtering/stateful inspection;

- As was pointed out in another comment[0], most traffic is encrypted these days. Any traffic traversing the enterprise network needs to be visible (unencrypted) to network administrators. This allows them to identify, investigate and, potentially, block suspicious traffic.

Any large organization (or even smaller ones with the need and the resources) should be able to decrypt and read every packet traversing their internal network and the ingress/egress points to/from that network.

That requires ubiquitous use of proxies for all connections that have an external endpoint.

It's likely that some sensitive internal networks/servers/applications should do so as well.

While that's very intrusive, it's not your home network, nor is it (semi-)public wifi. It's the wholly owned property (I'm not addressing "cloud-based" resources here -- that's a different long discussion) of that organization.

As such, they have every right to read every packet on their network and every byte on their storage.

Which is why many organizations have "guest" networks which not only have no access to internal resources, but also only allow connections to external networks, bypassing all the proxies/firewalls as well.

This allows employees, contractors and others to maintain their privacy on their own devices without impacting the security of the internal network.

Of course, this requires device authentication at the network layer (e.g., 802.1x) to ensure that only authorized devices are permitted access to the internal network.

None of the above is particularly new or particularly controversial.

[0] https://news.ycombinator.com/item?id=30574794


> As was pointed out in another comment[0], most traffic is encrypted these days. Any traffic traversing the enterprise network needs to be visible (unencrypted) to network administrators.

Defense in depth means the network administrators do not get access to everything, because if -they- are compromised then everything is compromised. So you should have multiple admins with limited power over their own subnets, and turning off encryption should be like getting a warrant from a judge. This also helps prevent stalking. Network administrators are a centralized primary vulnerability that needs to be mitigated.


>Defense in depth means the network administrators do not get access to everything, because if -they- are compromised then everything is compromised. So you should have multiple admins with limited power over their own subnets, and turning off encryption should be like getting a warrant from a judge. This also helps prevent stalking. Network administrators are a centralized primary vulnerability that needs to be mitigated.

A reasonable point. There certainly need to be policies and controls around who can access decrypted network traffic, as well as under what circumstances access to such traffic should be granted.

However, that doesn't mean the capability to do so shouldn't be there.

That said, good policies/controls can significantly mitigate the risks you addressed. Those can (and should) include separation of duties, clearly defined roles and granular access controls.

What's more, you need to hire honest, decent people. Fortunately, most folks are.

If one or more of your employees violates policy (and potentially the law) and exfiltrates corporate data/information, depending on severity, that employee(s) should be disciplined, fired and/or prosecuted.

And "turning off encryption" isn't anything to which I described, or even alluded. Rather, I discussed using proxies to mediate connections between internal and external sides of encrypted (and unencrypted ones for that matter). connections is good security practice.

>turning off encryption should be like getting a warrant from a judge.

That's a ridiculous statement on its face.

Organizations (and by extension, those designated by the organization to do so) have the legal right to examine all data that traverses their networks. Full stop.

Edit: Correct awkward phrasing.


>Organizations (and by extension, those designated by the organization to do so) have the legal right to examine all data that traverses their networks. Full stop.

I didn't say it should require going to an actual judge; I meant that digging into packets should require approval, and the approval should not be given easily (maybe a "break glass" system).

I do think that companies' legal right to view all data on their networks may change in the near future, especially in the USA if it ever gets decent privacy laws, and in the UK. Businesses may also move away from it because it could be a source of liability. It's rather like the rethinking going on around collecting more data than needed from customers rather than the minimum: the extra data becomes something the police can demand, that can be subpoenaed, that the company is liable for if stolen or modified or otherwise compromised, and whose accuracy the company may be required to maintain.

It also seems like examining decrypted network traffic to manage the network would only be necessary for some types of data, not all data. There can be internal company liabilities too: if an admin has carte blanche to look at lots of data and some crucial proprietary company data is leaked to the outside, just having had the potential for access makes the admin a suspect, innocent or not. Minimizing access and limiting the types of data that can be viewed to what is necessary seems logical, again with a break-glass mechanism for unusual situations.

It's mainly a historical artifact that network administrators have had such power, and in many cases it is far more power than is actually needed. Hiring honest, decent people is always good, but they can be blackmailed or simply ordered by the company to do unethical things, or ordered to grant access to someone who will do unethical things.


My apologies.

Apparently, I didn't clearly express my thoughts, as you've definitely misunderstood the processes and ideas I was attempting to describe.

I'll preface my attempt to express myself more clearly by reiterating that data on or traversing company-owned property must be available to the company and/or its designees, even if that data is of a personal nature, as long as there is a legitimate business need to inspect such data.

That said, I am not suggesting (nor am I sure why anyone would imagine that it's desirable or, in a large organization, even feasible) the decryption, capture and storage of every packet traversing such a network.

Rather, I'm merely pointing out that the owner of private property has the right to inspect such data. Even more, in certain circumstances (I'll address those below), an organization must do so to identify potential threats/incursions/data thefts and take appropriate action to mitigate or prevent such activity.

The circumstances under which this may be required include identifying potential compromise attempts (already widely done via IPS/deep packet inspection systems), malware payloads, data exfiltration and a variety of other threats.

I'm not claiming (or even implying) that all traffic should be reconstructed and manually reviewed.

That said, automated tools can (and should) be used to identify potential threats and network traffic tagged as such should be reviewed by the network/security personnel specifically designated and authorized to perform such activities.

I'd add that usage of corporate resources should be restricted to activities specifically related to the business of the organization, on devices that are owned and managed by that organization.

Other activities should be explicitly disallowed by policy and non-company owned devices should be barred from access to the internal network.

Which is why I specifically highlighted the use of "guest" networks in my initial comment[0]. That network is to be used by external users, as well as by the personal devices of internal users for non-company business.

I do not advocate (unless the threat model requires it) ubiquitous surveillance of users. And all users should be informed of the policies and mechanisms in place that might trigger review of network data, as well as the potential repercussions of policy and/or legal violations.

When such reviews are triggered, any data collected around such potential threats/policy violations needs to be appropriately safeguarded, not only to protect the privacy of users, but also to protect the integrity of any investigation to be performed.

Again, we're talking about internal networks and systems wholly owned by the organization, and any device not in that set should be barred from connectivity to the internal network and forced to use the "guest" network.

As I also mentioned in my initial post[0], external cloud-based infrastructure is not in scope here, but is also an important topic that deserves an entire discussion itself. And Internet-facing applications accessed by external parties (e.g., customers) are also not in scope here. I am specifically referring to internal corporate users on company-owned devices.

As for bad actors and/or those who might "be blackmailed or simply ordered by the company to do unethical things, or ordered to grant access to someone who will do unethical things," organizational policies and the law should govern the consequences of such actions.

Until we have general AI[1] that can perform such tasks, we'll just have to use humans and (with appropriate policies/controls) trust them to do their jobs. When they don't, just as it's always been, there are consequences (including discipline, termination and civil/criminal legal action) for such bad behavior.

I hope I've been able to more clearly express myself this time.

[0] https://news.ycombinator.com/item?id=30575249

[1] https://en.wikipedia.org/wiki/Artificial_general_intelligenc...

Edit: Cleaned up some prose, clarified some arguments.


The MITM proxy architecture creates a central system (or systems) with privileged in-band[1] access to all clear text traffic of the organisation. This central system would have an enormous attack surface because of the millions of lines of code[2] required to handle the TLS protocol, parse protocol messages, etc.

The MITM proxy architecture also introduces an extreme risk of exposing one of the most sensitive private keys of the organisation (trusted by all computers of the organisation) in the event the MITM proxy system is compromised.

I'm more inclined to consider the MITM proxy architecture as an anti-pattern as it undermines proper isolation between systems on the network and undermines the concept of perfect forward secrecy.

[1] (meaning: directly in the flow of data such that the data can be manipulated by an attacker)

[2] By example, OpenSSL consists of 0.7 million lines of code per https://www.openhub.net/p/openssl/analyses/latest/languages_... and Wireshark consists of 4.5 million lines of code per https://www.openhub.net/p/wireshark/analyses/latest/language...


> Instead of the attacker having to find a vulnerability in a single product, ...

... they must now find at least one vulnerability in each of the firewalls.

It's not "pwn firewall A OR B", it's "pwn firewall A AND B".


WRT traffic inspection: if anything other than handshake packets shows up unencrypted, the inspector should be able to raise an alert.
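A rough sketch of what that check could look like (purely illustrative; a real inspector would track handshake state rather than eyeball record headers):

    # Flag port-443 payloads that don't even look like TLS records
    # (e.g. plaintext HTTP that should have been encrypted).
    TLS_RECORD_TYPES = {0x14, 0x15, 0x16, 0x17}  # CCS, alert, handshake, app data

    def looks_like_tls(payload: bytes) -> bool:
        return (len(payload) >= 3
                and payload[0] in TLS_RECORD_TYPES
                and payload[1] == 0x03)          # TLS major version byte

    def check_flow(dst_port: int, payload: bytes) -> None:
        if dst_port == 443 and payload and not looks_like_tls(payload):
            print("ALERT: unencrypted-looking traffic on 443:", payload[:16])

    check_flow(443, b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n")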


That’s not how “in series” works.


What, no threat model? I'm really not sure who this is for. If you're actually in charge of network and/or security architecture, most of this is too simplistic. But if you're a newbie, it leaves out the most important context (particularly, the threat model) that drives the entire process of securing something. And personally, I'm just not comfortable doing things unless I know why I'm doing them, and that's what the threat model provides.


What do you all see as missing from this document? How about the security of apps and of 3rd party update services? Every app that has internet access is a potential vulnerability for the whole network, a direct route to the inside. And we've also seen recently that 3rd party software update services can compromise everything. Are those not considered part of "Network Infrastructure Security"? If you're talking about defense in depth you can't limit the discussion to just the wired network hardware itself because there are all sorts of other things that can compromise the network (insider attacks, BYOD, compromised hardware, phishing, blackmail). You could follow every single step in this document perfectly and your network may still be easily compromised. There's no mention of wireless networks either. Your network could be compromised via an open Bluetooth interface. This also seems like a very Cisco oriented document.


I am also conflicted about trusting an agency on network safety when its other job is to make networks less safe. The conflict of interest alone is enough; it doesn't even need their track record of spying on everyone.


I highly doubt any of the content in this is ‘bad’ or misleading - most security people have a pretty good BS detector and it would spread pretty quickly in the community.

The worst thing might be a reference to an industry-standard protocol, like IPSECv2, which is generally considered secure but might be broken under certain conditions and only ever exploited under extreme circumstances (to avoid showing their hand).


> to make networks less safe

Could it be substituted by "to get into a network"? Then it could be argued that they need adversaries to establish reliance on a working network.


Slightly related, is there any analytical writing on the human side of security? How to build organizations that are resistant to intrusion in various forms?

From reading books and watching movies as well as applying a bit of common sense, organizations like spy agencies or terrorist networks with more or less independently operating cells work with a strict least-privilege type model such that a mole in one part of the organization doesn't compromise the organization as a whole. And, I'd guess, at least in more formalized organizations, strict logging on who does what etc.

All this obviously adds a lot of overhead and friction in communications, which, say, a business operating in a competitive environment can ill afford. I'm quite sure there's no "magic pill", but rather a bunch of choices with tradeoffs (like security vs. ease of cross-team communication I touched on above).


- need to know basis.

- strict separation of concerns.

- only outbound hiring.

- no hiring of people who can be blackmailed.

- understand your threat model.

- if you were an enemy and had to break into your org, what would you do? Improve that.


Cool. Defense in depth. But how many vendors/products are now on the table? What will always matter is the most external firewall, because once inside that line each subsequent internal firewall will have a harder time telling good traffic from bad. And the innermost firewalls between devices inside the home network will have so many holes punched in them as to barely be useful. Defense in depth isn't about having multiple layers that repeat the same protections. Defense in depth is about having other layers of non-firewall products to catch the stuff the firewalls miss. If your outside wall is 8', adding more and more 8' walls behind the first does nothing if the army has 9' ladders. You need something different than another wall.


Can you elaborate more on the firewall perspective? Your description makes it sound like for a three tier webapp[0], the entire pdf is suggesting to put a firewall around the presentation layer to limit it to appropriate sources, then the application tier to limit to appropriate sources, then the data tier to appropriate sources, etc.

While that should be done, and they are suggesting that, I feel like that's only covering section 2 at best. This doc isn't about adding other similar walls; it's about reducing attack surface, limiting blast radius, and encouraging industry standards such as defense in depth, least privilege, and some elements of supply chain security. Your comment suggests you know that, but the PDF states many other things that have nothing to do with firewalls or differentiating good traffic from bad traffic.

My take is that this document is more a source to point at internally at the NSA for best practices, or a minimum bar to meet, not the best that the industry or the NSA has to offer. Even then, I'd assume that some organization externally will use this to say they aren't "up to the NSA's standards" and push for changes to fix their practices.

If it means that more folks learn of common practices in the industry and increase their security as a result, I'm all for folks sharing these practices, whether it's the NSA sharing it or a private organization.

[0] - https://www.ibm.com/cloud/learn/three-tier-architecture


I too am all for sharing best practices. I just disagree with some of those practices. Using a mixed bag of a particular product from a variety of vendors sounds great from a management perspective. It seems obvious that one might catch something that another misses. Having a variety of security products watching a network is like how a variety of COVID vaccinations can be better than repeating the same vaccine each time. But more vendors means a greater variety of associated traffic. You end up poking a hole in one firewall so that someone can manage some other firewall. Your IDS sees, and gets used to seeing, all sorts of strange management traffic. Your engineers become complacent, opening up holes upon request by anyone with the correct phone number. There is something to be said for a single strong firewall system from a single vendor. Then you have a single reporting/monitoring system with no shirking of responsibility. That one wall is manned/watched/managed as everyone's first priority. One very tall wall rather than a series of shorter ones.

The practice I would promote, but which is rarely ever used outside of defense and/or the biggest companies, is having separate networks for the really important stuff. Why is client data traveling along the same network as employees streaming Netflix? Have one network for general office junk and another physically distinct network for client data. Why is the office birthday party announcement landing in the same inbox as an email from a "client" requesting a wire transfer? If separating these means some employees have to run two email inboxes or have two computers at their desk, so be it. But doing that costs money. Subscribing to a 3rd or 4th firewall vendor is cheap.


?

- No full static addresses requirement

- No double WAF vendors requirements


Why would those be particularly important?



