
Look at any major CVE and you will almost always see "...that attackers can exploit remotely".

It is logical - 9x% of large cyber-attacks are done digitally, not with physical proximity to the target.

Yet, we often focus on the vulnerability (zero-day, misconfiguration, business logic gap, etc.) rather than the exploit method (the network), almost implicitly taking it for granted that the server (e.g. Exchange) needs to be exposed to the network in order to do its job.

Given the impact, shouldn't we double down on methods which enable servers to do their job without 'listening' to the network?




> Look at any major CVE and you will almost always see "...that attackers can exploit remotely".

> It is logical - 9x% of large cyber-attacks are done digitally, not with physical proximity to the target.

A remote vulnerability means one that can be exploited without the ability to run local code on the machine. It has nothing to do with having physical access to the machine.


good point. i simply meant that the vulnerability can be exploited from the network (with no (initial) root access to the machine) - and almost all of them can be.


Well, let me tell you about qmail. Each part of that setup has only the privilege it needs to do its job.

Seems a good design, and it has had very few problems given its age.

Postfix seems next best by design.

How many least-privilege parts handle a message from acceptance to mailbox in Exchange? I suspect a single program image.
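
For illustration, a minimal sketch of the qmail-style idea in Python - bind the privileged port while still root, then drop to a dedicated unprivileged account before parsing any untrusted input (the "qmaild" account name is just a placeholder for this sketch):

  import grp
  import os
  import pwd
  import socket

  # Bind the SMTP port while the process still has root privileges.
  listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
  listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
  listener.bind(("0.0.0.0", 25))
  listener.listen(16)

  # Drop to an unprivileged account before touching any network data.
  # "qmaild" is a placeholder account name, not a prescription.
  gid = grp.getgrnam("qmaild").gr_gid
  uid = pwd.getpwnam("qmaild").pw_uid
  os.setgroups([])
  os.setgid(gid)
  os.setuid(uid)

  # From here on, a bug in the protocol handler only yields the
  # privileges of "qmaild", not root.
  conn, addr = listener.accept()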


>Given the impact, shouldn't we double down on methods which enable servers to do their job without 'listening' to the network?

UUCP exists, but generally, people prefer their business communications to be a bit more immediate.


And how do you plan to talk to a deaf and mute server? For a server to be talked to, it must listen.

To keep the metaphor going, "Listening" for a server just means that a client gets to say the first word to start the conversation - it is not otherwise special. The problem is the malicious conversation that follows, not who started it.
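
As a rough sketch (a trivial Python TCP service with a placeholder handler), the accept() itself is inert; the exploitable surface is whatever processes the bytes that follow:

  import socket

  def handle_request(data: bytes) -> bytes:
      # Placeholder: in a real server, this parsing/business logic is
      # where the exploitable bugs live, not in listen()/accept().
      return b"ok\n"

  srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
  srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
  srv.bind(("0.0.0.0", 8080))
  srv.listen(5)

  conn, _addr = srv.accept()      # "listening": the client merely speaks first
  conn.sendall(handle_request(conn.recv(4096)))
  conn.close()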

You can do things to control who talks to what, but an employee laptop can be compromised and abused to perform the attack from a "trusted" client, so that is no good.

Think of it like trying to protect your loved ones from misinformation and crypto scams: you don't want to shut them out of the world; telling them to only trust specific sources backfires if those sources sell out or end up hosting scam ads; and teaching them to never be misled by anything might be impossible.


agree, but shouldn't we try to further reduce the attack surface? e.g. the 'server' only listens on networks which force 'clients' to authorize before those clients are given access to that network (not always possible, but often is). sometimes, for example, this can be an auth-before-connect overlay network extended to the host, so that the service is only listening on localhost.
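
roughly, a sketch of that last pattern in python (the overlay agent - wireguard, ssh -L, a zero-trust connector, etc. - is assumed to be a separate process; only the bind address is the point):

  import socket

  # the service binds only to loopback; nothing is listening on a public
  # interface, so nothing is reachable from the raw internet.
  srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
  srv.bind(("127.0.0.1", 8443))
  srv.listen(16)

  # a separate overlay agent authenticates and authorizes each client
  # *before* forwarding its traffic to 127.0.0.1:8443, so the app never
  # sees unauthenticated packets.
  conn, addr = srv.accept()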

and, agreed, the clients can be compromised - loved ones can be scammed, as you said - but your loved ones are a far smaller attack surface than the entire internet (which too often finds its way into the dmz, until day-two security and L7 authorization try to identify and terminate the rogue L3 connections).


Auth before accessing a network - perimeter-based security - is the old-fashioned corporate security model. It is generally considered a broken band-aid, as you end up with vulnerable or outright unprotected services behind a wall that many are allowed to pass.

For example, if you have a vulnerable service and allow employees access to it through the perimeter, you fail - the employee might have gone rogue or have a compromised machine.


this is where zero trust has steered us wrong, inadvertently.

absolutely, perimeter security based on weak auth (simply being 'on' the WAN) is insufficient.

but improving internal security doesn't address all internet-based attacks - which are the vast majority. in those cases, auth before connect is a good practice - but the auth (authentication and authorization) itself needs to be strong, and part of a multi-layered approach (the WAF etc. can actually be simplified and do their job better when they don't need to filter the entire internet).


It does not matter how strong the auth is. As long as people can authenticate - regardless of how - there's a hole in the perimeter, and your security reduces to the capabilities of what is behind it. Attacks do not even have to be targeted - when you have hundreds, thousands or even hundreds of thousands of employees, the likelihood of a laptop getting compromised by even a random attack is pretty high, and then the foot is in the door.

In other words, any kind of proper defense requires the internal services to be fully battle-hardened to withstand arbitrary attacks anyway, because the perimeter will be breached - otherwise you set yourself up for a catastrophic security breach. And if the services are hardened anyway, the perimeter added nothing but cost, inconvenience and a false sense of security.

If you are afraid of exposing such services without a perimeter, you should not be running those services at all.


well said and i agree. i believe we are talking about different layers. your point - essentially assume your network is always breached - absolutely.

and don't you make that 'battle hardening' simpler and more effective by reducing the attack surface? e.g. by taking your servers off the internet, meaning your inbound firewall rules become 'deny all inbound' (even 443). so enforce (strong) auth outside your dmz, before allowing sessions on your network (or overlay network), even for APIs, B2B etc. (otherwise your fw has exceptions).
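
a sketch of what 'deny all inbound' can look like in practice (python; the relay hostname and token below are placeholders, not a specific product): the server dials out to an authenticated relay and serves requests over that connection, so no inbound port is open at all.

  import socket
  import ssl

  # outbound-only: the server connects out to a relay/broker and serves
  # requests over that connection, so the inbound firewall policy can be
  # "deny all" - nothing listens on a public port.
  # "relay.example.com" and the token are placeholders for this sketch.
  ctx = ssl.create_default_context()
  raw = socket.create_connection(("relay.example.com", 443))
  conn = ctx.wrap_socket(raw, server_hostname="relay.example.com")

  conn.sendall(b"AUTH placeholder-token\n")  # authenticate to the relay first
  while True:
      request = conn.recv(4096)              # requests arrive over the tunnel
      if not request:
          break
      conn.sendall(b"ok\n")                  # placeholder request handling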

and, yes, when that gets compromised, the attacker now needs to deal with the next set of 'battle hardened' layers.

meaning, shouldn't reducing the attack surface and battle-hardening services be an 'and', not an 'or'?


If you have a wall strong enough to block a missile, there is no value in putting a paper wall in front of it. "And" only wastes and confuses.

Attack surface is defined by the service, not by which malicious actors you expose it to.


"All these vulnerabilities require authentication for exploitation."


The actual problem isn't the network connection, it's the untrusted data. There's nothing you can do about that if your data is intentionally coming from untrusted sources, which is absolutely the case for a mail server.

Of course you can reduce attack surfaces by having things be tunneled and/or proxied, to give an appliance close-to-minimal network access, but that won't fully save you from this sort of problem. In my opinion, the only obvious way out is to go category-by-category and eliminate each type of flaw by design. I think even in an ideal world you could never accomplish this 100% of the way - or at least we're not close to a world where you could right now - but you can combine it with a layered approach to security: trying to eliminate single points of failure, adding hardening anywhere you can, and making tampering both as apparent and as unreproducible as possible (e.g. apply ASLR harder and more often, randomize the order of things, don't expose internal IDs, etc.)
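
As a small illustration of that last point (an opaque-ID mapping at the boundary; the helper names here are assumptions, not a prescription):

  import secrets

  # Keep internal, guessable IDs (e.g. sequential database keys) out of
  # anything an attacker can see; hand out random opaque tokens and
  # translate at the boundary instead.
  _external_to_internal = {}

  def expose(internal_id):
      token = secrets.token_urlsafe(16)           # unguessable external ID
      _external_to_internal[token] = internal_id
      return token

  def resolve(token):
      return _external_to_internal.get(token)     # None for unknown/forged tokens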


Tell that to Azure and its cross-tenant problems, because Microsoft specifically allowlists all Azure IP ranges in its products. Gotta let those Azure spammers bypass the email filters, otherwise they won't pay for premium, right?

In my opinion there's a huge conflict of interest in Microsoft also running Azure, a cloud hosting service offering o365 integrations.



