Spoofing DNS records by abusing DHCP DNS dynamic updates (akamai.com)
191 points by Terretta 10 months ago | 88 comments



Microsoft seems to either ignore the security repercussions of their design decisions or, more cynically, deliberately introduce attack vectors.

I haven't seen the details (the article withholds them), but given the title and overview, this attack makes use of Microsoft's "extensions" to the DHCP protocol that allow updates to DNS records from hosts with dynamically assigned IP addresses. This began as a Windows-only thing, but it's now supported by some other DHCP servers as well. It always seemed odd and wrong to me that DNS could be updated as a result of a DHCP assignment. IMHO the correct way to do this (assuming dynamic address assignment is desired) is via DHCP reservation: the admin would (manually) choose an IP address for a particular host's MAC address, and also (manually) add a permanent DNS record for the chosen IP address.
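
For anyone who hasn't watched one go by: the registration a DHCP server performs on a client's behalf is ultimately just a standard RFC 2136 dynamic update sent to the zone's authoritative server. A minimal sketch with dnspython (the zone, hostname, and server IP below are placeholders, not anything from the article; a zone locked down to secure updates would additionally require GSS-TSIG signing):

    # Rough sketch of an RFC 2136 dynamic update, the same kind of message a
    # DHCP server sends when it registers a client's A record on its behalf.
    # Zone, hostname, and server IP are placeholders.
    import dns.update
    import dns.query
    import dns.rcode

    upd = dns.update.Update("corp.example.com")       # zone to modify
    upd.replace("client42", 300, "A", "10.0.0.42")    # (re)register client42.corp.example.com
    response = dns.query.tcp(upd, "10.0.0.1")         # authoritative DNS server
    print(dns.rcode.to_text(response.rcode()))        # NOERROR if the server accepted it

If the zone only accepts secure updates, an unsigned message like this is refused; as far as I can tell from the overview, that's why the interesting part here is the DHCP server itself, since it holds credentials to perform these updates on clients' behalf.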

Given the vast number of authentication features (such as SPF) that use domain as a key, it's a really bad idea to allow automated updates to domain records by unrelated services.

As usual, the mitigation for this issue is to turn off the Microsoft "extension" that is enabled by default -- in this case it is DHCP DNS Dynamic Updates.


I too find it odd that TLS implicitly relies on DNS (ACME from letsencrypt). If someone controls your DNS they can get signed certificates.

I’m optimistic letsencrypt validates DNS via multiple streams (to prevent BGP hijacking), but I’ve got no evidence or proof. But that also doesn’t solve a breach of your nameservers (GoDaddy has been breached), among other things.

And before anyone mentions it, dnssec is dead, at least as far as I can tell.


DNSSEC isn’t dead; the graph of its deployment is what you’d want for any emerging technology: high and to the right [1].

The problem is DNSSEC is complicated to deploy for large organizations like Microsoft with hundreds of domains (and subdomains).

Meanwhile, many domain registrars make it close to trivial to deploy DNSSEC for small and medium sized companies.

[1]: https://stats.dnssec-tools.org/#/?top=dane&trend_tab=0


Let's Encrypt does validate via multiple streams: https://letsencrypt.org/2020/02/19/multi-perspective-validat...

But even this is vulnerable if a strong adversary can make it seem like their compromised nameserver is the only reliable nameserver reachable over the network: https://i.blackhat.com/USA21/Wednesday-Handouts/US-21-Shulma...

An analysis from early 2023 indicates LE's default setup was suboptimal compared to a multi-cloud quorum policy, which seems like a no-brainer at their scale IMHO: https://arxiv.org/abs/2302.08000
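
As a toy illustration of the multi-perspective idea (nothing like LE's actual implementation, which queries the authoritative servers from vantage points in different networks): resolve the same name through several independent resolvers and only trust an answer they agree on. The resolver IPs are public DNS services standing in for vantage points, and the domain is a placeholder:

    # Toy multi-perspective lookup: resolve the same name via several
    # independent resolvers and flag disagreement. Only an analogy for what a
    # CA does; real validation uses vantage points in different locations/ASes.
    import dns.resolver

    PERSPECTIVES = ["8.8.8.8", "1.1.1.1", "9.9.9.9"]   # public resolvers as stand-ins

    def lookup_from(nameserver, name):
        r = dns.resolver.Resolver(configure=False)
        r.nameservers = [nameserver]
        return sorted(rr.to_text() for rr in r.resolve(name, "A"))

    answers = {ns: lookup_from(ns, "example.com") for ns in PERSPECTIVES}
    if len({tuple(a) for a in answers.values()}) > 1:
        print("perspectives disagree:", answers)        # a hijack on one path shows up here
    else:
        print("all perspectives agree:", answers[PERSPECTIVES[0]])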


Letsencrypt might, but ZeroSSL has suffered from BGP hijacks before:

https://therecord.media/klayswap-crypto-users-lose-funds-aft...


I can't imagine how Microsoft would profit from making their DNS server vulnerable, but I also can't imagine how something like this could happen given how many people and stages of development it must have gone through. I'm sure they have security people reviewing this stuff, but things that readily appear to be terrible for security slip right on by.


> given how many people and stages of development it must have gone through

That's exactly the problem. The bigger the team, the less motivation there is for each individual to care about the big picture. This phenomenon can be witnessed wherever bureaucracy exists.

More bluntly phrased, "shared responsibility is no responsibility".

That said, I think in this case it's just a matter of these protocols being designed for and intended to be used in a network with high trust.


> IMHO the correct way to do this (assuming dynamic address assignment is desired), is via DHCP reservation

And how many end-user/dynamic hosts did you ever manage?

And more importantly, why can't a l33t hax0r who is already in the internal network (he can send DHCP requests, after all) impersonate a MAC address?


I've managed quite a few, but typically you would want your "trusted" hosts on a separate VLAN or subnet from untrusted ones. It's probably fine to leave DHCP DNS Dynamic Updates turned on as long as it's not updating a DNS server with control over any trusted domains.

As you say, a host could spoof a MAC and get a reserved assignment, but only if the real host with that MAC is down. Again, isolating the security perimeters is the answer.


> spoof sensitive DNS records, resulting in varying consequences from credential theft to full Active Directory domain compromise

It sounds insane to me that the security of anything would depend on the validity of a DNS record, but I vaguely remember that AD is extremely dependent on DNS configuration


I would think in a typical AD domain that was built using reasonably good practices (maybe even the default settings) in the last 5 (maybe even 10) years, DNS changes alone wouldn't be able to directly result in the compromise of any Microsoft products, or third-party products that authenticate against AD. One or more cryptographic mechanisms (TLS certificate verification, SMB signing, etc.) should prevent other devices from trusting the system with the spoofed name.

That having been said, if the domain has relaxed security to support legacy systems, or was built 20 years ago and incrementally upgraded to newer Windows versions ever since, this technique seems like it could be used to compromise the domain. For example, if NTLM auth is enabled, but signing is not required, it seems like this would allow a network-wide version of NTLM relaying attacks. Those are usually done using subnet-specific name spoofing, so it would be a big increase in scope.

Additionally, in most large enterprise environments, there will be at least a few third-party products that are set up (deliberately or by default) to operate in a way that could be exploited using this technique. Web applications that authenticate using cleartext HTTP, network appliances with telnet or other unencrypted services, email that's exchanged over cleartext SMTP, etc., on the assumption that an attacker would have to already have privileged access to something to capture the information sent over that channel. I suspect there would usually be a way to exploit one of those things in a way that would eventually lead to compromise of the AD domain.

e.g. the organization uses Cisco network hardware configured to back up their configurations to a TFTP server every night. The attacker exploits the DHCP DNS vulnerability to pretend to be the TFTP server and captures the device configurations. They decrypt reversibly-encrypted network appliance administrator passwords, or crack the hashes if they're stored in a non-reversible format. Then they use those credentials to alter the TACACS configuration on one or more network devices to send credentials to their own system when someone tries to log on. TACACS is set up to authenticate to AD, so now the attacker has a bunch of AD credentials that belong to network administrators.


Unfortunately the security defaults haven’t meaningfully changed since 2003!

The more advanced protections introduced over the last two decades are mostly off by default.

As a random example, a trust established between two AD forests using only Windows Server 2022 domain controllers will default to RC4 encryption instead of AES!

It takes a lot of thankless work to properly secure an AD domain, and often it’s an impossible task because of incompatibilities with - checks list - Microsoft software released as recently as 2020.


This man/woman/smallanimal SysAdmins!


>I would think in a typical AD domain that was built using reasonably good practices

One would think that, but I literally cannot enumerate the number of domains that were set up by...let's just say, people who had no business setting up domains. Default settings are partially defined during the install.


>It sounds insane to me that the security of anything would depend on the validity of a DNS record

Ahem. "The /etc/exports Configuration File" - https://access.redhat.com/documentation/en-us/red_hat_enterp...


Are you aware of how letsencrypt operates, for example? With a single spoofed dns record, you can get an https certificate. A whole lot of infrastructure relies on DNS for security.


There is a difference between spoofing a record for a specific DNS server and spoofing a global DNS request.


Maybe, but the top thread just said "the validity of a DNS record".


You need to spoof at least three records (for the same domain) to get a certificate.


…no? Or what are the other two records? If I can control example.com, I can use an http-01 challenge to obtain a cert for example.com.

I can't do a dns-01 challenge (that would require _acme-challenge records, although even then, I'm still up to 2, not 3, records), but I don't necessarily have to do a dns-01 just to get a cert…
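
For reference, http-01 boils down to the CA resolving the name's A/AAAA record and fetching a token from a well-known path over plain HTTP, which is why a single spoofed address record is enough. A rough sketch of the check from the CA's side (domain and token are made up; a real CA also follows redirects, verifies the key authorization, etc.):

    # Simplified view of an http-01 check: resolve the name, then fetch the
    # challenge token from the well-known ACME path over plain HTTP.
    # The domain and token below are placeholders.
    import urllib.request

    domain = "example.com"
    token = "some-challenge-token"
    url = f"http://{domain}/.well-known/acme-challenge/{token}"
    body = urllib.request.urlopen(url, timeout=10).read().decode()
    print("challenge response:", body)   # must match token.<account key thumbprint>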


If you can actually control example.com on its authoritative NSs, you should get the cert for example.com! That's not spoofing, that's owning.

If you can only control the record seen from one datacenter for example.com, you don't get the cert. They check the authoritative NSs from three datacenters.


To be fair these days you need to spoof it in at least two locations around the world - they will query you from at least two locations. But yeah, it is bootstrapping security out of seemingly nowhere.


Even in the dns-01 case you only need the one _acme-challenge record to *get* the cert. You only need more records to be able to do something meaningful with your cert, eg A/AAAA records that point to your malicious server and serve that cert.


Best practice is to use CAA records, which would help reduce the impact unless the CAA specifies letsencrypt.
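
Checking what a zone's CAA policy actually allows is a quick lookup; a small sketch with dnspython (the domain is a placeholder):

    # Look up CAA records to see which CAs a zone authorizes. No CAA records
    # anywhere up the tree means any publicly trusted CA may issue for the name.
    import dns.resolver

    try:
        for rr in dns.resolver.resolve("example.com", "CAA"):
            print(rr.to_text())          # e.g. 0 issue "letsencrypt.org"
    except dns.resolver.NoAnswer:
        print("no CAA records; any CA may issue")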


It’s a terrible practice designed to build a moat for billion-dollar corporations feeling scared of free Let’s Encrypt certificates turning off the money printer.


My domain has a caa of letsencrypt, how is that a moat?


Please google "CAA DNS record"


It's beyond me how letsencrypt hasn't been shitlisted, yet.


Why? What's your preferred alternative and why is it more secure?


I don’t have insider knowledge but I feel like a good portion of digicert engineers are basically window-lickers based on some of my insights.


Another fun exercise is to announce IPv6 in a network that's designed to be IPv4-only (IPv6 is announced via its own protocol, router advertisements, rather than via DHCP, though you can also use DHCPv6 if you want to). High chances that the switches will not drop the announcement and that hosts will configure IPv6, then try to use it without the user knowing.


Your post is incomplete. Instead of saying "announce IPv6", it might be better to say "announce IPv6 with SLAAC" or "announce SLAAC-supporting IPv6".

If, in the Router Advertisements, you set the M flag (and clear the A flag on the advertised prefixes), SLAAC is disabled on your subnet, forcing folks to use DHCPv6 for an IP allocation (or use a static IPv6 address).

Besides, that "fun exercise" is what ISPs often do: For example, Comcast turned on IPv6 without any warning (at least, no warning pushed to customers). In my case, I (the user) didn't notice it was in use until I checked, which I think is exactly how it is supposed to work for the end-user.


As a pentester that did exactly this in many corporate networks: this is extremely effective. Just announce with a router advertisement that there is a DHCPv6 server and start handing out link-local IPv6 addresses while you specify your own system as DNS-server.

Clients (both Windows and Linux) will prefer the DNS-server specified through IPv6 over the one from IPv4. Then you can spoof any DNS record and capture juicy NTLM hashes flying through the network or relay their authentication and get a free authenticated connection.

This is most effective in networks that were designed for only IPv4 and didn't consider IPv6 at all. But it is also effective in some networks that do use IPv6.

Mitigations? Either disable the IPv6 stack on all systems, or configure your switches to block rogue router advertisements (RA Guard) and not allow DHCPv6 traffic to the wrong systems.
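
For the monitoring side, here's a small sketch with scapy that flags router advertisements from anything other than your known routers; the interface name and router list are placeholders, and switches that support RA Guard do this properly in hardware:

    # Watch for rogue IPv6 Router Advertisements on a segment. Anything not in
    # KNOWN_ROUTERS is suspicious on an "IPv4-only" network. Requires root.
    from scapy.all import sniff
    from scapy.layers.inet6 import IPv6, ICMPv6ND_RA

    KNOWN_ROUTERS = {"fe80::1"}       # link-local addresses of legitimate routers

    def check_ra(pkt):
        if ICMPv6ND_RA in pkt and pkt[IPv6].src not in KNOWN_ROUTERS:
            print(f"rogue RA from {pkt[IPv6].src} (M flag: {pkt[ICMPv6ND_RA].M})")

    sniff(iface="eth0", filter="icmp6", prn=check_ra, store=False)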


I never would have guessed that so many data centers ran their DHCP off of Microsoft Windows.


Microsoft's Active Directory, DHCP server, and DNS server integrate very closely. When a domain member gets a dynamic IP address, the DHCP server will inform the DNS server to update its record for that host.

Many companies are, let's say, a bit lazy - when you use an Active Directory domain anyway, you might as well use the DHCP and DNS servers too; they handle replication and failover very smoothly. (I am not a big fan of Windows, but that part has worked pretty well in my experience.)

You can get a similar mechanism to work between BIND and ISC DHCPD; it's not a lot of work, but with Zeroconf/mDNS it is less useful than it used to be.


Active Directory and DHCP go hand-in-hand. Your Domain Controllers aren't always your DHCP servers, but under a certain scale, they very likely are.


I'm a 20+ year Windows sysadmin and I don't buy it. If you'd said "Active Directory and DNS go hand-in-hand" I'd agree-- the coupling there is pretty tight (and it's a pain-in-the-ass to run Active Directory with non-Microsoft DNS servers being authoritative for the AD domain name). DHCP is a lot less tightly coupled.


If you create a brand new domain, it will automatically be configured as the DHCP server by default.


That's true of DNS, not DHCP. One has to specifically install the DHCP role in a new AD domain.


I find this number also very surprising, but it's not really 40% of datacenters; it's 40% of the networks monitored by Akamai.


Yeah, I thought that too. Akamai seems to be popular with corporations.


Not the colo themselves running it, "in" datacenters. And more accurately, in networks in datacenters.

Colocation means many clients, and in any given colo there's almost certainly someone running a Windows AD + Microsoft DHCP box, meaning it's "in" that datacenter. I'm surprised as many as 40% of networks still have that tech, but that's enterprise for you. Point being, though, it's likely in well more than 40% of datacenters.


In the 90s Microsoft tied their dominant Exchange/Outlook to Active Directory which depends on DNS/DHCP.


Active Directory was officially only released in 2000 though.


Microsoft and Google together are trying to control the largest spectrum of computer usage globally.


Actual title for me: "Spoofing DNS Records by Abusing DHCP DNS Dynamic Updates".

A "by" in the title is necessary IMO.


Narrowing to Microsoft AD DHCP seemed MUCH more important.


If you want to say what you think is important about an article, that's fine, but please do it by adding a comment to the thread. Then your view will be on a level playing field with everyone else's: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so...

"Please use the original title, unless it is misleading or linkbait; don't editorialize."

https://news.ycombinator.com/newsguidelines.html


I was thinking specifically of this rule.

Their original title was misleading, implying broader scope, all dynamic DHCP. Narrowing to saying Microsoft DHCP seemed less misleading and less click-bait. I could picture a pile of conversation saying "oh, this is just X" so tried to narrow it to just X.

From your comment, I take it I saw it wrong. Just saying that rule was my specific reason for the edit, to make less click-bait and less misleading.


If you were thinking of the rule and trying to abide by it, that's more than enough! This domain (titles etc.) is complex enough that it's totally normal for people to interpret the rule differently.


I wouldn't be surprised if ISC DHCP (now defunct) would suffer the same issue when configured for Dynamic DNS updates with Active Directory. Functionally, it works identically to MSFT DHCP when performing DDNS updates in an AD-integrated zone.


Adding the ‘*’ record to ADIDNS during a penetration test and watching auth attempts for mistyped domains roll in is always good fun


> We reported our findings to Microsoft, but a fix is not planned.

The old Microsoft is back then?


As a former Windows sysadmin: any fix is likely to break environments, and Microsoft doesn't want to deal with that. The DHCP feature was a legacy thing for NT4 migrations, but Microsoft never turned it off because almost no sysadmins know what they are actually doing. The fix seems to be to just disable DHCP DNS registration.

Also, Microsoft is over Windows Server environments. They have gone full cloud and could care less about Windows OSes. They continue to develop them, but it's clear it's switched from a critical business unit to a side business for them.


As they should. I wish Windows Server a swift and excruciating death.

That's unfair. I actually don't hate Windows Server in and of itself, but I loathe the way people use it, and I dare say they're encouraged to do so.

I think the worst part is the lack of incentive to actually learn about the OS, or how to use it, let alone holistic best practices for system administration. Why bother expanding your skills, when everything you need is just an RDP and EXE away!

Windows Server is to system administration as WordPress is to software engineering. It's extremely powerful for beginners and it'll _probably_ work for a good chunk of what people want.

Both are often driven by incessant manual tweaking, until one day it finally works, and then hoping no one makes a sudden movement. And good luck consistently replicating any effort from one environment to another in either case.

However easy it might _seem_ at first, it's very quickly going to start hurting. And of course there are "correct" ways of doing them both, but: why on Earth would you bother when it's so damn easy to do it the naughty way?


> I think the worst part is the lack of incentive to actually learn about the OS, or how to use it, let alone holistic best practices for system administration.

I have been out of Windows systems engineering for a while, but this was certainly not the case say 20 years ago.

The operating systems and best practices were well documented (via official Microsoft Press books) and back then MCSE certification actually let you learn some in depth stuff (if you wanted to).


Thank you for the perspective. It's probably true I've just never found myself in the right crowd. Interestingly, I started at a dysfunctional Windows shop 20 years ago. I spent a couple years there, and left for the Linux world where I fell in love with open source and never looked back... That was until recently, when I ended up back in Microsoft land, somewhat inadvertently.

I'm sure that documentation you describe still exists. The Microsoft docs site is pretty thorough. But every time I have personally found myself involved with windows, things have been an enormous mess of string and bubblegum.

Linux shops are certainly not bastions of consistency, but I've just found better success with git-driven automation in Linux environments than I've ever experienced with my own senses in any Windows shop.

I firmly believe it boils down to the fact that RDP enables laziness out of the box, whereas linux requires clearing a few hurdles to make things quite as easy. So there is a tangible incentive to do things the sustainable, "right way" and not limp along with short-term quick and easy fixes.


Background: Unix (SCO and XENIX) and later Linux user and sysadmin (Slackware in '92), long-time Windows sysadmin for the "day job" (since NT 3.51).

> I think the worst part is the lack of incentive to actually learn about the OS, or how to use it, let alone holistic best practices for system administration.

Funnily, this is how I feel about Linux-based OS's. Given that there's only a single "distribution" of Windows, I can be confident that the time I spend learning about the various "contrivances" Microsoft devises (service management, configuration storage, etc.) is going to apply to any Windows machine I work on.

OTOH, every Unix and Linux distro has its own set of contrivances for configuration, service management (though systemd being rammed down everybody's throats has in some ways changed this), filesystem hierarchy, etc. I pushed for RHEL for "day job" deployments, for years, because I knew there would be stability in the product that was similar to Windows. (Debian is a close second.) Otherwise, Linux distros feel like the wild west.

Knowledge about the underlying architecture of NT going all the way back to the original 3.1 release still applies. The OS has been updated, for sure, but there's a strong lineage and adherence to design principles going all the way back to the start.

People who use any technology w/o understanding it are infuriating.

> Both are often driven by incessant manual tweaking, until one day it finally works...

Again, that's how I see a lot of Linux shops. I don't think Windows makes it any easier to have immature sysadmin practices.

My Windows environments are deployed from unattended installs, configured by Group Policy, and manual tweaking on individual VMs is highly discouraged. If I were deploying Linux at scale I'd be using configuration management tools there the same way.


I think your perspective is totally correct, thank you for sharing it and checking my vitriol.

The only thing I would challenge is that the pain of inconsistency (from distros to stacks, e.g. systemd, etc.) is exactly what I believe drives people to aspire to automation and strive for consistency.

That inconsistency, while absolutely frustrating at times, also forces users (often against their will) to understand the core concepts behind the software they're using or managing rather than memorizing GUI workflows.

In my opinion, a number of Windows folks seem to take consistency for granted, and become all too reliant on it. They get comfy and have little clue how to overcome adversity when it rears its ugly head.

I don't think that sort of laziness is unique to the Windows world, but I do think Windows makes it easier to fail upwards. It's much harder to hide incompetence in a Linux environment, at least in my experience.

In other words: if you'll excuse my reductiveness, what doesn't kill, gives strength. Or at least hardens one's resolve.

Honestly my biggest gripe boils down to how easy RDP makes it to form bad habits, and how there is little (short term) consequence for operating in reactive ways which lack reproducibility because "I'll just pop into the server and click around for a sec"

Windows with RDP is faster, and it is easier. System admin that way (mostly) works. Best of all, for the majority of those who grew up in the PC age, it's familiar.

But I unfortunately don't trust a lot of my colleagues past and present not to abuse it.


> It's much harder to hide incompetence in a Linux environment, at least in my experience.

curl | bash enters the chat.

Seriously, though, junior sysadmins (or devs pretending to be sysadmins) are gonna do what they do regardless of the underlying substrate. For Windows sysadmins it might be clicking-around doing one-off "fixes". For Linux admins it's shitting-up production boxes with compilers and dev tools or adding sketchy untrustworthy package repos.

> Honestly my biggest gripe boils down to how easy RDP makes it to form bad habits, and how there is little (short term) consequence for operating in reactive ways which lack reproducibility because "I'll just pop into the server and click around for a sec"

> Windows with RDP is faster, and it is easier. System admin that way (mostly) works. Best of all, for the majority of those who grew up in the PC age, it's familiar.

It's the same w/ SSH on Linux machines, though. Junior people think it's easier to make one-off changes. Senior people realize that every one-off change is a gamble with the future. It's part of the culture and maturity of the individual and of the organization they're working in.


If an org has sysadmin(s) what are devs doing in production? I'm not referring to juniors or devs, although shit obviously rolls down hill.

My point is that whatever is done over SSH is at the very least repeatable with relative ease even if it's incorrect.

SSH has a command history. RDP means recalling what guis were clicked through and which options were selected. Neither is particularly scalable and both are imperfect but only one of those is faulty by default.

In the case of ssh at least I can copy and paste (then fix) some idiot's commands into a script as a starting point for automation. What are my options after a whackamole RDP session?


If you edit a file via SSH, all you’ll see in the shell history is that you edited that file. You don’t see what got changed.

But you’re certainly right that doing ad-hoc fixes is a losing battle, regardless of the platform.


> curl | bash enters the chat.

Along with setenforce 0.

And these people are telling me about security


How is logging in to a Windows box with RDP and clicking around different from logging in to a Linux box with SSH and messing around in a trial-and-error fashion with text files though?

Things like "oh this doesn't work with SELinux enabled, so let's just disable SELinux" for example.

I've seen both and I wouldn't necessarily say I've seen the former more often.

There’s probably as many incompetent Linux admins as there are incompetent Windows admins. At least in my personal experience.


There's no difference in the tasks you described. I'm not saying it's not possible to do bad work over SSH; I'm saying it's in fact more punishing. If something is 500 keystrokes to do in Linux, you're incentivized to do it in an automated fashion to avoid repetitive CLI shenanigans. If the same thing is 2 clicks over RDP, who's going to bother scripting that? I'm not opposed to easy; I'm in favor of reproducibility and self-documenting scripts.

There's no such incentive with Windows because it's so "easy", and things like PowerShell's double hop and other similar quirks can actually make certain things more difficult to automate.

My point is, at least you can easily translate random CLI poking into scripts. More often than not, an RDP session doesn't lend itself to scripting.


>could care less about Windows OSes.

So you're saying they do care?

(I'm just giving you a hard time. The correct expression is "could not care less about <x>".)


The "fix" is to follow best practices established back in Windows Server 2003. A lot of people aren't, apparently-- big surprise.

It's a little disappointing a default configuration results in a less secure deployment (and Microsoft has done a lot to standardize on more secure-by-default configurations) but it's not like this was some great unknown zero-day.


A fix for what exactly?

Manually visit every DHCP server in existence and do exactly what's listed in the mitigations?

    >> The TL;DR version is:
    Disable DHCP DNS Dynamic Updates if you don't need them
    Client records should be safe if you configure a weak user as the DNS credential
    Managed records can’t be protected from spoofing with any configuration; use static DNS records for sensitive non-Windows hosts, if possible
    Do not use DNSUpdateProxy; use the same DNS credential across all your DHCP servers instead
    
Automatically do that with a patch and hear millions of 'M$ broke my shit again, M$ is shit'?


NetBIOS needs to go away. It made sense when computers were linked up in a token ring setup. There are way better and more modern/secure methods of service discovery.


How are folks managing AD users and computers with so many remote employees? Does AD still have value vs a more decentralized "BYOD" approach, especially with heterogenous OSs?

AD used to be the lynchpin for policies and SSO via GP, NTLM and file shares, but these days those are all web apps or cloud drives and AD has little control over them.


SAML extends AD-based authentication to any external app/service that can use it.


Sure, ADFS has been a thing for forever, but I don't think that is a good value prop for AD. Most online office suites and SSO platforms can provide SAML, and with a nicer way of managing users and groups.


AD is like Jira: it's not great, but loads of things integrate with it. You can do SAML or OIDC for more modern logins, and you can speak LDAP/AD for other systems, and all your Windows logins will just work. Also, your skills acquired 20 years ago on AD are still relevant, which is why as an IT head you'll probably stick with AD.


Not that SAML or OIDC are actually any better.

They’re a bit like YAML or JSON vs XML in my experience.


Office 365 + Azure AD/Entra + onprem sync + Intune


if you know all you can do with a default AD install, you won't even bother with dns poisoning.


I worked in Microsoft support for domains and security from the mid 90s through the launch of Windows 2000. I haven't worked for Microsoft in more than a decade and I don't speak for them; I'm just recollecting.

Many of you may be too young to remember, but TCP/IP did not really come to dominate corporate networking until the mid-late 90s. Before that, Novell had the IPX/SPX protocol suite and Microsoft had NetBEUI. These protocols solved name resolution via broadcast (although NetBEUI morphed into NetBIOS over TCP/IP [NBT] and gained a server-based name resolver, WINS, which people have the same complaints about).

For early adopters of Active Directory, one of the biggest problems they had was that TCP/IP was new to them, and as often happens in corporations, the people who had Windows AD expertise had no authority over their enterprise DNS, and the people who controlled enterprise DNS had no Windows expertise.

Support calls due to inability to locate a Windows resource in DNS were some of the most common, and were therefore one of the bigger identifiable costs of supporting Windows.

The Windows team had predicted all this (and also predicted that Windows 2000 would be the first introduction of TCP/IP into many smaller corporate networks and therefore they couldn't count on having DNS and DHCP already present), and they built AD-integrated DNS and DHCP and included it with Active Directory.

Microsoft didn't invent DNS or DHCP obviously, and of course at the protocol level both of them are insecure. But so was everything else on corporate LANs at the dawn of the internet and for years after. Having AD-integrated DNS and DHCP meant that literally you could boot a server with the Windows 2000 CD and install Windows and during the creation of Active Directory, you could select to have Windows install DNS and DHCP for you, giving you a turnkey TCP/IP network.

Windows DHCP could be configured to issue addresses and then take the broadcast name (remember NBT?) and populate DNS with it, making it easy for DNS name resolution to work and solving the other frequent support issue of not being able to resolve names via broadcast on segmented/routed networks.

Of course now all this looks hopelessly archaic, but all of the decisions were rational at the time they were made.

The problem now is that many Microsoft customers still use the really old features, they can't migrate off them for one reason or other, usually due to compatibility of some obscure app or other, or just due to costs.

Of course the "secure" solution to DNS is for an admin to manually create all the records, and use DHCP reservations. Most companies don't want to expend that effort, except for security sensitive machines. In AD, the really critical records (SRV records) have ACLs that protect them from abuse. So mostly client machines get the dynamic DHCP treatment.


NTLM has to die. And yes, I'm aware that MSFT is working on it.


How is this less secure than mDNS? Asking for a friend.


Pretty much a non-story, and hence MS didn't bother entertaining Akamai's rage-bait post here. A lot more things can go wrong with a default AD install, as others said. Yawn.


Akamai acting like they wouldn't go immediately bankrupt if they didn't have those sweet government contracts. I do not miss working with them.


All the more reason to fix it? It sounds as if nothing is fixed if it is _that_ bad.


40% of datacenters? Akamai are you serious about your marketing stunts?


The HN title[1] was editorialized. The actual figure, used twice in the article, specifies it is talking about data center networks monitored by Akamai.

[1] As of this comment: "Spoofing DNS records abusing Microsoft DHCP server running in 40% of datacenters"


I'd argue it's more likely to be in a datacenter than in 40% of data center networks, since if even one network in the colo ran it, that would mean it was in that datacenter.

Remember, this isn't server share, it's share of DCs that have even one Windows Server with AD and Microsoft DHCP. For large commercial colos, you can assume that's effectively 100% of colos.

And with 85% of SMBs being Microsoft shops, one can extrapolate for small colos, though we'd hope that the majority still on AD either use Azure AD or run AD in the office and Linux in the colo.


If you think Windows Server isn't running in 40% of data centers, have I got news for you.


I don't like it very much, but every single small-to-medium non-IT company I have seen runs on Windows. Sometimes they require Windows specifically because of third-party software (SCADA/industrial automation for example), more often it's just that Windows is the "default" that people know and use, due to network effects.


Indeed - it doesn't take much to be in a datacentre. One server in thousands could be running it and it's "in the datacentre".



