Hacker News | junon's comments

It's definitely not from 1990.

Probably the author himself is from 1990.

Pretty sure this is also why, when you stand at the right spot in a techno concert, the music starts to sound like a jet engine.

We also have this in game development, where if two sound effect emitters play the same effect at the same time with just a bit of offset, phase, whatever, they sound like that.


If the offset is fixed, the effect is called a comb filter. If the offset is changing, the effect is called flanging. The name stems from recording engineers rubbing their fingers against the flange of a reel-to-reel recorder's tape reel, to brake it slightly, which adds increasing delay to the sound.
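
For anyone curious, a rough sketch of both effects in TypeScript (function names and parameters are my own, not from any particular audio library): summing a signal with a fixed-delay copy of itself is the comb filter, and sweeping that delay with a slow LFO is the flanger.

```typescript
// Comb filter: mix the signal with a copy of itself delayed by a fixed number of samples.
function combFilter(input: number[], delaySamples: number, gain = 1.0): number[] {
  return input.map((sample, n) =>
    sample + gain * (n >= delaySamples ? input[n - delaySamples] : 0)
  );
}

// Flanger: the same sum, but the delay is swept slowly by an LFO, so the
// notches in the spectrum move around instead of staying fixed.
function flanger(
  input: number[],
  sampleRate: number,
  maxDelayMs = 5,
  lfoHz = 0.25
): number[] {
  const maxDelay = Math.floor((maxDelayMs / 1000) * sampleRate);
  return input.map((sample, n) => {
    const lfo = (Math.sin((2 * Math.PI * lfoHz * n) / sampleRate) + 1) / 2; // 0..1
    const delay = Math.round(lfo * maxDelay);
    return sample + (n >= delay ? input[n - delay] : 0);
  });
}
```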

Huh so a flange is just a combing effect that shifts around? I'd never considered this. Also neat fact about flanges, didn't know that either. Thanks!

A side effect of this that I've started to notice on a few of my repositories is fake accounts trying to bolster their perceived credibility when they are very obviously (terrible) AI accounts, down to profile READMEs (on GitHub) that are obvious LLM output, pointing to links that don't exist, etc., and in some cases even completely fabricated LinkedIn profiles.

I just had a PR opened that was a two-character change, in JavaScript, changing `if (!warned)` to `if (!== warned)`. They assured me, in an H1 no less, that they had tested everything and that it was fixing some problem, but didn't say what.

What the hell is happening, and what are we supposed to collectively do about this? Or is this just some new norm we'll have to adapt to?


That's not the takeaway here.

Yep. Any software these days can be "network accessible" if you put a server in front of it; that's usually what pumps the score up.

Perhaps giving a bit more information, rather than throwing out random acronyms related to SSH, would be more fruitful in terms of responses.

What about TOFU and MITM would you like them to respond to? TOFU isn't inherently a bad thing. Neither is MITM. It depends on the threat model, the actors involved, etc.

Your comment (and the snarky follow-up) implies they're doing something wrong, but it's unclear what.


This is rehashing (no pun intended) a very long-discussed issue about versioning. Your post also contradicts itself.

> SHA `11bd71901bbe5b1630ceea73d27597364c9af683` - That means absolutely nothing to a human being

> Presumably it would be wise to check the SHA as well to ensure no changes have taken place maliciously

That's exactly why the first one happens so often: I've checked the dependency at that version, and I want to make sure I only get that version, since spoofing SHAs in a Git context is not part of my threat model.


That is true. In the post I am replying to, the idea was

SHA `11bd71901bbe5b1630ceea73d27597364c9af683`

as what the script contains as a reference, whereas I would advocate

hashicorp/setup-terraform@v3-2025-03-26-15-01 Hash: `11bd71901bbe5b1630ceea73d27597364c9af683`

which I consider easier to deal with.
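
As a rough sketch of the "check the SHA as well" idea (the helper here is hypothetical; `git ls-remote` is real), one could verify that the human-readable tag still resolves to the pinned hash:

```typescript
import { execSync } from "node:child_process";

// Hypothetical helper: returns true if the remote tag still resolves to the pinned SHA.
// `git ls-remote <url> refs/tags/<tag>` prints "<sha>\t<ref>" lines; annotated tags also
// get a peeled "<ref>^{}" line with the underlying commit SHA, so we check every line.
function tagMatchesPinnedSha(repoUrl: string, tag: string, pinnedSha: string): boolean {
  const output = execSync(`git ls-remote ${repoUrl} refs/tags/${tag}`).toString();
  return output
    .split("\n")
    .filter(Boolean)
    .some((line) => line.split("\t")[0] === pinnedSha);
}

// Using the values from this thread:
console.log(
  tagMatchesPinnedSha(
    "https://github.com/hashicorp/setup-terraform.git",
    "v3",
    "11bd71901bbe5b1630ceea73d27597364c9af683"
  )
);
```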


Honestly here's how you do e.g. ARM:

- Get multitasking working in x86 since there are a ton of guides for that. Learn OSdev concepts.

- Read the ARM ARM (the ARM Architecture Reference Manual). It's many thousands of pages of dense technical material, covering nearly everything you need to get started. The only parts it doesn't cover are boot media/formats and peripherals.

There aren't good resources for ARM osdev because the environment is not standardized at all. ARM chips are used everywhere and you're often targeting specific boards, not chips. Writing the ARM-specific stuff is only half the battle.


- There are many books about programming in Rust (I own several and am reviewing one right now)

- Crabs are a fandom and branding thing, I don't see the point in criticizing that. Marketing is hard, an identifiable brand mascot is a good way of going about it. Go has gophers, Zig has the lightning bolt, Java has coffee (yes, "caffeinated" was a term often used in that world)

- Crates are just a name, what's the issue?

- Trans folks in the Rust community in particular are some of the least annoying or threatening people and generally have the best handle on programming theory and how Rust works. I am not part of that community whatsoever but this has never been a point of friction. Even if you're not a fan of the "woke DEI agenda" as your comment suggests, Rust community members are not ones to bring it into technical discussions. So, again, begs the question, what's the point you're trying to make?

- Oxidization is, again, a branding and fandom thing.

This is a bad faith comment and is very much against the guidelines here on HN. Take this sort of thinking to Reddit or Twitter where it belongs.


Well written article. It reminds me of the zero day that Apple tried to cover up somewhat - the "empty password tried twice" root login bypass. This was ca. 2017 or so, maybe 2018.

You were able to type in an administrator username in any root sign in box (e.g. in the settings panel via the padlock icon) with an empty password. Hitting the Sign In button the first time told you that the password was incorrect. Dismissing that alert box and hitting sign in a second time signed you in as that user.

We were able to reproduce it 100% of the time day-of, and of course it was patched pretty shortly after making the rounds on social media. Still seems like a massive oversight, though.

Seems there's still some cruft around the auth mechanisms on macOS. Interesting to see the port system mentioned - it's not a well-known fact about Mach kernels.


> "empty password tried twice" root login bypass

Sounds like the time my cat hacked my Sun 3/60 by simply sitting on the keyboard. XDM crashed when the username buffer hit 256 random characters, and then dropped a root shell. But this was in the early 90's when everybody was a lot more innocent about security.


I feel like that (innocence about security) might be a bit of the reason why this vulnerability existed in the first place. I'm sure the NetAuthAgent code is very old, and was probably written at a time before even Apple was serious about security. And really, what new can you add to a web server client? I wouldn't be surprised if the entitlement check is the first thing they've added to it in years.

It's funny, Apple themselves even suggest people use other apps for FTP: https://support.apple.com/guide/mac-help/servers-shared-comp...

> With read-only access, you can copy files from the server, but to copy files to the server, you may need another FTP app. Choose Apple menu > App Store to find FTP apps available for macOS.

Another funny note: NetAuthAgent has had a bug for years where you cannot connect to an FTP server if your username contains an `@` symbol. I understand the technical reasons for that, but it is technically supported by the spec and other clients work with it just fine. I'd dig up some old posts talking about it going back years, but I don't want to put in the effort, lol.


>NetAuthAgent has had a bug for years where you cannot connect to an FTP server if your username contains an `@` symbol.

Basic auth and FTP had a username/password URL scheme like

ftp://username@ftpserver.address.com and even

ftp://username:password@ftpserver.address.com


> I understand the technical reasons for that, but it is technically supported by the spec and other clients work with it just fine.

I am completely aware of that, and I am also aware that other clients handle it just fine. The bug actually forced me to use a different client whenever I needed an FTP client.


I would've guessed that once it's established that the bug is caused by the app injecting the specified username into the USERINFO portion of a URL without the necessary percent-encoding, a workaround (other than using a different client) would be to manually percent-encode the username, i.e., replace the "@" with "%40" ... although this wouldn't work if the app actually did percent-encode an entered "%" despite not encoding an entered "@"!
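
A tiny TypeScript illustration of that workaround (the hostname and credentials are made up):

```typescript
// Percent-encode the userinfo parts before building the URL, so "@" becomes "%40".
const username = "user@example.com"; // hypothetical username containing "@"
const password = "hunter2";          // hypothetical password
const host = "ftp.example.com";      // hypothetical server

const url = `ftp://${encodeURIComponent(username)}:${encodeURIComponent(password)}@${host}/`;
console.log(url); // ftp://user%40example.com:hunter2@ftp.example.com/
```

As noted above, this only helps if the client doesn't then percent-encode the "%" a second time.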


Fascinating. I'll have to test that out!


Back in the day, on my first day in a huge corporation famous for its refined engineering culture, I was issued a laptop that had some installation program running, to create a user for me, enroll on the corp network, etc. It had a bug, which made it run certain actions in an infinite loop.

I pressed Ctrl+C, and got a root shell. From it, I ran the enrollment scripts I could find; it gave me sufficient access to pull from the corporate repo. So I took a ticket, found a related bug in the code, and created a patch fixing it on the first day, technically by cracking into a corporate laptop. (I had to wait for a properly completed enrollment to be able to push the code.)


> … hacked my Sun 3/60

L1-A to bring up the boot monitor. Search memory for `/bin/login` and replace it with `/bin/ed`. Enter any file name as your ‘user’ name, or just junk, and use `!sh`.

(Amazing what you remember from school…)


Glad to see VAX and 68k Sun guys of my vintage still here. The security game was different then. Miss the simple life. :-)


> But this was in the early 90's when everybody was a lot more innocent about security.

There are definitely more recent flaws that might have been susceptible to "cat butt on keyboard", like this one in 2016: https://www.bleepingcomputer.com/news/security/linux-flaw-al...



I remember the first time I encountered a Windows machine you had to log in to before you could use it (1996). At my first job, I had write access to all the system directories on the corporate VAX machine. I could have easily taken down a major corporation through a careless mistake (but fortunately, never did).


macOS is descended from NeXTSTEP, which dates back to the 1980s. There is probably a lot of terrible code in there.

Edit: terrible from today's security perspective, I meant.


Take a look at it yourself if you're curious. The kernel code is open source: https://github.com/apple-oss-distributions/xnu


I remember using Novell NetWare in high school, and if you typed random, very long passwords you could get the machine to crash. I didn't have any idea what was going on at the time.

After high school I saw an exploit for this where, if you typed the right magic characters, you could hijack the execution of the login prompt and get a shell.


Novell was really bad at locking down machines, lol. It was notably susceptible to the Windows XP `at.exe cmd.exe` root shell bypass.

Got in trouble a few times during school years doing that.


Back in the '80s I got into a DEC system by doing roughly the same thing to an application running on VMS. Basically, when it interrogated the terminal you could overflow it and drop into the command line.


Freakazoid moment…


Thanks for reminding me about XDM, a source of seemingly infinite vulnerabilities.


Certain DOCSIS cable modems … had a similar flaw

“With many users now using modified config files to uncap their modems, most cable modem service providers acted to defeat this exploit by turning on the DOCSIS security feature that requires the CMTS to check the authenticity of the modem's config file during the registration process (this is explained in more detail in Chapter 9). As previously mentioned, this checksum is a HMAC-MD5 digest of the entire config file that uniquely identifies its original contents, and it is constructed from the config file using a password chosen by the ISP. This defeats config file exploits because a user cannot create a checksum that would validate a modified config file without knowing the password that was used by the service provider when the original config file was created.

Defeating the Message Integrity Check

The fact that the systems of most ISPs had now been patched to prevent this type of uncapping was a challenge to be overcome. I began by attempting to hack the patch that the ISPs had implemented. My starting point was a phrase that was displayed in the modem's HTTP log page when the method described in the uncapping tutorial failed. The logs would read TFTP file complete - but failed Message Integrity check MIC. I wondered how I could bypass this message integrity check, or MIC.

One morning I awoke to frantic beeps coming from my computer; a member of my group was messaging me. He had the answer. The way to bypass the MIC was not to include the MIC! As simple as that might sound, I had no idea what he was talking about. He then sent me a copy of his config file and had me open it up in a basic hex editor (a program used to examine and modify binary files). The config file normally contained two different checksums at the end of the config file: a standard MD5 checksum of the config, followed by another checksum, the dreaded HMAC-MD5 (also known as the CmMic). He had simply truncated the config file, removing the HMAC-MD5 checksum and the two bytes before it (its header). Remarkably, this allowed any config to be used on any ISP. Once again, every ISP around the world was vulnerable to OneStep.

NOTE: This hack worked because the developers of the firmware used in the ISPs' routers, which process the config files and CMTS checksums sent from the modems, had not thoroughly tested the finished code. The basic config file processing function in the firmware would process operation codes (opcodes) that were present in the config file, including the CmMic opcode, and carry out the associated actions. But it would not check to confirm that the CmMic opcode had actually been sent (or even that the config file had successfully authenticated). This flaw was severe because the ISP operators could not directly fix it in their routers; the only ones who could do so were the third-party vendors who supplied the firmware for the CMTSs. It would be a long time before the individual systems could be patched.”
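
For what it's worth, a minimal TypeScript sketch of the truncation the excerpt describes, assuming the config file is a flat sequence of TLVs (1-byte type, 1-byte length, value) and that MIC_TYPE is the type code of the trailing MIC TLV; both the layout and the type value are assumptions on my part, not checked against the DOCSIS spec:

```typescript
import { readFileSync, writeFileSync } from "node:fs";

const MIC_TYPE = 7; // assumed type code for the trailing HMAC-MD5 MIC TLV (an assumption)

// Walk the TLVs and write out everything up to (but not including) the MIC TLV,
// i.e. drop the checksum value plus the two header bytes in front of it.
function stripTrailingMic(inPath: string, outPath: string): void {
  const data = readFileSync(inPath);
  let offset = 0;
  while (offset + 2 <= data.length) {
    const type = data[offset];
    const length = data[offset + 1];
    if (type === MIC_TYPE) {
      writeFileSync(outPath, data.subarray(0, offset));
      return;
    }
    offset += 2 + length;
  }
  writeFileSync(outPath, data); // no MIC TLV found; copy unchanged
}
```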



> It reminds me of the zero day that Apple tried to cover up somewhat - the "empty password tried twice" root login bypass. This was ca. 2017 or so, maybe 2018.

2017 it seems, submission at the time: "macOS High Sierra: Anyone can login as “root” with empty password" - 3001 points | 1073 comments - https://news.ycombinator.com/item?id=15800676


Aside: >1000 comments in 2017 is significant. Is there a straightforward way to list such highly-engaged HN submissions? Ideally sorted by prominence within a short (1yr is fine) time window, but a simpler sort (no regard for prominence, just absolute engagement, which would strongly tilt the results toward the present, due to platform growth) would also be ok.



Awesome!


That's nothing. You used to be able to grep any user's FileVault password from the page file for many years. It was a simple one-liner and worked 100% of the time.


Damn, that's honestly hilarious.


TBH this is still possible in some scenarios, mostly when someone isn't using data protection and has manually unlocked a local keychain. It's pretty much the same as dumping LSASS memory on Windows when IOMMU isn't used, and in some cases even when it is.


> manually unlocked a local Keychain

Does the Keychain stay unlocked for a while? And do people actually do this?


Yeah so it really depends on the local setup. Here's a wall of text if you're interested:

Say you do software development with a platform engineering and cloud flavour on top: you might be using aws-vault to keep access keys and SSO session keys in a dedicated keychain rather than in plaintext in ~/.aws/. That keychain has an ACL that only allows aws-vault to access it, and a self-lock timeout of a few minutes. This is great, because it is pretty secure (there is nothing to 'steal', even from an unlocked machine) and it's still extremely convenient.

However, say you do this with an external non-TouchID keyboard: when the STS timeout expires and you need to re-authenticate, you also need to unlock the keychain for a few seconds so aws-vault can read out either the SSO session tokens or the static secret for non-SSO usage, and write back the new STS session.

During that window, unlocking the keychain from such a keyboard means manual password entry, which in turn means the password has to be in memory for a bit. Because a legacy keychain doesn't use data protection (though if you create a new one you do have that option), it's essentially just an AES-encrypted file on disk. Because humans aren't likely to remember an AES key, it's derived and wrapped: you have some KDF that uses a user-selected password, which has to be in memory for a bit while the key is derived/unwrapped. The AES key itself has to stay in memory for the entire duration of the keychain's unlocked state, because without it the keychain can't read or write secrets.
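
To make that concrete, here's a purely conceptual TypeScript (Node crypto) sketch; the real keychain file format, KDF parameters, and wrapping scheme are Apple's and not these, the point is just which secrets sit in memory and for how long:

```typescript
import { pbkdf2Sync, createDecipheriv } from "node:crypto";

// Conceptual only: derive a key-encryption key from the user's password, then use
// it to unwrap the keychain's AES key. Formats and parameters here are made up.
function unlockKeychain(
  password: Buffer,
  salt: Buffer,
  wrappedKey: Buffer,
  iv: Buffer,
  authTag: Buffer
): Buffer {
  // The password is in memory while the KEK is derived...
  const kek = pbkdf2Sync(password, salt, 100_000, 32, "sha256");
  password.fill(0); // ...and can be wiped right after.

  // Unwrap the keychain's AES key (illustrated here with AES-256-GCM).
  const decipher = createDecipheriv("aes-256-gcm", kek, iv);
  decipher.setAuthTag(authTag);
  const aesKey = Buffer.concat([decipher.update(wrappedKey), decipher.final()]);
  kek.fill(0);

  // This key has to stay in memory for the entire time the keychain is unlocked.
  return aesKey;
}
```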

Technically, the same happens to encrypted disk images (the AES ones at least, other types I'm not 100% sure). The DEK has to stay in memory while it is in use. It's why Apple started using systems like cryptexes and SSVs so the container disk is almost irrelevant from an integrity point of view. Before that, encrypted disks were an all-or-nothing approach.


Oh fascinating. And it seems I had my terms confused. I didn't know the items themselves were called keychains.


Yeah, it's a bit overloaded. There are keychains (.keychain files) and keychain items (the secrets inside of them). The keychains are visible in the Keychain Access app, but also available via the 'security' command line tool. And then there are modern keychains; those are more like SQLite databases and can have anything from SQLCrypt-style management to Secure Enclave DEKs.

Security is pretty difficult to get right, so many tradeoffs as well.


It seems I was not confused then? The secrets are the "keychain items". I got really mixed up there.


So the hierarchy looks a bit like this:

Top level = Keychain Access.app or the security CLI tool

Mid level = keychains (in flavours of files, core storage, data protection-enabled, and iCloud)

Item level = an entry inside a keychain

There is a sub-level as well: some software stores encoded data as a single item, so when it's decrypted it's a bunch of different data rather than a single secret, but technically the keychain system isn't aware of that anyway.
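
If it helps, the mid and item levels are easy to poke at from the `security` CLI mentioned above; a quick sketch (the service name is hypothetical, and the item lookup will prompt or fail if the keychain is locked):

```typescript
import { execSync } from "node:child_process";

const sh = (cmd: string) => execSync(cmd).toString().trim();

// Mid level: the keychain files/databases the system currently knows about.
console.log(sh("security list-keychains"));

// Item level: a single secret inside one of those keychains, looked up by service name.
// -s selects the service, -w prints only the password.
console.log(sh('security find-generic-password -s "my-example-service" -w'));
```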


Yeah I got confused there for a second based on your original message, but I'm glad to know my understanding initially was correct.


I mean, you should really just assume that physical access = owned.


> grep any user's FileVault password from the page file

I'm not sure if this necessitates physical access.


A FileVault password is useless unless you have root/physical access. If the volume is mounted by the system, then filesystem ACLs will still be in effect. Otherwise you need root to read raw block devices.


> Interesting to see the port system mentioned - it's not a well known fact of Mach kernels.

I'm surprised to hear someone say that, given that it's *the* fact about Mach to me. I'm not sure how people could know about Mach but not its defining system.


Maybe OP is referring to the fact that there is a Mach kernel underneath. Most developers I guess would like to write CLI apps that are portable to Linux and macOS, so they only use the POSIX side of the API. These don't need to know about ports. And if developers are writing Mac-specific software, Core Foundation is probably the lowest practical level they will go with.


Yeah, if you're writing user-facing apps, writing against Mach wouldn't be smart. But if you're a vulnerability researcher, on the other hand... it's a neat trick and might just help you find a CVE! (even though most stuff has moved to XPC now and you could just use the public XPC APIs there)


Yes, that's what I meant. I'm making a kernel that uses ports not dissimilar to Mach's, and from time to time I get asked if there are other modern kernels that do the same thing. Everyone I've talked to has been surprised to hear it has them.


Oh nice! Do you have a GitHub or something for the kernel?


https://GitHub.com/oro-os/kernel :)

Still in early stages of development, mind you.


I also submitted a report for CVE-2011-3226 for a password-less login, referenced here: https://support.apple.com/en-us/103345


Don't forget the time that Apple would show your password in plaintext instead of your password hint: https://news.ycombinator.com/item?id=15410953

“If a hint was set in Disk Utility when creating an APFS encrypted volume, the password was stored as the hint. This was addressed by clearing hint storage if the hint was the password, and by improving the logic for storing hints.”

