
The point is that secure software is easier to write, not that security vulnerabilities become impossible.

Your specific example is a good one: interacting with Postgres is one of those things I said people choose despite it being riddled with security issues due to its age and choice of implementation language.

Postgres is written in C and uses a complicated and bespoke network protocol. This is the root cause of that vulnerability.

If Postgres were a modern RDBMS platform, it would use something like gRPC, and there wouldn't be any need to hand-craft the code that performs binary encoding of its packet format.
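
To make the contrast concrete, here is a rough, hypothetical sketch (in C#, not the actual Postgres or Npgsql source) of what hand-rolled, length-prefixed wire encoding tends to look like. With gRPC/protobuf, this kind of code is generated from a declarative schema instead of being written and maintained by hand:

    using System;
    using System.Buffers.Binary;
    using System.Text;

    // Hypothetical sketch of hand-rolled, length-prefixed wire encoding: the kind
    // of code a schema-generated (gRPC/protobuf) serializer writes for you.
    // Simplified for illustration; not the actual Postgres or Npgsql source.
    class HandRolledCodec
    {
        public static byte[] EncodeQuery(string sql)
        {
            byte[] payload = Encoding.UTF8.GetBytes(sql);

            // Length prefix computed by hand: 4-byte length field + payload + terminator.
            int length = 4 + payload.Length + 1;

            byte[] message = new byte[1 + length];
            message[0] = (byte)'Q';                                      // message type tag
            BinaryPrimitives.WriteInt32BigEndian(message.AsSpan(1, 4), length);
            payload.CopyTo(message, 5);
            message[^1] = 0;                                             // string terminator
            return message;
        }

        static void Main()
        {
            Console.WriteLine(BitConverter.ToString(EncodeQuery("SELECT 1")));
        }
    }

Every manually computed length and offset in code like this is a spot where a small mistake becomes a protocol bug.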

The security issue stems from a choice to use a legacy protocol, which in turn stems from the use of an old system written in C.

Collectively, we need to start saying "no" to this legacy.

Meanwhile, I just saw a video clip of an auditorium full of Linux kernel developers berating the one guy trying to fix their security issues by switching to Rust, telling him that Rust will be a second-class citizen for the foreseeable future.




> Your specific example is a good one: interacting with Postgres is one of those things I said people choose despite it being riddled with security issues due to its age and choice of implementation language.

Ah, there is the issue: protocol-level bugs are language-independent; even memory-safe languages have them. One example in the .NET sphere is F*, which is used to verify programs. I recommend you look at what the concepts of protocol safety actually involve.

> The security issue stems from a choice to use a legacy protocol, which in turn stems from the use of an old system written in C.

This particular defect occurs in the C# portion of the stack, not in Postgres. It could have occurred in Rust if similar programming practices had been used.
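
To illustrate the defect class (a hypothetical sketch, not the actual client code): C# integer arithmetic is unchecked by default, so a length computed from attacker-influenced sizes can silently wrap, and the length prefix written to the wire stops matching the bytes that follow it, memory safety notwithstanding:

    using System;

    // Hypothetical illustration of the defect class: unchecked length arithmetic
    // in a memory-safe language. Not the actual client code.
    class LengthOverflow
    {
        static int MessageLength(int parameterCount, int parameterSize)
        {
            // Unchecked by default in C#: with large enough inputs this silently
            // wraps to a small (or negative) value, so the length prefix written
            // to the wire no longer matches the bytes that follow it.
            return 4 + parameterCount * parameterSize;
        }

        static void Main()
        {
            Console.WriteLine(MessageLength(2, 10));            // 24, as expected
            Console.WriteLine(MessageLength(0x10000, 0x10000)); // wraps around to 4
        }
    }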

> If Postgres were a modern RDBMS platform, it would use something like gRPC, and there wouldn't be any need to hand-craft the code that performs binary encoding of its packet format.

There is no guarantee a gRPC client implementation would be defect-free, either.

This is a much harder problem than I think you think it is. Without resorting to a very different programming paradigm (which, frankly, I don't think you have exposure to, based on your comments), I'm not sure it can be accomplished without rendering most commercial software non-viable.

> Meanwhile, I just saw a video clip of an auditorium full of Linux kernel developers berating the one guy trying to fix their security issues by switching to Rust, telling him that Rust will be a second-class citizen for the foreseeable future.

Yeah, I mean, start your own OS in Rust from scratch. There is a very real issue that RIIR ("rewrite it in Rust") isn't always an improvement. If it's a "must have right now" fix, rewriting a Linux implementation from scratch in Rust is probably the better route.


The counter to any such argument is the lived experience of anyone who developed Internet-facing apps in the 90s.

Both PHP and ASP were riddled with security landmines. Developers had to be eternally vigilant, constantly making sure they were manually escaping HTML and JS safely. This was long before automatic and robust escaping such as that provided by IHtmlString or modern JSON serializers.

Speaking of serialisation: I wrote several serialisers by hand, because I had to. Believe me, XML was a welcome breath of fresh air, because I no longer had to figure out security-critical quoting and escaping rules by trial and error.

I started in an era where there were export-grade cipher suites known to be compromised by the NSA and likely others.

I worked with SAML 1.0, which is one of the worst security protocols invented by man, outdone only by SAML 2.0. I was, again, forced to implement both manually, because "those were the times".

We are now spoiled for choice and choose poorly despite that.


> protocol-level bugs are language-independent; even memory-safe languages have them. [...] This particular defect occurs in the C# portion of the stack, not in Postgres. It could have occurred in Rust if similar programming practices had been used.

But it couldn't have occurred in Python, for example, and Swift also (says Wikipedia) doesn't allow integer overflow by default. So it's possible for languages to solve this safety problem as well, and some languages are safer than others by default.

C# apparently has a "checked" keyword [0] to enable overflow checking, which presumably would have prevented this as well. Java uses unsafe addition by default but, since version 8, has the "addExact" static method [1] which makes it inconvenient but at least possible to write safe code.

[0] https://learn.microsoft.com/en-us/dotnet/csharp/language-ref...

[1] https://docs.oracle.com/en/java/javase/19/docs/api/java.base...
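
For reference, a minimal sketch of what the C# "checked" keyword from [0] does: the same operation that silently wraps by default throws an OverflowException inside a checked context.

    using System;

    class CheckedDemo
    {
        static void Main()
        {
            int big = int.MaxValue;

            // Default (unchecked) arithmetic wraps silently.
            Console.WriteLine(big + 1);              // -2147483648

            // Inside a checked expression (or with <CheckForOverflowUnderflow>
            // enabled project-wide), the same operation throws instead.
            try
            {
                Console.WriteLine(checked(big + 1));
            }
            catch (OverflowException)
            {
                Console.WriteLine("overflow detected");
            }
        }
    }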


> C# apparently has a "checked" keyword [0] to enable overflow checking, which presumably would have prevented this as well. Java uses unsafe addition by default but, since version 8, has the "addExact" static method [1] which makes it inconvenient but at least possible to write safe code.

This is the point I'm making: verifying a program is separate from writing it. Constantly going back and saying "but if we just did X" is a distraction. Secure software is verified software, not bugfixed software.

And that's the point a lot of people don't like to acknowledge about software. It's not enough to remediate defects: you must prevent them in the first place. And that requires program verification, which is a dramatically different problem from the one OC thinks they're solving.



