
C-nile developers have been making incorrect arguments for a while now. The reality, which almost everyone can see, is that memory-safe languages are pretty much always what you want to be using for new code. OS and security-sensitive components are the prime targets for rewrites in more secure languages.

Now Google has put this to the test and has the data to prove it. We should not allow the world's technology security to be held hostage by a group of people too lazy to adapt to the times.




> The reality, which almost everyone can see, is that memory-safe languages are pretty much always what you want to be using for new code.

Nitpick: this is not quite true. Memory-safe languages are what you should be using in contexts where security and reliability are critical. That is generally the case, but there are some contexts where other concerns are genuinely more important. Of course, how often this is actually necessary is frequently misrepresented, but these cases do exist.


writing exploits, for example, is an utter pain in today's safe languages :) C is unmatched in the sheer ease of manipulating raw bytes with some light structuring on top for convenience. I've tried writing exploits in Swift a handful of times but I always gave up after I found myself buried under a pile of UnsafeMutableRawBufferPointers.
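To make that concrete, here's roughly the kind of "light structuring" I mean, as a purely illustrative C sketch (the object layout and the placeholder address are made up):

    #include <stdint.h>
    #include <string.h>

    /* Made-up target object layout, just to show how little ceremony
     * C needs to shape raw bytes. */
    struct fake_obj {
        uint64_t vtable;              /* pointer the target will dereference */
        uint32_t refcount;
        uint32_t flags;
        uint8_t  inline_buf[32];
    };

    static void build_payload(uint8_t *out, size_t len)
    {
        struct fake_obj obj = {
            .vtable   = 0x4141414141414141ULL,   /* placeholder address */
            .refcount = 1,
        };
        memset(obj.inline_buf, 'A', sizeof obj.inline_buf);

        /* The struct *is* the byte layout; copy it straight into the payload. */
        if (len >= sizeof obj)
            memcpy(out, &obj, sizeof obj);
    }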


You mean, manipulating strings of bytes? Bytes don't have to be memory, you can just use bytestrings in Python or whatever.

Raw memory access is something you normally need to create vulnerabilities, not to exploit them :)


har har :)

It's an ergonomics thing, not a "can't" issue. There is a reason I called out Swift in particular—their "unsafe" APIs are so horrid to use that they make you regret doing unsafe things in the first place. Plus, throw in FFI and now you've got an even worse problem because not only are you forced to use the unsafe types, but often a lot of the critical APIs you need to interface with (in *OS exploitation, mach is the worst offender) have such funky types due to their generic nature that you have to go through half a dozen different conversions to get access to the underlying data.


I still don't understand why you would prefer raw memory manipulation to bytestring manipulation. If you want, just make a Swift library that models memory the way you want without unsafe raw access (just a few methods over a byte array). Back when I did CTFs, I used Python for writing binary exploits, never C.

https://github.com/hellman/libformatstr

You can do something like this, no need to work with raw memory.


Swift has the advantage that it can directly “speak” C, without any indirection needed. (C can already do this of course.) In particular operations like scanning an address space for things, easily expressing the layout of something, and so on are much easier to do in these languages. In theory you could do the same in Python but it’s often not worth the effort.


Meh, bad argument. How about you write better code in Rust to replace existing stuff? There is so much more to consider, and I think one of Rust's most repellent features is the preaching about its advantages. It definitely has some that are hard to deny. Still... how about just doing it? Nobody is held hostage here.


> How about you write better code in Rust to replace existing stuff?

No matter what people do, it's never enough for the critics. Many people criticize Rust fans for "rewriting it in Rust"!

> how about just doing it

People absolutely are. That's what the article is about, even.


> The reality, which almost everyone can see, is that memory-safe languages are pretty much always what you want to be using for new code.

Not everybody is writing security-critical code. For some things productivity and time-to-market are more important, and security is not enough of a concern to justify dealing with a language with horrible compile times and a self-righteous, dogmatic community.


Where have time to market and productivity ever been factors for C and the C ecosystem? My most "productive" languages have been declarative languages like Haskell (depending on how you measure that). I don't care if it takes an extra few minutes to compile either; that cost is amortized away in the long term when you don't have entire additional classes of bugs to deal with. Also, why is security never an important concern? There are a lot of security issues that are self-inflicted if you use C. Using something like Rust means a lot of issues simply don't exist. The optimal solution.


I spent a considerable amount of time learning Haskell but I always felt like a slave to the language. Oh, you want to do this other simple thing? Try language extension XYZ, but you'll have to learn some more language theory first. Also sorry but the extension isn't compatible with the extensions you're already using, and you need to require a new dependency on the outside interface.

There are some things the language is good at, but at others (a lot, I think) it isn't. At the very least, I wouldn't recommend it for writing a video decoder.


> sorry but the extension isn't compatible with the extensions you're already using

In nearly 10 years of professional Haskell I've never come across a practical situation where extensions were mutually incompatible. Can you name any extensions that are incompatible in a way that actually matters in practice?


No, I haven't followed the language in 6 years. I remember there was at least one instance but can't recall which. Other extensions significantly change the semantics of language module interfaces in more obtrusive ways than I'd like - compared to, say, C, where it's pretty easy to offer a simple interface.

Some of these things would make working in the language pretty painful. I remember trying some library to toy around with a basic 400 line OpenGL program. It always needed 8 seconds to rebuild at the time. I don't recall and probably didn't understand why, but I suppose it has to do with some extra type or template hackery in the library that would just overcomplicate everything, probably even at the outset.

What remains is the feeling of: I can't do this thing yet in the type system, so take that extension. Oh wow. Now I can't do this other thing that I also need, and the only fix is another extension (if it's available yet). I couldn't shake this feeling of constantly having to hack around the language.

In my experience you need to have a very good overview and understanding of all the available tricks and extensions to be able to navigate your way around the language and not paint yourself into a corner. Maybe I have the wrong personality, the wrong motivations, or am just not smart enough. Obviously I'm not you, and am not Edward Kmett (who would resort to lots of GHC specific hacks and drop down to C++ as well).


> I remember trying some library to toy around with a basic 400 line OpenGL program. It always needed 8 seconds to rebuild at the time.

This is a fair criticism. Compile times are slow.

Extension confusion is not a fair criticism. Extensions typically remove restrictions. They don't create incompatible languages.

Granted, it's a bit annoying to have to turn them all on, one by one. These days one should just enable GHC2021 and forget about language extensions. That saves one from having to be Edward Kmett, or from bothering to think about language extensions at all.


Which memory-unsafe language is more productive and has faster time-to-market than any managed language?


I'm pretty confident that systems programming (you know - moving data around) is easier with raw memory access compared to managed languages.
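A trivial sketch of what I mean, in C (the UART base address and register offsets here are made up for illustration):

    #include <stdint.h>

    /* Memory-mapped I/O: the device's registers are just addresses,
     * so a volatile pointer is the whole "API". Addresses are fictional. */
    #define UART_BASE  0x10000000UL
    #define UART_TX    (*(volatile uint8_t *)(UART_BASE + 0x00))
    #define UART_LSR   (*(volatile uint8_t *)(UART_BASE + 0x05))
    #define LSR_THRE   0x20   /* "transmit holding register empty" bit */

    static void uart_putc(char c)
    {
        while (!(UART_LSR & LSR_THRE))   /* spin until the TX FIFO has room */
            ;
        UART_TX = (uint8_t)c;
    }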


Is this a joke? Systems programming is a lot about accessing APIs, dealing with all sorts of intricacies like interrupts, different execution contexts, and managing memory as you said.

If you write in C and your program is complex enough, you will spend a lot of time just chasing segfaults and concurrency bugs while getting it to work in the first place. If your systems programming is in userspace, that's sort of fine. But if you're in the kernel or on bare metal, the cost of debugging goes up by an order of magnitude. There are no debuggers on bare metal, and nobody can tell you how your program crashed—the device just stops responding, that's it.

That's why safe languages make even more sense in these restricted environments. If you hit half as many memory safety bugs while getting your program to work, that can cut your development time by something like 5-6x.


> Systems programming is a lot about accessing APIs, dealing with all sorts of intricacies like interrupts, different execution contexts

So now you need to make your interrupts talk to your Java objects? Is this any safer?

Is it easier to get a VM running in your kernel (probably no mean feat in the first place)? And will you then never get any concurrency bugs? And if you reduce memory bugs by half, will the remaining ones be easier to debug?

And you won't be annoyed that you can't guarantee being able to link objects into queues (because of allocation failure) and access them with generic code to copy data, link/unlink them, and so on? You're fine paying for callbacks and interfaces everywhere, both in runtime performance and in maintenance headaches?
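To spell out what I mean by linking objects into queues with generic code and no allocations, here's a rough C sketch of the intrusive-list idiom (names are made up; it's roughly what the Linux kernel's list_head does):

    #include <stddef.h>

    /* Intrusive list node: embedded in the object itself, so linking an
     * object into a queue never allocates and therefore never fails. */
    struct list_node {
        struct list_node *prev, *next;
    };

    static void list_init(struct list_node *head)
    {
        head->prev = head->next = head;
    }

    static void list_add_tail(struct list_node *head, struct list_node *n)
    {
        n->prev = head->prev;
        n->next = head;
        head->prev->next = n;
        head->prev = n;
    }

    static void list_del(struct list_node *n)
    {
        n->prev->next = n->next;
        n->next->prev = n->prev;
        n->prev = n->next = n;
    }

    /* Recover the containing object from its embedded node. */
    #define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

    /* Example object: the link is just another field. */
    struct request {
        int              id;
        struct list_node link;   /* lives in the object; no separate allocation */
    };

    static void enqueue(struct list_node *queue, struct request *r)
    {
        list_add_tail(queue, &r->link);   /* cannot fail: nothing is allocated */
    }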

I'm asking incredulously, but seriously. Because frankly I've never looked at a project like MirageOS or whatever. But given real world evidence of what has survived, I don't see why you should assume I'm joking.

That you can't debug a "bare metal" kernel isn't quite true, either. But sure, the more complex a system becomes, the more contemplation it requires to figure out problems. This is universally true, but you can't simply discuss complexity away. And adding complex object models on top without consideration doesn't make your task easier just like that.


An OS will always need some tiny assembly part, as some instructions needed for the kernel simply never get generated by compilers. Also, an OS itself is pretty much a garbage collector for resources; it could very well reuse/link into its own GC for better performance.
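To illustrate the first point, here's the kind of instruction a compiler will never emit from ordinary code (GCC-style inline assembly, x86, purely illustrative):

    /* Privileged x86 instructions that no compiler generates from normal C:
     * masking/unmasking interrupts and halting until the next one arrives. */
    static inline void irq_disable(void) { __asm__ volatile ("cli" ::: "memory"); }
    static inline void irq_enable(void)  { __asm__ volatile ("sti" ::: "memory"); }
    static inline void cpu_halt(void)    { __asm__ volatile ("hlt"); }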

Are we really talking about the “price” of managed code, when C code is chock-full of linked list data structures? A list of boxed objects is more cache-friendly than that. It is simply not always that performance sensitive to begin with (e.g. does it really matter if it queries the available display connectors in 0.0001s or 10x that?)

Regarding MirageOS, they actually achieved better performance with a managed language than contemporary C OSes for select tasks. This is possible because context switches are very expensive, and a managed env can avoid (some of?) those.


Those linked lists give some nice guarantees that significantly simplify a lot of code, guarantees that you just can't have otherwise.

Context switching can be expensive, but I don't see what's specific to managed envs there. Fundamentally you have to trust the code, and you need hardware support to enforce the authenticity of the trusted code in order to avoid context switching. A different, promising development to reduce context switches is CPUs growing more and more cores, and more and more kernel resources being available through io_uring and similar async interfaces.
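On the io_uring point, a minimal liburing sketch of the idea: requests go into a submission queue, a single syscall can push many of them, and completions come back through shared memory rather than through one blocking call each (error handling omitted for brevity):

    #include <liburing.h>
    #include <fcntl.h>
    #include <stdio.h>

    int main(void)
    {
        struct io_uring ring;
        struct io_uring_sqe *sqe;
        struct io_uring_cqe *cqe;
        char buf[4096];

        io_uring_queue_init(8, &ring, 0);

        int fd = open("/etc/hostname", O_RDONLY);
        sqe = io_uring_get_sqe(&ring);
        io_uring_prep_read(sqe, fd, buf, sizeof buf, 0);
        io_uring_submit(&ring);          /* one syscall can carry many queued SQEs */

        io_uring_wait_cqe(&ring, &cqe);  /* completion arrives via the shared ring */
        printf("read %d bytes\n", cqe->res);
        io_uring_cqe_seen(&ring, cqe);

        io_uring_queue_exit(&ring);
        return 0;
    }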


I have a really hard time following anything you say here.

How did we start talking about Java and VMs? Is this some sort of strawman argument?

> But sure, the more complex a system becomes, the more contemplation it requires to figure out problems

Strawman again? I wasn't talking about complexity. I said that the more "systems" your programming is, the higher is the cost of memory safety bugs.


> How did we start talking about Java and VMs? Is this some sort of strawman argument?

Is it a strawman if my comment was in response to "managed languages"?

> Strawman again? I wasn't talking about complexity. I said that the more "systems" your programming is, the higher is the cost of memory safety bugs.

For clarity, you spoke about the cost of debugging memory bugs. And I said it's universally true that programs are harder to debug the more "systems" they get. The reason is that it typically isn't sufficient to simply trace an individual thread anymore. "Logical tasks" are served over a number of event handlers executed in various (OS) threads.

It's not first and foremost a refutation of what you said. But an observation that I even placed in opposition to my other statement that it's not quite true that you can't use debuggers with kernels. FWIW and so on. I don't get why you are calling "strawman" repeatedly, and I don't get the aggressive tone of your comments.


We probably need to bring in some kind of criminal liability for the companies that only cared about time to market and put their users at risk.

After memory safe languages become a bit more battle tested, C/++ needs to be regulated like asbestos.


Not everybody's building a product that deals with sensitive user data. By your logic all code not written in proof languages should be illegal.


Anyone who writes a library that might get used in a context where it is presented with inputs derived from potentially malicious data, is writing security-critical code whether they acknowledge it or not.


Only because, unfortunately, liability is still not enforced by law as it should be.


> liability is still not enforced by law as it should be

I think whether there "should" be a law making you liable could depend on the details of the exploit.

If you get exploited via rowhammer, I don't think anyone would blame you. It would be unreasonable if every small business running a website could be sued if they didn't defend against electromagnetic interference within the RAM.

However, if you're Apple and, say, you could get pwned because someone clicked a button to register version 9000 on the public npm/pypi registry (https://medium.com/@alex.birsan/dependency-confusion-4a5d60f...), then maybe I agree there's an argument for some accountability there :)


Yes, it definitely should.

Computing is the only industry where people accept living with tainted goods instead of forcing whoever sold them to pay them back, cover the damage, or whatever.

We already have high-integrity computing, digital stores with returns, consulting with warranty clauses, and some countries are finally waking up to the idea that computing shouldn't be a special snowflake.

https://www.twobirds.com/en/insights/2021/germany/the-german...


Just pointing out that all software is exploitable. And punishing the application developer might not be right if the vulnerability is caused by a lower-level dependency. For example, log4j.

I agree that if there's a high social cost to a breach, then the government should punish those involved. Also, the security of your software depends on your threat model and on which threats are in scope and worth investing in protecting against. The tradeoff is ease of development and velocity. So maybe such laws will incentivize this process differently, and maybe it's a worthwhile change.

I look at computing as a big experiment. Personally, I am very careful to use trustworthy services and don't depend on software for anything critical (besides banking, but luckily FDIC). Most people don't take the same precautions and rely on it very heavily. It's obviously critical infrastructure at this point. Maybe it's time to stop thinking of it as an experiment, and maybe these laws make sense.

I don't like the concept for emotional reasons; to me it's sad and signals another step towards the end of the golden age of the internet.



> self-righteous, dogmatic community.

Worst Rust feature by far.

I understand how Rust solves some problems and these are indeed very important ones. But it still is a constraint that has to prove itself.

C is horrible, out of the question. Really dated too without thousands of band aids. C++ has millions of those. But why not start re-implementing stuff in Rust if that is so close to your heart?

We can also reimplement everything in JavaScript. It is memory safe too. Wait, where is the enthusiasm now?


> We can also reimplement everything in JavaScript

Look at JS package stats - that's exactly what's happening. Many of the apps and packages created in JS today would have been written in C/C++ a few years ago. People who learn programming today don't know what C/C++ is. If they need something low-level, it's Rust.


> For some things productivity and time-to-market are more important

Who would choose C/C++ and not something like Go in that situation?



