vitaminka's comments | Hacker News

these features will eventually trickle down into the mainstream, kind of like C11 is doing at the moment

also, unless you're targeting embedded or a very wide set of architectures, there's no reason why you couldn't start using C23 today
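to illustrate, here's a small sketch of my own (not from any real project) using a few of the C23 additions that recent GCC and Clang already accept with -std=c23: nullptr, constexpr objects, typeof, binary literals, digit separators, and standard attributes

    /* sketch: a few C23 features; builds with recent GCC/Clang and -std=c23 */
    #include <stdio.h>

    constexpr int buf_size = 4'096;                       /* constexpr objects, digit separators */

    [[nodiscard]] static int first_set_bit(unsigned v)    /* standard [[attributes]] */
    {
        typeof(v) mask = 0b1;                             /* typeof and binary literals */
        for (int i = 0; i < 32; i++, mask <<= 1)
            if (v & mask)
                return i;
        return -1;                                        /* no bit set */
    }

    int main(void)
    {
        const char *s = nullptr;                          /* nullptr constant */
        if (s == nullptr)
            printf("buf_size=%d, first set bit of 8 is %d\n", buf_size, first_set_bit(8));
        return 0;
    }

a C99- or C11-only toolchain would of course reject pretty much every line of this, which is exactly the portability trade-off the replies below are about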


Or in other words, for embedded and existing code: most use C99, some use C11, and nobody will use C23 for at least another 10 years.


This depends on the platform. Many embedded systems are based on Arm these days and have modern toolchains available.

I cannot remember the last time I saw C99 used. C codebases generally use C11 or C17, and C++ codebases use C++20.


Unless you can vouch for the C++ compiler, the best that portable C++ code can target today is C++17.

Also, 8- and 16-bit embedded toolchains are certainly not on C11/C17; they can hardly afford full C89.


SDCC is a niche C compiler for 8-bit CPUs and is more up to date than MSVC ;P

https://sdcc.sourceforge.net/

That's the nice thing with C: it's much easier for small teams to fully support than the latest C++ standards.


Now try to use it on the embedded deployments that require certification.


Most devices that are 6+ years old (as far as I can tell) use C99. If not C89. And/or C++17, if that.

That's A LOT of devices out there, a lot of which still get maintenance and even feature updates (I'm working on one right now, in C99).

So the claim that "C codebases generally use C11 or C17, and C++ codebases use C++20" intuitively sounds totally untrue to someone working in embedded C/C++. I've been doing this for 15+ years and I've never touched anything higher than C99 or C++17.

If you're talking about gaming, sure. But that's not "C codebases generally".


most non-embedded and non-legacy codebases could use C23; that's not an insignificant set


I would argue that is an insignificant set.

Unless you think that codebases created in the past year are a significant part of all the codebases that have been created since the inception of humanity.


is rust cargo basically like npm at this point? like how on earth does sixteen dependencies mean no dependencies lol


Indeed. It's the one cultural aspect of Rust I find exhausting. Huge fan of the language and the community in general, but a few widespread attitudes do drive me nuts:

* That adding dependencies is something you should take very lightly

* That everybody uses or should use crates.io for dependencies

* That it's OK to just ask users to use the latest release of something at all times

* That vendoring code is always a good thing when it adds even the slightest convenience

* That one should ship generated code (prominent in e.g. crates that use FFI bindings)

* The idea that as long as software doesn't depend on something non-Rust, it doesn't have dependencies

Luckily the language, the standard library and the community in general are of excellent quality.


Yes, basically. If you're a dependency maximalist (never write any code that could be replaced by a dependency), you can easily end up with a thousand dependencies. I don't like things being that way, but others do.

It's worth noting that Rust's std library is really small, and you therefore need more dependencies in Rust than in some other languages like Python. There are some "blessed" crates though, like the ones maintained by the rust-lang team themselves (https://crates.io/teams/github:rust-lang:libs and https://crates.io/teams/github:rust-lang-nursery:libs). Also, when you add a dependency like Tokio, Axum, or Polars, these are often ecosystems of crates rather than singular crates.

Tl;dr: Good package managers end up encouraging micro-dependencies and dependency bloat because these things are now painless. Cargo is one of these good package managers.


How about designing a "proper" standard library for Rust (comparable to Java's or Common Lisp's), to provide a richer out-of-the-box experience, avoid dependency explosions, and ensure things are written in a uniform interface style? Is that something the Rust folks are considering or actively working on?

EDIT: nobody is helped by 46 regex libraries, none of which implements Unicode fully, for example (not an example taken from the Rust community).


The particular mode of distribution of code as a traditional standard library has downsides:

- it's inevitably going to accumulate mistakes/obsolete/deprecated stuff over time, because there can be only one version of it, and it needs to be backwards compatible.

- it makes porting the language to new platforms harder, since there's more stuff promised to work as standard.

- to reduce the risk of the above problems, a stdlib usually sticks to basic, lowest-common-denominator APIs, lagging behind the state of the art and creating a dilemma between the standard impl and better third-party impls (and large programs end up with both)

- with a one-size-fits-all it's easy to add bloat from unnecessary features. Not all programs want to embed megabytes of Unicode metadata for a regex.

The goal of having common trustworthy code can be achieved in many other ways, such as having (de-facto) standard individual dependencies to choose from. Packages that aren't built-in can be versioned independently, and included only when necessary.


Just use the rust-lang org's regex crate. It's fascinating that you managed to pick one of like 3 high-level use-cases that are covered by official rust-lang crates.


> like how on earth does sixteen dependencies mean no dependencies lol

You're counting optional dependencies used in the binaries, which isn't fair (obviously the GUI app or the backend of the web UI are going to have dependencies!). But yes, 3 dependencies isn't literally zero dependencies.


trying to build this rn, and the download scripts have already pulled like a gigabyte of dependencies wth

edit: it's already like 2 gb


You mean the textures and meshes it downloads?


ye, it's approaching 50% of LOD of a vulkan hello triangle


does this mean you can run, say, modern OpenCL code on smth like a VisionFive2?


Oomph-wise, they claim a good 4x over what the Raspberry Pi 4/400 have. Note that the rPi's GPU is known to be anemic due to serious memory bandwidth bottlenecks, barely able to keep up with filling a 1080p screen.

Currently they do not support the specific variant used in the JH7110, but the two seem to be variants of the same architecture.

The JH7110 has the BXE-4-32-MC1[0], whereas the driver currently supports the BXS-4-64-MC1[1].

No idea about compute, and AIUI OpenCL support in mesa3d is still a disaster for all drivers.

0. https://www.imaginationtech.com/product/img-bxe-4-32-mc1/

1. https://www.imaginationtech.com/product/img-bxs-4-64-mc1/


I'm curious how you can tell that it was BXS-4-64-MC1 that was supported. I looked and couldn't find that.

Besides the JH7110 there's the TH1520, which uses the BXM-4-64-MC1, also not supported, but 2x the speed of the BXE-4-64-MC1 if I understand it correctly.


really no better than a US tech firm, unless you for some reason have a preference for the nationality of your hardware backdoors lol


> let us draw to the screen in CUDA without the need for OpenGL/Vulkan interop

how would that work? like GPU frameworks would just be compute (like CUDA), and some small component would just allow writing the end result to a buffer which would then be displayed or smth?


Exactly. Things like that already work with a workaround: you can use CUDA-OpenGL interop to expose an OpenGL framebuffer in CUDA, then simply write into that framebuffer from your CUDA kernel, and afterwards go back to OpenGL to display it on screen. Just directly integrate that functionality into CUDA by providing a CUDA-native framebuffer and a present(buffer) or buffer-swap function.
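To sketch that workaround (my own illustration, not from any particular codebase; shade, register_pbo and render_frame are made-up names, and it assumes an OpenGL pixel buffer object has already been created):

    #include <cuda_gl_interop.h>    // CUDA runtime <-> OpenGL interop API

    __global__ void shade(uchar4 *fb, int w, int h)
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x < w && y < h)
            fb[y * w + x] = make_uchar4(x % 256, y % 256, 128, 255);   // any per-pixel compute
    }

    static cudaGraphicsResource_t res;

    // one-time setup: hand the existing GL pixel buffer object to CUDA
    void register_pbo(unsigned int pbo)
    {
        cudaGraphicsGLRegisterBuffer(&res, pbo, cudaGraphicsRegisterFlagsWriteDiscard);
    }

    // per frame: map the PBO, fill it from the kernel, unmap, then let GL put it on screen
    void render_frame(int w, int h)
    {
        cudaGraphicsMapResources(1, &res, 0);
        uchar4 *fb;
        size_t bytes;
        cudaGraphicsResourceGetMappedPointer((void **)&fb, &bytes, res);
        shade<<<dim3((w + 15) / 16, (h + 15) / 16), dim3(16, 16)>>>(fb, w, h);
        cudaGraphicsUnmapResources(1, &res, 0);
        // back on the GL side: upload the PBO into a texture (glTexSubImage2D) and draw it
    }

A CUDA-native framebuffer with a present(buffer) call, as suggested above, would collapse the register/map/unmap dance into a single step.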


i’m curious, what’s the approach for keeping the various gpu backends maintainable and decoupled?


It was designed in #915 (read just the OP and the linked PRs at the end) and the implementation pretty much follows it closely, at least for the Metal backend. The CUDA and OpenCL backends are currently slightly coupled in ggml as they started developing before #915, but I think we'll resolve this eventually.

#915 - https://github.com/ggerganov/llama.cpp/discussions/915


interesting decoupling method, ty :)

