Ar-Curunir's comments

While git itself can be improved upon, GitHub is not git; there are many improvements to GitHub that people have been requesting for years now. Also, they could just not make it worse, and that would be a welcome change from their recent strategy.

> suppose you want Zig's "try" functionality with arbitrary payloads. Both functions need a compatible error type (a notable source of minor refactors bubbling into whole-project changes), or else you can accept a little more boilerplate and box everything with a library like `anyhow`. That's _fine_, but does it help you solve real problems? Opinions vary, but I think it mostly makes your life harder.

This is not true; you simply need to add a single new variant to the caller's error type, and either a From impl or a manual conversion at the call site.
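
A minimal sketch of what that looks like (all names here are made up for illustration): the caller's enum grows one variant, and a From impl lets `?` do the conversion automatically.

    // Callee's error type.
    #[derive(Debug)]
    struct ParseError(String);

    // Caller's error type: one new variant for the callee's error.
    #[derive(Debug)]
    enum AppError {
        Io(std::io::Error),
        Parse(ParseError), // <- the single new variant
    }

    // The From impl that `?` uses to convert at the call site.
    impl From<ParseError> for AppError {
        fn from(e: ParseError) -> Self {
            AppError::Parse(e)
        }
    }

    fn parse(input: &str) -> Result<u32, ParseError> {
        input.trim().parse().map_err(|_| ParseError(input.to_owned()))
    }

    fn run(input: &str) -> Result<u32, AppError> {
        let n = parse(input)?; // converted via From, no manual mapping
        Ok(n + 1)
    }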


"compatible error type"

Which is prone to causing propagating changes if you're not comfortable slapping duct tape on every conversion.


It depends on whether people depend on the structure of the errors. If they just stringify them, that shouldn't result in changes.

If people are getting into the structure of the errors, they might need to update their code that works with them.
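
For example (hypothetical error type): a caller that only stringifies keeps compiling when a variant is added later, while a caller that matches on the structure has to be touched.

    #[derive(Debug)]
    enum FetchError {
        Timeout,
        NotFound,
        // Adding a `RateLimited` variant later is invisible to
        // `log_it`, but breaks the exhaustive match in `retryable`.
    }

    // Only stringifies the error: unaffected by new variants.
    fn log_it(e: &FetchError) {
        eprintln!("fetch failed: {:?}", e);
    }

    // Depends on the error's structure: must be updated for new variants.
    fn retryable(e: &FetchError) -> bool {
        match e {
            FetchError::Timeout => true,
            FetchError::NotFound => false,
        }
    }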


People are already trying to make their own kernels in Rust. You just don't hear about those because it takes a fuckton of time to build a useful kernel.

More important than their businesses? Yes, very likely. To what extent should their business interests be protected over nature?


Because folks like to program in Rust, not CUDA.


"Folks" as in Rust stans, who know very little about CUDA and what makes it nice in the first place, sure, but is there demand for Rust ports amongst actual CUDA programmers?

I think not.


FYI, rust-cuda outputs NVVM so it can integrate with the existing CUDA ecosystem. We aren't suggesting rewriting everything in Rust. Check the repo for crates that allow using existing stuff like cuDNN and cuBLAS.
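
For flavor, a kernel in Rust-CUDA is ordinary annotated Rust that the backend lowers to NVVM IR. This is a rough sketch along the lines of the project's own examples (exact APIs may differ between versions):

    use cuda_std::*;

    // Lowered to NVVM IR / PTX by the rustc_codegen_nvvm backend,
    // so it can run alongside existing CUDA libraries.
    #[kernel]
    pub unsafe fn add(a: &[f32], b: &[f32], c: *mut f32) {
        let idx = thread::index_1d() as usize;
        if idx < a.len() {
            *c.add(idx) = a[idx] + b[idx];
        }
    }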


Do you have a link? I went to the Rust-GPU repo and didn't see anything. I have an academic pipeline that is currently heavily tied to CUDA because we need nvCOMP. Eventually we hope we or someone else will make an open source gzip library for GPUs. Until then, it would at least be nice if we could implement our other pipeline stages in something more open.



I take it you're the maintainer. Firstly, congrats on the work done, for the open source people are a small crowd, and the determination of the Rust teams here is commendable. On the other hand, I'm struggling to see the unique value proposition. What is your motivation with Rust-GPU: graphics or general-purpose computing? If it's the latter, at least from my POV, I would struggle to justify going up against a daunting umbrella project like this, in view of it likely culminating in layers upon layers of abstraction. Is the long-term goal here to have fun writing a bit of Rust, or to upset the entrenched status quo of highly concurrent GPU programming? There's a saying that goes something like "pleasing all is a lot like pleasing none," and intuitively I would guess it applies here.


Thanks! I'm personally focused on compute, while other contributors are focused on graphics.

I believe GPUs are the future of computing. I think the tooling, languages, and ecosystems of GPUs are very bad compared to CPUs. Partially because they are newer, partially because they are different, and partially because for some reason the expectations are so low. So I intend to upset the status quo.


Have you considered post-GPU accelerators? For large-scale machine learning, TPUs have basically won. There are new vendors like Tenstorrent offering completely new (and much simpler) computing hardware. GPUs may well be living on borrowed time as far as compute is concerned.


Yes, see the proposed WG link I posted above. When I say GPU I'm using it as shorthand; indeed, I think the "graphics" part is on borrowed time and will just become fully software. It is already happening.


Tenstorrent is often criticised for having lots of abstraction layers, compilers, and IRs in the middle (it's all in C++, of course). GPUs are okay, but none of them have network-on-chip capability. Some promising papers have been coming out, like SystolicAttention, etc. There's just so much stuff for GPUs, but not that much for systolic NoC systems (TPUs, TT, NPUs). I think Rust could really make an impact here. Abandon all the GPU deadweight, stick to simple abstractions, assume a 3D twisted torus for topology, and that's it. Food for thought!


Rust expanded systems programming to a much larger audience. If it can do the same for GPU programming, even if the resulting programs are not (initially) as fast as CUDA programs, that's a big win.


What makes cuda nice in the first place?


All the things marked with a red cross in the Rust-CUDA compatibility matrix.

https://github.com/Rust-GPU/Rust-CUDA/blob/main/guide/src/fe...


I mean, that will only improve with time though, no? Maintainers recently revived the rust-gpu and rust-cuda backends. I don't think even the maintainer would say this is ready for prime time. Another benefit is being able to run the same code (library, aka crate) on the CPU and GPU. This would require really good optimization in the GPU backend to get the full benefit, but I definitely see the value proposition and the potential.
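
To illustrate the same-code-on-CPU-and-GPU point, a hypothetical sketch (the `nvptx64` value is the target_arch Rust uses for its NVPTX target): most shared code is just no_std-friendly Rust, with cfg for the rare divergence.

    // Shared crate code: plain Rust, callable from host code and
    // from a GPU kernel alike, as long as it stays no_std-friendly.
    pub fn saxpy(a: f32, x: f32, y: f32) -> f32 {
        a * x + y
    }

    // Target-specific pieces hide behind cfg when needed.
    #[cfg(not(target_arch = "nvptx64"))]
    pub fn report() {
        println!("running on the host");
    }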


Those red Xs are libraries that only work on Nvidia GPUs and would take a massive amount of work to re-implement in a cross-platform way, and you may never achieve the same performance either, because of abstraction or because you can't match the engineering resources Nvidia has thrown at this over the past decade. This is their moat.

I think open source alternatives will come in time, but it's a lot.


I don't see any Xs that it would not be possible to generate code for and expose compiler intrinsics for. You don't reinvent the wheel here; you generate the NVVM bytecode and let Nvidia handle the rest.


Wait, my browser scrolled to the wrong place. For libraries it is even easier to create or write bindings, and as the status says, several are already in progress.


Plus IDE integration, GPU graphical debugging with the same quality as Visual Studio debugging, polyglot ecosystem.


I am much more productive with Rust than with any other programming language, except maybe Python for programs shorter than 100 lines. Does that mean every other language has terrible productivity? No, it just means that I am more experienced with Rust. In general, experienced Rust devs tend to be as efficient with Rust as other devs are with other languages. There's even Google data corroborating that internally, Rust teams are as productive as Go teams.


CS61C uses RISC-V now.


Oh, cool! I remember hearing a lot about RISC-V back then, and it's also from Berkeley, so makes sense.


Makes sense. Isn't MIPS like a commercial variant of RISC-I?


IIRC, Berkeley RISC was mainly SPARC, although it was also the AMD 29k.

Stanford was MIPS.


There is not one group of developers that C folks won't throw under the bus (e.g. by calling them sloppy) to defend C.

Like here you have one of the lead devs of the Linux kernel saying that Rust solves many problems that he has seen happen repeatedly in the kernel, and you’re saying “hm well they must just have been sloppy”.


Yep.

Frankly there is a whole subsection of our profession which is composed of engineers who have differentiated themselves on their elite skills with difficult low-level languages, and if you pry that cultural marker away from them, they get defensive.

I see this in C/C++ forums all the time: "Rust doesn't solve a problem I have." Actually, yes it does. They might not think it, but I've been in enough codebases over the years by some very skilled C and C++ programmers to know better. The mistakes are entirely preventable by a decent modern language that doesn't let you null and leak all over the place.

Rust has all sorts of problems, it's definitely not perfect. It is not the final form for systems programming languages.

But it's getting tiresome to see people who won't admit that the C legacy is a real problem at this point. And it's not like it (lack of safety, rigour, etc.) wasn't known along the way (or Ada would never have been a thing, etc.) -- it just wasn't considered important.

Well, it should be now. Hell, in the kernel, in privileged execution space, it is more important than anywhere else.


Strictly speaking this also requires associativity.


Yes, that something is the rapidly vanishing government funding for public universities.


I'm all for doing anything funding-wise that brings higher ed back to some kind of reality, even if we have to burn it all down to get there. The cost of education is outrageous, and grads are not benefiting from their degrees the same way I was able to (at far less cost). We're at the inflection point there.

