In Rust, unsafe (unchecked) C interop is almost automatic.
The hard part is in translating C code's high-level safety requirements into Rust APIs that enforce them. I'm talking about requirements that aren't expressed in the C syntax, but are arbitrary domain-specific rules defined in English by the C code's authors ("this function can be called only on Wednesdays"). These turn out to be hard, because they may not be precisely defined, and/or the conditions are complex and implementation-specific.
That's less of a language interop problem, and more a matter of creating formal definitions of informal documentation.
Having said that, there are a couple of things that Rust made harder for itself:
• Rust allows moving objects to a different address safely. C code often assumes objects never move and their addresses are unique and meaningful. If Rust had built-in support for immovable types, it wouldn't need Pin and macro hacks.
• Rust's reference types require either strict immutability or exclusive access (no aliasing), while C allows memory to be mutated from anywhere, and pointers to const don't mean the data is immutable. This prevents Rust from using its nice safe reference and slice syntax on memory externally mutated in surprising/clever ways (by memory-mapped hardware, shared memory, MMU trickery). A small sketch of both issues follows.
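To make both bullets concrete, here's a minimal Rust sketch (the `Callback` type, `register`, and `read_status` are hypothetical stand-ins for a C binding): `PhantomPinned` plus `Pin` is today's workaround for the missing immovable types, and a raw pointer with `read_volatile` is the escape hatch for externally-mutated memory, where a plain `&u32` would be unsound.

```rust
use std::marker::PhantomPinned;
use std::pin::Pin;

// Hypothetical struct whose address a C library would store internally,
// so it must never move after being registered.
#[repr(C)]
struct Callback {
    id: u32,
    _pinned: PhantomPinned, // opts out of Unpin, so Pin genuinely forbids moves
}

fn register(cb: Pin<&mut Callback>) {
    // A real binding would hand this stable address to the C side.
    let addr: *const Callback = &*cb;
    println!("registered callback {} at {addr:p}", cb.id);
}

// Externally-mutated memory (MMIO, shared memory) can't soundly be a &u32;
// it has to stay behind a raw pointer, read with volatile semantics.
unsafe fn read_status(reg: *const u32) -> u32 {
    reg.read_volatile() // every call re-reads; the compiler may not cache it
}

fn main() {
    // Box::pin heap-allocates and guarantees the value never moves.
    let mut cb = Box::pin(Callback { id: 1, _pinned: PhantomPinned });
    register(cb.as_mut());

    // Stand-in for a hardware register; a real address would come from the platform.
    let fake_reg: u32 = 0xAA55;
    println!("status: {:#x}", unsafe { read_status(&fake_reg) });
}
```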
It's the other way around: there's an existing ratatui library that is pretty nice for making rich terminal UIs, and since ratatui is written in Rust, the easiest way of porting it to the web is through WASM.
Exactly, Ratatui can bring terminal aesthetics to the web, but it also works the other way around: a lot of modern terminal aesthetics is inspired by the web. It's not real '80s, more faux '80s.
Ratatui's creator gave an excellent talk about this at FOSDEM, which was basically structured around these two chapters.
Not sure this is a good analogy. LM Studio is closer to Dropbox, as both take X and make it easier for users who aren't necessarily very technical. Ollama is a developer-oriented tool (used via a terminal + a daemon), so I wouldn't compare it to what Dropbox is/did for file syncing.
A phisher may insert text for an LLM with a disclaimer that it's only an educational example of what not to do, or that they're the PayPal CEO authorizing this page.
The ? syntax agrees that errors should just be regular values returned from functions, and that handling of errors should be locally explicit. It's not a different approach from `if err != nil { return err }`; it merely codifies the existing practice, and makes expressing the most common cases more convenient and clearer.
It's clearer because when you see ? you know it's returning the error in the standard way, and it can't be some subtly different variation (like checking err, but returning err2 or a non-nil ok value).
The code around it also becomes clearer, because you can see the happy path without it being chopped up by error branches. You get a high signal-to-noise ratio and fewer variables in scope, without losing the error handling.
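As an illustration, here's a minimal Rust sketch (the file name and functions are made up for the example): the first version uses `?`, and the second spells out the match that `?` codifies.

```rust
use std::{fs, io};

// With `?`: the happy path reads straight through, and every fallible
// call either yields its value or returns the error to the caller
// in the standard way.
fn read_config() -> Result<String, io::Error> {
    let raw = fs::read_to_string("config.toml")?;
    Ok(raw.trim().to_string())
}

// The same function written out by hand; this is the existing practice
// that `?` codifies, with no room for subtly different variations.
#[allow(dead_code)]
fn read_config_manual() -> Result<String, io::Error> {
    let raw = match fs::read_to_string("config.toml") {
        Ok(s) => s,
        Err(e) => return Err(e),
    };
    Ok(raw.trim().to_string())
}

fn main() {
    match read_config() {
        Ok(cfg) => println!("{cfg}"),
        Err(e) => eprintln!("failed to read config: {e}"),
    }
}
```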
> Why do we need separate build systems for every language?
Because being cross-language makes them inherit all of the complexity of the worst languages they support.
The infinite flexibility required to accommodate everyone keeps costing you at every step.
You need to learn a tool that is more powerful than your language requires, and pay the cost of more abstraction layers than you need.
Then you have to work with snowflake projects that are all different in arbitrary ways, because the everything-agnostic tool didn't impose any conventions or constraints.
The vague do-it-all build systems make everything more complicated than necessary. Their "simple" components are either mere execution primitives that make handling different platforms/versions/configurations your problem, or macros/magic/plugins that are a fractal of a build system written inside a build system, with more custom complexity underneath.
OTOH a language-specific build system knows exactly what that language needs, and doesn't need to support more. It can include specific solutions and workarounds for its target environments, out of the box, because it knows what it's building and what platforms it supports. It can use conventions and defaults of its language to do most things without configuration.
General build tools need build scripts written, debugged, and tweaked endlessly.
A single-language build tool can support just one standard project structure and have all projects and dependencies follow it. That makes it easier to work on other projects, and easier to write tooling that works with all of them. All because a focused build system doesn't accommodate all the custom legacy projects of all languages.
You don't realize how much of a skill-and-effort black hole build scripts are until you use a language where a build command just builds it.
But this just doesn't match my experience with Blaze at all. For my internal usage with C++ & Go it's perfect. For the weird niche use case of building and packaging BPF programs (with no support from the central tooling teams, we had to write our own macros) it still just works. For Python where it's a poor fit for the language norms it's a minor inconvenience but still mostly stays out of the way. I hear Java is similar.
For vendored open source projects that build with random other tools (CMake, Nix, custom Makefile) it's a pain, but the fact that it's generally possible to get them building with Blaze at all says something...
Yes, the monorepo makes all of this dramatically easier. I can consider "one-build-tool-to-rule-them-all isn't really practical outside of a monorepo" as a valid argument, although it remains to be proven. But "you fundamentally need a build tool per language" doesn't hold any water for me.
> That makes it easier to work on other projects, and easier to write tooling that works with all of them.
But... this is my whole point. Only if those projects are in the same language as yours! I can see how maybe that's valid in some domains where there are probably a lot of people who can do almost everything in JS/TS; maybe Java has a similar domain. But for most of us, switching between Go/Cargo/CMake etc. is a huge pain.
Oh btw, there's also Meson. That's very cross-language while also seeming extremely simple to use. But it doesn't seem to deliver a very full-featured experience.
I count C++ projects in the "worst" bucket, where every project has its own build system, its own structure, own way to run tests, own way to configure features, own way to generate docs.
So if a build system works great for your mixed C++ projects, your build system is taking on the maximum complexity to deal with it, and that's the complexity I don't want in non-C++ projects.
When I work with pure-JS projects, or pure-Go projects, or pure-Rust projects, I don't need any of this. npm, go, and rust/cargo packages are uniform, and trivial to build with their built-in basic tools when they don't have C/C++ dependencies.
Sleep (and nap!) tracking precision in Pebble is still waaaay ahead of Apple Watch.
Apple Watch is packed with sensors that Pebble never had, but it can't reliably detect when I'm sleeping. It even woke me up once with a go-to-bed reminder! (only once because I turned that off immediately).
Apple's tracking naively uses my configured "Downtime" start time as a reference for when I'm "in bed". That's not a measurement, that's made up data!
It's only existing versions of existing packages that are unaffected. Newer releases would apply Zopfli, so over time the majority of actively used/maintained packages would likely be recompressed.
Since the operator uses the user's computer via screenshots and keyboard/mouse, it's not easy to block by IP, browser fingerprinting, or convoluted page markup. The initial implementation probably has some detectable traits in its mouse and keyboard use, but a company that can train pretty convincing text and voice models can easily train a model to move the mouse naturally enough. Simple screenshots can be messed with via flicker (exploiting persistence of vision), but that's fixable too.
This cat and mouse game can end up ugly.
My pessimistic prediction is that Google et al. will extend their DRM from video to entire Web pages, and sell it as a service that blocks "unauthorized" screenshots and synthetic mouse input, along with a cryptographically strong way to detect the closed-source Chrome. It won't be accessible, but conveniently Republicans have added accessibility to their list of woke things to destroy. Sites will submit to this and advertise it as "use Chrome to skip those horrible CAPTCHAs!"
The worst thing is that Apple assumes everyone will use Xcode's GUI, and a lot of functionality is de-facto only available through Xcode.
It doesn't seem like a standalone SDK with an IDE layered on top; quite the opposite, the CLI build tools feel like a headless Xcode runner.
Building and code signing without Xcode is nearly impossible. The non-Xcode tooling is a disorganised half-deprecated bunch of ad-hoc commands, and they're mostly undocumented.
The most common end-user operating systems are Android and Windows, and neither of those requires a special GUI to create an app. You don't even need to buy specific hardware to write programs for them.
Try to create native Android apps without Android Studio, or native Windows apps without Visual Studio tooling, by native meaning using the platform GUIs, SDKs, APIs, and blessed tier 1 programming languages.
Not trying to fit something that requires lots of yak shaving and workarounds to make it work without the vendor's tooling.
Native APIs from the platform, not stuff that abstracts the platform.
How about using the vendor tooling? Here, for example, is the guide to building and running C# desktop apps on macOS: https://dotnet.microsoft.com/en-us/learn/maui/first-app-tuto...
This is one of the blessed tier 1 languages, with a GUI made by the platform vendor, using a collection of open-source cross-platform SDKs from the platform vendor.
Debugging on Windows if your customers are on Windows is probably more convenient, but people aren't using Xcode for convenience; it's literally the only option to build any app.
Also, what's with this moving of the goalposts:
> by native meaning
To ship non-native apps on iOS you still need Xcode.
> the platform GUIs
You can't publish a CLI on iOS, and for non-platform GUIs you still need Xcode.
On all other mainstream platforms you can write a POSIX-compliant C program and cross-compile it from almost any OS to almost any other OS. Android makes it a bit difficult to run CLI apps, but otherwise iOS is the one big exception to this, right? Don't you agree that iOS is a little bit different in that regard from other platforms?
The only way to get around it is to pay someone else to run Xcode for you. Or
No, but it is on Windows? But you don't need Visual Studio or even Windows to write a desktop GUI application, which can then run "natively" on Windows. "Native" is what Microsoft calls it.
No, I'm not going to. I didn't claim all Windows APIs can be debugged or easily used without Visual Studio. I didn't claim all of Android features can be debugged without Android Studio.
Pornel made this claim:
> Building and code signing without Xcode is nearly impossible. [...]
Not for specific APIs, but for all APIs. Just building code in general, and getting it signed (e.g. for iOS), requires Xcode. To which you said:
> A trait common to all platforms, with exception of UNIX ecosystem, and even then commercial UNIXes aren't as free choice as BSD/Linux clones.
Why are you trying to limit this to specific APIs? Yes, there are some tasks you can only do in VS, but you don't need it just to write a program that works on Windows. Same for Android.
Indeed, but luckily there's AvaloniaUI for this. You could say it isn't blessed, but it works everywhere. There are case studies where rewriting WPF applications in Avalonia solved memory and performance issues.