My startup is betting big on Bevy, Dioxus, and WASM. (The thing we're building requires this.)
It's a bleeding-edge weird world, and there's definitely a lot of risk, sharp corners, etc. But it's also incredibly exciting and a breath of fresh air.
One of my big worries is picking the "wrong" tech and the community electing something else, leaving us in an evolutionary pit. We already see this in other areas: we chose Actix Web, and now Axum appears to be in the lead. That's not as big of a concern as the frontend stack changing, though.

Anybody else headed down this path?
CrowdStrike did this to our production Linux fleet back on April 19th, and I've been dying to rant about it.
The short version: we're a civic tech lab, so we have a bunch of production websites built at different times on different infrastructure. We run CrowdStrike, provided by our enterprise. CrowdStrike pushed an update on a Friday evening that was incompatible with up-to-date Debian stable. We patched Debian as usual, everything was fine for a week, and then all of our servers, across multiple websites and cloud hosts, simultaneously hard-crashed and refused to boot.
When we connected one of the disks to a new machine and checked the logs, CrowdStrike looked like the culprit, so we manually deleted it and the machine booted. We tried reinstalling it and the machine immediately crashed again. OK, time to file a support ticket and get an engineer on the line.
CrowdStrike took a day to respond, then asked for a bunch more proof (beyond the above) that it was their fault. They acknowledged the bug a day later, and weeks later produced a root cause analysis: they hadn't covered our scenario (Debian stable running version n-1, I think, which is a supported configuration) in their test matrix. Our own post mortem concluded there was no real way to prevent the same thing from happening again: "we push software to your machines any time we want, whether or not it's urgent, without testing it" seems to be core to the model, particularly if you're a small IT team inside a large enterprise. What they're selling to the enterprise is exactly that they'll do that.
RNNoise is an amazing feat, but please, don't overdo it. Most of the time you don't really want complete ambient noise elimination, as human speech emerging from dead silence sounds unnatural. Moreover, most noise reduction software is considerably less effective while a person is speaking, either removing too much and degrading the speech itself (the worst case) or removing too little. If possible, always add your noise reduction gradually: stop when it sounds good to your ear, then back it off a bit.
If you're doing voice recording or streaming, please get to know expansion and compression first, and only add noise reduction once the rest of your sound processing chain is configured.
One of the serious offenders is OBS Studio, which recently added an RNNoise filter but provides no means of mixing the processed sound with the dry signal (in other words, the filter is always 100% on). A wet/dry mix knob is badly needed for most of the filters there.
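For what it's worth, a wet/dry control is just a linear blend of the processed and unprocessed signals. Here's a minimal sketch in Rust, assuming 32-bit float samples; the denoised buffer stands in for the output of RNNoise or any other filter, and none of this reflects OBS's actual internals:

    // Blend the denoised ("wet") signal back with the untouched ("dry")
    // input instead of replacing it outright. wet_amount = 0.0 bypasses
    // the filter entirely; 1.0 is the always-100%-on behavior.
    fn mix_wet_dry(dry: &[f32], wet: &[f32], wet_amount: f32) -> Vec<f32> {
        let w = wet_amount.clamp(0.0, 1.0);
        dry.iter()
            .zip(wet)
            .map(|(&d, &p)| d * (1.0 - w) + p * w)
            .collect()
    }

    fn main() {
        let dry = [0.5_f32, -0.25, 0.0];
        let denoised = [0.4_f32, -0.2, 0.0]; // pretend this came from the filter
        println!("{:?}", mix_wet_dry(&dry, &denoised, 0.8));
    }

With a knob like this, the "add it gradually, then back off a bit" advice above becomes a one-parameter adjustment; a filter with no knob is equivalent to hard-coding wet_amount to 1.0.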
I'm very saddened by the state of sound quality in a lot of the amazing videos people have been producing lately, and I'm now considering writing a guide to voice processing for streams/conferences/etc. aimed at techy people, if anyone's interested.
I recently learned that USB3 is not only badly designed, flaky, and unreliable; it is also an EMI/RFI nightmare. The article woefully understates this when it says:
"It's hard not to generate harmful interference."
We built a prototype sensor payload that included a USB3 external hard drive and started suffering broad-spectrum interference that stomped all over L-band (Iridium and GPS) reception. It made the entire system unusable!
"USB 3.0, or SuperSpeed USB, uses broadband signaling that can interfere with cellular and 2.4GHz WIFI signaling. This interference can significantly degrade cellular and 2.4GHz WIFI performance. Customers using cellular networks or 2.4GHz WIFI networks near USB 3.0 devices should take measures to reduce the impact of these devices on their network connectivity. Please note that interference is generated by both the actual USB 3.0 device as well as its cable."
> Some 5-10 years ago when UML went out of vogue (for good reasons; much of it is overspecified and overcomplicated)
Perhaps UML ceased to be a fad for some, but to this day UML is still the best tool (and practically the only tool) that helps describe and document code effectively and efficiently, from requirements down to which classes comprise a submodule.
The "oversimplified and overcomplicated" jab makes no sense at all. UML is used to provide a view of a system without needing to go too much into detail (that's what code is for), and just because a bunch of people did a great job putting up a specification aimed at software developers intending to develop UML editora and code generators it does not mean it is overcomplicated". Anyone can use UML extensively if they can draw rectangles and lines on a piece of paper, or click and drag around in an editor. That's hardly a downside or a sign it's "overcomplicated".
I'm building with Rust, Rocket, and Diesel, and I agree with much of the original article. I can also add more: in my direct personal experience, the lead people creating Rocket and Diesel are superb about responsiveness, ongoing communication, and enabling people (e.g. me) to help diagnose and fix issues. I'm continually thankful for the quality of participation among the people in the ecosystem.
On the technical side, there are definitely learning curves with Rocket, Diesel, and the other tools. A typical example is that different crates have their own implementations of concepts such as a UUID, so the developer must handle conversions, ensure that the various dependency versions all align, can't (yet) easily use the current UUID crate, and can't (yet) build using stable Rust. All of these aspects will likely be fixed soon, within a month or so, as the crates stabilize.
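To make the conversion pain concrete, here's the kind of shim you end up writing. The crate aliases uuid_old and uuid_new are hypothetical stand-ins for two dependencies that pin incompatible versions of the uuid crate; round-tripping through the raw 16 bytes is the usual escape hatch:

    // Hypothetical shim between two crates pinning different `uuid` versions:
    // the types are incompatible to the compiler even though the byte layout
    // is identical, so we round-trip through the underlying 16 bytes.
    // `uuid_old` / `uuid_new` are illustrative aliases, not real crate names.
    fn convert_uuid(old: &uuid_old::Uuid) -> uuid_new::Uuid {
        uuid_new::Uuid::from_bytes(*old.as_bytes())
    }

It's trivial, but every crate boundary that disagrees about versions needs one of these, which is exactly the friction that disappears once the ecosystem settles on a common version.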
If you try Rust, I highly recommend trying rust-analyzer, which provides near-real-time code advice as a plugin to most major editors.
Well, this is one of my areas, so here goes. DSLs are a concept, not an implementation. As implemented, they can vary from chained procedure calls to actual sub-languages with lexers and parsers (and I tend to consider the latter to be 'proper' DSLs, but that's just my view).
To have a 'proper' DSL, I reckon you need two things: the understanding that something can and should be broken out into its own sublanguage, and the ability to do so. The first takes a certain kind of nous, or common sense. The second requires knowing how to construct a parser properly, plus some knowledge of language design.
Knowing how to write a parser is not particularly complex, but since the industry's demands are driven more by knowledge of 'big data' frameworks than by skills that are often more useful, well, that's what you get, and that includes people who try to parse XML with regular expressions (check out this classic answer <https://stackoverflow.com/questions/1732348/regex-match-open...> Edit: if you haven't seen it, check it out cos it's brilliant).
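To back up the claim that parsing isn't hard, here's a minimal recursive-descent evaluator for arithmetic expressions in Rust, as a sketch of how little machinery a small language needs. A 'proper' DSL would separate lexing from parsing and build an AST rather than evaluating on the fly:

    // Grammar:
    //   expr   := term (('+' | '-') term)*
    //   term   := factor (('*' | '/') factor)*
    //   factor := number | '(' expr ')'
    struct Parser<'a> {
        input: &'a [u8],
        pos: usize,
    }

    impl<'a> Parser<'a> {
        fn new(input: &'a str) -> Self {
            Parser { input: input.as_bytes(), pos: 0 }
        }

        fn peek(&self) -> Option<u8> {
            self.input.get(self.pos).copied()
        }

        fn skip_ws(&mut self) {
            while self.peek() == Some(b' ') {
                self.pos += 1;
            }
        }

        fn expr(&mut self) -> Result<f64, String> {
            let mut value = self.term()?;
            loop {
                self.skip_ws();
                match self.peek() {
                    Some(b'+') => { self.pos += 1; value += self.term()?; }
                    Some(b'-') => { self.pos += 1; value -= self.term()?; }
                    _ => return Ok(value),
                }
            }
        }

        fn term(&mut self) -> Result<f64, String> {
            let mut value = self.factor()?;
            loop {
                self.skip_ws();
                match self.peek() {
                    Some(b'*') => { self.pos += 1; value *= self.factor()?; }
                    Some(b'/') => { self.pos += 1; value /= self.factor()?; }
                    _ => return Ok(value),
                }
            }
        }

        fn factor(&mut self) -> Result<f64, String> {
            self.skip_ws();
            match self.peek() {
                Some(b'(') => {
                    self.pos += 1;
                    let value = self.expr()?;
                    self.skip_ws();
                    if self.peek() == Some(b')') {
                        self.pos += 1;
                        Ok(value)
                    } else {
                        Err(format!("expected ')' at byte {}", self.pos))
                    }
                }
                Some(c) if c.is_ascii_digit() => {
                    let start = self.pos;
                    while matches!(self.peek(), Some(c) if c.is_ascii_digit() || c == b'.') {
                        self.pos += 1;
                    }
                    std::str::from_utf8(&self.input[start..self.pos])
                        .unwrap()
                        .parse()
                        .map_err(|e| format!("bad number: {e}"))
                }
                other => Err(format!("unexpected {:?} at byte {}", other, self.pos)),
            }
        }
    }

    fn main() {
        let mut p = Parser::new("2 * (3 + 4) - 5");
        println!("{:?}", p.expr()); // Ok(9.0)
    }

Once that pattern is internalized, swapping the on-the-fly arithmetic for AST construction is mechanical, and that's most of what a 'proper' DSL front end is.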
I think this reflects the fundamental problem in software development: the market doesn't know what's actually needed to solve real business problems.