Lidars have been reporting per-point intensity values for quite a while. The dynamic range is definitely not 1 bit.
Many Lidar visualization tools will happily pseudocolor the intensity channel for you. Even with a mechanically scanning 64-line Lidar you can often read a typical US speed limit sign at ~50 meters in this view.
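A rough sketch of what that pseudocoloring amounts to: each point's scalar intensity is mapped through a colormap. The three color stops below are Viridis-ish values picked for illustration; real viewers ship full colormaps.

```cpp
#include <algorithm>
#include <array>
#include <cstdint>

struct Rgb { std::uint8_t r, g, b; };

// Map a normalized intensity in [0, 1] onto a tiny 3-stop color ramp.
// Illustrative only: real tools use full colormaps (Viridis, Turbo, ...).
Rgb pseudocolor(float intensity) {
    intensity = std::clamp(intensity, 0.0f, 1.0f);
    static constexpr std::array<Rgb, 3> stops{{
        {68, 1, 84}, {33, 145, 140}, {253, 231, 37}}};
    const float t = intensity * (stops.size() - 1);
    const int lo = static_cast<int>(t);
    const int hi = std::min<int>(lo + 1, stops.size() - 1);
    const float f = t - static_cast<float>(lo);
    auto mix = [f](std::uint8_t a, std::uint8_t b) {
        return static_cast<std::uint8_t>(a + f * (b - a));
    };
    return {mix(stops[lo].r, stops[hi].r),
            mix(stops[lo].g, stops[hi].g),
            mix(stops[lo].b, stops[hi].b)};
}
```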
Not OP but I think this could be an instance of leaky abstraction at work. Most of the time you hand-write an accelerator kernel hoping to optimize for runtime performance. If the abstraction/compiler does not fully insulate you from micro-architectural details affecting performance in non-trivial ways (e.g. memory bank conflicts, as mentioned in the article), then you end up still having per-vendor implementations, or compile-time if-else blocks all over the place. This is less than ideal, but still arguably better than working with separate vendor APIs, or worse, completely separate toolchains.
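To make the "compile-time if-else blocks" concrete, here is a hedged sketch: the vendor macros are real compiler defines (`__CUDA_ARCH__`, `__HIP_DEVICE_COMPILE__`), but the pad values are placeholders, not tuned numbers.

```cpp
// Shared-memory tiles are commonly padded so column accesses land in
// distinct banks. The right pad is a micro-architectural detail, so even
// a "portable" kernel tends to grow per-vendor branches like this.
#if defined(__CUDA_ARCH__)
constexpr int kPad = 1;   // placeholder: sized for NVIDIA's 32-bank shared memory
#elif defined(__HIP_DEVICE_COMPILE__)
constexpr int kPad = 1;   // placeholder: AMD LDS banking has its own rules
#else
constexpr int kPad = 0;   // host/CPU build: no banks to dodge
#endif

struct Tile {
    float data[32][32 + kPad];  // padded inner dimension
};
```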
Because I was originally writing some very CPU-intensive SIMD stuff, which Mojo is also fantastic for. Once I got that working and running nicely I decided to try getting the same algo running on GPU since, at the time, they had just open sourced the GPU parts of the stdlib. It was really easy to get going with.
I have not used Triton/Cute/Cutlass though, so I can't compare against anything other than CUDA really.
> The active class is clearly redundant here. If you want to style based on the .active selector, you could just as easily style with [aria-selected="true"] instead.
I vaguely remember (from 10+ years ago) that class selectors are much more performant than attribute selectors?
The TL;DW is: yes, class selectors are slightly more performant than attribute selectors, mostly because only the attribute _names_ are indexed, not the values. But 99% of the time, it's not a big enough deal to justify the premature optimization. I'd recommend measuring your selector performance first: https://developer.chrome.com/docs/devtools/performance/selec...
From first principles I think the concept can make sense. From car-specific, function-specific ECUs, to platform-shared (but still function-specific) ECUs, then to zonal architectures and domain controllers. The goals: consolidate and generalize HW across the lineup, moving model-specific bits to FW/SW/config (which amortizes development cost and simplifies certification), and simplify wiring (copper wiring is costly, messy, and heavy) because you can pretty much just plug every miscellaneous sensor or actuator into its nearest "anchor point" without worrying (too much) about arbitrary ECU limitations.
This might sound like a pure implementation detail, but having the (non-safety-critical) "business logic" of a car as software gives the manufacturer flexibility to late-bind behavior as new use cases / demands inevitably get discovered.
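A toy example (all names invented) of what "late-binding" can look like: one body-controller binary ships on every model, and per-model behavior lives in configuration data that can change after the hardware is frozen.

```cpp
#include <cstdint>

// Invented sketch: behavior is data, not code, so it can be revised
// long after the ECU hardware design is locked.
struct BodyConfig {
    bool          auto_fold_mirrors;   // per-trim comfort feature
    std::uint16_t wiper_interval_ms;   // tuned per windshield geometry
};

// In a real ECU this would be read from calibration flash or an OTA
// update; a constant stands in for that "late-bound" data here.
constexpr BodyConfig kConfig{true, 1500};

void on_vehicle_locked() {
    if (kConfig.auto_fold_mirrors) {
        // fold_mirrors();  // same actuator driver on every model
    }
}
```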
Something can simultaneously be a good idea, buzzword'd by marketing, and/or deviate from the original intentions.
I'd argue that chassis tech is more sophisticated in the BEV case due to more weight. Adaptive dampers, air springs, rear-axle steering, etc. might not be necessary on a comparably sized ICE vehicle.
OTOH, ABS and ESP systems can achieve similar or even better results with less complexity, because motor torque control is inherently low-latency and can complement brake deployment (hydraulics are not as well behaved as an e-motor).
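An invented control-loop sketch (not a real ABS/ESP design) of why that low latency helps: the e-motor can shed torque on the very tick slip is detected, while hydraulic pressure takes many ticks to build or release.

```cpp
struct WheelState {
    float wheel_speed;    // m/s at the contact patch
    float vehicle_speed;  // m/s reference
};

float slip_ratio(const WheelState& w) {
    if (w.vehicle_speed < 0.1f) return 0.0f;
    return (w.vehicle_speed - w.wheel_speed) / w.vehicle_speed;
}

// Fast inner loop, e.g. every 1 ms: returns the regen torque command.
// A slower outer loop would command hydraulic pressure for sustained demand.
float fast_torque_path(const WheelState& w, float requested_regen_nm) {
    constexpr float kSlipLimit = 0.15f;  // illustrative threshold
    if (slip_ratio(w) > kSlipLimit) {
        return requested_regen_nm * 0.5f;  // back off instantly, no pressure dynamics
    }
    return requested_regen_nm;
}
```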
You do get rid of emissions control and tiny little sensors / flap actuators sprinkled all around the engine bay, so yeah, probably overall still a simplification win, but I doubt you can get very far without "massive amounts of [Mechatronics] engineering".
How is it distinct in any way that would undermine their argument? Do people who fly airport to airport then not drive, while people arriving downtown would want to drive? Their point is that people already travel to other cities without their vehicle all the time by plane, so high-speed rail would have plenty of demand up to a certain distance.
You're not wrong, but that just means the <adjective> is where the bulk of the information resides. The trade-off matters. Maybe it's a model with good-enough quality that's really cheap to serve. Maybe it's a model that only plays poker really well but sucks at everything else because it bluffs too much. Etc. etc.
With inflation and the terribly high cost of housing in places like the Bay Area, “seven figures is the new six figures” is an apt observation. I make six figures, yet I can’t afford to buy a home within a reasonable commute from my job, and even finding a decent rental near my job is challenging. Seven figures is indeed the new six figures.
+1. Framework learning costs are non-trivial. After you've worked so hard to become productive in a particular framework, it's frustrating to see its core and/or ecosystem fizzle --- especially when your existing code is already tied to the framework's unique constraints, making it a hassle to port to other frameworks. In this sense the "bus factor" / resilience of the framework dev does matter to the app dev building on top of it.
> as the overall development workflow is lightyears ahead of C++, mostly due to tooling
My experience has been the other way around. Eclipse-based IDEs from NXP, TI, and ST all have usable tooling integration out of the box:
- MCU pinout and configuration codegen (see the sketch after this list)
- no need to manually fiddle with linker scripts
- static stack and code size analyzers (very helpful for fitting stuff in low-cost MCUs)
- stable JTAG-based debugging with:
  - peripheral register view (with bitfield definitions)
  - RTOS thread view (run status, which resource each thread is blocked on, ...)
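On the codegen point above: from memory, this is roughly the shape of pin-init code STM32CubeMX emits after you click pins in the pinout view (ST HAL names; details vary by part and HAL version):

```cpp
#include "stm32f4xx_hal.h"  // part-specific HAL header (assumed)

// Sketch of CubeMX-style generated pin init; the IDE writes this for you.
static void MX_GPIO_Init(void) {
    GPIO_InitTypeDef GPIO_InitStruct = {0};

    __HAL_RCC_GPIOA_CLK_ENABLE();              // clock-gate the port first

    GPIO_InitStruct.Pin = GPIO_PIN_5;          // e.g. the user LED pin
    GPIO_InitStruct.Mode = GPIO_MODE_OUTPUT_PP;
    GPIO_InitStruct.Pull = GPIO_NOPULL;
    GPIO_InitStruct.Speed = GPIO_SPEED_FREQ_LOW;
    HAL_GPIO_Init(GPIOA, &GPIO_InitStruct);
}
```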
And yes, these are important enough for me to put up with Eclipse and pre-modern C/C++. I really want to write Rust for embedded, but struggling with the tooling all the time doesn't help.