
It's nice that this worked without needing any communication between the cars.

This should be a built-in feature of adaptive cruise control in regular cars.


We're trying to convince folks that this should be the case!


On-site battery storage is the standard for DC fast charging installations. It's already cheaper.


It's incredible that paying fealty to the president is talked about so casually, and framed as just a normal and necessary thing to do.

This is something that should be expected in an absolute monarchy, not a democracy.


I agree completely, and I think it's disgusting and despicable. But honestly, this sort of thing has been happening for many, many decades, maybe even centuries; it's just been done a lot more discreetly in the past. The big difference now is that it's so blatant.

While that might sound like an improvement (and kind of is, since at least we're getting more honest), I also view it as a big regression. At least when there's perceived shame in being corrupt, people aspire to be better. When it just becomes routine, I fear it's the beginning of the end.


Use of a dense matrix is an artificial constraint you've imposed yourself, but that only disproves the feasibility of your proposed solution, not of the problem in general.

A similar problem, n-body simulation*, has n² gravitational interactions. You will similarly hit a wall if you try to do it with a dense n² matrix. However, there's a hierarchical solution that takes advantage of the sparsity and exponential decay, and can solve it in O(n log n) with an imperceptibly low loss of precision.

Social interactions are sparse, and group interactions can be optimized with clustering. Fine-grained simulation of the entire society is such a massive chaotic problem with so many variables, that some loss of precision from clustering is completely insignificant compared to the inevitable simplifications you'll have to make in the design of the model itself.

* I mean the naive one with a fixed timestep, not trying to solve chaos.
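
A rough sketch of the hierarchical idea in Rust (the struct and numbers are made up for illustration; a real O(n log n) code applies this recursively over a quadtree/octree rather than to one hand-picked cluster): a far-away group of bodies is collapsed into a single point at its centre of mass, so one interaction stands in for many.

    #[derive(Clone, Copy)]
    struct Body {
        pos: [f64; 2],
        mass: f64,
    }

    /// Gravitational acceleration on `target` from a single point mass.
    fn accel_from(target: &Body, pos: [f64; 2], mass: f64) -> [f64; 2] {
        let dx = pos[0] - target.pos[0];
        let dy = pos[1] - target.pos[1];
        let dist2 = dx * dx + dy * dy + 1e-9; // softening avoids division by zero
        let inv_dist3 = dist2.powf(-1.5);
        [mass * dx * inv_dist3, mass * dy * inv_dist3]
    }

    /// Exact O(n) sum over every body in the cluster.
    fn accel_exact(target: &Body, cluster: &[Body]) -> [f64; 2] {
        cluster.iter().fold([0.0, 0.0], |acc, b| {
            let a = accel_from(target, b.pos, b.mass);
            [acc[0] + a[0], acc[1] + a[1]]
        })
    }

    /// O(1) approximation: the whole cluster collapsed to its centre of mass.
    fn accel_clustered(target: &Body, cluster: &[Body]) -> [f64; 2] {
        let total_mass: f64 = cluster.iter().map(|b| b.mass).sum();
        let com = cluster.iter().fold([0.0, 0.0], |acc, b| {
            [acc[0] + b.pos[0] * b.mass, acc[1] + b.pos[1] * b.mass]
        });
        accel_from(target, [com[0] / total_mass, com[1] / total_mass], total_mass)
    }

    fn main() {
        let target = Body { pos: [0.0, 0.0], mass: 1.0 };
        // A tight cluster of 1000 bodies, far away from the target.
        let cluster: Vec<Body> = (0..1000)
            .map(|i| Body {
                pos: [100.0 + (i % 10) as f64 * 0.01, 50.0 + (i / 10) as f64 * 0.01],
                mass: 1.0,
            })
            .collect();

        println!("exact:     {:?}", accel_exact(&target, &cluster));
        println!("clustered: {:?}", accel_clustered(&target, &cluster));
    }

Because the cluster is small and distant, the two results agree to many digits, while the clustered version does 1 interaction instead of 1000.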


Isn't the naive one hopeless, since those systems require perfect measurement, economic interactions are lagging, the calculations are ordinal, and mathematical systems fail because you can't relabel? (i.e. it fails a function test, so you end up trying to compare unrelated mathematical objects within differing systems.)

Even statistics is fully thrown out with the islands of regularity.

The stochastic nature, lack of measure-ability, and multiple hidden underlying states (value is subjective) require any model to solve chaos somehow.

https://science.ku.dk/english/press/news/2024/islands-of-reg...


>Use of a dense matrix is an artificial constraint you've imposed yourself, but that only disproves feasibility of your proposed solution, not the entire problem in general.

The use of a dense matrix is the traditional way of solving the problem. The issue is that it solves the wrong problem. You need a dense tensor, which requires more storage than the world currently has for an economy of 20 people.

Social interactions are sparse until they aren't. If you think otherwise, try to estimate what every European's interaction with Gavrilo Princip was on 27 June 1914 vs 28 June 1914.

As for gravitation: I'm very happy for the planets and asteroids out there. Unfortunately the economy isn't a solar system.


On Linux you get WebKitGTK, which is more of a gamble than browser versions of Chromium or Gecko.


Tauri is an application framework. If you want to use Rust for your application, there's going to be one more option for rendering its UI.

This does make it closer to Electron. We'll see whether Servo can be made leaner or faster (Servo is focused on GPU-based rendering).

Long term, I dream, there could be tighter integration between Tauri and Servo's DOM, so that UI changes won't have to go through JavaScript.


The kernel knows about system-local users, but not the remote ones. Servers may need to access data of multiple users at once, so it's not as simple as some setuid+chroot CGI for every cookie received. Kernels like Linux are not designed for that.

Maybe it would be more feasible with some capability-based kernel, but you'd inherently have a lot of logic around user accounts, privileges, and queries. You end up involving the kernel in what is row-level database security. That adds a lot of complexity to the kernel, which also makes the isolation itself have more of the attack surface.

OTOH you can write your logic in a memory-safe language today. The VM/runtime/guaranteed-safe-subset is your "kernel" that protects the process from getting hijacked — an off-by-one error can't cause arbitrary code execution. The VM/runtime itself can still have vulnerabilities, but that just becomes analogous to kernel vulnerabilities.


> That adds a lot of complexity to the kernel, which also makes the isolation itself have more of the attack surface.

Not if you remove auth from the kernel: https://doc.cat-v.org/plan_9/4th_edition/papers/auth The Plan 9 kernel is very small and portable, which demonstrates that you don't need complexity to do distributed auth properly. The current OS hegemony is incredibly dated design-wise, because their kernels were all designed to run on a single machine.

> OTOH you can write your logic in a memory-safe language today.

Memory safety is not security.


> Not if you remove auth from the kernel

The factotum looks very much like a microservice or a database with stored procedures handling access control, but of course Plan 9 makes it a file system instead of some RPC. It's a sensible design, but if IPC is the solution, then you don't even need Plan 9 for it.

> Memory safety is not security.

I didn't say it was. However, it is an isolation barrier for the memory-safe code. It's roughly equivalent to process isolation, but in userland. Instead of an MMU you have bounds checks in software.

Kernels implement process isolation cheaply with the help of hardware, but that isn't the only way to achieve the same effect. It can be emulated in software. When the code is memory safe, it can't be made to execute arbitrary logic that isn't in the program's code. If the program attempts some out-of-bounds access, it will be caught with userland checks instead of a page fault, but in either case it won't end up with an illegal memory access.
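
As a trivial illustration (the buffer and index here are contrived for the example), an out-of-bounds access in memory-safe Rust becomes a deterministic bounds-check panic rather than a read of arbitrary memory or a code-execution primitive:

    fn main() {
        let buf = vec![0u8; 4];

        // An index that's only known at run time, so the compiler
        // can't reject the access statically.
        let i: usize = std::env::args().count() + 100;

        // The indexing operation carries a bounds check: instead of
        // reading arbitrary memory, this panics with an
        // "index out of bounds" error.
        println!("{}", buf[i]);
    }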


> but of course plan9 makes it a file system instead of some RPC.

Actually, Plan 9 does IPC via 9P, the RPC-based file system protocol. The protocol serves a tree of named, byte-addressable objects which are composable per process. The Plan 9 kernel is a VFS multiplexer. It *only* speaks 9P. All disk file systems, e.g. ext4, are served via user-space micro services the Bell Labs people called file servers. Unlike clunky Unix and its copies, there are no special files or character files, nor ioctl(). It's all done via the foundational concept of how people organize resources into "files" and folders (directories) via path names. All of this is transparent over the network by default.

The reality is that the OS is a very portable, lightweight, channel-based container host, the container being the process. Each has its own namespace, which means its own collection of mounted resources, composed of 9P mounts and binds organized into a tree of named objects. Those objects are protected by Unix-like permissions (user/group/everyone, RWX) served from yet another micro service using the same protocol. A process can rfork() more, with flags to share resources and to restrict its file system view to only what the child container needs to see. Those containers then fork off more. You can keep firing off boxes with CPU and RAM and whatever else is hanging off them via PXE, and instantly have access to that compute and those resources. 9P is architecture independent, so file servers running on ARM are no different from any other arch like x86, MIPS, RISC-V, etc.; anyone can mount any other hardware. It's a lightweight, cloud-ready micro service host that was started in the '80s by the same people who made Unix and C. It's friggin wild.

I highly encourage people to try to really understand how it works. It's pretty damn eye-opening and refreshing. It sorta blew my mind when I saw the process as the container that can fork off more and more, with the ability to control the system view of each one. And you can understand the code. Like all of it.


> Maybe it would be more feasible with some capability-based kernel, but you'd inherently have a lot of logic around user accounts, privileges, and queries. You end up involving the kernel in what is row-level database security. That adds a lot of complexity to the kernel, which also makes the isolation itself have more of the attack surface.

Microkernels/exokernels sacrifice some performance to provide reliable kernels that allow for a reliable userspace.


As mentioned in a previous post in this series, this is implemented in the LLVM MCA tool:

https://llvm.org/docs/CommandGuide/llvm-mca.html

I particularly like the wrapper for it, https://lib.rs/cargo-show-asm (`cargo asm --mca function_name`), which makes it easy to isolate a specific function from an arbitrarily large project.
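
For example (the function and crate layout are just placeholders), you can point it at a single function like this:

    // src/lib.rs -- a small function to inspect (placeholder example)
    pub fn dot(a: &[f32], b: &[f32]) -> f32 {
        a.iter().zip(b).map(|(x, y)| x * y).sum()
    }

    // Then, from the project root:
    //   cargo install cargo-show-asm
    //   cargo asm --mca dot
    // which runs llvm-mca on just this function's generated assembly.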


Cargo was co-created by Yehuda Katz, who previously worked on Ruby's Bundler. Cargo was designed after npm, so it definitely took lessons from it, but it doesn't make sense to just broadly attribute this to Rust being JavaScript-adjacent.

The Rust syntax does not come from JavaScript. It even conflicts with it, using `let` and `const` differently, since the `let` in Rust comes from OCaml, not JS.

Both JS and Rust copy from the same C/C++ roots. Rust's curly-brace flavor is more similar to Go and Swift. The original author of Rust liked a lot of languages with different syntaxes, but the C-like syntax was a pragmatic choice to avoid putting off the target audience of C++ programmers:

http://venge.net/graydon/talks/intro-talk-2.pdf


The rayon library uses work stealing for this. Its parallel iterators offer some control over splitting strategies and granularity, so you can tune the trade-off between full utilization and the cost of moving work between threads.
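
For example (the data size and the 4096 threshold are arbitrary), `with_min_len` limits how finely rayon may split the work, trading some load balancing for lower per-task overhead:

    use rayon::prelude::*;

    fn main() {
        let data: Vec<u64> = (0..1_000_000).collect();

        // with_min_len caps how finely rayon may split this iterator:
        // no work item will be smaller than 4096 elements.
        let sum: u64 = data
            .par_iter()
            .with_min_len(4096)
            .map(|x| x * x)
            .sum();

        println!("{sum}");
    }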

Additionally, in Bevy, independent queries (systems) are executed in parallel, so there's always something else to do, except for your single worst loop.

