> The injustice [done to Minsky] is in the word “assaulting”. The term “sexual assault” is so vague and slippery that it facilitates accusation inflation: taking claims that someone did X and leading people to think of it as Y, which is much worse than X. (…)
> The word “assaulting” presumes that he applied force or violence, in some unspecified way, but the article itself says no such thing. Only that they had sex.
> We can imagine many scenarios, but the most plausible scenario is that she presented herself to him as entirely willing. Assuming she was being coerced by Epstein, he would have had every reason to tell her to conceal that from most of his associates.
> I’ve concluded from various examples of accusation inflation that it is absolutely wrong to use the term “sexual assault” in an accusation.
This seems a very reasonable thing to say. "Accusation inflation" is a useful concept, too; thanks for linking.
> "sex with someone under 18 is rape”, “sex with a prostitute under 18 is enslavement”, and “making a nude photo of someone under 18 is a sexual assault.”
What is happening here is the law being repurposed. Rape already carries heavy sentences, and we want to give under-18s extra protection, so let's redefine what the word rape means so we can reuse the rape law.
> Efforts against the business of making and distributing images of that are justified — but these must not be done by dangerous methods.
Anyway, I don't use Rust, and I just learned that writing generic code there is more complicated. It's interesting because in Julia generic code is the simple path: just don't add types and it's generic. Multiple dispatch is a powerful idea and I wish more languages adopted it.
It is a commonly used abbreviation for "the f...ine article". On discussion sites of the Internet of yore, people would tell you to RTFA ("read the fucking article") when it appeared you were chiming in without having first read the item under discussion. "TFA" is an extraction from that.
I don't think this shows deep thought on his part.
By Stallman's own telling a free Objective-C frontend was an unexpected outcome. Until it came up in practice he thought a proprietary compiler frontend would be legal (https://gitlab.com/gnu-clisp/clisp/blob/dd313099db351c90431c...). So his stance in this email is a reaction to specific incidents, not careful forethought.
And the harms of permissive licensing for compiler frontends seem pretty underwhelming. After Apple moved to LLVM it largely kept releasing free compiler frontends. (But maybe I'd think differently if I e.g. understood GNAT's licensing better.)
> Safe languages insert additional machine branches to do things like verify that array accesses are in-bounds. In correct code, those branches are never taken. That means that the machine code cannot be 100% branch tested, which is an important component of SQLite's quality strategy.
Huh it's not everyday that I hear a genuinely new argument. Thanks for sharing.
I guess I don’t find that argument very compelling. If you’re convinced the code branch can’t ever be taken, you also should be confident that it doesn’t need to be tested.
This feels like chasing arbitrary 100% test coverage at the expense of safety. The code quality isn’t actually improved by omitting the checks even though it makes testing coverage go up.
> If you’re convinced the code branch can’t ever be taken, you also should be confident that it doesn’t need to be tested.
I don't think I would (personally) ever be comfortable asserting, with 100% confidence, that a code branch in the machine instructions emitted by a compiler can't ever be taken, no matter what, in most realistic application or library development. Doing so would require a type system powerful enough to express such an invariant, and in that case, surely the compiler would not emit the branch code in the first place.
One exception might be the presence of some external formal verification scheme which certifies that the branch code can't ever be executed, which is presumably what the article authors are gesturing towards in item D on their list of preconditions.
The argument here is that they're confident that the bounds check isn't needed, and would prefer the compiler not insert one.
The choices therefore are:
1. No bound check
2. Bounds check inserted, but that branch isn't covered by tests
3. Bounds check inserted, and that branch is covered by tests
I'm skeptical of the claim that if (3) is infeasible then the next best option is (1).
Because if it is indeed an impossible scenario, then the lack of coverage shouldn't matter.
If it's not an impossible scenario then you have an untested case with option (1) - you've overrun the bounds of an array, which may not be a branch in the code but is definitely a different behaviour than the one you tested.
> Because if it is indeed an impossible scenario, then the lack of coverage shouldn't matter.
At the point where a load-bearing piece of your quality assurance strategy is 100% branch coverage of the generated machine code, it very much does matter.
> I'm skeptical of the claim that if (3) is infeasible then the next best option is (1)
In the general case, obviously not. But, in the specific case we’re discussing, which is that (2) has the rider of “the development team will be forced to abandon a heretofore important facet of their testing strategy at the exact moment they are rewriting the entire codebase in a language they are guaranteed to have less expertise in,” I think (1) seems pretty defensible.
I think you’re misreading their statement. They aren’t saying they don’t want the compiler to insert the additional code. They’re saying they want to test all code the compiler generates.
In safety-critical spaces you need to be able to trace any piece of a binary back to source code, and that code back to requirements. If a piece of running code is only implicit in the source, that makes the traceability back to requirements harder. But I'd be surprised if things like bounds checks are really a problem for that kind of analysis.
Yeah, sounds too clever by half: memory-safe languages are less safe because they have bounds checks... Maybe I could see it on a space shuttle? Well, only in the most CYA scenarios, I'd imagine.
Critical applications like that used to use Ada to get much more sophisticated checking than just bounds checks. No certified engineer would (or should) ever design a safety-critical system without multiple "unreachable" fail-safe mechanisms.
Next they’ll have to tell me about how they had to turn off inlining because it creates copies of code which adds some dead branches. Bounds checks are just normal inlined code. Any bounds checked language worth its salt has that coverage for all that stuff already.
There is a whole 'nother level of safety validation that goes beyond your everyday OWASP, or heck even what we consider "highly regulated" industry requirements that 95-99% of us devs care about. SQLite is used in some highly specialized, highly sensitive environments, where they are concerned about bit flips, and corrupted memory. I had the luxury of sitting through Richard Hipp's talk about it one time, but I am certainly butchering it.
If a code branch can't ever be taken, doesn't that mean you don't need it? Basically, it's code that will never get executed, so leaving it out doesn't matter.
If you can then come up with a scenario where you do need it, well, in fully tested code you do need to test it.
So is the argument that safe langs produce stuff like:
// pseudocode
if (i >= array_length) panic("index out of bounds")
that are never actually run if the code is correct? But (if I understand correctly) these are checks implicitly added by the compiler. So the objection amounts to questioning the correctness of this auto-generated code, and is predicated upon mistrusting the correctness of the compiler? But presumably the Rust compiler itself would have thorough tests that these kinds of checks work?
Someone please correct me if I'm misunderstanding the argument.
One of the things that SQLite is explicitly designed to do is have predictable behavior in a lot of conditions that shouldn't happen. One of those predictable behaviors is that it does its best to stay up and running, continuing to do the best it can. Conditions where it should succeed in doing this include OOM, the possibility of corrupted data files, and (if possible) misbehaving CPUs.
Automatic array bounds checks can get hit by corrupted data, leading to a crash of exactly the kind that SQLite tries to avoid. With complete branch testing, they can guarantee that the test suite includes every kind of corruption that might hit an array bounds check, and guarantee that none of them panic. But if the compiler is inserting branches that are supposed to be inaccessible, you can't do complete branch testing. So how do you know that you have tested every code branch that might be reached from corrupted data?
Furthermore those unused branches are there as footguns which are reachable with a cosmic ray bit flip, or a dodgy CPU. Which again undermines the principle of keeping running if at all possible.
In Rust, at least, you are free to access an array via .get, which returns an Option and avoids the "compiler inserted branch" (which isn't compiler-inserted, by the way: [] access just calls an index method that performs the bounds check itself, and sometimes the compiler isn't able to elide it).
Also you rarely need to actually access by index - you could just access using functional methods on .iter() which avoids the bounds check problem in the first place.
I had Vec in mind, but regardless, nothing forces you to use the bounds-checked variant vs. one that returns Option<T>. And if you really are sure the bounds hold, you can always use the assume crate or just call unwrap_unchecked explicitly.
Keeping running if possible doesn't sound like the best strategy for stability. If data was corrupted in memory in a way that would cause a bounds check to fail, then carrying on is likely to corrupt more data. Panic, dump a log, and let a supervisor program (or a human) deal with the next step, but don't keep going and potentially persist corrupted data.
What the best strategy is depends on your use case.
The use case that SQLite has chosen to optimize for is critical embedded software. As described in https://www.sqlite.org/qmplan.html, the standard that they base their efforts on is a certification for use in aircraft. If mission critical software on a plane is allowed to crash, this can render the controls inoperable. Which is likely to lead to a very literal crash some time later.
The result is software that has been optimized to do the right thing if at all possible, and to degrade gracefully if that is not possible.
Note that the open source version of SQLite is not certified for use in aviation. But there are versions out there that have been certified. (The difference is a ton of extra documentation.) And in fact SQLite is in use by Airbus. Though the details of what exactly for are not, as far as I know, public.
If this documented behavior is not what you want for your use case, then you should consider using another database. Though, honestly, no other database comes remotely close when it comes to software quality. And therefore I doubt that "degrade as documented rather than crash" is a good reason to avoid SQLite. (There are lots of other potential reasons for choosing another database.)
outside political definitions, I'm not sure "crash and restart with a supervisor" and "don't crash" are meaningfully different? they're both error-handling tactics, likely perfectly translatable to each other, and Erlang stands as an existence proof that crashing is a reasonable strategy in extremely reliable software.
I fully recognize that political definitions drive purchases, so it's meaningful to a project either way. but that doesn't make it a valid technical argument.
Yes, Erlang demonstrates that "crash and restart with a supervisor" is a potentially viable strategy to reliability.
But the choice is not just political. There are very meaningful technical differences for code that potentially winds up embedded in other software, and could be inside of literal embedded software.
The first is memory. It takes memory to run whatever is responsible for detecting the crash, relaunching, and starting up a supervisor. This memory is not free. Which is one of the reasons why Erlang requires at a minimum 10 MB or so of memory. By contrast the overhead of SQLite is something like half a MB. This difference is very significant for people putting software into medical devices, automotive controllers, and so on. All of which are places where SQLite is found, but Erlang isn't.
The second is concurrency. Erlang's concurrency model leaks - you can't embed it in software without having to find a way to fit Erlang concurrency in. This isn't a problem if Erlang already is in your software stack. But that's an architectural constraint that would be a problem in many of the contexts that SQLite is actually used in.
Remember, SQLite is not optimized for your use case. It is optimized for embedded software that needs to try to keep running when things go wrong. It just happens to be so good that it is useful for you.
You're right and when I thought about it more I considered that "supervisor" isn't what I would want. Rather I'm thinking raising errors to the program that embeds sqlite so that it can decide what to do. I do have a desktop app that uses sqlite and I'd rather it raised an error than tried to recover.
But getting back into a well-defined state which you can communicate via an error is a viable recovery strategy.
This state can be "the database is corrupt", allowing for a restore from backup, it can also be "shut down gracefully" and communicate that.
Especially for databases there is a ton to consider: what about database entries that are not committed yet? When you run into OOM, should you at least try to commit the data still in RAM by freeing as much space as possible?
It still needs to detect that there is corrupted data and dump the log, and the supervisor would be no good as an external component, since in some runtimes it could be missing. So they just build it in, and we've come full circle.
> But (if I understand correctly) these are checks implicitly added by the compiler.
This is a dubious statement. In Rust, the array indexing operator arr[i] is syntactic sugar for calling the function arr.index(i), and the implementation of this function on the standard library's array types is documented to perform a bounds-check assertion and access the element.
So the checks aren't really implicitly added -- you explicitly called a function that performs a bounds check. If you want different behavior, you can call a different, slightly-less-ergonomic indexing function, such as `get` (which returns an Option, making your code responsible for handling the failure case) or `get_unchecked` (which requires an unsafe block and exhibits UB if the index is out of bounds, like C).
Nothing in this world is perfect, but this behavior is less of an abomination than whatever a junior dev on a timeline might write to handle this condition.
I think it’s less like doubting that the given panic works and more like an extremely thorough proof that all possible branches of the control flow have acceptable behavior. If you haven’t tested a given control flow, the issue is that it’s possible that the end result is some indeterminate or invalid state for the whole program, not that the given bounds check doesn’t panic the way it’s supposed to. On embedded for example (which is an important usecase for SQLite) this could result in orphaned or broken resources.
> I think it’s less like doubting that the given panic works and more like an extremely thorough proof that all possible branches of the control flow have acceptable behavior.
The way I was thinking about it was: if you somehow magically knew that nothing added by the compiler could ever cause a problem, it would be redundant to test those branches. Then wondering why a really well tested compiler wouldn't be equivalent to that. It sounds like the answer is, for the level of soundness sqlite is aspiring to, you can't make those assumptions.
But does it matter if that control flow is unreachable?
If the check never fails, it is logically equivalent to not having the check. If the code isn't "correct" and the panic is reached, then the equivalent c code would have undefined behavior, which can be much worse than a panic.
In the first case, it often is optimized out. But the optimizer isn't perfect, and can't detect every case where it is unreachable.
If you have the second case, I would much rather have a panic than undefined behavior. As mentioned in another comment, in C indexing an array is semantically equivalent to:
if (i < len(arr)) arr[i] else UB()
In fact a C compiler could put in a check and abort if the index is out of bounds, like Rust does, and still be in spec. But the undefined behavior could also cause memory corruption, or some other subtle bug.
> questioning the correctness of this auto-generated code
I wouldn't put it that way. Usually when we say the compiler is "incorrect", we mean that it's generating code that breaks the observable behavior of some program. In that sense, adding extra checks that can't actually fail isn't a correctness issue; it's just an efficiency issue. I'd usually say the compiler is being "conservative" or "defensive". However, the "100% branch testing" strategy that we're talking about makes this more complicated, because this branch-that's-never-taken actually is observable, not to the program itself but to its test suite.
It's ignoring that many such checks get reliably optimized away.
Worse, it's a bit like saying "in case of a broken invariant I prefer arbitrary, potentially highly problematic behavior over clean aborts (or errors) because my test tooling is inadequate"
instead of saying "we haven't found adequate test tooling for our use case".
Why inadequate? Because technically test setups can use
1. fault injection to test such branches even if normally you would never hit them
2. for many of such tests (especially array bound checks) you can pretty reliably identify them and then remove them from your test coverage statistic
I don't know what the Rust tooling for this looks like in 2025, but around the Rust 1.0 era you mainly had C tooling applied to Rust, so you had problems like that back then.
Bound checks are usually conditionally compiled. That's more a kind of "contract" you'll verify during testing. In the end the software actually used will not check anything.
#ifdef CONTRACTS
if (i >= array_length) panic("index out of bounds")
#endif
A hygienic way to handle that is often "assert", which can be a macro or a built-in statement. The main problem with assertions is side effects: the verification must be pure...
Ok, but you can still test all the branches in your source code and have 100% coverage. Those additional `if` branches are added by the compiler. You are responsible for testing the code you write, not the one that actually runs. Your compiler's test suite is responsible for the rest.
By the same logic one could also claim that tail recursion optimisation, or loop unrolling are also dangerous because they change the way code works, and your tests don't cover the final output.
If they produce control flow _in the executable binary_ that is untested, then they could conceivably lead to broken states. I don’t believe most of those sorts of transformations cause alternative control flows to be added to the executable binary.
I don’t think anyone would find the idea compelling that “you are only responsible for the code you write, not the code that actually runs” if the code that actually runs causes unexpected invalid behavior on millions of mobile devices.
This way of arguing may seem smart, but it is not fully correct.
Google already ships binaries compiled with Rust in Android. They are actually system services, which are more critical than apps' SQLite storage.
Moreover, a Rust version of SQLite could ship binaries compiled with a qualified compiler like Ferrocene: https://ferrocene.dev/en/ (the downstream, qualified version of the standard Rust compiler rustc). In the qualification process the compiler is actually checked against a strict set of functional requirements for whether it generates reasonable machine code.
Most people don't compile SQLite with qualified versions of GCC either, so this exact argument can actually be turned against them.
>You are responsible for testing the code you write, not the one that actually runs.
Hipp worked as a military contractor for battleships, and years later SQLite was under contract with every proto-smartphone company in the USA. Under these constraints, maybe you are not responsible for testing what the compiler spits out across platforms and different compilers, but doing so makes the project a lot more reliable, and makes it sexier for embedded and weapons.
I believe there's a Rust RFC for a way to write mandatory tail calls with the become keyword. So then the code is actually defined to have a tail call, if it can't have a tail call it won't compile, if it can have one then that's what you get.
Some languages I was aware of are defined so that if what you wrote could be a tail call it is. However you might write code you thought was a tail call and you were wrong - in such languages it only blows up when it recurses too deep and runs out of stack. AIUI the Rust feature would reject this code.
I don't see anything wrong with taking responsibility for the code that actually runs. I would argue that that level of accountability has played a part in SQLite being such a great project.
>You are responsible for testing the code you write, not the one that actually runs
That's a bizarre claim. The source code isn't the product, and the product is what has to work. If a compiler or OS bug causes your product to function incorrectly, it's still your problem. The solution is to either work around the bug or get the bug fixed, not just say "I didn't write the bug, so deal with it."
You are a better developer than me, then. I take it you have tests in your product repos that test your compiler behaviour, including optimisations that you enable while building binaries, and all third party dependencies you use. Is that accurate?
There is a difference between "gcc 4.8 is buggy, let's not use it" and "let's write unit tests for gcc". If you are suspicious about gcc, you should submit your patches to gcc, not vendor them in your own repo.
>I take it you have tests in your product repos that test your compiler behaviour, including optimisations that you enable while building binaries, and all third party dependencies you use. Is that accurate?
Are you asking whether I write integration tests? Yes, I do. And at work there's a whole lot of acceptance testing too.
>There is a difference between "gcc 4.8 is buggy, let's not use it" and "let's write unit tests for gcc".
They're not proposing writing unit tests for gcc, only actually testing what gcc produces from their source. You know, by executing it like tests tend to do. Testing only the first party source would mean relying entirely on static source code analysis instead.
> Are you asking whether I write integration tests? Yes, I do.
Exactly. You don't need unit tests for the binary output. You want to test whether the executable behaves as expected. Therefore "rust adds extra conditional branches that are never entered, and we can't test those branches" argument is not valid.
It's the sort of argument that I wouldn't accept from most people and most projects, but Dr Hipp isn't most people and SQLite isn't most projects.
Certainly don't get me wrong, SQLite is one of the best and most thoroughly tested libraries out there. But this reads like an argument added just to have four arguments, because two of the arguments boil down to "those languages didn't exist when we first wrote SQLite, and we aren't going to rewrite the whole library just because a new language came around."
Any language, including C, will emit or not emit instructions that are "invisible" to the author. For example, whenever the C compiler decides it can autovectorize a section of a function it'll be introducing a complicated set of SIMD instructions and new invisible branch tests. That can also happen if the C compiler decides to unroll a loop for whatever reason.
The entire point of compilers and their optimizations is to emit instructions which keep the semantic intent of higher level code. That includes excluding branches, adding new branches, or creating complex lookup tables if the compiler believes it'll make things faster.
Dr Hipp is completely correct in rejecting Rust for SQLite. Sqlite is already written and extremely well tested. Switching over to a new language now would almost certainly introduce new bugs that don't currently exist as it'd inevitably need to be changed to remain "safe".
The way you know is by running the full SQLite test suite, with 100% MC/DC coverage (slightly stricter than 100% branch coverage), on each new compiler, version, and set of flags you intend to support. It's my understanding that this is the approach taken by the SQLite team.
Dr. Hipp's position, as paraphrased at https://blog.regehr.org/archives/1292, is: "I cannot trust the compilers, so I test the binaries; the source code may have UB or run into compiler bugs, but I know the binaries I distribute are correct because they were thoroughly tested." There, Dr. John Regehr, a researcher in undefined behavior, found some undefined behavior in the SQLite source code, which kicked off a discussion of the implications of UB given 100% MC/DC coverage of the binaries on every supported platform.
(I suppose the argument at this point is, "Users may use a new compiler, flag, or version that creates untested code, but that's not nearly as bad as _all_ releases and platforms containing untested code.")
Autovectorization / unrolling can maybe still be handled with a couple of additional tests.
The main problem I see with doing branch coverage on compiled machine code is inlining: instead of two tests for one branch, you now need two tests for each function that a copy of the branch was inlined into.
If it was as completely tested as claimed, then switching to Rust would be trivial: all you need to do is pass the test suite and all bugs would be gone. I can think of other reasons not to jump to Rust (it is a lot of code, SQLite already works well, test coverage is very good but still incomplete, and Rust only solves a few correctness problems), just not the claim that SQLite is already tested enough to be free of the kinds of bugs that Rust might actually prevent.
No, you still need to rewrite, re-optimize, etc. everything.
It would make it much easier to be fully compatible, sure, but that doesn't make it trivial.
Furthermore, parts of its (mostly internal) design are strongly influenced by C-specific dev-UX aspects, so you wouldn't write them the same way, and tests for them (as opposed to integration tests) may not apply.
Which in general also means you would most likely break some special-purpose/unusual users who have "brittle" (not guaranteed) assumptions about SQLite.
If you have code that changes very little, if at all, and has no major issues, don't rewrite it.
But most of the new "external" things written around SQLite, alternative VFS implementations etc., tend to be at most partially written in C.
Yes. You have to write `unsafe { ... }` around it, so there's an ergonomic penalty plus a more nebulous "sense that you're doing something dangerous that might get some skeptical looks in code review" penalty, but the resulting assembly will be the same as indexing in C.
I figured, but I guess I don't understand this argument then. SQLite as a project already spends a lot of time on quality so doing some `unsafe` blocks with a `// SAFETY:` comment doesn't seem unreasonable if they want to avoid the compiler inserting a panic branch for bounds checks.
I wonder if this problem could be mitigated by not requiring coverage of branches that unconditionally lead to panics, or if there could be some kind of marking on those branches to indicate that they should never occur in correct code.
I think those branches are often not there because it's provably never going out of bounds. There are ways to ensure the compiler knows the bounds cannot be broken.
It's interesting to consider (and the whole page is very well-reasoned), but I don't think that the argument holds up to scrutiny. If such an automatic bounds-check fails, then the program would have exhibited undefined behavior without that branch -- and UB is strictly worse than an unreachable branch that does something well-specified like aborting.
A simple array access in C:
arr[i] = 123;
...can be thought of as being equivalent to:
if (i >= array_length) UB();
else arr[i] = 123;
where the "UB" function can do literally anything. From the perspective of exhaustively testing and formally verifying software, I'd rather have the safe-language equivalent:
if (i >= array_length) panic();
else arr[i] = 123;
...because at least I can reason about what happens if the supposedly-unreachable condition occurs.
Dr. Hipp mentions that "Recoding SQLite in Go is unlikely since Go hates assert()", implying that SQLite makes use of assert statements to guard against unreachable conditions. Surely his testing infrastructure must have some way of exempting unreachable assert branches -- so why can't bounds checks (that do nothing but assert undefined behavior does not occur) be treated in the same way?
The 100% branch testing is on the compiled binary. To exempt unreachable assert branches, turn off assertions, compile, and test.
A more complex C program can have index range checking at a different place than the simple array access. The compiler's flow analysis isn't always able to confirm that the index is guaranteed to be checked. If it therefore adds a cautionary (and unneeded) range check, then this code branch can never be exercised, making the code no longer 100% branch tested.
You basically say that if deeply unexpected things happen, you prefer your program doing wildly arbitrary and as such potentially dangerous things over it having a clean abort or proper error. ... That doesn't seem right.
Worse, it's due to a lack in the tooling used, not a fundamental problem: not only can you test these branches (using fault injection), you can also often (not always) separate them from relevant branches when collecting the branch statistics.
So the whole argument misses the point (which is that the tooling is lacking, not that extra checks for array bounds and the like are bad).
Lastly, array bounds checking is probably the worst example they could have given, as it:
- often can be disabled/omitted in optimized builds
- is quite often optimized away
- often has quite low performance overhead
- bound check branches are often very easy to identify, i.e. excluding them from a 100% branch testing statistic is viable
- out of bounds read/write are some of the most common cases of memory unsafety leading to security vulnerability (including full RCE cases)
> you prefer your program doing wildly arbitrary and as such potentially dangerous things over it having a clean abort or proper error.
SQLite isn't a program, it's a library used by many other programs. As such, aborting is not an option. It doesn't do "wildly arbitrary" things - it reports errors to the client application and takes it on faith that they will respond appropriately.
> In incorrect code, the branches are taken, but code without the branches just behaves unpredictably.
It's like seat belts.
E.g. what if we drive four blocks and then hit the case where the seatbelt is needed? Okay, we have an explicit test for that.
But we cannot test everything. We have not tested what happens if we drive four blocks, and then take a right turn, and hit something half a block later.
Screw it, just remove the seatbelts rather than have this insane untested space where we are never sure whether the seatbelt will work properly and prevent injury!
> On 4 October 2025, the Chinese Ministry of Commerce issued an export control notice prohibiting Nexperia China and its subcontractors from exporting specific finished components and sub-assemblies manufactured in China. Nexperia is actively engaging with the Chinese authorities to obtain an exemption from these restrictions and has deployed all available resources to that end. Nexperia is in close dialogue with all relevant national and local government authorities to mitigate the impact of this measure.
This doesn't make any sense. Why do the Dutch think taking control of Nexperia will somehow force China to provide components to them?
The Dutch ASML is prohibited from providing EUV to China which is apparently what forced China to decouple. Little wonder the other day the Chinese said "We aren't afraid of trade war". Escalating this BS brings nothing good to ordinary people.
The Chinese export ban of 4 October is a reaction to the Dutch takeover of 7 October, even though the dates seem reversed: the Dutch action involved court proceedings before being finalized, whereas the Chinese action presumably didn't.
You have omitted the key part: the Chinese export control notice was a reaction to the rule announced on 29 September 2025 by the United States Bureau of Industry and Security, which put Nexperia on the list of entities to which US export restrictions apply.
So, as with all these kinds of actions, regardless of whether they happen in the Netherlands, China, or Taiwan, their origin is in the USA.
> Efforts against the business of making and distributing images of that are justified — but these must not be done by dangerous methods.
Dangerous methods meaning redefining words.