My personal conclusion is that this argues for "escape hatches" in any language with strong constraints, rather than for an indictment of Turing-incompleteness in general.
The usual criticism that is leveled at Turing-incomplete languages is "well you've only changed the problem from taking an infinite time to taking more time than the heat death of the universe which really isn't that much better," but that's completely ignoring the ergonomics of how this actually plays out in real-life coding.
In real-life coding, the choice is almost never between "does not terminate" and "heat death of the universe." When we write buggy, non-terminating code, the fix is usually a small change that takes it from "does not terminate" to "terminates in a reasonable amount of time." The fix usually does not take it to "indistinguishable from non-termination."
This might be forgetting a base case in a recursive definition or getting a loop termination condition wrong. More insidiously, this can happen when termination conditions are based on non-local things and therefore depend on assumptions that are "far away" from the original code.
The point of Turing-incomplete languages is to go after those use cases, where the function would otherwise complete in a reasonable amount of time save for one small issue.
What the article is pointing out is that in pursuit of that dream, the language can be overly restrictive to the point that people are forced to write "heat death"-style functions to get around its restrictions.
This is an important distinction! People aren't accidentally writing functions that take insane amounts of time to complete in Turing-incomplete languages, but are rather doing so intentionally because that's the only way they can get their desired functionality. This is why the analogy of "non-termination" to "termination after the heat death of the universe" doesn't work. The former occurs when people don't realize that they've done that. The latter is a conscious, angry choice by users.
That's why I think you need escape hatches to make sure people aren't forced to abuse the language when its restrictions really make a certain task impossible to complete in a reasonable amount of time.
I think it's not just about termination, it's broadly about restricted semantics.
If we had a meta-language like Lisp or ML that had very good capabilities to quickly develop DSLs with very clean restricted semantics for every particular component of a big system, and tooling to automate proofs for said restricted semantics, it'd be much easier to prove the validity of any piece of code.
The scenario we currently have is quite different, so it requires heroic efforts to get any kind of guarantee on big systems (e.g. the Astree static analyzer or seL4).
Citing Alan Perlis: "Beware of the Turing tar pit, where everything is possible but nothing of interest is easy".
> Citing Alan Perlis: "Beware of the Turing tar pit, where everything is possible but nothing of interest is easy".
I posited here on HN recently that PostScript / PDF is just such a tar pit, but got downvoted for it. If you've ever tried to copy and paste from a PDF and got rubbish (as opposed to a message saying you aren't allowed to do that), the reason is that the Turing-complete program painting the page doesn't have to paint things in any particular order, and the things it paints may be bitmaps. In the worst case you may as well be trying to copy a bunch of PNG images that have been plonked onto the canvas and that happen to look like letters to a human.
We would have been far better off with a declarative way of painting a page. It's not that hard - SVG achieves the same end result, and is purely declarative.
> Most languages ... do not gain any benefit from this restriction. One exception is in domains where you are proving properties or doing analysis
Not even then. In some cases, if the proof language is based on the lambda calculus, you need to forbid general recursion to keep it consistent, but making the object (programming) language non-Turing-complete does not make analysis/proof easier, unless you consider going from potentially infinite difficulty to unbounded difficulty "easier." The complexity of analysing a program written in a language with only if/else, no recursion, not even bounded loops, and just boolean variables with no arrays is already PSPACE-complete in the size of the program.
Some people think that program analysis is hard because of the halting theorem (or its corollary, Rice's theorem), but that's just the simple story told to first-year students. A generalisation of those theorems (sometimes called "bounded halting"; it's part of one of the most important theorems in computer science, the time-hierarchy theorem) says, roughly, that in the worst case you cannot determine in under N steps whether a program will halt (/emit some output/reach a certain state) on input X within N steps. That's the "master" theorem that determines the difficulty of program analysis/proof, and not only does it hold for some of the more powerful non-Turing-complete languages, it even holds for the simplest computational models: finite state machines.
To give an intuition as to why termination cannot possibly make analysing programs, or proving interesting properties about them, any easier, consider that we can take any program, in any language, and mechanically translate it into such a terminating language (even better: into a finite state machine!) while maintaining its behaviour for all practical intents and purposes: we add a decrementing counter, initialised to, say, 2^1000, to all loops/recursive calls, and terminate the program when it reaches zero (possibly changing types to allow that, but that is easy to do even mechanically). This program will behave just like the original at least for the duration of the universe, so we don't care about any differences that could arise later. In other words, if termination (or even "finite-statedness") alone made a difference to program analysis, we could already assume, without loss of (practical) generality, that all programs are like that already.
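To make that translation concrete, here is a minimal sketch in Haskell (my example, not part of the original argument), using a function whose termination for all inputs is famously unknown:

    -- Hedged sketch of the "add a decrementing counter" translation described
    -- above. The unbounded Collatz recursion has unknown termination
    -- behaviour; threading a fuel argument through it makes it trivially
    -- terminating while behaving identically for any budget you could
    -- realistically exhaust.
    collatzSteps :: Integer -> Integer -> Maybe Integer
    collatzSteps fuel n
      | fuel <= 0 = Nothing            -- budget exhausted, give up
      | n == 1    = Just 0
      | even n    = succ <$> collatzSteps (fuel - 1) (n `div` 2)
      | otherwise = succ <$> collatzSteps (fuel - 1) (3 * n + 1)

    -- With fuel = 2^1000 this is observably the same program as the unbounded
    -- one for the lifetime of the universe, yet it provably terminates.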
On the other hand, many specific instances of programs in either terminating or Turing-complete languages are relatively easy to analyse.
I think you've swung too far the other way. Specifically, you're overlooking various classes of analysis that do get easier with more restricted languages, and focusing too hard on the general analysis of "let me prove an arbitrary statement about an arbitrary program" (i.e. the same pitfall people fall into when they point to Rice's theorem and say "ah, so static analyzers are a worthless waste of time").
For example, with a strongly normalizing lambda calculus, I can prove intensional equality for any two functions and also have the nice property that extensional and intensional equality coincide.
This means that I can prove e.g. that refactors that aren't supposed to change behavior in fact don't. A specific class of analysis, to be sure, that's not as useful as a general "prove anything about this function," but still quite useful. This specific property happens to be true of Dhall.
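As a toy illustration of what "compare normal forms" buys you (my sketch; it uses a trivially terminating expression language rather than a full strongly normalizing lambda calculus, but the shape of the check is the same):

    data Expr = Lit Int | Add Expr Expr | Mul Expr Expr deriving (Eq, Show)

    -- In a strongly normalizing language every term has a normal form, so
    -- "normalize and compare" is a terminating decision procedure. In this
    -- toy language the normal form is simply an Int.
    norm :: Expr -> Int
    norm (Lit n)   = n
    norm (Add a b) = norm a + norm b
    norm (Mul a b) = norm a * norm b

    -- The "this refactor didn't change behaviour" check:
    sameProgram :: Expr -> Expr -> Bool
    sameProgram a b = norm a == norm b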
From the viewpoint of state machines, this is also true of FSMs. In that case, another way to think of it is that you get optimization for free: you can construct the minimal FSM with the same behavior as the one you've created (this is also true for the lambda calculus case, but the lambda calculus generally isn't thought of in operational terms the way FSMs are).
Moreover, your last comment also illustrates that there are indeed classes of programs that are quite easy to analyze. When you say "many specific instances of programs" I presume you don't literally mean hard-coded single instances of a program, but rather classes of programs that all are structurally similar. These classes do not always align nicely along with the usual classes in theoretical CS, but that doesn't mean they aren't useful.
For example a language consisting entirely of the holonomic functions with the usual semantics of them preserves symbolic integration, a nice little bit of analysis in the world of mathematical computing.
Just to be clear, I am not saying that very restricted languages in some very restricted domains (such as configuration or hypertext) are not useful. I'm saying that in general purpose languages, restricting the programming language is not a generally fruitful avenue if analysis and correctness are the goal.
> For example, with a strongly normalizing lambda calculus, I can prove intensional equality for any two functions
What's the complexity of that?
> You can construct the minimal FSM with the same behavior as the one you've created
The complexity of that is more than linear in the number of states, which is usually at least exponential in the size of the program.
> This means that I can prove e.g. that refactors that aren't supposed to change behavior in fact don't
But that's often the case in Turing-complete languages, too. Moreover, the easy cases are usually not harder to detect in those languages.
Any benefits can't be true in general because it is trivial to transform any program to an "effectively equivalent" FSM. A claim that could be true is that in restricted languages, many more programs people write happen to be easier to analyse than the programs they would have written to do the same thing in an unrestricted language. That's certainly a very interesting empirical claim, but it requires empirical evidence.
Probably huge, but very reasonable for the subset of code people normally write (this is again the Rice's-theorem-vs-static-analyzers situation). I don't know the exact complexity off the top of my head, but I'm sure there are pathological cases where normalization becomes the same thing as full-on evaluation.
But it doesn't show up in practice (see e.g. Dhall as a working case), because you have to write pathological code to trigger it. This is similar to the case with many statically typed programming languages (e.g. Java, Haskell, etc.) that have worse-than-exponential blow-ups in their type systems that essentially never show up in practice. I'm not sure any programmer alive has ever accidentally written a Java program that took days to compile (which could easily be possible with the right pathological behavior). EDIT: I forgot about Turing completeness in Java's and Haskell's type systems; I meant leaving those tricks aside.
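For a sense of how pathological you have to be to hit those blow-ups, here is the textbook Hindley-Milner-style construction (my sketch, not anything from this thread; nobody writes this by accident):

    -- In ML/Haskell-style inference the size of an inferred type can double
    -- with every binding.
    blowup =
      let x0 = \z -> z        -- x0 :: a -> a
          x1 = (x0, x0)
          x2 = (x1, x1)
          x3 = (x2, x2)
          x4 = (x3, x3)
          x5 = (x4, x4)
      in x5
    -- x5's type already contains 2^5 copies of (a -> a); a few dozen more
    -- bindings and it is the compiler, not the program, that stops finishing
    -- in reasonable time.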
> But that's often the case in Turing-complete languages, too. Moreover, the easy cases are usually not harder to detect in those languages.
Yes if you program in a fragment of a language you're fine. We're trying to disincentivize users from stepping outside the fragment or at least let them be aware when that happens. One way of doing that is to make it off-limits to step outside that fragment. Another is a stratified language (e.g. Hume). Another way is an analyzer separate from the language. Yet another is community convention.
> Any benefits can't be true in general because it is trivial to transform any program to an "effectively equivalent" FSM.
By arguments analogous to this and the usual ones about Turing completeness this statement is always true in some form for any claimed benefit about any programming language. By this standard all programming languages are indistinguishable at a general level (which is true, but not very useful).
> A claim that could be true is that in restricted languages, many more programs people write happen to be easier to analyse than the programs they would have written to do the same thing in an unrestricted language. That's certainly a very interesting empirical claim, but it requires empirical evidence.
That is at the end of the day the contention behind any programming language. Yes I agree that this means that almost any proponent of a language talking about how it's "objectively" better is wrong. I also agree that this turns the entirety of programming languages into essentially an art-form rather than anything approaching a science. But such is the current state of things (yes also I happen to agree with your usual contention that this is a huge problem with the value proposition of PLT).
Barring this, we have weak (but not unusually so for the field of programming) evidence, in the form of user communities of Turing-incomplete languages today, that people aren't generally creating heat-death-style code by accident. They're doing it on purpose to get around restrictions of the language.
EDIT: The name of the game when creating programming languages is to target code that "a normal user would write." Completely restricting the entire language is just another lever in that process.
> This is similar to the case with many statically typed programming languages ... that have worse-than-exponential blow-ups in their type systems that essentially never show up in practice.
I'm not so sure. Type checking/inference with "degenerate cases" (e.g. Hindley-Milner) is in FPT (fixed-parameter tractable) as are, AFAIK, all problems that are hard in the worst-case but could be easy in practice (e.g. SAT). Program analysis, (assuming "FSMification", of course, to avoid undecidability), was proven in 2005 to not be FPT.
> Yes if you program in a fragment of a language you're fine.
I'm claiming that analogous code to that which would be feasibly checkable in a restricted language would also be feasibly checkable in a non-restricted language, and that both are subject to the same difficulties.
> By this standard all programming languages are indistinguishable at a general level (which is true, but not very useful).
Right, but that doesn't mean that a specific empirical claim should be assumed true until proven false, rather than the opposite, which is the scientific norm.
> Barring this, we have weak (but not unusually so for the field of programming) evidence, in the form of user communities of Turing-incomplete languages today, that people aren't generally creating heat-death-style code by accident.
Are you talking about niche languages (like configuration languages) or those intended to be general purpose?
> The name of the game when creating programming languages is to target code that "a normal user would write." Completely restricting the entire language is just another lever in that process.
Perhaps, but this claim requires evidence, unless we agree to keep it in the realm of art and aesthetics, which I'm more than happy to.
I'm just butting into this conversation, but I think you're setting the bar too high.
> Program analysis, (assuming "FSMification", of course, to avoid undecidability), was proven in 2005 to not be FPT.
I think you're using "program analysis" in a different way than I'm used to. I thought program analysis was just the attempts to infer/check properties of the program. A property like termination and a property like "the type of this variable" are very different, obviously. What specifically are you referring to?
> I'm claiming that analogous code to that which would be feasibly checkable in a restricted language would also be feasibly checkable in a non-restricted language, and that both are subject to the same difficulties.
I mean, you're correct here, but missing the point. Consider a type checker, or Rust's borrow checker, or a side-effect checker. The static analysis is going to be the same if you just translate the code to Python, for example. Nonetheless, the differences matter to people. I have a completely different confidence in my program's properties using somebody's pure function in Haskell than I do using some random Python function. If I looked at the source code, yes, I could reason through either of them in the same way, but in one of them I don't have to.
> Right, but that doesn't mean that a specific empirical claim should be assumed true until proven false, rather than the opposite, which is the scientific norm.
A specific empirical claim should be suspected in whichever direction you personally lean given the arguments and incomplete evidence, and held in a state of uncertainty until proven definitively one way or the other. The scientific norm is decidedly NOT assuming one way or another prior to evidence/proof.
> I thought program analysis was just the attempts to infer/check properties of the program.
It means to infer/check a property of the program: not of its structure though (such as how many subroutines it has), but of its behaviour.
> Nonetheless, the differences matter to people.
But when people speak of terminating languages they hint at being interested in far "deeper" properties (not compositional, less local) than those that can be checked with a simple type checker. I am not claiming that languages don't matter, at least not here, only that termination doesn't help program verification (of the "deep" kind).
> The scientific norm is decidedly NOT assuming one way or another prior to evidence/proof.
If you make a claim that variable X has an effect on observation Y then the scientific norm is that while we don't know one way or another, the working hypothesis is that there is no effect, i.e. we give a preference to the assumption of no causal relationship between things. For example, we don't know whether or not eating lettuce on sunny Sundays increases baldness in men, but the scientific norm is that the working hypothesis is that it does not.
The article being discussed paints a very incomplete picture of what Idris does, but Idris seems to be in agreement with pron's comment here: it allows functions that loop forever, but if you want to write a proof, you'll need to ensure that it terminates. In fact, all functions are, by default, allowed to be partial.
I'm having trouble reconciling your first and second paragraphs. The first paragraph states that program analysis is PSPACE complete in the size of the program while the second appears to be saying that it is EXPTIME. But it is not known that PSPACE = EXPTIME.
In the first paragraph I mentioned that a specific, very simple FSM language is PSPACE, and in the second paragraph I was talking about terminating languages in general, and didn't mention EXPTIME. For more elaborate terminating languages, the complexity of analysis is much, much bigger than EXPTIME.
There's a more intuitive way: In a virtual machine interpreter loop, increase a value (commonly called "gas") at every instruction step (or each block of instructions). Different instructions can have different costs.
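A rough sketch of that idea (mine, in Haskell, with a made-up instruction set and made-up costs), just to show where the check sits in the dispatch loop:

    -- Toy stack-machine interpreter that charges "gas" per instruction and
    -- aborts when the budget is exhausted. Different instructions can charge
    -- different amounts, as noted above.
    data Instr = Push Int | Add | Jump Int deriving Show

    run :: Int -> Int -> [Instr] -> [Int] -> Either String [Int]
    run gas pc prog stack
      | gas <= 0          = Left "out of gas"
      | pc >= length prog = Right stack
      | otherwise = case prog !! pc of
          Push n -> run (gas - 1) (pc + 1) prog (n : stack)
          Add    -> case stack of
                      (a:b:rest) -> run (gas - 1) (pc + 1) prog (a + b : rest)
                      _          -> Left "stack underflow"
          Jump t -> run (gas - 3) t prog stack   -- jumps cost a bit more

    -- run 1000 0 [Push 2, Push 3, Add] []  ==>  Right [5]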
Turing incomplete languages are not necessarily niche or a pita to use. For example SQL (before version 3) had no recursion yet was very widely used. Even today, I'd guess 99% of written SQL does not use recursive CTEs.
I think Starlark (formerly known as Skylark) [1] is a practical example of a Turing Incomplete language. As far as I know, this language doesn't allow recursion or even while loops. Only Python like for loops over bounded collections are allowed.
The language is very similar to Python so it's extremely easy to learn for many people.
This language is used as a configuration language for the Bazel build system [2]. Ensuring that scripts will always finish running is a very nice property for configuration languages, giving a nice balance between flexibility and sanity.
I would love to have a similar configuration language but based on Clojure instead of Python.
(note that Neil, the author of the blog post, commented there too)
My experience working on build systems and Bazel is that recursion is rarely useful (and most of the use cases are for things that should be discouraged). Interestingly, Meson advertises its "non-Turing complete DSL" as a feature on its front page.
The fun follow-up question is whether we should extend this observation to static type systems too. Should we just accept that those should be Turing-complete too?
We need a moniker for Turing Incomplete that's easier to explain. I'd really like to see this become more of a thing. When groups organize, you can lean on their expertise and expenditure of effort to make your point for you rather than having to fight on your own the whole time.
I'm trying to put together a community wiki for a non-technical group (with ideas of expanding it out to others should I succeed) and every time I dig back into it I'm pretty sure I want a markdown format, and any fancy layout is up to me to sort out. But most markdown engines allow embedded HTML. Which is great when all content is authored by the maintainers, not so great when one tech-savvy troublemaker misuses it to put turing-complete content into the page.
I'm thinking of switching implementation languages to Elixir, and the Elixir Markdown implementations all seem to assume inline HTML is a feature you wouldn't want to toggle. I am already biting off too much, I don't want to throw a parser into that too.
I'm not sure what Turing completeness has to do with such a problem. There's a theoretical non-Turing complete subset of HTML+JS that has all the problems you claim, meanwhile Markdown with a built-in lambda calculus interpreter is Turing complete yet much less problematic. I think you're overcomplicating things for yourself by bringing terms like 'Turing-complete content' into the discussion, surely it just suffices to explain that the more powerful the system, the more open to abuse it is?
Eh, I dunno. "Turing complete" is a pretty easy concept to understand: such a language is capable of simulating a universal Turing machine. "Turing incomplete" could just be "not Turing complete."
It's pretty clear that this is a well-defined definition, because every computer language either is, or is not Turing complete.
The people who make budgetary decisions are not in this bubble, which is a big part of why we keep getting overruled on important things. We can either go on acting surprised, or we can figure out how to do the work of communicating. We have decades of evidence that the rest of the world really doesn't give a damn, and are willing to pay a lot of money to continue not giving a damn.
Turing complete sounds like something people want to have. Why are you the "good guy" for trying to make something incomplete? Wouldn't it be more desirable if we completed it?
That reminds me, I was once going into a clinic, and a guy coming out was looking at a paper in his hand, then at me, and said "Positive is good, right? It says here HIV positive. That's good, right?"
The people who want the turingness to be complete are the ones who want the HIV to be positive. And I've never met any of those making budgetary decisions, though I'm sure some exist.
(For the record, it was someone I knew, and he was a joker. He was not HIV positive, this was just a joke).
That is not a funny joke, and yet I can see a couple of my relatives making it.
It's often less about Turing Complete specifically and more about feature factory thinking. "You mean we could have a feature for a bit more (or less) work right now? Why wouldn't we want to have that? Think of all the money we can make!" followed 18 months later by pathological inability to understand why development has slowed down so much due to opportunity costs for which they spent the bonus check so long ago they can't even remember if they had fun spending it.
> We can either go on acting surprised, or we can figure out how to do the work of communicating.
Or we can figure out how to fix the fact that incompetent people are the ones who make budgetary decisions, or we can go on not acting surprised that they make bad decisions.
In eBPF's case, at least the verifier puts a static cap on the total number of instructions that can be executed when invoking an eBPF hook – a reasonable cap, not "this will terminate some time after the heat death of the universe". This means you can be sure your eBPF hook will always run to completion in practice, as opposed to timing out or wedging the system indefinitely. That's a real benefit, but it requires stricter limitations than you see in "Turing-incomplete because we can" languages.
This limitation also allows the verifier to be simpler, while still allowing the program to be JITted to high-performance native code, that is guaranteed safe without unnecessary bounds checks. That simplicity is a second benefit, albeit one for kernel developers rather than eBPF-program developers. (You could say the performance benefits eBPF-program developers, but more complex JIT designs can achieve comparable performance without the limitations.) In contrast, "Turing-incomplete because we can" languages are typically high-level interpreted languages: they're not particularly fast to start with, and they wouldn't get any faster if the restrictions on recursion were lifted.
Despite the two aforementioned benefits, I believe that eBPF's guaranteed termination is somewhat overrated; I think it could benefit from a mode that allows arbitrary loops, even if that would require moving some compile-time checks to runtime.
I think Turing-incomplete languages are the future, because every property you care about of a program you will actually write should be reasonably efficient to reason about. Otherwise your teammates aren't going to be very happy that you're writing it that way.
This defines some subset of computable functions that has some real nice properties. Eventually I expect us to gain more and more understanding of where that boundary lies, defining humans-give-a-shit-about-these-programs-space. Over time, I expect our tools will slowly gain the ability to automatically check the things that we care about and restrict us from doing it in ways that get too hairy. I expect that that future is a very long way off.
Termination is relevant if you never run the program, because the point is to prove that something exists by writing the program. In languages like Agda or Idris, you can prove things exist by compiling the program, not running it. It doesn't matter if it takes a million years to run, so long as you have a guarantee it won't loop forever.
For ordinary programs that we run because we want the result, if you want a guarantee that it will finish then you need something like real-time programming, because it's not enough to know that it terminates. You want to know that it will meet a deadline.
Sadly not. SQL (or, at least, T-SQL and Postgres) can implement a cyclic tag system, GLSL can run the Game of Life and OCL supports an ad hoc, bug-ridden, slow implementation of half of Common Lisp.
there are a few languages guaranteed to terminate:
- JSON
- YAML
- TOML
- all dialects of INI
- CSV
i don't see these languages getting recursion added anytime soon, but you might want to program in them, so you add recursion with jinja or whatever.
if you want a configuration language that is guaranteed to terminate, you may want to end up with a turing-incomplete language: https://dhall-lang.org/
None of those listed are programming languages. They are data formats. They perform no computation. Calling them Turing-complete or Turing-incomplete is meaningless.
Similarly, wc is a programming language with ~4 operations:
'a' # inc byte counter, add word flag to word counter, clear word flag
# any other non-space/newline is the same as 'a'
' ' # inc byte counter, mark word flag
\n # inc byte counter, mark word flag, inc line counter
EOF # (implied) print counters
That set of operations is of extremely limited usefulness, but that's more or less true of any non-turing-complete language (and many that are turing-complete, even).
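For concreteness, here is roughly that machine written out in Haskell (my transcription; it counts characters rather than bytes and counts a word on the transition into it rather than via the flag bookkeeping above, but the behaviour is the same):

    import Data.Char (isSpace)

    data Counts = Counts { nBytes :: Int, nWords :: Int, nLines :: Int, inWord :: Bool }

    -- One transition function, counters as the only state.
    step :: Counts -> Char -> Counts
    step c ch
      | ch == '\n' = c { nBytes = nBytes c + 1, nLines = nLines c + 1, inWord = False }
      | isSpace ch = c { nBytes = nBytes c + 1, inWord = False }
      | otherwise  = c { nBytes = nBytes c + 1
                       , nWords = nWords c + (if inWord c then 0 else 1)
                       , inWord = True }

    -- EOF: "print" the counters (lines, words, bytes).
    wc :: String -> (Int, Int, Int)
    wc s = let Counts b w l _ = foldl step (Counts 0 0 0 False) s in (l, w, b)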
this point is dangerous - the same logic can be applied to C code or any other programming language. source files only encode the AST, after all. it's the compiler/interpreter doing the actual work. it just so happens that the INI interpreter doesn't usually do much else than load the list of key value maps - and that's the problem - people extend INI source files in a different language which can do more.
No, a programming language like C has both a syntax and an execution model. In Turing-machine terminology, it has a set of symbols and rules for processing the symbols. This is why you can reason about things like the halting problem.
If you only have a set of symbols but no rules for processing the symbols, you don't have a machine, and you can't meaningfully talk about "termination".
INI does not have a defined execution model, so it is not a machine the way C is. A particular implementation could read the data into key-value maps but it is not required to. E.g. it could just search through the file for a specific key. Or it could do nothing.
They are all data formats that are used for configuration.
Configuration systems tend to sprout macro systems of various kinds. So you're writing in JSON, but it is fed into a macro system and voila! You have a programming language.
The progression from data format to custom programming language is surprisingly short.
I'm aware of the distinction. Data syntax from data formats is sometimes used directly for programming languages, too; the path from S-expressions to Lisp is very short, too, and I write that as a lisper. A very relevant recent article is https://news.ycombinator.com/item?id=24892297
The point is that when we are talking about "JSON, but with macros" we already have a programming language in play, not merely a data format. This is because macros are functions from text into text, or from data into data, and these functions need to be written in some sort of actual programming language. At this point the data format becomes an extension of this programming language, either as a form of input for functions in this programming language or as another means of expressing this language's syntax.
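A sketch of how quickly "just data" grows such an evaluator (Haskell; the Value type stands in for parsed JSON, and the operator names are my invention):

    -- Once a config format admits expressions like ["add", 1, ["mul", 2, 3]],
    -- some host-language function has to give them meaning, and at that point
    -- the "data format" is a programming language hosted by that function.
    data Value = VNum Double | VStr String | VArr [Value]

    eval :: Value -> Maybe Double
    eval (VNum n)                  = Just n
    eval (VArr [VStr "add", a, b]) = (+) <$> eval a <*> eval b
    eval (VArr [VStr "mul", a, b]) = (*) <$> eval a <*> eval b
    eval _                         = Nothing

    -- eval (VArr [VStr "add", VNum 1, VArr [VStr "mul", VNum 2, VNum 3]])
    --   ==> Just 7.0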
This is a dangerous misconception, which those of us in langsec have worked hard to dispel.
Data is not inert. Trivially, yes, insofar as a .json file just sits there until you do something with it – but this is true of .js as well.
Data in use is active, it is an input to your program which drives computation. This is the same as inputs which are Turing complete, just restricted in various ways.
Being complacent about this fact is a great way to receive a malicious payload, which puts your program into an unexpected and exploitable state.
I agree with your statement even though it seems to completely miss the original point that I have made.
Yes, you are correct, if you are running a suid binary and you specify / as a temporary directory instead of the default that looks like /tmp/foo-12345abcd/ in its configuration file, then the script may invalidly perform cleanup by deleting everything on the filesystem.
But this does not suddenly mean that Unix pathnames are Turing-complete or Turing-incomplete or that Turing-completeness somehow applies to strings as data formats.
It's not the path string that does the deletion. It's not the path string that was written in a programming language that performs the unlink() syscall. It's not the data itself that causes people to curse loudly.
Sure, secure your data, validate your inputs, ensure that your code cannot be misused, just don't say that data, even when it is in use, performs some computation on its own.
I'm getting the sense that you're not quite picking up what I'm laying down here.
Code also performs no computation on its own. A function with no parameters is just a complex constant.
Computation is always the union of code and data. People can and do embed full programs in 'inert' formats like JSON, and write an interpreter for that program.
This is a bad idea, and the notion that JSON is "just data" is exactly what fools people into writing such inner languages (implementing, of course, half of Common Lisp).
Reducing the motivating examples (JSON, TOML, etc.) to path strings is a weak argument, because path strings are not strong enough to provide for such an inner platform, while the data formats under discussion are, it happens all the time.
The very fact that I can point out that a path string is capable of driving less complex calculations than YAML is, points out that there is in fact a continuum from pure code to 'mere data' (a float, let's say), which undermines the argument that there is some strong distinction to be made here.
You can't say they are guaranteed to terminate since they are not executable languages at all. Of course a language could be implemented with JSON syntax, but whether it was guaranteed to terminate would then depend on the language semantics, not the syntax.
Idris is another example of a programming language that always terminates. For that matter, any total functional programming language does, and is therefore Turing-incomplete.
Totality is optional in Idris. Also, the big dependently-typed languages like Coq, Agda, Idris all support corecursion, so they can run programs infinitely provided the program does something productive.
Blockstack is working on a new version of our blockchain with smart contracting capabilities. We've created a new language, called Clarity, for smart contract development on our platform. It's lisp-based and is _not_ Turing complete. We believe that decidability is quite important for smart contracts. Algorand has joined our efforts, and they will support Clarity on their blockchain.
I am not involved with the development of Clarity, but I really enjoy writing smart contracts with it!