To be in NP, a witness must be verifiable in polynomial time with respect to the size of the original input. In this problem (VAS Reachability), the solution can be `2^2^2^...^K` steps long. Even if verification is linear with respect to the witness, it's not polynomial with respect to the set of moves + goal.
Hmm.. I'd love to see a more formal statement of this, because it feels unintuitive.
Notably, the task "given a number as input, output that many 1's" takes time exponential in the input size. Is this problem therefore also strictly NP-hard?
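To make the size mismatch concrete, here's a toy sketch (my own illustration, assuming the number arrives in binary):

```python
def ones(n_binary: str) -> str:
    """Given a number in binary, output that many 1's.

    The input is len(n_binary) bits, but the output has
    int(n_binary, 2) characters -- exponential in the input size.
    """
    n = int(n_binary, 2)
    return "1" * n

assert len(ones("1010")) == 10        # 4-bit input, 10 output characters
assert len(ones("10100000")) == 160   # 8-bit input, 160 output characters
```

Adding one bit to the input can roughly double the output, which is the same shape of gap as a witness that is exponentially longer than the problem description.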
I believe that is not a concern in my code; if `a=a` then committing to that cannot hide any possible solutions where either side of the equals could be different, because we've already tested and found them equal (they unify). One can avoid that worry by swapping in this version instead:
score(X, X, Adder, NewScore) :-
    NewScore is Adder + 10.
score(X, Y, Adder, Adder) :-
    dif(X, Y).
Leaving choicepoints where I don't want them is an annoying part of Prolog when I know the answer is deterministic - that's where I used zcompare/3 in my other answer in this thread, but it's far too much effort to write that pattern for every test situation. For people who object to `is`, make it `succ(Adder, NewScore)` and adjust the player scores down by a factor of ten as per the blog post. For people who comment that succ/2 doesn't always work, note that Adder is always initialised to 0 in the foldl goal, so succ/2 always has one instantiated value.
[It just wouldn't be a Prolog solution if it wasn't hard to write, hard to read, hard to verify that it's working properly, full of single letter variable names OrLongWordVariableNames and immediately making people talk about cut and constraint handlers and ISO compatibility and Prolog internals rather than the problem. On that note, PabloLife's code has "flip(correct, incorrect)" which is imperatively named but doesn't do anything imperative, uses nested lists instead of Number-Answer pairs which is more visually cluttered and uses more memory...].
- It says that `olddb = [k \in Keys |-> CHOOSE v \in Values: TRUE]` initializes olddb nondeterministically. This is NOT true: `CHOOSE` is guaranteed to be a deterministic expression. The actual thing is `olddb \in [Keys -> Values]`.
- `key \in Keys; val \in Values; newdb[key] := val;` is a syntax error. I think what it meant was `with key \in Keys, val \in Values { newdb[key] := val; }`
- `newdb[key] := val` is an error anyway, because `newdb` was initialized as a set.
- ` for key \in DOMAIN(olddb) do` is straight up a syntax error, there's no for loop in PlusCal.
- It's mixing p-syntax (`if then end if`) with c-syntax (`if {}`).
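For what it's worth, gathering those fixes into one place, the fragment might look roughly like this in c-syntax (an untested sketch; the algorithm name and surrounding structure are my guesses):

```
--algorithm db_update {
  variables olddb \in [Keys -> Values],  \* nondeterministic initialization
            newdb = olddb;               \* a function, not a set
  {
    with (key \in Keys, val \in Values) {
      newdb[key] := val;
    }
  }
}
```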
I couldn't tell you the historical connections, but they "feel" related to me. This comment got me speculating about how to explain this connection; disclaimer: I'm writing this completely off the cuff.
First off, NFAs and Nondet Turing Machines (NTM) would be "deterministic" with the OP definition I gave, since they always give the same decision for the same input. Their nondeterminism is a capability they have, an aspect of their process that helps reach a decision. If we need to distinguish "deterministic result" and "deterministic process", I've seen the term Observationally Deterministic used to exclusively mean the former.
A critical theorem in automata theory is that NFAs and NTMs are no more powerful than DFAs or TMs: the former cannot recognize any languages the latter cannot. This is why nondeterministic algorithms (i.e., algorithms with a nondeterministic capability) cannot "solve more problems" than deterministic algorithms, just (probably) solve them faster (NP vs P).
(Interestingly, nondeterministic pushdown automata ARE more powerful than regular PDAs. Only NPDAs can recognize all context-free languages.)
So what does it mean for "nondeterminism" to be a capability of the process? I see the "automata/computation" and the "formal methods" senses as being the same idea, except with opposite outcomes. When deciding whether a problem is in NP, we allow the algorithm to make nondeterministic choices, and accept the construction if any choice leads to a valid outcome. When deciding whether a system satisfies required properties, we allow the model to make nondeterministic choices, and reject it if any choice leads to an invalid outcome.
(Even more handwavy part: I've heard the terms "angelic" and "demonic" choice to distinguish the two. I think from the perspective of complexity, FM's demonic nondeterminism kinda looks like angelic choice in the co-complexity class. From the perspective of FM, complexity's angelic nondeterminism kinda looks like checking a reachability property, as opposed to a safety or liveness property.)
In either case, both are essentially abstractions over concrete implementation details. Just as I cannot physically implement "nondeterministically enter state A, B, or C", I cannot implement "nondeterministically match the regex `A|B|C`". I have to instead find a deterministic implementation, like "try to match A, and if that fails, backtrack and match B, etc."
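For instance, a deterministic implementation of the choice in `A|B|C` is just ordered trial with backtracking. A minimal sketch in Python (the function name and structure are mine, not from any regex library):

```python
def match_alt(branches, text):
    """Deterministically simulate the nondeterministic choice in A|B|C:
    try each branch in order, falling back to the next on failure."""
    for branch in branches:          # deterministic stand-in for "choose one"
        if text.startswith(branch):  # this branch accepts
            return True
    return False                     # no choice leads to acceptance

assert match_alt(["A", "B", "C"], "B-rest") is True
assert match_alt(["A", "B", "C"], "D-rest") is False
```

The nondeterministic machine "magically" picks the accepting branch; the deterministic one has to enumerate the choices, which is exactly where the (probable) slowdown comes from.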
I thought this was an interesting meditation on the limits of AI and how ordinary people don't see those limits, wrapped up in a context I normally wouldn't expect to see AI at all.
It's quite a different level of commitment to truth than one usually runs across these days. Instead of a world of meme and vibes, a costly commitment to only say things that are true.
To an extent, I think the determination of the Apologetics Project also shows the tendency of people to go into denial about the limits of the technology. There is a lovely SF short story, The Quest for Saint Aquin, on how a true AI might feel about religious belief, but we are a long way short of that.
It worries me a lot more that governments and the like will also be in denial about what they can do with AI. I can ignore low quality apologetics, I cannot ignore the government (I have to pay taxes, for example).
The "one or two wrong words" line of thought was one of the things that made me step away from religion. If there's no provenance then text, then what hope does anyone have.
I feel like the way around this is hermeneutics and exegesis, and that's where things differ substantially between the Bible and LLMs. The Bible has both, whereas LLMs are arguably approached hermeneutically but have no exegesis to speak of.
People ask AI because other people don't respond to their questions. For some, a wrong answer might be worth less than no answer. For many it isn't (which is fitting, since we are on the subject of religion).
The chapters on both constraint solvers and logic programming are pretty short overviews, we're talking less than 15 pages. I wouldn't recommend it if that's what you're specifically interested in.
For SAT/SMT in Python, I've heard this book is pretty good: https://theory.stanford.edu/~nikolaj/programmingz3.html. Google OR-tools has a Python frontend and they have interactive tutorials (https://developers.google.com/optimization/examples). I don't know about logic programming in Python. Note that while there's some overlap between logic programming and constraint solving ([1] [2]), the communities are pretty independent from each other. Different histories, tools, techniques, etc.
I am amazed that I somehow didn't use the word arity once in the email.
That said, I don't believe I'm using the terms unary and binary wrong. "Unary" historically meant "a math operation that takes one parameter" and "binary" meant "operation that takes two". An algebraic group, for example, is defined as a set and a binary operation that follows certain properties.
Arity is a way of generalizing unary/binary to an arbitrary number of parameters. It is equally correct to say that `+` is a binary operation and to say that it is an operation of arity 2. It's like how "nth power" generalizes "square" and "cube": "9 cubed" is the same as "9³".
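In code form, the analogy is just a fixed-arity special case versus its generalization (a trivial sketch of my own):

```python
def cube(x):
    return x ** 3      # "cubed": the exponent is baked in

def power(x, n):
    return x ** n      # "nth power": the generalization

# The special case agrees with the general form, just as
# "binary operation" agrees with "operation of arity 2".
assert cube(9) == power(9, 3) == 729
```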