Considering that it still requires a significant amount of code that the Prolog solution would not require, and that code will have to be tested and debugged, I don't consider it "trivial" in any way I use the word. If we were talking theory, maybe, but we're talking implementation.
Consider that we know, and have known for some time, proper ways to manage dynamic memory. We still consider doing it correctly in practice not trivial and consider automatic garbage collection a productivity win.
From what I can remember of being an academic researcher (it was a while ago) the term "trivial" was indeed used to indicate something that wasn't of any real research interest.
This does indeed annoy people who put a lot of work into building complex systems that actually do useful stuff.
I do systems research. We only use "trivial" if something is so easy as to be an afterthought. If you're talking theory, then, fine, solved problems are "trivial." But we're not talking theory.
I'm in grad school right now. That sounds about right to me. "Trivial" means "someone already did it," while "easy" means "my graduate student is working on it." :-)
Prolog in effect supplies the recursion into search trees, but for these kinds of problems it doesn't give you much more than that. Recursion in imperative languages, on the other hand, is not exactly difficult, so long as you're not using an 80s-era BASIC interpreter or similar. The clauses largely turn into predicates: Boolean expressions in if-statements. For this kind of static problem you might have 3x more code in an imperative language, but most of it would be bookkeeping for recursion and backtracking, plus syntactic cruft with holes in the middle for your constraint and satisfaction-criteria predicates. For most static problems with a fixed set of rules, you really won't be looking at much extra code.
This kind of algorithm is normally expected to be written, tested, debugged and presented in one hour in programming competitions with fifteen-year-old contestants. And it's only the algorithm that Prolog would help you with in this case; it's the whole application around it where all the real polish goes, and where the testing is more subtle. Writing the core algorithm of a packing problem, or a routing problem, is completely trivial compared to the work that goes into the interface for entering the data. Programming competitions work with a text input file to get rid of this complexity. I speak as one who has competed in and won such competitions.
And FWIW, I don't think GC is a productivity win because managing dynamic memory is not trivial. Managing dynamic memory is trivial, but it's tedious, and it warps your program design, forcing it to be more imperative, more mutable and less functional. The cost of manual management isn't in managing the memory per se, but in managing the complexity of an application that can't rely on GC to manage the memory.
IMO GC's productivity win comes from the lowered bookkeeping costs of more modular and expressive programming styles - most notably in the ability to return newly allocated structures as return values from functions without having to negotiate away the ambiguity as to who owns them. It lets you build data structures that may share substructure, and write transformations over them which may or may not rewrite parts of the substructure, and the transformation may or may not be memoized, etc. Doing this with GC is straightforward; handling all these options with manual memory management requires ownership handoff protocols, shared ownership hacks (e.g. reference counts), inflexible conventions, a preference to mutate rather than return new, etc., greatly increasing code complexity.
And I speak of GC as someone who these days normally writes in a non-GC language, but who has implemented garbage collectors, on both the compiler (metadata) and runtime (the marking and sweeping) sides.
"This kind of algorithm is normally expected to be written, tested, debugged and presented in one hour in programming competitions with fifteen-year-old contestants."
Would you deploy this code in a production application?
"Programming competitions work with a text input file to get rid of this complexity. I speak as one who has competed in and won such competitions."
So are you then saying that these kinds of problems take about an hour to solve, assuming you are someone who competes in and wins timed programming competitions? Also, what will be the maintenance costs of changing requirements going forward, as compared to adding Prolog clauses?
Well, a lot of Prolog implementations also feature finite-domain and rational-domain solvers, plus global constraints, which I don't think your fifteen-year-old buddies would be able to code in an hour.
Having said that, most problems we solve here are NP-complete, and most of the time the trivial pruning you are mentioning does not really extend the limits of the computation.
For sure, I was speaking in the context of the packing problems, and even simpler discrete combinatorial problems, which really are pretty simple. But I wouldn't dismiss trivial pruning lightly; many problems can have their search branches pruned early based on their costs exceeding the current best solution, and as that solution gets better, it will start pruning higher up the search tree.
The knapsack problem, as you know, is NP-complete.