For me, I think the problem is that normal, boring, stupid, unsophisticated, plebeian imperative programming lives in a world of O(1) operations. That is to say, a "normal" line of code you type will be O(1), and then you generally start gluing those together with various constructs that stack O(n) complexities on top. As things like the Accidentally Quadratic blog [1] show, in the normal programming world it is a bit humorous to even accidentally stack two of those on top of each other unnecessarily.
Certainly, people manage to make poorly performing things even so, but at least at the base level, your primitives may be stupid, but they are generally fast.
The logic programming world works in a default space of O(n) operations that stack together more freely than in the imperative world, and that gives easy access to O(2^n). Since that is essentially intractable, a great deal of work goes into trying to bring it back down, but you're always intrinsically starting behind the eight ball. For a normal programmer, it is easier to work up from O(1) operations than to write an exponential or super-exponential algorithm and then try to trim it back down to a decent complexity.
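To make that concrete, a sketch of how cheaply the O(n)s stack in Prolog (illustrative only; triple_sum/2 is a made-up name):

    % triple_sum(+Xs, +S): each member/2 goal is an O(n) scan of Xs,
    % so the conjunction explores O(n^3) combinations before the test.
    triple_sum(Xs, S) :-
        member(X, Xs), member(Y, Xs), member(Z, Xs),
        S =:= X + Y + Z.

    % ?- triple_sum([1,2,3,4], 10).

Three innocent-looking goals in a row, and you are already cubic.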
I think this is the root cause as to why logic programming is generally not something we see a lot of. It's like a wild stallion; it may be powerful but the amount of effort you pour into just keeping it under control may exceed any benefit you could get.
It isn't useless, of course. Amazing work has been done in the field of SAT solvers, and there's certainly a niche for it. The problems that are intrinsically higher-order polynomial or (technically) exponential, well, they are what they are, and if you're going to be stuck in that world, logic programming may offer you a much better toolset than conventional programming on its own. But there was a hope a long time ago, in the Prolog era, that it could become part of the normal toolkit of general-purpose programming, and I don't think that will ever happen, because of this line of logic.
This is a bit tangential to the article; it's just what I happened to read that finally crystallized this in my mind.
[1]: https://accidentallyquadratic.tumblr.com/
I took a comparative programming languages class in 1993 and the prof said that he thought Prolog was the future.
It had numerous setbacks. The Japanese thought they could parallelize Prolog programs for their Fifth Generation Computing Project in the 1980s but found out quickly that you couldn't. They made a language called KL1 which was parallelizable, but it wasn't as good as Prolog in other respects.
The ability to implement simple parsers in Prolog using its evaluation process impressed me a lot when I didn't know much about parsers; now that I know how to implement parsers, it doesn't impress me much.
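For the curious, the sort of thing I mean, as a minimal sketch in DCG notation (which standard Prologs run as a parser directly):

    % A grammar for balanced parentheses, executable as a parser
    % through Prolog's ordinary resolution:
    balanced --> [].
    balanced --> ['('], balanced, [')'], balanced.

    % ?- phrase(balanced, ['(','(',')',')','(',')']).
    % true .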
You can write mixed logical/imperative programs in Prolog, but boy is it awkward.
When I got interested in RDF circa 2010 I was interested in Datalog (a pure logical language) but found nobody else seemed interested in it and it was hard to find literature on it. That situation has changed a lot as Datalog is a pretty clear way to make database query code composable.
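To illustrate the composability, a tiny Datalog-style sketch (plain Prolog syntax, made-up relations):

    % Base facts (the "tables"):
    parent(alice, bob).
    parent(bob, carol).

    % Rules compose like views: ancestor/2 is defined once and can
    % be reused inside any other rule or query.
    ancestor(X, Y) :- parent(X, Y).
    ancestor(X, Y) :- parent(X, Z), ancestor(Z, Y).

    % ?- ancestor(alice, Who).
    % Who = bob ; Who = carol.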
Production rule engines (say, Drools) are another "old A.I." technology that is largely forgotten even though they are heavily used in banks and a few other corners of the business world. These implement “forward chaining” inference, distinct from the “backward chaining” inference implemented in Prolog. Between RETE engines and effective indexing structures, it is easy to handle 1000x more rules in your knowledge base than in the 1980s, but production rules never got standardized like C, FORTRAN, or COBOL, and no really general answers have emerged for questions like controlling the order of execution when that matters.
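The forward-chaining idea itself fits in a few lines of Prolog; a naive sketch (nothing like a real RETE engine, which avoids re-matching; rule/2 and derived/1 are made-up names, and premises here are base facts for simplicity):

    :- dynamic derived/1.

    % rule(Head, Body): Head is derivable when every goal in Body holds.
    rule(grandparent(X, Z), [parent(X, Y), parent(Y, Z)]).

    parent(ann, bea).
    parent(bea, cy).

    % Repeatedly fire any rule whose premises hold, until fixpoint.
    forward :-
        rule(Head, Body),
        maplist(call, Body),      % all premises hold
        \+ derived(Head),         % conclusion is new
        assertz(derived(Head)),
        !, forward.
    forward.

    % ?- forward, derived(F).
    % F = grandparent(ann, cy).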
Prolog comes from the era (the 70s) where computationally interesting and algorithmically complex problems were a proportionately much larger part of the computing scene than they are today, what with our personal computers, data crunching, and web applications. It’s not really that those kinds of computing paradigms are bad so much as they aren’t nearly as relevant as they were. And a typical programmer, even if they do encounter a problem appropriate for that kind of approach, is much better off using the typical software they have tons of experience with, rather than attempting to learn a new paradigm and uncovering all its pitfalls in the process of learning it.
Exhaustively searching an exponential space with no heuristics for reducing the search space is obviously gonna be way easier in a familiar language than in a new language.
The one use case I do see for Prolog and similar tools in the modern day is for constraint satisfaction programming. Obviously, exhaustively searching an exponential solution space won’t be better for that, but typically you can use concepts like edge and arc consistency (which are core to CSP solvers) to greatly reduce the actual search space. Implementing those algorithms from scratch in an imperative language is annoyingly hard, so reaching for optimized, generalized implementations can be a good choice.
For something like a Sudoku solver, edge/arc consistency can greatly improve compute times. If you’re good at sudoku, you are already implementing these techniques when solving puzzles, just like good chess players implement minimax search without having to be told what minimax search is.
https://www.sciencedirect.com/topics/computer-science/arc-co...
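For a sense of what that looks like, the well-known CLP(FD) Sudoku formulation (a sketch assuming SWI-Prolog's library(clpfd), whose all_distinct/1 does the consistency-based pruning):

    :- use_module(library(clpfd)).

    sudoku(Rows) :-
        length(Rows, 9), maplist(same_length(Rows), Rows),
        append(Rows, Vs), Vs ins 1..9,
        maplist(all_distinct, Rows),       % rows pruned by consistency
        transpose(Rows, Cols),
        maplist(all_distinct, Cols),       % columns likewise
        Rows = [A,B,C,D,E,F,G,H,I],
        blocks(A,B,C), blocks(D,E,F), blocks(G,H,I).

    blocks([], [], []).
    blocks([A1,A2,A3|As], [B1,B2,B3|Bs], [C1,C2,C3|Cs]) :-
        all_distinct([A1,A2,A3,B1,B2,B3,C1,C2,C3]),
        blocks(As, Bs, Cs).

    % Usage: bind the given cells, then
    % ?- sudoku(Rows), maplist(label, Rows).

Most of the work is done by propagation before labeling ever has to guess.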
In my experience, performance concerns have rarely been the priority or the primary blocking issue in software development. With Prolog, you can often write entire programs in just a few lines that would require hundreds of lines in another language, and those lines would likely contain the same performance pitfalls as the Prolog code.
The issue you have circled around here is a bit deeper and nuanced than trimming down exponential code to something more tractable. In fact, it's often worse, and O(2^n) can be a blessing!
The problem with logic programming is the 'logic' part.
Modern imperative and functional programming languages are constructive.
Logic programming is not, and the expressive power varies with the exact logic being used.
Elementary logic gives you the following intuition.
Propositional logic -- easy.
First order logic -- easy (with caveats).
And (some kinds of) second order logic -- only easy if you get lucky with the problem you are trying to solve.
For logic based programming languages and systems, both the language implementer and the programmer have to be careful about supporting and using language constructs which boil down to tractable computation.
This is much more difficult than it seems.
For example, when using SMT solvers you learn quickly that multiplication and division with constants are very fast, while the same operations with variables can be intractable. I.e., reasoning about x * C is easy, while x * y is often (but not always) going to hang.
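The same cliff can be demonstrated in Prolog's finite-domain constraints (a runnable analogy assuming SWI-Prolog's library(clpfd); not an SMT solver, but the linear/nonlinear split behaves similarly):

    :- use_module(library(clpfd)).

    % Linear: propagation alone solves it immediately.
    % ?- X * 3 #= 12.
    % X = 4.

    % Nonlinear: the solver can only narrow domains; enumerating
    % solutions requires explicit search over the variables.
    % ?- X * Y #= 12, [X,Y] ins 1..12, label([X,Y]).
    % X = 1, Y = 12 ; X = 2, Y = 6 ; ...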
This seems right, but I think embedding logic programming in the standard library is much closer to the right solution than expecting to tackle the appropriate tasks with a separate language. Many programs have a sub-problem that is a good fit for logic programming, but it's rare that it is so well encapsulated that it's worth deploying an entire separate language for it.
Having these tools close at hand, the same way we have regexes and (more recently) PEGs for grammars, definitely makes it more likely that people will reach for them when appropriate, and exposure to and familiarity with the practices can grow.
I suspect part of the problem is just that miniKanren is primarily a learning tool and doesn't prioritize practical ergonomics. I haven't used core.logic, but I've used miniKanren in this embedded way, and it's difficult to do cleanly in ways that have nothing to do with logic programming per se. I may be off, but I think that's something a high-quality, professional-strength implementation in the stdlib could actually help with.
>That is to say, a "normal" line of code you type will be O(1)
Huh? This isn't remotely true, especially today, for the type of programming most people do, where they work with functions and classes from libraries, with each line doing all kinds of O(?) stuff under the hood, applying those to collections of objects, and so on.
>> I think a lot of people mistakenly believe that core.logic is working some magic behind the scenes to solve puzzles in some amazingly efficient manner. On the contrary, it's just brute force search, using complex machinery which just slows it down versus a for comprehension.
If that's what core.logic is, then core.logic is not logic programming.
Logic programming refers to one of two things: either Prolog-like languages where programs are sets of definite clauses executed by SLD-Refutation of a goal, and evaluation returns all bindings of the variables in the goal that make the goal true; or Answer Set Programming (ASP), where programs are similar to definite programs, but the semantics are stable model semantics and evaluation finds all the stable models of a program.
Both of these approaches incorporate search, but neither is "just brute force search", in any way, shape or form.
As pointed out in the comments in the article, these kinds of logic puzzles are easier to solve using constraint programming than "regular" logic programming. For example, see the solution to the Zebra Puzzle here: https://www.metalevel.at/prolog/puzzles which uses CLPZ[^1].
[^1]: https://github.com/triska/clpz
What you really want for logic puzzles is a SAT or SMT solver as it gets the same results as exhaustive search except it usually can eliminate large amounts of the search space. A language like Prolog superficially looks like it can solve logic puzzles natively but the search strategy is usually too limited.
Except you first have to encode the problem as a logic expression over (potentially tens of thousands of) Boolean variables. That might be impractical, or even impossible for, e.g., robotic planning with an unbounded number of planning steps, and it doesn't necessarily extend well to solving with respect to an objective function (e.g., optimization). What you usually see is a domain-specific problem description/language that then gets compiled into a SAT problem by a custom transformer/generator program. In other words, the problem of a suitable problem-description language isn't solved ;)
You can override/customize Prolog's default search strategy as desired, Prolog being Turing-complete. Prolog syntax and resolution in this case just provide a straightforward starting point, which is much needed as you explore the complexities and challenges of your problem domain to kick off a project (you know, as opposed to prematurely optimizing a program for transforming your DSL into a SAT/SMT formulation). I think Prolog works very well for this.
Note Prolog also has libraries for offloading to SAT solvers, and, e.g., Z3 supports a Datalog subset as an alternative to its SMT-LIB/Lisp-like syntax.
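As a concrete instance of the former (a sketch assuming SWI-Prolog, whose library(clpb) is a BDD-based Boolean constraint solver):

    :- use_module(library(clpb)).

    % Declare a Boolean formula, then enumerate its models:
    % ?- sat(X * Y + ~Z), labeling([X,Y,Z]).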
GHC for Haskell, for example, has a graph-reduction runtime, and a lot of effort goes into optimizing that to exploit referential transparency and laziness effectively.
Prolog is a similar (even more so) example of a high-level declarative language with an optimizing runtime, but (I guess) isn't as optimized. A Prolog runtime could incorporate a SAT or SMT solver internally, without changing the language, right?
Given a list of numbers L, use the operations +, -, *, / to obtain the value Result. Use each number of L exactly once.
Code:
    % j24(+Numbers, +Target, -Expr): Expr uses each number once
    % and evaluates to Target.
    j24([X], R, H) :- !, X =:= R, H = X.
    j24(L, R, H) :-
        select(X1, L, L1),            % pick a number X1, leaving L1
        x2(X1, Op, X2, R),            % X1 Op X2 = R; solve L1 for X2
        j24(L1, X2, H1),
        H =.. [Op, X1, H1],           % build the expression term
        tj24(Op, X1, H1).             % prune commutative duplicates
    % For + and *, keep integer operands in non-decreasing order.
    tj24(Op, X1, H1) :- member(Op, ['+','*']),
        H1 =.. [Op, Y1, _], integer(X1), integer(Y1), !, X1 =< Y1.
    tj24(_, _, _).
    % x2(+X1, -Op, -X2, +R): the value the remaining numbers must produce.
    x2(X1, '+', X2, R) :- X2 is R - X1, X1 =< X2.
    x2(X1, '-', X2, R) :- X2 is X1 - R.
    x2(X1, '*', X2, R) :- X1 =\= 0, X2 is R / X1, X1 =< X2.
    x2(X1, '/', X2, R) :- R =\= 0, X2 is X1/R.
Example:
    ?- j24([2,5,7,11,20],280,L).
    L = 5+11*(7+(20-2)) ;
    L = 5+11*(7-(2-20)) ;
    L = 5+11*(20-(2-7)) ;
    L = 5*(20+2*(7+11)) ;
    L = 7-(2-11*(5+20)) ;
and so on.
A little more complex:
    ?- j24([2,3,4,7,11,13,17,23,27,31],7239,Solution).
    Solution = 2+(3+(11+31*(7-(17-27/(4/(13+23))))))
When L = [2,3,4,7,11,13,17,23,27,31,37,43,47] and Result is 1117239, one solution is
    2+(3+(4+(27+(47+23*(43+13*(37+7*(11*(17+31))))))))
Edited: I made a small change to prune solutions:
    x2(X1, '+', X2, R) :- X2 is R - X1, X1 =< X2.
    x2(X1, '-', X2, R) :- X2 is X1 - R, X2 > 0.
    x2(X1, '*', X2, R) :- X1 =\= 0, X2 is R / X1, X1 =< X2.
    x2(X1, '/', X2, R) :- integer(R), R =\= 0, 0 is X1 mod R, X2 is X1/R.
The program does not compute all the solutions. When the parameter Result is relatively small, it is easier to obtain a problem with many solutions, so the algorithm is fast.
The Prolog program computes a solution in one or two seconds; what would a similar Python program that solves this problem in less than 10 lines look like?
I’ve written programs to solve and create those grid-style logic puzzles (such as “Five men went to dinner. Mr. Brown ordered fish. The man with the red hat ordered lamb. The person who ordered beef was not Mr. White… etc.”)
Once you have a way of encoding the clues into a machine-readable syntax, it’s trivial to solve. But, the tricky part is creating a good puzzle that can be solved by a human without guessing. That’s an art form. A brute-force solver won’t suffice to analyze your randomly generated sets of clues; instead, you need a solver that is only capable of making the types of deductions a human could make. Very subjective.
If you have a suitably interested child, this could provide a fun bridge to constraint based programming.
> But the disadvantages of core.logic for solving logic puzzles don't end there. core.logic is very sensitive to the way that goals are ordered, in a bad way. Certain orderings, for example, will cause the program to go into an infinite loop, and the DSL is complex enough that this is not always readily apparent.
This is just not correct (or at least phrased VERY poorly). My understanding is that core.logic uses miniKanren under the hood, which is guaranteed to find an answer if one exists regardless of goal ordering. Unlike Prolog, miniKanren (and presumably core.logic) does not search depth-first (which is why Prolog can run into infinite loops very easily). Instead, it roughly does interleaving BFS. It can search inefficiently, but that's true of any combinatorial method.
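For contrast, the classic Prolog failure mode in question, as a left-recursive sketch (under an interleaving search the same clauses still produce answers):

    edge(a, b).  edge(b, c).

    % Recursive goal first: Prolog's depth-first search loops forever
    % on a query like ?- path_bad(a, X).
    path_bad(X, Y) :- path_bad(X, Z), edge(Z, Y).
    path_bad(X, Y) :- edge(X, Y).

    % Same logic, reordered: terminates (on acyclic graphs).
    path(X, Y) :- edge(X, Y).
    path(X, Y) :- edge(X, Z), path(Z, Y).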
As for practical use cases, I really like how easy it is to write analysis code for parse trees and type systems. The C# T-SQL parser (used in various tools) is very tedious to actually use for quick jobs because of how many different node types it has. Instead of handling that monstrosity, I wrote a much smaller app that just went over the whole tree and converted it to JSON (including the types and type hierarchy) via reflection. It was then way easier to load it into Prolog and query the tree.
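A sketch of the flavor of query this enables, assuming a hypothetical node/3 flattening of the dumped tree (the real representation was richer):

    % node(Id, Type, ParentId): one fact per AST node.
    node(1, select_statement, 0).
    node(2, where_clause, 1).
    node(3, column_reference, 2).

    % Transitive containment via the parent links.
    under(X, P) :- node(X, _, P).
    under(X, P) :- node(X, _, Q), under(Q, P).

    % Every column reference inside some WHERE clause:
    % ?- node(W, where_clause, _), under(C, W), node(C, column_reference, _).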
This example shows that Clojure list comprehensions (the for... syntax) have much of the expressive power of logic programming (the first... syntax).
List comprehensions support a bundle of features including producing multiple values, and filtering them by testing and potentially failing.
Logic programming is that plus some more flexibility (more compositional forms of multiple value production) and some new features (such as using unification to solve for unknowns).
We should expect mainstream languages to evolve to support more logic features, since they provide a more general and more compositional way to do many things. Pattern-matching expressions in most languages are a limited special case of logic programming that would benefit from using the more general case. Same with casting constructs, null-propagation operators, boolean and/or/not, and the internal compiler logic supporting features like Haskell typeclasses and Rust traits. If we can replace lots of special-case language features with a few general constructs, programming will be simpler, as Niklaus Wirth advocates.
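append/3 is the usual demonstration of that extra generality, one relation queried in whichever direction has the unknowns:

    % ?- append([1,2], [3], Zs).     % forward: Zs = [1,2,3]
    % ?- append(Xs, [3], [1,2,3]).   % solve for a prefix: Xs = [1,2]
    % ?- append(Xs, Ys, [1,2]).      % enumerate every split of [1,2]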
I've only done a bit of Prolog programming, but from what I've seen so far it feels like "logic" is the wrong lens.
It feels more like it's based around describing a graph, where some vertices have side effects, and then letting DFS loose on the graph. I'd call it DFS programming, but I'm also a noob at it, so maybe more experienced people would disagree.
Sure, to solve a 5x5x5 puzzle we can test all 5!*5!*5! ≈ 1.7 million possibilities; it's faster than thinking it out. To solve a 7x7x7 one, though, that's 7!*7!*7! ≈ 1.3×10^11 possibilities, and you'll need logic programming.