
discarded1023 | 5 days ago

If we're going down that path: Ehud Shapiro got there back in 1984 [1]. His PhD thesis is excellent and shows what logic programming could do (and could have been).

He viewed the task of learning predicates (programs/relations) as a debugging task. The magic is in a refinement operator that enumerates new programs. The diagnostic part was wildly insightful -- he showed how to operationalise Popper's notion of falsification. There are plenty of more modern accounts of that aspect, but sadly the learning part has been broadly neglected.
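To make the flavour concrete, here's a toy sketch of the refine-on-falsification loop. This is my own illustration over attribute tests, not Shapiro's actual system (which worked over Horn clauses with oracle queries): an over-general hypothesis gets specialised by a refinement operator each time a negative example falsifies it, Popper-style.

```python
# Toy learn-as-debug loop: hypotheses are conjunctions of attribute
# tests, examples are dicts. Names and representation are illustrative,
# not Shapiro's.
LITERALS = [("shape", "square"), ("shape", "circle"),
            ("size", "big"), ("size", "small")]

def covers(body, example):
    """Does the hypothesis (a conjunction of tests) cover the example?"""
    return all(example[attr] == val for attr, val in body)

def refine(body):
    """Refinement operator: specialise a body by adding one literal."""
    for lit in LITERALS:
        if lit not in body:
            yield body + (lit,)

def learn(positives, negatives):
    frontier = [()]  # start from the most general hypothesis
    while frontier:
        body = frontier.pop(0)
        if any(covers(body, n) for n in negatives):
            frontier.extend(refine(body))   # falsified -> specialise
        elif all(covers(body, p) for p in positives):
            return body                     # consistent and complete
    return None

pos = [{"shape": "square", "size": "big"}]
neg = [{"shape": "circle", "size": "big"},
       {"shape": "square", "size": "small"}]
print(learn(pos, neg))  # (('shape', 'square'), ('size', 'big'))
```

The point of the sketch is the control structure: refinement is only triggered by a falsifying counterexample, which is what makes the diagnosis step do the work.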

There are more recent probabilistic accounts of this approach to learning from the 1990s.

... and if you want to go all the way back, you can dig up Gordon Plotkin's PhD thesis on anti-unification from the early 1970s.
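For anyone curious what Plotkin's anti-unification (least general generalisation) looks like in practice, here's a minimal sketch. The terms-as-tuples encoding is my own choice, not Plotkin's notation:

```python
# Plotkin-style least general generalisation over first-order terms.
# Terms are nested tuples ('f', arg1, arg2, ...) for compounds, plain
# strings for constants. Variable names X0, X1, ... are made up here.

def lgg(t1, t2, subst=None):
    """Return the least general generalisation of two terms.

    Distinct mismatched pairs map to fresh variables, but the same pair
    always reuses the same variable -- that reuse is what makes the
    result *least* general rather than just general."""
    if subst is None:
        subst = {}
    if t1 == t2:
        return t1
    # Compounds with the same functor and arity: recurse on arguments.
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and t1[0] == t2[0] and len(t1) == len(t2)):
        return (t1[0],) + tuple(lgg(a, b, subst)
                                for a, b in zip(t1[1:], t2[1:]))
    # Otherwise introduce (or reuse) a variable for this pair.
    if (t1, t2) not in subst:
        subst[(t1, t2)] = f"X{len(subst)}"
    return subst[(t1, t2)]

# f(a, a) and f(b, b) generalise to f(X0, X0), not f(X0, X1).
print(lgg(('f', 'a', 'a'), ('f', 'b', 'b')))  # ('f', 'X0', 'X0')
```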

[1] https://en.wikipedia.org/wiki/Algorithmic_program_debugging
