(no title)
discarded1023 | 5 days ago
He viewed the task of learning predicates (programs/relations) as a debugging task. The magic is in a refinement operator that enumerates new candidate programs. The diagnostic part was wildly insightful -- he showed how to operationalise Popper's notion of falsification. There are plenty of more modern accounts of that aspect, but sadly the learning part has been broadly neglected.
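To make the idea concrete, here is a toy sketch (my own, not Shapiro's actual Model Inference System): a "refinement operator" enumerates candidate programs from simplest to more specific, and each candidate is tested for falsification against labelled examples standing in for the oracle. The divisibility predicates are purely illustrative.

```python
# Toy sketch of learning-as-debugging: enumerate candidates via a
# refinement operator, keep the first one no example falsifies.

def refinements(max_depth):
    """Enumerate candidate predicates over ints, simplest first.
    Here the 'refinement operator' just proposes divisibility tests."""
    for k in range(1, max_depth + 1):
        yield (f"divisible_by_{k}", lambda n, k=k: n % k == 0)

def learn(examples, max_depth=10):
    """Return the name of the first candidate consistent with all
    (value, label) examples; a mismatch falsifies the candidate."""
    for name, pred in refinements(max_depth):
        if all(pred(n) == label for n, label in examples):
            return name
    return None  # no candidate survived falsification

# Oracle says 0, 2, 4 are in the target relation; 1, 3 are not.
examples = [(0, True), (1, False), (2, True), (3, False), (4, True)]
print(learn(examples))  # divisible_by_1 is falsified by (1, False);
                        # divisible_by_2 survives
```

The real system searches a lattice of logic programs and uses oracle queries to localise which clause is buggy; this sketch only keeps the shape of the enumerate-then-falsify loop.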
There are also probabilistic accounts of this approach to learning dating from the 1990s.
... and if you want to go all the way back you can dig up Gordon Plotkin's PhD thesis on anti-unification from the early 1970s.
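For readers unfamiliar with anti-unification: it computes the least general generalisation of two terms by keeping matching structure and introducing a variable wherever the terms disagree (reusing the same variable for the same pair of disagreements). A minimal sketch over terms encoded as nested tuples ("functor", arg1, ...) with strings as constants -- the encoding is my own, not Plotkin's notation:

```python
# Minimal anti-unification (least general generalisation) sketch.
# Terms: constants are strings, compound terms are ("f", arg1, arg2, ...).

def lgg(t1, t2, subst=None):
    if subst is None:
        subst = {}  # maps a mismatched (t1, t2) pair to a shared variable
    if t1 == t2:
        return t1
    # Same functor and arity: generalise argument-wise.
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and t1[0] == t2[0] and len(t1) == len(t2)):
        return (t1[0],) + tuple(lgg(a, b, subst)
                                for a, b in zip(t1[1:], t2[1:]))
    # Otherwise replace the mismatched pair with a variable, reusing
    # the same variable if this exact pair was seen before.
    key = (t1, t2)
    if key not in subst:
        subst[key] = f"X{len(subst)}"
    return subst[key]

# lgg(f(a, a), f(b, b)) = f(X0, X0): the repeated mismatch shares one variable.
print(lgg(("f", "a", "a"), ("f", "b", "b")))  # ('f', 'X0', 'X0')
```

Sharing variables across identical mismatches is what makes the result *least* general: replacing each mismatch with a fresh variable would also generalise both terms, but loses the equality constraint.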
[1] https://en.wikipedia.org/wiki/Algorithmic_program_debugging