Ah, I was confused for a second: I'd thought Markov was a library, but you meant the Markov assumption. The two topics are actually only loosely related. Your excellent-looking library deals with reinforcement learning agents that model environment/agent interactions as a (partially observable) Markov Decision Process, whereas the Alchemy library combines first-order logic with network representations of probability distributions (ones satisfying certain Markov properties) to perform inference.

More pertinent to your post, Sutton is working on an updated RL book here: http://people.inf.elte.hu/lorincz/Files/RL_2006/SuttonBook.p...
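To make the distinction concrete, here's a minimal sketch of the Markov assumption as it shows up in an MDP. Everything here (the `GridMDP` name, the toy corridor dynamics) is made up for illustration and isn't taken from either library; the point is just that the transition depends only on the current state and action, never on earlier history.

```python
from dataclasses import dataclass

@dataclass
class GridMDP:
    """Toy 1-D corridor MDP with states 0..size-1 (illustrative only)."""
    size: int = 4

    def step(self, state, action):
        # The Markov property: next state and reward are a function of
        # (state, action) alone -- no history argument is needed.
        delta = {"left": -1, "right": 1}[action]
        next_state = min(max(state + delta, 0), self.size - 1)
        reward = 1.0 if next_state == self.size - 1 else 0.0
        return next_state, reward

s, r = GridMDP().step(2, "right")  # → (3, 1.0): reached the goal state
```

A Markov logic network, by contrast, attaches weights to first-order logic formulas and uses the resulting graphical model for probabilistic inference; there's no agent or transition dynamics involved.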
If you have the time, Chapter 15 (PDF p. 273) of the above link is a fascinating read. In particular, TD-Gammon had already achieved impressive results using neural networks in the early 90s, reaching world-class levels in Backgammon with zero specialized knowledge.
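For anyone unfamiliar with the temporal-difference learning behind TD-Gammon, here's a tiny tabular TD(0) sketch on the classic 5-state random walk from Sutton's book. This is not TD-Gammon itself (which used a neural network as the value function approximator and self-play); it just shows the core update rule, and all names and parameters here are my own illustrative choices.

```python
import random

def td0_random_walk(episodes=5000, alpha=0.1, gamma=1.0, seed=0):
    """TD(0) value estimation on a random walk over states 0..6.

    States 1..5 are non-terminal; 0 and 6 are terminal, with reward +1
    only on reaching state 6. True values are V(s) = s/6 for s in 1..5.
    """
    rng = random.Random(seed)
    V = [0.0] * 7
    for _ in range(episodes):
        s = 3  # start in the middle
        while s not in (0, 6):
            s2 = s + rng.choice((-1, 1))  # random policy: step left/right
            r = 1.0 if s2 == 6 else 0.0
            # TD(0): nudge V(s) toward the bootstrapped target r + gamma*V(s')
            V[s] += alpha * (r + gamma * V[s2] - V[s])
            s = s2
    return V

values = td0_random_walk()
```

TD-Gammon's trick was exactly this bootstrapped update, but with the table replaced by a neural network trained via backpropagation on the TD error.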