This has been downvoted, but I think selecting more screen-friendly fonts is a valid concern nowadays. Personally I would also like to see a reflowable format (which I guess would mean HTML with MathJax).
Nothing wrong with either of those. Also, if you take the time to check out the book or its GitHub repo, then you will see that there is also C code and C "challenges" (projects) for the reader to go through.
Most of the time, these things are resource hogs arriving way before their time to shine, either needing Moore's law to catch the hardware up, or some nerd to wrestle with the combinatorial explosion and win. Transformers can be seen as a variation on Markov chains, but the innovation of attention mechanisms means you can use vocabularies of hundreds of thousands of tokens and sequences thousands of tokens long without the problem space going all Buzz Lightyear on you.
...Markov chains (via MCMC) underlie most Bayesian inference problems, and pretty much all stochastic dynamical systems models are based on Markov chains.
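The Markov-chain core of MCMC fits in a few lines. Here's a minimal sketch of a Metropolis sampler (the simplest MCMC algorithm) targeting a standard normal: note the next state depends only on the current one, which is exactly the Markov property.

```python
import math
import random

def metropolis(log_density, steps=10000, start=0.0, scale=1.0, seed=0):
    """Random-walk Metropolis: a Markov chain whose stationary
    distribution is exp(log_density), known only up to a constant."""
    rng = random.Random(seed)
    x = start
    samples = []
    for _ in range(steps):
        proposal = x + rng.gauss(0.0, scale)  # symmetric proposal
        # Accept with probability min(1, p(proposal) / p(x)).
        if math.log(rng.random()) < log_density(proposal) - log_density(x):
            x = proposal
        samples.append(x)
    return samples

# Target: standard normal, unnormalized log-density -x^2/2.
samples = metropolis(lambda x: -0.5 * x * x, steps=20000)
mean = sum(samples) / len(samples)
```

For real Bayesian problems `log_density` would be a log-posterior, but the chain mechanics are the same.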
[+] [-] HackOfAllTrades|4 years ago|reply
[+] [-] westcort|4 years ago|reply
[+] [-] ganzuul|4 years ago|reply
This is great fun. Once it has learned a little about you and your friends it sometimes spits out a lyrical home-run. It is of course you who provide the intelligence of interpretation, but it still feels mysterious when it 'talks' in a way relevant to the context.
[+] [-] m000|4 years ago|reply
- Opens PDF.
- Typeset in Computer Modern.
- Starts running, screaming in Comic Sans.
Jokes aside, CM is not the only game in town for math-heavy documents. Something like Libertinus [1] would probably be more screen-friendly.
[1] https://github.com/alerque/libertinus
[+] [-] layer8|4 years ago|reply
[+] [-] spekcular|4 years ago|reply
[+] [-] raister|4 years ago|reply
[+] [-] kjs3|4 years ago|reply
[+] [-] raister|4 years ago|reply
[+] [-] dddnzzz334|4 years ago|reply
[+] [-] raister|4 years ago|reply
[+] [-] jonititan|4 years ago|reply
[+] [-] b20000|4 years ago|reply
[+] [-] Jtsummers|4 years ago|reply
[+] [-] hvasilev|4 years ago|reply
Decades pass and you realize they either have little to no application or are incredibly niche :(
Too bad that a "solution in search of a problem" is generally a bad approach to problem-solving. I wish our industry were more fun as a whole.
[+] [-] robbedpeter|4 years ago|reply
https://www.zabaware.com/ultrahal/
Ultra Hal was a best-in-class chatbot back when fixed-response systems like ALICE/AIML were the standard. Ultra Hal used Markov chains and some clever pruning, but it dealt with a vocabulary of only a few hundred word tokens and looked just 2 or 3 tokens out in a sequence. It occasionally produced novel and relevant output, like a really shitty gpt-2.
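For anyone who hasn't played with one: a Markov text generator like the kind described is tiny. This is not Ultra Hal's actual code, just a minimal sketch of an order-2 Markov chain over words, the corpus here is made up.

```python
import random
from collections import defaultdict

def train(corpus, order=2):
    # Map each run of `order` tokens to the tokens seen following it.
    model = defaultdict(list)
    tokens = corpus.split()
    for i in range(len(tokens) - order):
        model[tuple(tokens[i:i + order])].append(tokens[i + order])
    return model

def generate(model, length=10, seed=None):
    rng = random.Random(seed)
    state = rng.choice(list(model))  # random starting 2-gram
    out = list(state)
    for _ in range(length):
        choices = model.get(state)
        if not choices:
            break  # dead end: this state was only seen at corpus end
        out.append(rng.choice(choices))
        state = tuple(out[-len(state):])
    return " ".join(out)

model = train("the cat sat on the mat and the cat ran to the mat")
print(generate(model, length=5, seed=42))
```

With a big enough corpus and some pruning, this occasionally lands a coherent sentence, which is about where Ultra Hal lived.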
I think we may see a resurgence of expert systems soon, as GPT-3 and transformers have proved capable of automating rule creation in systems like Cyc. Direct lookups into static databases have already been incorporated in GPT/RETRO-type models. Incorporating predicate-logic inference engines seems like the logical and potent next step. GPT could serve as a personality and process engine that eliminates the main flaw (tedium) of the massive, human-scale micro-tasking systems from GOFAI.
It's worth going through the literature all the way back to the 1956 Dartmouth summer workshop and hunting for ideas that just didn't work yet.
https://en.wikipedia.org/wiki/Dartmouth_workshop
[+] [-] Fomite|4 years ago|reply
[+] [-] klysm|4 years ago|reply
[+] [-] skykooler|4 years ago|reply