cbutner | 4 years ago
Next steps could be using one of Lc0's backends for GPU scenarios, or taking the other side of the trade and using the C++ API for TPU.
There are also the typical CPU and memory optimizations that could be made - some baseline work exists there, but it hasn't been targeted.
lemonade5117 | 4 years ago
cbutner | 4 years ago
Getting deep into RL specifically wasn't so necessary for me because I was just replicating AlphaZero there, although reading papers on other neural architectures, training methods, etc. helped with other experimentation.
You may be well past this, but my biggest general recommendation is the book "Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow" - it quickly covers a broad range of statistics, APIs, etc. at the right level of practicality before you go further into different areas (for PyTorch, I'm not sure what's best).
Similarly, I was already familiar with the calculus underpinnings but still appreciated Andrew Ng's courses for digging into backpropagation and the like, especially the coverage of batching.
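(Not from the comment, but for anyone wondering what "backprop with batching" means concretely: a minimal NumPy sketch of one linear layer with MSE loss, where the gradient of the batch-mean loss is the average of the per-example gradients. All names here are illustrative.)

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))      # batch of 8 inputs, 3 features each
W = rng.normal(size=(3, 1))      # weights of a single linear layer
y = rng.normal(size=(8, 1))      # targets

pred = X @ W                          # forward pass
loss = np.mean((pred - y) ** 2)       # MSE averaged over the batch

# Backward pass: because the loss is a mean over the batch, the
# gradient w.r.t. W is the average of the per-example gradients,
# so larger batches give a smoother gradient estimate.
grad_pred = 2 * (pred - y) / len(X)   # d(loss)/d(pred)
grad_W = X.T @ grad_pred              # chain rule through the matmul

# Sanity check one weight against a finite-difference estimate.
eps = 1e-6
W_bumped = W.copy()
W_bumped[0, 0] += eps
numeric = (np.mean((X @ W_bumped - y) ** 2) - loss) / eps
assert abs(numeric - grad_W[0, 0]) < 1e-4
```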
squirrelmaker | 4 years ago