This AI toolkit works on popular Intel CPUs and is a big step forward for the new Intel Nervana Neural Network Processor (NNP-I), a hardware accelerator chip that plays a role similar to a GPU.
One surprising research result in NLP is that simple convolutional architectures often outperform canonical recurrent networks on sequence modeling tasks. See the CMU lab's sequence modeling benchmarks and Temporal Convolutional Networks (TCN): https://github.com/locuslab/TCN
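The core building block behind a TCN is a causal, dilated 1-D convolution: the output at time t depends only on inputs at times ≤ t, and stacking layers with growing dilation expands the receptive field exponentially. Here is a minimal pure-Python sketch of that one operation (the real locuslab/TCN code stacks such layers with residual connections in PyTorch; the function name and weights below are illustrative, not from that repo):

```python
# Sketch of a causal dilated 1-D convolution, the TCN building block.
# Left zero-padding keeps the output the same length as the input and
# guarantees causality: y[t] never looks at x[t'] for t' > t.

def causal_dilated_conv1d(x, weights, dilation=1):
    """y[t] = sum_k weights[k] * x[t - k*dilation], with zero padding."""
    k = len(weights)
    pad = (k - 1) * dilation
    padded = [0.0] * pad + list(x)
    return [
        sum(weights[j] * padded[pad + t - j * dilation] for j in range(k))
        for t in range(len(x))
    ]

# With dilations 1, 2, 4, ... the receptive field roughly doubles per
# layer, so L layers of kernel size 2 cover about 2**L time steps.
x = [1.0, 2.0, 3.0, 4.0]
y = causal_dilated_conv1d(x, weights=[0.5, 0.5], dilation=1)
# y[t] = 0.5*x[t] + 0.5*x[t-1]  ->  [0.5, 1.5, 2.5, 3.5]
```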
If you're interested in Nervana, here are some specifics: the chip provides hardware neural network acceleration aimed at inference workloads. Notable features include fixed-point math, Ice Lake cores, 10-nanometer fabrication, software-managed on-chip memory, and hardware-optimized inter-chip parallelism.
I've worked for Intel, and I'm stoked to see their progress in AI and NLP.
word2vec and fastText are specialized tools for creating word embeddings; this is a more generalist library. It's more comparable to PyText, AllenNLP, or Flair, the main difference appearing to be that those three use PyTorch rather than TensorFlow.
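For context on what embedding tools like word2vec and fastText produce: dense vectors where geometric closeness tracks semantic similarity, usually compared with cosine similarity. A toy sketch (the 3-d vectors below are made up for illustration; real embeddings are learned from a corpus and typically have 100-300 dimensions):

```python
# Toy illustration of comparing word embeddings by cosine similarity.
import math

def cosine(u, v):
    # cos(u, v) = (u . v) / (|u| * |v|)
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical vectors, not real word2vec output.
embeddings = {
    "king":  [0.80, 0.65, 0.10],
    "queen": [0.75, 0.70, 0.15],
    "apple": [0.10, 0.20, 0.90],
}

print(cosine(embeddings["king"], embeddings["queen"]))  # near 1.0: similar
print(cosine(embeddings["king"], embeddings["apple"]))  # much lower
```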
jph|7 years ago
The Intel AI Lab has an introduction to NLP (https://ai.intel.com/deep-learning-foundations-to-enable-nat...) and optimized TensorFlow (https://ai.intel.com/tensorflow/).