mlucy | 7 years ago
Word embeddings have already transformed NLP. For most people I know, the first thing they do when they sit down to work on an NLP task is use an off-the-shelf library to turn the text into a sequence of embedded tokens. They don't even think about it; it's just the natural first step, because it makes everything downstream so much easier.
In the last couple of years, embeddings for other data types (images, whole sentences, audio, etc.) have started to enter mainstream practice too. You can get near-state-of-the-art image classification with a pretrained image embedding, a few thousand labeled examples, and a logistic regression trained on your laptop's CPU. It's astonishing.
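To make that workflow concrete, here's a minimal sketch of the "embedding + logistic regression" recipe. The embedding step is the part a pretrained model would do; since no real model is bundled here, `fake_embeddings` just fabricates two clusters of 512-dim vectors (a stand-in I made up for this example), and the classifier is a plain logistic regression trained with gradient descent:

```python
import numpy as np

rng = np.random.default_rng(0)

def fake_embeddings(n_per_class, dim=512):
    """Stand-in for a pretrained image-embedding model.

    In practice you'd run your images through a frozen network and
    use its penultimate-layer activations; here we just sample two
    noisy clusters so the example is self-contained.
    """
    a = rng.normal(loc=+0.1, scale=1.0, size=(n_per_class, dim))  # class 1
    b = rng.normal(loc=-0.1, scale=1.0, size=(n_per_class, dim))  # class 0
    X = np.vstack([a, b])
    y = np.array([1] * n_per_class + [0] * n_per_class)
    return X, y

def train_logreg(X, y, lr=0.1, epochs=200):
    """Binary logistic regression via full-batch gradient descent."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted P(y=1)
        grad_w = X.T @ (p - y) / len(y)
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# "A few thousand examples" on top of precomputed embeddings:
X, y = fake_embeddings(1000)
w, b = train_logreg(X, y)
acc = np.mean(((X @ w + b) > 0) == y)
```

This is the whole trick: all the hard representation learning is frozen inside the embedding model, so the part you train yourself is a tiny linear classifier that fits comfortably on a CPU in seconds (with a real dataset you'd hold out a test split rather than report training accuracy).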
(Note: I work on https://www.basilica.ai , an embeddings-as-a-service company, so I'm definitely a little bit biased.)