entilzha | 1 year ago
If I understand your question right, this is one of the reasons BPE is nice, and why the parent commenter liked it. For any character sequence, provided the characters are in the alphabet used to create the BPE vocab, there are no unknown words/sequences. One downside of some previous tokenization methods, e.g., dictionary-based methods, is that you could get unknown/UNK tokens.
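As a rough illustration (toy merge table, and greedy first-match merging rather than the priority-ordered merges real BPE implementations use), any string over the base alphabet tokenizes without UNK, falling back to single characters in the worst case:

    # Sketch of why BPE never emits UNK: merges only ever combine existing
    # tokens, so any string over the base alphabet splits into single
    # characters at worst. The alphabet and merges here are hypothetical.
    def bpe_tokenize(text, merges, alphabet):
        tokens = list(text)  # start from individual characters
        assert all(c in alphabet for c in tokens), "character outside alphabet"
        changed = True
        while changed:
            changed = False
            for i in range(len(tokens) - 1):
                if (tokens[i], tokens[i + 1]) in merges:  # apply a learned merge
                    tokens[i:i + 2] = [tokens[i] + tokens[i + 1]]
                    changed = True
                    break
        return tokens

    alphabet = set("abcdefghijklmnopqrstuvwxyz ")
    merges = {("l", "o"), ("lo", "w")}  # toy merge table
    print(bpe_tokenize("low lower", merges, alphabet))
    # ['low', ' ', 'low', 'e', 'r'] -- no UNK; unseen words just split finer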
In our paper with bytes, we also avoid the UNK issue: we can have an embedding for every possible byte, since there aren't that many (and for sequences of bytes we use hash embeddings, although we did test n-gram lookups for the top K most frequent byte n-grams in the training data).
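A rough sketch of that idea (the dimensions, bucket count, and mean-pooling are made up for illustration, not the paper's exact setup, and Python's built-in hash is salted per process, so a real implementation would use a stable hash):

    # Hash embeddings for byte n-grams: every byte (0-255) gets its own
    # row, while n-grams are hashed into a fixed-size table, so even
    # unseen n-grams map to some embedding and nothing is ever UNK.
    import numpy as np

    DIM = 16
    byte_emb = np.random.randn(256, DIM)  # one row per possible byte
    NGRAM_BUCKETS = 4096
    ngram_emb = np.random.randn(NGRAM_BUCKETS, DIM)

    def embed(seq: bytes, max_n: int = 3) -> np.ndarray:
        vecs = [byte_emb[b] for b in seq]  # per-byte embeddings, never UNK
        for n in range(2, max_n + 1):
            for i in range(len(seq) - n + 1):
                bucket = hash(seq[i:i + n]) % NGRAM_BUCKETS  # collisions accepted
                vecs.append(ngram_emb[bucket])
        return np.mean(vecs, axis=0)  # pool into a single vector

    print(embed("any bytes at all 🙂".encode("utf-8")).shape)  # (16,)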
cs702 | 1 year ago
Did you guys try using an RNN or some other kind of DNN to encode the patches?
entilzha | 1 year ago