bigzyg33k | 2 years ago
These numbers can be plotted as points in a space, and embeddings of things with similar meanings land close to each other. So something like "exam preparation" would have an embedding close to something like "top study tips".
Say you have created embeddings for a large corpus of text (in this case, all YouTube captions) once. If you then create an embedding for a user query, you can search for the corpus embeddings closest to it, and those will be "semantically" similar to the query.
The advantage is that, unlike traditional full-text search, the user's query doesn't need to contain words that appear in the text.
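A minimal sketch of the mechanics described above: embed the corpus once, then at query time embed the query and rank documents by cosine similarity. The `embed` function here is a toy word-overlap vectorizer over a made-up vocabulary, purely to keep the example self-contained; a real system would use a learned sentence-embedding model, which is what lets queries match text they share no words with.

```python
import math

# Toy stand-in for a learned embedding model: counts matches against a
# small hypothetical vocabulary. A real model would map text to dense
# vectors that capture meaning, not just word overlap.
VOCAB = ["exam", "study", "tips", "preparation", "cook", "pasta"]

def embed(text: str) -> list[float]:
    words = text.lower().split()
    return [float(sum(w == v for w in words)) for v in VOCAB]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Embed the corpus once, up front.
corpus = ["exam preparation guide", "top study tips", "how to cook pasta"]
corpus_vecs = [embed(doc) for doc in corpus]

# At query time: embed the query and rank by similarity.
query_vec = embed("study tips")
ranked = sorted(
    zip(corpus, corpus_vecs),
    key=lambda pair: cosine(pair[1], query_vec),
    reverse=True,
)
print(ranked[0][0])  # prints "top study tips"
```

In practice the corpus vectors would be stored in a vector index (FAISS, a vector database, etc.) so the nearest-neighbor lookup doesn't require scanning every document.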