Embeddings are often used as features for these LLMs, so before, teams were paying both to generate embeddings and to run inference with the large models. Now they pay to generate embeddings, fine-tune them, and do semantic search (probably approximate k-nearest neighbors). The hardware requirements of most LLMs make serving them far more expensive than running approximate KNN against a vector database.
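To make the cost comparison concrete, here's a minimal sketch of embedding-based semantic search, using brute-force cosine similarity over a toy corpus of random vectors (the corpus size, embedding dimension, and `search` helper are all illustrative assumptions, not from the comment). A real vector database would replace the exact scan with an approximate index such as HNSW or IVF, but the workload shape is the same: one matrix-vector product per query, which runs fine on a CPU.

```python
import numpy as np

# Toy setup: 10k documents with 384-dim embeddings (illustrative numbers).
# Brute-force cosine search; a vector DB swaps this for an approximate
# index (e.g. HNSW/IVF) once the corpus grows large.
rng = np.random.default_rng(0)
corpus = rng.normal(size=(10_000, 384)).astype(np.float32)
corpus /= np.linalg.norm(corpus, axis=1, keepdims=True)  # normalize once up front

def search(query: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k documents most similar to `query`."""
    q = query / np.linalg.norm(query)
    scores = corpus @ q                # cosine similarity via dot product
    return np.argsort(-scores)[:k]    # top-k indices, best match first

query = rng.normal(size=384).astype(np.float32)
print(search(query, k=5))
```

This whole index fits in ~15 MB of RAM and answers queries in milliseconds, versus the GPU memory and compute a large model needs per request; that gap is the cost argument above.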