I'm trying to wrap my head around embeddings and am not sure I understand how "realtime" embeddings can be in a real-world application.
For example, if you use embeddings, do you need to know "a priori" what you want to search for? In other words, if you don't know your search queries up front, you have to generate the embeddings, store them inside the database, and then use them. The first step requires an API call to a commercial company (OpenAI here), running against a private model, via a service (which could have downtime, etc). (I imagine there are other embedding technologies but then I would need to manage the hardware costs of training and all the things that come with running ML models on my own.)
Compare that to a regular LIKE search inside my database: I can do that with just a term that a user provides, without preparing my database beforehand; the database has native support for finding that term in whatever column I choose. Embeddings seem much more powerful in that you can search for something in a much fuzzier way using the cosine distance between embedding vectors, but they require me to generate and store those embeddings first.

Am I wrong about my assumptions here?
1. Create embeddings of your db entries by running them through a neural network in inference mode, and save them in a database in vector format.
2. Convert your query to an embedding by running it through the same neural network in inference mode.
3. Perform a nearest neighbor search of your query embedding against your db embeddings. There are also libraries and databases optimized for this, for example FAISS from Meta/Facebook [1].
So if your network is already trained, or you use something like OpenAI for embeddings, it can still be done in near real time: just think of generating the embedding vector as part of the indexing process.
You can do more things too, like clustering your embeddings db to find similar entries.

[1] https://engineering.fb.com/2017/03/29/data-infrastructure/fa...
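A rough sketch of those three steps in Python, with a placeholder embed() standing in for whichever model or API you actually call (OpenAI, a local sentence-transformers model, etc. — the random vectors are just so the sketch runs end to end):

    import numpy as np
    import faiss  # pip install faiss-cpu

    def embed(texts):
        # Stand-in for the real embedding call; returns one float32 row per text.
        rng = np.random.default_rng(0)
        return rng.random((len(texts), 384), dtype=np.float32)

    # 1. Embed all db entries once and store them in a vector index.
    docs = ["first db entry", "second db entry", "third db entry"]
    doc_vecs = embed(docs)
    faiss.normalize_L2(doc_vecs)                 # so inner product == cosine similarity
    index = faiss.IndexFlatIP(doc_vecs.shape[1])
    index.add(doc_vecs)

    # 2. Embed the query the same way at search time.
    query_vec = embed(["what the user typed"])
    faiss.normalize_L2(query_vec)

    # 3. Nearest-neighbor search: top 2 most similar entries.
    scores, ids = index.search(query_vec, 2)
    for score, i in zip(scores[0], ids[0]):
        print(f"{score:.3f}  {docs[i]}")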
> if you use embeddings, do you need to know "a priori" what you want to search for?
No. Your embedding is something that represents the data. You initially calculate an embedding for each datapoint with the API (or model) and store them in an index.

When a user makes a query, the query is first embedded by a call to the API. Then you can measure a similarity score with a very simple multiply-and-add operation (a dot product) between these embeddings.

More concretely, an embedding is an array of floats, usually between 300 and 10,000 elements long. To compare two embeddings a and b you compute sum_i(a_i * b_i); the larger this score, the more similar a and b are. If you sort by similarity you have ranked your results.
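That scoring and ranking step really is just a multiply-and-add; a toy example with made-up 4-dimensional embeddings:

    import numpy as np

    query = np.array([0.1, 0.9, 0.2, 0.4])
    docs = {
        "entry about cats":  np.array([0.2, 0.8, 0.1, 0.5]),
        "entry about taxes": np.array([0.9, 0.1, 0.7, 0.0]),
    }

    # sum_i(a_i * b_i) for each entry, then sort by score, highest first
    scores = {name: float(np.dot(query, vec)) for name, vec in docs.items()}
    for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{score:.3f}  {name}")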
The fun part is when you compare embeddings of text with embeddings of images. They can exist in the same space, and that's how generative AI relates text to images.
You need 3 things: a query, a corpus of text to be searched against, and a language model that can map text to vectors ("compute the embedding").

Essentially, when you get a query, you compute its embedding (through an API call) and return the parts of your corpus whose embeddings are cosine-close to the query.

In order to do that efficiently, you indeed have to pre-compute all the embeddings of your corpus and store them in your database.
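For a small corpus that pre-computation can be as simple as serializing each vector into a BLOB column; a sketch with Python's sqlite3 module, where embed_one() again stands in for the real model/API call:

    import sqlite3
    import numpy as np

    def embed_one(text):
        # Stand-in for the real embedding call; returns a float32 vector.
        rng = np.random.default_rng(abs(hash(text)) % 2**32)
        return rng.random(384, dtype=np.float32)

    db = sqlite3.connect("corpus.db")
    db.execute("CREATE TABLE IF NOT EXISTS docs (id INTEGER PRIMARY KEY, body TEXT, embedding BLOB)")

    # Pre-compute once, when documents are added or changed.
    for body in ["first document", "second document"]:
        db.execute("INSERT INTO docs (body, embedding) VALUES (?, ?)",
                   (body, embed_one(body).tobytes()))
    db.commit()

    # At query time: embed the query, then rank the stored vectors by cosine similarity.
    q = embed_one("what the user typed")
    q /= np.linalg.norm(q)
    ranked = []
    for body, blob in db.execute("SELECT body, embedding FROM docs"):
        v = np.frombuffer(blob, dtype=np.float32)
        ranked.append((float(np.dot(q, v / np.linalg.norm(v))), body))
    for score, body in sorted(ranked, reverse=True):
        print(f"{score:.3f}  {body}")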
I think what you would do is some long-running “index” process where you dynamically generate embeddings for all changes that get made to text in the database. Databases that support searching large amounts of text probably do a way simpler version of this (i.e. reverse keyword indexes) already. Granted, this does involve a very beefy database.
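A minimal version of that background process, assuming a docs(id, body, embedding) table like the one sketched above, where the embedding column is set to NULL whenever the text changes:

    import sqlite3
    import time
    import numpy as np

    def embed_one(text):
        # Stand-in for the real embedding call.
        rng = np.random.default_rng(abs(hash(text)) % 2**32)
        return rng.random(384, dtype=np.float32)

    db = sqlite3.connect("corpus.db")

    def index_pending(batch_size=100):
        # Embed any rows whose embedding is missing or has been cleared.
        rows = db.execute("SELECT id, body FROM docs WHERE embedding IS NULL LIMIT ?",
                          (batch_size,)).fetchall()
        for doc_id, body in rows:
            db.execute("UPDATE docs SET embedding = ? WHERE id = ?",
                       (embed_one(body).tobytes(), doc_id))
        db.commit()
        return len(rows)

    while True:  # long-running "index" process
        if index_pending() == 0:
            time.sleep(5)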
There's a good leaderboard for embedding-focused models here: https://huggingface.co/spaces/mteb/leaderboard. Some of the sentence-transformers models are even smaller than gtr-t5 while still having very close performance for some use cases.
I'm working on making them 2-4x smaller with SmoothQuant int8 quantization and hoping to release standalone C++ and Rust implementations optimized for CPU inference.
What I really want to see is an HNSW index type for SQLite so finding similar vectors would be fast. I'm wondering if this can be done via an extension.
You could do it using an interface similar to the FTS tables in SQLite, where the “index” is actually a table that needs to be kept up to date with the underlying data (you can use triggers to keep them in sync). Querying is a little awkward compared to a normal index, but for these complex, inexact indexes you often want the extra flexibility (like computing the match score).
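A sketch of that trigger idea, using a hypothetical docs_vec shadow table whose NULL rows a background job later fills in with embeddings:

    import sqlite3

    db = sqlite3.connect("search.db")
    db.executescript("""
    CREATE TABLE IF NOT EXISTS docs (id INTEGER PRIMARY KEY, body TEXT);
    CREATE TABLE IF NOT EXISTS docs_vec (doc_id INTEGER PRIMARY KEY, embedding BLOB);

    -- keep the "index" table in step with the source table;
    -- a NULL embedding means "needs (re)embedding"
    CREATE TRIGGER IF NOT EXISTS docs_ai AFTER INSERT ON docs BEGIN
      INSERT OR REPLACE INTO docs_vec (doc_id, embedding) VALUES (new.id, NULL);
    END;
    CREATE TRIGGER IF NOT EXISTS docs_au AFTER UPDATE OF body ON docs BEGIN
      INSERT OR REPLACE INTO docs_vec (doc_id, embedding) VALUES (new.id, NULL);
    END;
    CREATE TRIGGER IF NOT EXISTS docs_ad AFTER DELETE ON docs BEGIN
      DELETE FROM docs_vec WHERE doc_id = old.id;
    END;
    """)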
The "embedding" for a text is an array of floating point numbers? How many?

And how do you calculate the distance between two texts then?

Just request the embeddings array "a" for text one and "b" for text two and then calculate "sqrt((a[0]-b[0])^2+(a[1]-b[1])^2+(a[2]-b[2])^2...)"?

> The "embedding" for a text is an array of floating point numbers?
Yes, that's what an embedding is.
> How many?
It varies depending on the model. In this case it's 1536; that's mentioned in the original post.
> And how do you calculate the distance between two texts then?
You take their embedding vectors and calculate the distance between the two vectors using something like cosine similarity - https://en.wikipedia.org/wiki/Cosine_similarity (there are other distances too obviously)
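For two embedding vectors that's a one-liner either way; toy vectors for illustration, with the second line being the sqrt-of-squared-differences formula from the question:

    import numpy as np

    a = np.array([0.3, 0.8, 0.1])
    b = np.array([0.2, 0.9, 0.4])

    cosine_similarity = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    euclidean_distance = np.linalg.norm(a - b)  # sqrt((a[0]-b[0])^2 + (a[1]-b[1])^2 + ...)
    print(cosine_similarity, euclidean_distance)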
I mean, that sounds like a natural evolution; if generating a new response costs money, it makes sense to save responses in a cheap database for future lookup/reuse.
But that's quite possibly OpenAI's model as well - cache results and reduce costs while keeping pricing.
They can put it in the T&Cs saying you can't do that though. Like what most geocoding APIs do - e.g. Google says you cannot cache/store the geocoding results.
This is interesting, but less interesting than I assumed. I thought they'd provide ChatGPT with a schema and ask it to formulate its responses as `INSERT` statements, etc.
I was able to feed it the following schema (actually I coached it through writing it for me).
Table: Questions
- id (Primary Key, Auto-Incremented)
- title (VARCHAR)
- description (TEXT)
Table: Answers
- id (Primary Key, Auto-Incremented)
- question_id (Foreign Key referencing the Questions table)
- answer (TEXT)
Then responded with:
Please copy all my questions into SQLite compatible INSERT statements, similarly create a new INSERT for your answer.
What color is the sky?
What I got was:
Here is the SQLite compatible INSERT statement for your question:
INSERT INTO Questions (title, description)
VALUES ("What color is the sky?", "");
And here is the SQLite compatible INSERT statement for my answer:
INSERT INTO Answers (question_id, answer)
VALUES (1, "The sky is typically blue during the day and black at night.");
Note: The question_id in the Answers INSERT statement is referencing the id of the question in the Questions table. In this example, the assumption is that the question with id of 1 is "What color is the sky?".
I look forward to other LLMs besides not-very-OpenAI's becoming available. This is a very clever tool, but what we all actually want is to be able to say 'can you read the paper/article at url:// and help me with XYZ' or highlight text/figures/tables/lists in an editing environment and say 'help parse/graph/translate/implement this', or conversely to say 'here is a buffer/array/heap for you to use, let's build some sort of user interface to make the feedback process more efficient.' This would be particularly useful for code-related tasks.
Right now it's like dealing with a very eager assistant that has staggering powers of information retrieval and really wants to help, but being forced to conduct all your conversations in a dark room which you and the AI take turns entering and leaving to examine what you've received. You can talk it into performing array and dictionary operations and maintaining text buffers in practice, but you're mutually forbidden from sharing/viewing any kind of semantic workspace directly.
Do you think you'll ever replace the search on your website with Semantic Search like this? I'm not sure how well Semantic Search gels with the ability to facet though, so that could be the hiccup.

Otherwise it seems like you can get Semantic Search off SQLite that rivals e.g. the Typesenses and Elasticsearches of the world.
txtai is an alternative approach to this. It builds a FAISS index (it also supports HNSW) alongside a SQLite database. It works with sentence-transformers models. For example, this model https://huggingface.co/sentence-transformers/all-MiniLM-L6-v... is 384 dimensions and works great for semantic search.

https://github.com/neuml/txtai

https://neuml.github.io/txtai/embeddings/query/
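Usage is roughly like the sketch below (based on txtai's documented Embeddings API; exact config options can differ between versions, so check the links above):

    from txtai.embeddings import Embeddings

    # Build an index backed by a sentence-transformers model.
    embeddings = Embeddings({"path": "sentence-transformers/all-MiniLM-L6-v2"})

    data = ["first db entry", "second db entry", "third db entry"]
    embeddings.index([(uid, text, None) for uid, text in enumerate(data)])

    # search() returns (id, score) pairs for the best matches.
    for uid, score in embeddings.search("what the user typed", 2):
        print(f"{score:.3f}  {data[uid]}")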