"AutoGPT engineers" seem to also generate their articles with an LLM, making their documentation awful to grok. For instance, after showing two commands, we have to suffer this:
Forge your future! The forge is your innovation lab. All the boilerplate code is already handled, letting you channel all your creativity into building a revolutionary agent. It's more than a starting point, it's a launchpad for your ideas.
In our exploration today, we’ve covered the essentials of working with AutoGPT projects. We began by laying out the groundwork, ensuring you have all the right tools in place. From there, we delved into the specifics of building an effective AutoGPT agent. Trust me, with the right steps, it becomes a straightforward process.
The doc is littered with those paragraphs. Trim the fat! Get to the point! YOLO, that's a freaking waste of life cycles!
Summary: They stopped using vector databases because the performance benefit simply didn't matter compared to how long the LLMs took to respond, and you should focus on using technology to solve problems and not pick the trendy option.
So it seems like they still use vectors - they just replaced the search (however that works) with a dot product operation? I mean, from a vector point of view that makes total sense.
Has anyone ever managed to generate anything useful with AutoGPT? I've made several attempts, and apart from wasting some money on GPT-4 API calls, it's never produced anything usable. Whereas if I manually enter prompts in ChatGPT, I can often take simpler projects from beginning to end, provided I partition them into logically independent parts.
Unfortunately I've not. For the tasks that I've used these types of tools on I found ChatGPT's "Advanced Data Analysis" mode significantly more useful.
I have been working on a system of agents over at https://github.com/agi-merge/waggle-dance - I already split problems up into subtasks for agents to work on independently. I give agents access to vector databases, using a simple global key for now, but soon a context/parent/child key. Access to the vector DBs is proxied via tools (agents have to “call” saveMemory or retrieveMemory). I also check for looping/repetition FREQUENTLY using in-memory vector databases of the langchain agent callback events.
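A minimal sketch of what that kind of in-memory repetition check might look like (the class name, threshold, and window size are my own illustration, not the actual waggle-dance code):

```python
import numpy as np

class LoopDetector:
    """Flag an agent as looping when a new event's embedding is nearly
    identical to one seen recently. Threshold/window are assumptions."""

    def __init__(self, threshold: float = 0.95, window: int = 50):
        self.threshold = threshold
        self.window = window
        self.history: list[np.ndarray] = []

    def is_repetition(self, embedding: np.ndarray) -> bool:
        v = embedding / np.linalg.norm(embedding)
        for past in self.history[-self.window:]:
            if float(v @ past) >= self.threshold:  # cosine similarity
                return True
        return False

    def record(self, embedding: np.ndarray) -> None:
        # Store normalized, so the dot product above is cosine similarity.
        self.history.append(embedding / np.linalg.norm(embedding))
```

Each agent callback event gets embedded, checked with `is_repetition`, then `record`ed; a hit means the agent is likely stuck in a loop.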
My opinion on this: eh, who cares? AutoGPT and similar are non-standard use cases for Vector DBs right now, and Vector DBs are useful for RAG.
What's your assessment of the biggest blocking issues for something like this to be practically useful? From what I've seen of AutoGPT things seem to fall apart, in that the goals never quite seem to be achieved once anything more than basic research is requested.
It doesn't have to be a one or the other choice. For example, txtai supports a number of different ANN backends including a simple NumPy implementation (https://neuml.github.io/txtai/embeddings/configuration/ann/). There is value in the plumbing to vectorize data, normalize embeddings and find matches.
It's a good idea to find a solution that enables starting simple and scaling up as needed without having to fully rewrite the code.
Where vector databases would make more sense for a project like AutoGPT would be in centralizing distributed memory as a 3rd party service.
If a model on one computer could report memories to a centralized service that could be searched by new instances so work didn't need to be replicated, I'd fully expect that 3rd party service to be running a vector DB.
But in reality, the issues of trust and poisoning the well are too pertinent to see enough centralized consolidation of memory to justify it for a project like this.
I've seen some discussions around E2E encrypted LLM chains, and I could definitely see a 3rd party memory layer as part of that, though I'd suppose it would need to be a plug-in at the model provider and not at the client anyways.
Wouldn't a vector DB be nice to have, so you can use it directly for search?
I understand the article's argument. But I can imagine waiting so long for an LLM to respond that I'd actually prefer to search a vector database over my "additional information layer" and find the relevant information myself. In that case a vector DB would serve two purposes, and that could change the calculus on whether it's worth the added complexity.
Not an expert here, just a question that came to mind - it might be based on wrong assumptions.
I think the article is a near miss on the right idea. The important point is that a _dedicated_ vector database is probably overkill and not justified for most real-world use cases.
But a multi-modal database that also supports embeddings in hybrid mode or _in addition_ to standard retrieval techniques is both still very useful, and probably sufficient.
What that means to me is that it's yet another vote in favor of less optimized but far more versatile and robust solutions like OpenSearch, Elastic, and PostgreSQL. [when I say 'less optimized' I'm only referring to their current vector DB plugins, not the rest of the machinery]
OpenSearch and Postgres are phenomenal, robust, OSS tools, and the only lingering downside seems to be that their vector DB implementations are still a bit less optimized for large collections - but that probably doesn't matter in practice.
The similarity operation is just a dot product, practically a sum over position-wise products of two vectors - a for loop with an addition and a multiply inside. That's all you need after embedding. You can use np.dot() to get exact similarity scores and better retrieval, with very fast times for under 100K vectors.
You only want an approximate nearest neighbour method for millions of vectors and above. Even that is easy to do with off-the-shelf libraries for a local index. It only gets complicated when you want fast insertion and distributed access.
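A minimal sketch of that brute-force exact retrieval (corpus size, dimension, and random data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy corpus: 50K embeddings of dimension 384, L2-normalized so that a
# plain dot product equals cosine similarity.
corpus = rng.normal(size=(50_000, 384)).astype(np.float32)
corpus /= np.linalg.norm(corpus, axis=1, keepdims=True)

def search(query: np.ndarray, k: int = 5) -> np.ndarray:
    """Exact nearest neighbors: one matrix-vector product, then a partial sort."""
    q = query / np.linalg.norm(query)
    scores = corpus @ q                        # similarity against every vector
    top = np.argpartition(scores, -k)[-k:]     # O(n) selection of the k best
    return top[np.argsort(scores[top])[::-1]]  # sort only those k by score

query = rng.normal(size=384).astype(np.float32)
print(search(query))  # indices of the 5 most similar stored vectors
```

At this scale the whole scan is a few milliseconds - negligible next to an LLM call, which is the article's point.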
I think that just comes down to speed. The utility is identical, just when you're doing it for one user you can deal with a large problem (for you) in the most naive way possible in a few seconds. I would bet good money that there's a slightly more complex but still very basic step up that would get seconds down to something that feels instantaneous.
They do still carry out search, but using a brute force approach (dot product over all the embeddings) instead of vector db. Their point is that they aren't likely to generate enough messages for brute force to be an issue. (Even over 100s of thousands of embeddings brute force is pretty fast.)
I was expecting the authors to talk about superficial encoding of information (words and phrases) as opposed to their meaning or implications. For example, the text "The general idea behind Q-learning and SARSA" won't embed close to "reinforcement", and "Count the letters in this phrase." won't embed close to "27".
This can cause RAG systems to fail to retrieve, fail to connect fragmented information, or fail to conclude the result of a process. My theory is that information needs to be digested by an LLM and augmented before indexing in a RAG system. Embeddings only search at surface level. That's why I thought AutoGPT had difficulties - one of the reasons, at least.
Maybe we can have LLMs preprocess the material to expose the deeper layers of meaning. And we need to reindex everything when we discover we are interested in an aspect we didn't explicitly expose for retrieval. Study, then index, and sometimes study again.
I wouldn't say that. It seems analogous to saying a program uses a simple version of a document-oriented database built in to the system when in truth it's just using dictionaries. Sure, conceptually they do kind of the same thing sometimes and you might even persist it to disk, but it's a bit of a stretch to call it "something like a DB" imo.
Is there any open source vector DB implementation fast enough to, say, create embeddings for 100M documents locally within a few hours and find ranked matches in under a second? I tried ChromaDB and it is super slow, basically unusable.
Vector DBs don't create embeddings; they store them. As the article points out, the LLM's slowness to respond diminishes the performance benefit that vector DBs can potentially add.
Generating the embeddings is by far the slowest part of that, but it’s embarrassingly parallel so if you have $ it can be done that quick.
When I worked for Dubai airport, I was tasked with building a vector similarity search that was highly optimised for query speed. In the end I held the vectors in memory (in a numpy array) and used scipy to do cosine similarity. I could get about 1.2 million vectors per second per core after tweaking and optimising. Again, this is embarrassingly parallel, so if you have more vectors, chunk the data to fit your hardware and you should get more or less linear scaling per core.
If you want a hand writing this let me know.
(Also there’s a lot of caveats here, for example they did not need to update the vectors, it was an extremely read-heavy usage pattern)
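A hedged sketch of that chunked, read-heavy, in-memory approach (the commenter used scipy; this numpy stand-in and the chunk size are my assumptions, not their actual code):

```python
import numpy as np

def cosine_top_k(query, vectors, k=10, chunk_size=250_000):
    """Scan pre-normalized `vectors` in chunks, keeping a running top-k.
    Size chunk_size to your cache/RAM budget; rows of `vectors` are
    assumed L2-normalized once at load time."""
    q = query / np.linalg.norm(query)
    best_ids = np.empty(0, dtype=np.int64)
    best_scores = np.empty(0)
    for start in range(0, len(vectors), chunk_size):
        scores = vectors[start:start + chunk_size] @ q   # cosine per row
        k_eff = min(k, len(scores))
        cand = np.argpartition(scores, -k_eff)[-k_eff:]  # chunk's best candidates
        all_ids = np.concatenate([best_ids, start + cand])
        all_scores = np.concatenate([best_scores, scores[cand]])
        keep = np.argsort(all_scores)[::-1][:k]          # merge with running best
        best_ids, best_scores = all_ids[keep], all_scores[keep]
    return best_ids, best_scores
```

Since the chunks are independent, they can be farmed out across cores (e.g. with multiprocessing) and the partial top-k results merged, which is where the near-linear per-core scaling comes from.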
I do about 100 million embeddings using around 50 GPU instances and feed them into Qdrant. Takes about 12 hours. Very happy with the result and performance as long as you have the option to have a very large memory instance running.
PostgreSQL will match your requirements. Although you won't load that fast, simply because generating the embeddings takes longer than that, independently of your storage engine.
I asked them about this in their Discord. They didn't give me straightforward answers. That's when I knew I was right from the start that this whole thing was a show.
datadrivenangel|2 years ago
But that never got anyone promoted.
Havoc|2 years ago
Surely we're going to see a fairly exponential increase in these requirements, though?
The cheaper the compute gets/scales the more sense it makes to hammer problems with more agents/tries so scale of needed agent memory also goes up.
I’d have gone with “it’s already implemented so just leave it be”
iandanforth|2 years ago
https://twitter.com/karpathy/status/1647374645316968449