
Why AutoGPT engineers ditched vector databases

109 points | DSemba | 2 years ago | dariuszsemba.com

58 comments


batmansmk|2 years ago

« AutoGPT engineers » seem to also generate their articles with an LLM, making their documentation awful to grok. For instance, after showing 2 commands, we have to suffer this: "Forge your future! The forge is your innovation lab. All the boilerplate code is already handled, letting you channel all your creativity into building a revolutionary agent. It's more than a starting point, it's a launchpad for your ideas. In our exploration today, we've covered the essentials of working with AutoGPT projects. We began by laying out the groundwork, ensuring you have all the right tools in place. From there, we delved into the specifics of building an effective AutoGPT agent. Trust me, with the right steps, it becomes a straightforward process."

The doc is littered with those paragraphs. Trim the fat! Get to the point! YOLO, that's a freaking waste of life cycles!

amelius|2 years ago

Don't read the intermediate representation. The idea is that you use an LLM to summarize those comments into human-readable text.

rowanG077|2 years ago

I doubt it's generated with an LLM. It's far too easy to get LLMs to generate much higher-quality text than the passage you quoted.

uoaei|2 years ago

Ironic that boilerplate code is removed but boilerplate copy is introduced to replace it.

datadrivenangel|2 years ago

Summary: They stopped using vector databases because the performance benefit simply didn't matter compared to how long the LLMs took to respond, and you should focus on using technology to solve problems rather than picking the trendy option.

But that never got anyone promoted.

cbsmith|2 years ago

You'd be surprised... I've made a pretty good career of identifying cases where simpler, more efficient designs produce better or equivalent results.

coffeebeqn|2 years ago

So it seems like they still use vectors - they just replaced the search (however that worked) with a dot product operation? I mean, from a vector point of view that makes total sense.

rvz|2 years ago

More like hype-driven development and magpies making decisions based on how shiny the bracelet is.

benterix|2 years ago

Has anyone ever managed to generate anything useful with AutoGPT? I've made several attempts, and apart from wasting some money on GPT-4 API calls, it has never produced anything usable. Whereas if I manually enter prompts in ChatGPT, I can often take simpler projects from beginning to end, if I partition them into logically independent parts.

coffeebeqn|2 years ago

No. It always gets stuck on a wrong path and can't get out.

kinlan|2 years ago

Unfortunately I've not. For the tasks that I've used these types of tools on I found ChatGPT's "Advanced Data Analysis" mode significantly more useful.

dartos|2 years ago

Me neither.

jondwillis|2 years ago

I have been working on a system of agents over at https://github.com/agi-merge/waggle-dance - I already split problems up into subtasks for agents to work on independently. I give agents access to vector databases, using a simple global key for now, but soon a context/parent/child key. Access to the vector DBs is proxied via tools (agents have to "call" saveMemory or retrieveMemory). I also frequently check for looping/repetition using in-memory vector databases built from the langchain agent callback events, as sketched below.
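Roughly, that repetition check amounts to something like this (a minimal sketch, assuming events are already embedded as vectors; the function name and the 0.95 threshold are made up for illustration, not waggle-dance's actual code):

    import numpy as np

    def is_repeating(history: np.ndarray, latest: np.ndarray,
                     threshold: float = 0.95) -> bool:
        # Flag a loop if the latest event embedding is a near-duplicate
        # of any recent event embedding (rows of `history`).
        if history.size == 0:
            return False
        # Normalize so the dot product equals cosine similarity.
        history_n = history / np.linalg.norm(history, axis=1, keepdims=True)
        latest_n = latest / np.linalg.norm(latest)
        return bool(np.max(history_n @ latest_n) >= threshold)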

My opinion on this: eh, who cares? AutoGPT and similar are non-standard use cases for vector DBs right now, and vector DBs are useful for RAG.

jasfi|2 years ago

What's your assessment of the biggest blocking issues for something like this to be practically useful? From what I've seen of AutoGPT, things seem to fall apart: the goals never quite seem to be achieved once anything more than basic research is requested.

dmezzetti|2 years ago

It doesn't have to be an either-or choice. For example, txtai supports a number of different ANN backends, including a simple NumPy implementation (https://neuml.github.io/txtai/embeddings/configuration/ann/). There is value in the plumbing to vectorize data, normalize embeddings and find matches.

It's a good idea to find a solution that enables starting simple and scaling up as needed without having to fully rewrite the code.
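For instance, a minimal sketch of that kind of setup (based on the linked docs; the exact config keys and model path may vary by txtai version):

    from txtai.embeddings import Embeddings

    # Start with the simple NumPy ANN backend; swapping "backend" for
    # "faiss" or "hnsw" later scales up without rewriting application code.
    embeddings = Embeddings({
        "path": "sentence-transformers/all-MiniLM-L6-v2",
        "backend": "numpy",
    })

    embeddings.index([(0, "vector databases store embeddings", None),
                      (1, "numpy can brute-force similarity search", None)])

    print(embeddings.search("how do I search embeddings?", 1))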

Havoc|2 years ago

That does make sense for now.

Surely, though, we're going to see a fairly exponential increase in these requirements?

The cheaper compute gets and scales, the more sense it makes to hammer problems with more agents/tries, so the scale of needed agent memory also goes up.

I'd have gone with "it's already implemented, so just leave it be".

kromem|2 years ago

Where vector databases would make more sense for a project like AutoGPT is in centralizing distributed memory as a third-party service.

If a model on one computer could report memories to a centralized service that could be searched by new instances, so that work didn't need to be replicated, I'd fully expect that third-party service to be running a vector DB.

But in reality, trust and well-poisoning concerns make it unlikely we'll see enough centralized consolidation of memory to justify it for a project like this.

I've seen some discussions around E2E encrypted LLM chains, and I could definitely see a third-party memory layer as part of that, though I'd suppose it would need to be a plug-in at the model provider and not at the client anyway.

Fannon|2 years ago

Wouldn't a vector DB be nice to have, so you can use it directly for search?

I understand the article's argument. But I can imagine waiting so long for an LLM to respond that I would actually prefer to search a vector database over my "additional information layer" and find the relevant information myself. In that case, the vector DB would serve two purposes, and that could change the calculation of whether it is worth the added complexity.

Not an expert here, just a question that came to mind - it might be based on wrong assumptions.

blackkettle|2 years ago

I think the article is a near miss on the right idea. The important point is that a _dedicated_ vector database is probably overkill and not justified for most real-world use cases.

But a multi-modal database that also supports embeddings in hybrid mode or _in addition_ to standard retrieval techniques is both still very useful, and probably sufficient.

What that means to me is that it is yet another vote in favor of less optimized but far more versatile and robust solutions like OpenSearch, Elastic, and PostgreSQL. [When I say "less optimized" I'm only referring to their current vector DB plugins, not the rest of the machinery.]

OpenSearch and Postgres are phenomenal, robust OSS tools, and the only lingering downside seems to be that their vector DB implementations are still a bit less optimized for large collections - but that probably doesn't matter in practice.
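As one concrete illustration of that hybrid mode, here is a minimal Postgres-plus-pgvector sketch (the connection string, table, and the tiny 3-dim vectors are illustrative; real embeddings would be hundreds of dimensions):

    import psycopg2

    conn = psycopg2.connect("dbname=app")  # hypothetical connection string
    cur = conn.cursor()
    cur.execute("CREATE EXTENSION IF NOT EXISTS vector")
    cur.execute("CREATE TABLE IF NOT EXISTS docs ("
                "id serial PRIMARY KEY, body text, embedding vector(3))")
    cur.execute("INSERT INTO docs (body, embedding) VALUES (%s, %s)",
                ("hello world", "[0.1, 0.2, 0.3]"))
    # Hybrid retrieval: an ordinary SQL filter plus nearest-neighbor
    # ordering in one query (<-> is pgvector's L2 distance operator).
    cur.execute("SELECT body FROM docs WHERE body ILIKE %s "
                "ORDER BY embedding <-> %s LIMIT 5",
                ("%hello%", "[0.1, 0.2, 0.3]"))
    print(cur.fetchall())
    conn.commit()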

visarga|2 years ago

The similarity operation is just a dot product: practically a sum over position-wise products of two vectors - a for loop with an addition and a multiply inside. That's all you need after embedding. You can use np.dot() to get exact similarity scores and better retrieval, with very fast times for under 100K vectors.

You only want approximate nearest neighbour methods for millions of vectors and above. Even that is easy to do with off-the-shelf libraries for a local index. It only gets complicated when you want fast insertion and distributed access.
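Concretely, the whole "database" fits in a few lines of NumPy (a sketch with random vectors standing in for real embeddings; the sizes are arbitrary):

    import numpy as np

    # 100K stored embeddings, unit-normalized so the dot product
    # is cosine similarity.
    corpus = np.random.randn(100_000, 384).astype(np.float32)
    corpus /= np.linalg.norm(corpus, axis=1, keepdims=True)

    query = np.random.randn(384).astype(np.float32)
    query /= np.linalg.norm(query)

    scores = corpus @ query                  # exact scores, one matvec
    top10 = np.argsort(scores)[-10:][::-1]   # indices of the 10 best matches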

IanCal|2 years ago

I think that just comes down to speed. The utility is identical; it's just that when you're doing it for one user, you can handle a problem that's large (for you) in the most naive way possible and still finish in a few seconds. I would bet good money that there's a slightly more complex but still very basic step up that would get seconds down to something that feels instantaneous.

montebicyclelo|2 years ago

They do still carry out search, but using a brute-force approach (a dot product over all the embeddings) instead of a vector DB. Their point is that they aren't likely to generate enough messages for brute force to become an issue. (Even over hundreds of thousands of embeddings, brute force is pretty fast.)

visarga|2 years ago

I was expecting the authors to talk about the superficial encoding of information (words and phrases) as opposed to its meaning or implications. For example, the text "The general idea behind Q-learning and SARSA" won't embed the same as "reinforcement". Nor will "Count the letters in this phrase." embed the same as "27".

This can cause RAG systems to fail to retrieve, fail to connect fragmented information, or fail to conclude the result of a process. My theory is that information needs to be digested by an LLM and augmented before indexing in a RAG system. Embeddings only search at the surface level. That is why I thought AutoGPT had difficulties - one of the reasons, at least.

Maybe we can have LLMs preprocess the material to expose the deeper layers of meaning. And we need to reindex everything when we discover we are interested in an aspect we didn't explicitly expose for retrieval. Study, then index, and sometimes study again.

ilaksh|2 years ago

They are still using something like a vector DB when it is appropriate. It's just a very simple version built into the system.

luke-stanley|2 years ago

In a just world, this would be added to the title before I clicked the link.

pedrovhb|2 years ago

I wouldn't say that. It seems analogous to saying a program uses a simple version of a document-oriented database built into the system when in truth it's just using dictionaries. Sure, conceptually they do kind of the same thing sometimes, and you might even persist it to disk, but it's a bit of a stretch to call it "something like a DB", imo.

omneity|2 years ago

At this point I wonder why they don't simply use faiss. Insisting on using NumPy is unnecessarily low-level here, imo.

sytelus|2 years ago

Is there any open-source vector DB implementation that is fast enough to, say, create embeddings of 100M documents locally within a few hours and find ranked matches in under a second? I tried ChromaDB and it is super slow, basically unusable.

mirzap|2 years ago

Vector DBs don't create embeddings; they store them. As the article points out, the LLM's slowness to respond diminishes whatever performance vector DBs can potentially add.

malux85|2 years ago

Generating the embeddings is by far the slowest part of that, but it's embarrassingly parallel, so if you have the money it can be done that quickly.

When I worked for Dubai airport, I was tasked with building a vector similarity search that was highly optimised for query speed. In the end I held the vectors in memory (in a NumPy array) and used SciPy to do cosine similarity. I could get about 1.2 million vectors per second per core after tweaking and optimising. Again, this is embarrassingly parallel, so if you have more vectors, chunk the work to fit your hardware and you should get more or less linear scaling per core - see the sketch below.
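Roughly, the core of it looks like this (a minimal sketch; the array sizes and chunk size are illustrative, not the actual system):

    import numpy as np
    from scipy.spatial.distance import cdist

    vectors = np.random.randn(1_200_000, 256).astype(np.float32)
    query = np.random.randn(1, 256).astype(np.float32)

    # Score in chunks so each piece fits comfortably in memory; chunks are
    # independent, so they can also be farmed out to separate cores.
    chunk = 100_000
    dists = np.concatenate([
        cdist(query, vectors[i:i + chunk], metric="cosine")[0]
        for i in range(0, len(vectors), chunk)
    ])
    best = int(np.argmin(dists))  # cosine *distance*: smaller = more similar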

If you want a hand writing this let me know.

(Also, there are a lot of caveats here; for example, they did not need to update the vectors - it was an extremely read-heavy usage pattern.)

opdahl|2 years ago

I do about 100 million embeddings using around 50 GPU instances and feed them into Qdrant. It takes about 12 hours. I'm very happy with the result and performance, as long as you have the option of keeping a very large-memory instance running.

batmansmk|2 years ago

PostgreSQL will match your requirements. Although you won't load that fast, simply because generating the embeddings takes longer than that, independent of your storage engine.

archibaldJ|2 years ago

I asked them about this in their Discord. They didn't give me straightforward answers. That's when I knew I was right from the start that this whole thing was a show.

omneity|2 years ago

Is AutoGPT used in a production/productive setting? That would put into perspective the situations where these insights are applicable.