tgittos | 1 year ago
Basically you'll use any LLM and a vector DB of your choice (I like ChromaDB to date). Write a tool that walks your source documents and chunks them. Submit each chunk to your LLM with a prompt asking it to come up with search-retrieval questions for that chunk. Store the document and the questions in ChromaDB, cross-referencing each question to the document source (you can add the filename/path as metadata on the question) and to the relevant chunk (by its ID).
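A minimal sketch of that indexing step. The chunking policy, the `generate_questions` stub (a stand-in for the real LLM call), and all names here are made up for illustration; the tuple it builds mirrors the `documents`/`metadatas`/`ids` shape that ChromaDB's `collection.add()` takes:

```python
def chunk_text(text: str, size: int = 800, overlap: int = 200) -> list[str]:
    """Split text into overlapping character windows (one simple chunking policy)."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def generate_questions(chunk: str) -> list[str]:
    """Stand-in for the LLM call that invents retrieval questions for a chunk.
    In practice you'd prompt your model here (e.g. via litellm)."""
    return [f"What does this passage say about: {chunk[:40]}...?"]

def build_records(path: str, text: str) -> tuple[list[str], list[dict], list[str]]:
    """Produce (documents, metadatas, ids) cross-referencing each question
    back to its source file and chunk ID."""
    docs, metas, ids = [], [], []
    for i, chunk in enumerate(chunk_text(text)):
        chunk_id = f"{path}::chunk-{i}"
        for j, q in enumerate(generate_questions(chunk)):
            docs.append(q)  # the question text is what gets embedded
            metas.append({"source": path, "chunk_id": chunk_id, "chunk": chunk})
            ids.append(f"{chunk_id}::q-{j}")
    return docs, metas, ids
```

With a real client you'd then hand the three lists to a `chromadb` collection, roughly `collection.add(documents=docs, metadatas=metas, ids=ids)`.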
Run this tool whenever your docs change - you can automate this. Being intelligent about detecting new/changed content, and about how you chunk and generate questions, can save you time and money and is a good place to optimize.
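One cheap way to detect new/changed content is a content hash per file, compared against what you stored on the last run. A sketch (the dict shapes are assumptions, not any library's API):

```python
import hashlib

def file_digest(text: str) -> str:
    """Content hash used to decide whether a document needs re-indexing."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def files_to_reindex(current: dict[str, str], seen: dict[str, str]) -> list[str]:
    """Return paths whose content is new or changed since the last run.
    `current` maps path -> file text; `seen` maps path -> previously stored digest."""
    return [path for path, text in current.items()
            if seen.get(path) != file_digest(text)]
```

Only the returned paths need to go back through chunking and question generation, which is where the time/money savings come from.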
To use it, accept user input, run the input as a text query against your vector DB, and submit both the results (with filenames and relevant chunks) and the user's query to an LLM with a prompt designed to elicit a certain kind of response based on the input and the relevant chunks. Show the response to the user. Loop if you want.
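The query side mostly amounts to assembling a prompt from the retrieved metadata. A sketch, assuming hits shaped like the records stored at index time (`{"source", "chunk"}`); the retrieval call itself (in ChromaDB, roughly `collection.query(query_texts=[question], n_results=k)`) and the final LLM call are omitted:

```python
def build_prompt(question: str, hits: list[dict]) -> str:
    """Pair the user's question with the retrieved chunks, filenames included,
    so the LLM can ground and cite its answer."""
    context = "\n\n".join(f"[{h['source']}]\n{h['chunk']}" for h in hits)
    return (
        "Answer the question using only the excerpts below. "
        "Cite the file name you relied on.\n\n"
        f"Excerpts:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
```

You'd send this as the user message to whatever model you like, show the completion to the user, and loop.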
You can build most of this with as few tools as the `litellm`, `langchain`, and `huggingface` libraries. You'll be surprised how far you can get with such a dumb setup.
Yes, this is basic RAG. That's how you do it without getting overwhelmed with all the tooling/libraries out there.
ulkidoo | 1 year ago
Where do sloppyjoes get all of this unrestrained optimism?
OP asked if such a solution exists.
This documentation assistant is an oft-requested tool with regard to LLMs on this forum. If you can do it, you could start a business around it. OP could be your first customer!
The only other comment in this thread at this time is from someone who is also a breathlessly vocal supporter of contemporary machine-learning systems on this forum, and yet even they are saying "I have yet to see a convincing demo". But here you are saying it's easy; if only these damned margins were larger!
I’ve checked your GitHub. I’m unable to find an implementation of this thing that you claim is so simple to implement.
I checked your blog. Your most recent article is about you wasting 45 minutes hoping such an “ai agent” can fix a bug in your code. It proved unable to do so. You even call the experiment a failure in your post.
So, where’s this optimism coming from?!
But you do say you are having fun. Which is great! I’m glad you’re having fun.
simonw | 1 year ago
Here are a few of my own RAG implementations - getting a basic version working really is something that can be done in a few hours... but getting a GOOD version working takes a LOT longer than that.
- https://simonwillison.net/2023/Jan/13/semantic-search-answer... - my first attempt at RAG, before I knew it was called that, using custom SQLite SQL functions
- https://til.simonwillison.net/llms/embed-paragraphs#user-con... - a Bash script implementation of RAG
- https://simonwillison.net/2024/Jun/21/search-based-rag/ - an implementation of RAG using SQLite full-text search (as opposed to embedding vectors), built on https://www.val.town/