Show HN: Open-Source Colab Notebooks to Implement Advanced RAG Techniques
98 points | hbamoria | 1 year ago | github.com
We’ve seen developers spend a lot of time implementing advanced RAG techniques from scratch.
While these techniques are essential for good performance, implementing them takes significant effort and testing!
To help with this process, our team (Athina AI) has released Open-Source Advanced RAG Cookbooks.
This is a collection of ready-to-run Google Colab notebooks featuring the most commonly implemented techniques.
Please show us some love by starring the repo if you find this useful!
Oras|1 year ago
Is there a tool/technique to achieve this? I’m aware that I can use LLMs to do so, or read all pages and find identical text (header/footer), but I want to keep the page number as part of the metadata to ensure better citation on retrieval.
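One framework-free way to do what the comment describes (a sketch, not a specific tool recommendation): count how often each exact line repeats across pages, drop lines that appear on most pages (likely headers/footers), and carry the page number along as metadata for citation. The function name and dict shape here are illustrative, not from any library:

```python
from collections import Counter

def strip_repeated_lines(pages, threshold=0.8):
    """Drop lines that repeat on most pages (likely headers/footers),
    keeping each page number as metadata for better citations."""
    # Count how many pages each exact line appears on
    counts = Counter(line for text in pages for line in set(text.splitlines()))
    cutoff = threshold * len(pages)
    docs = []
    for page_no, text in enumerate(pages, start=1):
        kept = [l for l in text.splitlines() if counts[l] < cutoff]
        docs.append({"text": "\n".join(kept), "metadata": {"page": page_no}})
    return docs
```

This keeps retrieval-time citations simple: each chunk you index already knows which page it came from.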
jonathan-adly|1 year ago
ColPali is the standard implementation & SOTA. Much better than OCR. We maintain a ready-to-go retrieval API that implements this: https://github.com/tjmlabs/ColiVara
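For context on what ColPali-style retrieval does differently: instead of one vector per document, it keeps many vectors per page (one per image patch) and scores with late interaction — each query token takes its max similarity over the page's patch vectors, then the maxima are summed. A toy sketch of that scoring step (the vectors here are made up; real systems get them from the ColPali model):

```python
import numpy as np

def maxsim_score(query_vecs, page_vecs):
    """Late-interaction (ColBERT/ColPali-style) scoring:
    for each query token vector, take its max similarity over the
    page's patch vectors, then sum over query tokens."""
    sims = query_vecs @ page_vecs.T          # (n_query_tokens, n_patches)
    return sims.max(axis=1).sum()
```

Pages are then ranked by this score against the query, rather than by a single cosine similarity.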
jonathan-adly|1 year ago
It is abstraction hell, and will set you back thousands of engineer-hours the moment you want to do something differently.
RAG is actually a very simple thing to do; there's just too much VC money in the space and too many complexity merchants.
The best way to learn is outside of notebooks (the hard parts of RAG are all around the actual product), using as few frameworks as possible.
My preferred stack is FastAPI/numpy/redis. Simple as pie. You can swap redis for pgVector/Postgres when you're ready for the next complexity step.
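The "simple" core the comment alludes to really is small. A minimal sketch in that spirit — numpy for similarity search, a plain in-memory list standing in for redis, and a placeholder `embed()` you would replace with a real embedding model or API (everything here is illustrative, not the commenter's actual code):

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder: a real system calls an embedding model here.
    # This just derives a deterministic (per-process) unit vector.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(64)
    return v / np.linalg.norm(v)

class TinyRAG:
    def __init__(self):
        self.docs, self.vecs = [], []

    def add(self, text: str):
        self.docs.append(text)
        self.vecs.append(embed(text))

    def retrieve(self, query: str, k: int = 3):
        # Cosine similarity (vectors are unit-normalized), brute force
        sims = np.stack(self.vecs) @ embed(query)
        top = np.argsort(sims)[::-1][:k]
        return [self.docs[i] for i in top]
```

Wrap `retrieve()` in a FastAPI endpoint and feed the results into your LLM prompt, and you have the retrieval half of RAG with no framework at all.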
ellisv|1 year ago
My experience with LangChain has been a mixed bag. On the one hand it has been very easy to get up and running quickly. Following their examples actually works!
Trying to go beyond the examples to mix and match concepts was a real challenge because of the abstractions. As with any young framework in a fast-moving field, the concepts and abstractions change quickly, so the documentation shows multiple ways to do something without making it clear which is the "right" way.
jackmpcollins|1 year ago
[0] https://magentic.dev/examples/rag_github/
Jet_Xu|1 year ago
Has anyone successfully implemented a language-agnostic approach that can:
1. Capture implicit code relationships without heavy LLM dependency?
2. Scale efficiently for large monorepos while preserving fine-grained semantic links?
3. Handle cross-module dependencies and version evolution?
Current solutions like AST-based analysis + traditional embeddings seem to miss crucial semantic contexts. Curious about others' experiences with hybrid approaches combining static analysis and lightweight ML models.
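For the static-analysis half of such a hybrid, the relationship extraction itself is cheap. A minimal Python-only sketch (so not language-agnostic, and the function is my own illustration, not a known library) that builds a call graph with the `ast` module; embeddings would then supply the semantic half the comment says AST analysis alone misses:

```python
import ast

def call_graph(source: str) -> dict:
    """Map each function defined in `source` to the set of simple
    names it calls -- one kind of implicit code relationship that
    static analysis captures without any LLM involvement."""
    tree = ast.parse(source)
    graph = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            graph[node.name] = {
                n.func.id
                for n in ast.walk(node)
                if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)
            }
    return graph
```

Edges like these can be stored alongside embedding vectors so retrieval can expand from a semantically matched function to its callers/callees — one common shape for the "hybrid approach" asked about.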
dmezzetti|1 year ago
If you want notebooks that do some of this with local open models: https://github.com/neuml/txtai/tree/master/examples and here: https://gist.github.com/davidmezzetti