Could someone point me towards a good resource for learning how to build a RAG app without LangChain or LlamaIndex? It's hard to find good information.
My strategy has been to follow along with LlamaIndex, dig into the details, and then re-implement them in a less abstracted, more understandable codebase/workflow.
I was driven to do so because overriding a prompt was not as easy as I'd like. You can see how they construct the various prompts for the agents; it's pretty basic text-templating stuff.
OpenAI Cookbook! Instructor is a decent library that can help with the annoying parts without abstracting away the whole API call; see its docs for RAG examples.
turnsout|1 year ago
- Read in the user's input
- Use that to retrieve data that could be useful to an LLM (typically by doing a pretty basic vector search)
- Stuff that data into the prompt (literally insert it at the beginning of the prompt)
- Add a few lines to the prompt that state "hey, there's some data above. Use it if you can."
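The steps above can be sketched in a few lines of plain Python. This is a toy, self-contained illustration: the `embed()` bag-of-words function and the in-memory document list are stand-ins for a real embedding model and vector store.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": word counts. Swap in a real embedding model here.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Stand-in for a vector store: just a list of strings.
documents = [
    "RAG retrieves relevant documents and stuffs them into the prompt.",
    "Vector search ranks documents by embedding similarity.",
    "LLMs generate text one token at a time.",
]

def build_prompt(user_input: str, top_k: int = 2) -> str:
    # 1. Read the user's input; 2. retrieve via a basic vector search.
    query = embed(user_input)
    ranked = sorted(documents, key=lambda d: cosine(query, embed(d)), reverse=True)
    context = "\n".join(ranked[:top_k])
    # 3. Stuff the retrieved data at the beginning of the prompt;
    # 4. add a line telling the model to use it.
    return (
        f"{context}\n\n"
        "There is some data above. Use it if you can.\n\n"
        f"Question: {user_input}"
    )

prompt = build_prompt("How does vector search work?")
```

The resulting `prompt` string is what you'd send to the LLM; everything else (chunking, reranking, caching) is optional refinement on top of this loop.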
kolinko|1 year ago
krawczstef|1 year ago
[Disclaimer: I created Hamilton & Burr, both whitebox frameworks.] See https://www.reddit.com/r/LocalLLaMA/comments/1d4p1t6/comment... for a comment about Burr.
verdverm|1 year ago
d13|1 year ago
https://developers.cloudflare.com/workers-ai/tutorials/build...
sveinek|1 year ago
fsndz|1 year ago
bestcoder69|1 year ago