top | item 37052114


mtricot | 2 years ago

Great question :) We want to get to value as fast as possible. I am certain that at some point we will need to go deeper with those integrations, and they will likely need to be separate destinations. It will also depend on how they differ from each other, as we will need more granular configuration.

zarazas | 2 years ago

I am playing around with LangChain these last few days as well, and as far as I can tell, all LangChain really does for you is provide a guideline for the recommended steps of a vector-assisted LLM workflow. In your example it actually just adds some text to the prompt, like: "Answer the following question with the context provided here. If you don't find the right info, don't make something up" — something along those lines.
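The "prompt stuffing" step described above can be sketched in a few lines of plain Python. This is an illustrative assumption of the pattern, not LangChain's actual implementation; the function name and prompt wording are made up:

```python
def build_rag_prompt(question: str, retrieved_chunks: list[str]) -> str:
    """Assemble a retrieval-augmented prompt: retrieved context first,
    then the user's question, with an instruction not to hallucinate."""
    context = "\n\n".join(retrieved_chunks)
    return (
        "Answer the following question using only the context provided.\n"
        "If the answer is not in the context, say you don't know; "
        "do not make something up.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )

# Example: chunks would normally come from a vector-store similarity search.
prompt = build_rag_prompt(
    "What port does the service listen on?",
    ["The service listens on port 8080 by default."],
)
print(prompt)
```

The retrieval step (embedding the question and fetching the nearest chunks) happens before this; the final string is what actually reaches the LLM.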

mritchie712 | 2 years ago

Have you considered supporting pgvector? I'd imagine that'd be easier, since you already have pg as a destination.
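For context, pgvector adds a `vector` column type and distance operators (e.g. `<->` for L2 distance) to Postgres, so an embeddings destination could reuse an existing pg connector. A hedged sketch of the SQL such a destination might emit — table and column names here are assumptions, and the database must have run `CREATE EXTENSION vector`:

```python
def pgvector_statements(table: str, dims: int, k: int) -> tuple[str, str]:
    """Build illustrative pgvector DDL and a k-nearest-neighbor query.

    Returns (ddl, query); `query` expects the probe vector as a bound
    parameter (e.g. '[0.1, 0.2, ...]' via a driver like psycopg).
    """
    ddl = (
        f"CREATE TABLE IF NOT EXISTS {table} ("
        "id BIGSERIAL PRIMARY KEY, "
        "document TEXT, "
        f"embedding vector({dims}))"
    )
    # `<->` is pgvector's L2-distance operator; smallest distance first.
    query = (
        f"SELECT document FROM {table} "
        f"ORDER BY embedding <-> %s LIMIT {k}"
    )
    return ddl, query

ddl, query = pgvector_statements("documents", 1536, 5)
```

The 1536 above is only an example dimensionality; the destination would need it as configuration, since the column type fixes the vector size at table-creation time.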

mtricot | 2 years ago

On the roadmap! We want to get more clarity on how the embedding step fits into the ELT model. Once we figure that out, we will add it to PG.