trolan|1 year ago
For a few uni/personal projects I noticed the same about LangChain: it's good at helping you use up tokens. The other use case, quickly switching between models, is still a valid reason to use it. However, I've recently started playing with OpenRouter, which seems to abstract the model nicely.
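To make the "abstracts the model" point concrete: OpenRouter exposes an OpenAI-compatible chat-completions endpoint, so switching providers is just changing one model string. A stdlib-only sketch, assuming OpenRouter's documented endpoint and model ID format (the request is built but not sent here):

```python
import json
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(model: str, prompt: str, api_key: str) -> urllib.request.Request:
    """Build (but do not send) an OpenAI-style chat-completions request."""
    payload = {
        "model": model,  # swap this one string to switch providers
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        OPENROUTER_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

# Same code path, different vendors; only the model string changes:
req_a = build_request("openai/gpt-4o", "Hello", "sk-...")
req_b = build_request("anthropic/claude-3.5-sonnet", "Hello", "sk-...")
```

Sending either request with `urllib.request.urlopen` (or any HTTP client) returns the same response shape, which is the whole appeal versus juggling per-vendor SDKs.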
lordofmoria|1 year ago
I think we now know, collectively, a lot more about what’s annoying/hard about building LLM features than we did when LangChain was being furiously developed.
And some things we thought would be important and not easy turned out to be very easy, like getting GPT to give back well-formed JSON.
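"Mostly easy" in practice still meant a small defensive parse, since models would occasionally wrap the JSON in prose. A stdlib-only sketch of that pattern (the helper name is mine, not from any library):

```python
import json
import re

def parse_model_json(text: str):
    """Parse a model reply as JSON, falling back to the first
    {...} span if the model wrapped the object in extra prose."""
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        pass
    # Fall back: grab the widest brace-delimited span in the text.
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if match:
        return json.loads(match.group(0))
    raise ValueError("no JSON object found in model output")

# Handles a clean reply and a chatty one alike:
parse_model_json('{"ok": true}')
parse_model_json('Sure, here you go: {"ok": true} Hope that helps!')
```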
So I think there’s lots of room.
One thing LangChain is doing now that solves something that IS very hard/annoying is testing. I spent 30 minutes yesterday re-running a slow prompt because 1 in 5 runs would produce weird output. After each tweak to the prompt, I had to run it at least 10 times to be reasonably sure it was an improvement.
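That brute-force loop is easy to script yourself, even before reaching for a framework. A hedged sketch, with a stubbed model call standing in for the real (slow) prompt; all names here are illustrative:

```python
import random

def flaky_model(prompt: str) -> str:
    """Stub for a real LLM call; 'weird output' roughly 1 in 5 runs."""
    return "weird" if random.random() < 0.2 else "expected"

def pass_rate(prompt: str, model=flaky_model, runs: int = 10) -> float:
    """Re-run the same prompt and report the fraction of good outputs,
    so two prompt variants can be compared on more than one sample."""
    good = sum(1 for _ in range(runs) if model(prompt) == "expected")
    return good / runs

random.seed(0)  # make the comparison repeatable across tweaks
rate = pass_rate("my tweaked prompt", runs=20)
```

Swapping the equality check for a real validator (does the output parse? does it contain the required fields?) turns this into a crude but honest prompt regression test.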