top | item 36327953


BigElephant | 2 years ago

What would you use besides LangChain?


davepeck | 2 years ago

I’ve found it preferable to build directly on top of OpenAI’s API. (I’ve also written a simple API wrapper for llama.cpp-hosted LLMs.) Over time I’ve built a small library of utilities, including one for summarization. It’s not that much code.
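To make the "it's not that much code" claim concrete: this is not davepeck's actual library, just a minimal sketch of the shape such a utility tends to take. All names here are made up, and the model call is abstracted behind a `complete` callable so the same helper works against OpenAI's API, a llama.cpp server, or anything else.

```python
from typing import Callable


def chunk_text(text: str, max_chars: int = 4000, overlap: int = 200) -> list[str]:
    """Split text into overlapping chunks that fit a model's context budget.

    A crude character-based split; a real utility might count tokens instead.
    """
    if max_chars <= overlap:
        raise ValueError("max_chars must exceed overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + max_chars])
        start += max_chars - overlap
    return chunks


def summarize(text: str, complete: Callable[[str], str], max_chars: int = 4000) -> str:
    """Map-reduce summarization: summarize each chunk, then combine the summaries.

    `complete` is any prompt -> completion callable, e.g. a thin wrapper around
    a chat-completions endpoint.
    """
    chunks = chunk_text(text, max_chars=max_chars)
    partials = [complete(f"Summarize the following text:\n\n{c}") for c in chunks]
    if len(partials) == 1:
        return partials[0]
    return complete("Combine these partial summaries into one:\n\n" + "\n\n".join(partials))
```

Injecting the completion function as a parameter also makes the whole thing trivially testable with a fake model, which is one practical advantage of owning this layer yourself.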

I don’t know if this is a spicy or a generally-agreed-upon take: my feeling is that, while LangChain was useful in that it helped the community codify some early intuitions about LLM invocation patterns, it’s basically a grab bag of partially complete, somewhat disconnected utilities. It nods to composability but, in practice, its pieces often don’t fit together. On the Python side, it suffers from poor typing: when creating a chain, it’s often impossible to know what the full set of configuration options is without digging deep into LangChain’s code. It’s catch-as-catch-can whether you can deeply configure specific sub-aspects of a chain.

There are other things I want in my own code at the moment, such as keeping track of how many input/output tokens each of my actions consumes.
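The OpenAI chat API reports token counts in each response's `usage` field (`prompt_tokens` / `completion_tokens`), so per-action tracking needs very little machinery. A hypothetical tracker along those lines (the class and action names are illustrative, not from any real library):

```python
from collections import defaultdict
from dataclasses import dataclass, field


@dataclass
class UsageTracker:
    """Accumulate per-action token counts from OpenAI-style API responses."""

    input_tokens: dict = field(default_factory=lambda: defaultdict(int))
    output_tokens: dict = field(default_factory=lambda: defaultdict(int))

    def record(self, action: str, usage: dict) -> None:
        # OpenAI-style usage dicts carry prompt_tokens and completion_tokens.
        self.input_tokens[action] += usage.get("prompt_tokens", 0)
        self.output_tokens[action] += usage.get("completion_tokens", 0)

    def totals(self) -> tuple[int, int]:
        """Return (total input tokens, total output tokens) across all actions."""
        return sum(self.input_tokens.values()), sum(self.output_tokens.values())
```

Keyed by action name, this doubles as a simple cost monitor once you multiply the totals by a model's per-token pricing.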

I dunno, maybe I’m the only one here. Curious what others think.

_fill | 2 years ago

At the moment we're still using LangChain, but it's quite cumbersome in the long run. The library is developing quickly, and a feature you might expect to work one week might not the next. Have you had better luck with anything else?

thatcherthorn | 2 years ago

I am also interested in the answer to this