Moving an internal ML project from "a quick demo on localhost" to "deployed in production" is hard. We think latency is one of the biggest problems, and we built OneContext to solve it. We launched today. We'd love your feedback and feature requests!
rossamurphy|1 year ago
cwmdo|1 year ago
How does this fit in with a barebones LangChain/Bedrock setup?