
Most ML applications are just request routers

10 points | rossamurphy | 1 year ago | onecontext.ai

5 comments


rossamurphy | 1 year ago

Moving an internal ML project from "a quick demo on localhost" to "deployed in production" is hard. We think latency is one of the biggest problems, and we built OneContext to solve it. We launched today. Would love your feedback + feature requests!

harindirand | 1 year ago

Looks super interesting! This could be super helpful for us. Will drop your team a note :)

cwmdo | 1 year ago

“Simply by cutting out the network latency between the steps, OneContext reduces the pipeline execution time by 57%.”

How does this fit in with a barebones langchain/bedrock setup?

georgespencer | 1 year ago

Amazing! Congrats on launching. Company motto: "dumb enough to actually have attempted this already".

the_async | 1 year ago

Seems like a great product!