
rossamurphy | 1 year ago

Moving an internal ML project from "a quick demo on localhost" to "deployed in production" is hard. We think latency is one of the biggest problems, so we built OneContext to solve it. We launched today. Would love your feedback + feature requests!


harindirand | 1 year ago

Looks super interesting! This could be really helpful for us. Will drop your team a note :)