Darshan009 | 6 days ago
I have experience building low-latency backend and AI inference systems in Python and C++, including streaming pipelines and optimized model serving at scale. I have worked on sub-100 ms inference systems and tuned LLM pipelines for throughput and reliability in production.
I'm very comfortable shipping MVPs fast, working async, and handling real-time performance tradeoffs. Would love to connect and learn more.
darshangvaghasiya0@gmail.com