
Serverless RL: Faster, Cheaper and More Flexible RL Training

9 points | slewis | 4 months ago | openpipe.ai

3 comments


Arctic_fly | 4 months ago

Interesting post. Did the difference in wall clock training time take the reduction in cold start time into account? Seems like that could be a significant factor for small jobs and negligible for large ones.

altryne1 | 4 months ago

Will the rate limits go higher? How about other models? Qwen 2.5 is nice, but 3 is nicer.

cmatrub | 4 months ago

Higher abstraction than Tinker, more flexible than OpenAI RFT. I like the integration with production inference, so I can switch between training and inference for continuous learning.