bt1a | 24 days ago

This is most likely an inference-serving problem of capacity and latency: in the API, Opus X has always responded quickly, while the latest GPT models have always responded slowly.
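For anyone who wants to sanity-check that latency comparison themselves, here's a minimal Python sketch for timing repeated calls; time_call and the wrapped closure are placeholder names I'm assuming, not any particular vendor's SDK, and this measures full-response wall-clock time rather than time-to-first-token:

    import time
    import statistics

    def time_call(fn, n=5):
        # Run fn n times; return (median, min, max) wall-clock latency in seconds.
        samples = []
        for _ in range(n):
            start = time.perf_counter()
            fn()  # zero-arg closure wrapping whichever SDK call you use
            samples.append(time.perf_counter() - start)
        return statistics.median(samples), min(samples), max(samples)

    # e.g. med, lo, hi = time_call(lambda: some_sdk_call())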
