item 38168584

conorh | 2 years ago

We just switched a project we've been working on over to the new gpt-4-turbo model and it is MUCH faster. I don't know whether this is just a function of how many people are using it yet, but streaming a response for the prompts we care about went from 40-50 seconds to 6 seconds.
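That kind of speedup is easy to measure yourself by timing the stream as you consume it. A minimal sketch below: all names are hypothetical, and a fake generator stands in for the actual API stream, which you would swap in (e.g. an OpenAI chat-completions stream with `stream=True`):

```python
import time
from typing import Iterable, Iterator, Tuple

def time_stream(chunks: Iterable[str]) -> Tuple[str, float, float]:
    """Consume a stream of text chunks.

    Returns (full_text, time_to_first_chunk, total_time), the two
    latency numbers people usually care about when streaming.
    """
    start = time.perf_counter()
    first: float | None = None
    parts = []
    for chunk in chunks:
        if first is None:
            first = time.perf_counter() - start  # time to first token
        parts.append(chunk)
    total = time.perf_counter() - start
    return "".join(parts), first if first is not None else total, total

def fake_stream(n: int = 5, delay: float = 0.01) -> Iterator[str]:
    # Stand-in for a real model stream; replace with the API iterator.
    for i in range(n):
        time.sleep(delay)
        yield f"token{i} "

text, ttft, total = time_stream(fake_stream())
print(f"first chunk after {ttft:.3f}s, finished in {total:.3f}s")
```

With a real API stream, comparing `total` across models (gpt-4 vs. gpt-4-turbo) on the same prompt gives a like-for-like number for the wall-clock difference described above.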

0xDEF | 2 years ago

I noticed that too, but I think it's because we are hitting new servers that just went online. They will probably get saturated and slow down over time as other gpt-4 users start switching to gpt-4-turbo.