conorh | 2 years ago
We just switched a project we've been working on over to the new gpt-4-turbo model, and it is MUCH faster. I don't know whether this is a function of how many people are using it, but streaming a response for the prompts we care about went from 40-50 seconds down to about 6 seconds.