top | item 39509092


RockCoach | 2 years ago

Have you tried tweaking parameters like temperature, top_p, or seed value when sending the API request?

Beyond that, and given the probabilistic nature of LLM responses, I'm not sure how a reproducible "matching" between the chat interface and the API could be achieved.
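For reference, the parameters mentioned above can be pinned in the request body along these lines. This is just a sketch that builds the JSON payload (parameter names follow the OpenAI chat completions API; the model name and helper function are illustrative) — note that even `seed` only gives best-effort determinism, not a guarantee:

```python
import json

def build_chat_request(prompt: str, seed: int = 42) -> dict:
    """Build a chat-completion request body with sampling parameters
    pinned to reduce run-to-run variance. Model name is illustrative."""
    return {
        "model": "gpt-4-turbo",
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0,  # near-greedy decoding: less variance between runs
        "top_p": 1,        # keep the full token distribution
        "seed": seed,      # best-effort determinism only
    }

body = build_chat_request("Summarize this paragraph.")
print(json.dumps(body, indent=2))
```

Even with all three pinned, model-side updates can still change outputs over time, which is part of why exact chat/API matching is hard.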

I work primarily with the API through my own wrapper, and I've noticed that I tend to give less detailed instructions than when I'm using the OpenAI chat interface, which often results in a less accurate response.

mikerg87 | 2 years ago

Thanks. I will. I must confess I haven't tried top_p or the seed value. My naive belief was that the defaults in the API matched the OOTB experience of the web interface. Shame on me.