top | item 39087199

airgapstopgap | 2 years ago

Note that we have no reason to believe that the underlying LLM inference process has suffered any setbacks. Obviously it has generated some logits. The question is how OpenAI's server is configured and what inference-optimization tricks they're using.
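To make the distinction concrete: "generating logits" is the model's job, while everything after that (temperature, sampling, streaming) is serving configuration. A minimal sketch of that post-logits sampling step, using only the stdlib and standard conventions (greedy decoding at temperature 0, softmax sampling otherwise) rather than anything specific to OpenAI's stack:

```python
import math
import random

def sample_next_token(logits, temperature=1.0, seed=None):
    """Turn raw logits into a probability distribution and sample a token id.

    Generic sketch of the decoding step that sits between the model and the
    server's output stream; not OpenAI's actual serving code.
    """
    if temperature == 0:
        # Greedy decoding: always pick the highest-scoring token.
        return max(range(len(logits)), key=lambda i: logits[i])
    # Softmax with temperature, shifted by the max logit for numerical stability.
    m = max(logits)
    exps = [math.exp((x - m) / temperature) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    rng = random.Random(seed)
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]
```

The point is that a bug or config change anywhere in this layer (or in the chunking and transport below it) can change observable output without the model itself having regressed.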

febeling | 2 years ago

In my imagination, the operation of this server is very uniform: it just emits chunks of string. That the content of those strings can disrupt it and trigger an edge case is what I find puzzling.
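That "uniform chunk-emitting" loop can be sketched as a generator producing Server-Sent Events frames, one text delta per chunk. This assumes the common chat-completions-style SSE convention (`data:` lines, a `choices[0].delta.content` payload, a final `[DONE]` sentinel); the field names are illustrative, not a spec:

```python
import json

def sse_chunks(token_strings, model="example-model"):
    """Yield SSE frames the way a streaming completions endpoint might,
    one content delta per chunk. Illustrative sketch only."""
    for tok in token_strings:
        payload = {"model": model, "choices": [{"delta": {"content": tok}}]}
        # Each frame is a "data:" line followed by a blank line.
        yield f"data: {json.dumps(payload)}\n\n"
    # Conventional end-of-stream sentinel.
    yield "data: [DONE]\n\n"
```

Seen this way, the loop really is content-agnostic, which is why content-dependent failures suggest something extra in the pipeline: per-chunk post-processing, filtering, or tokenization quirks, rather than the transport itself.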