
GPT-5 System Prompt?

36 points | georgehill | 6 months ago | github.com

15 comments

dgreensp|6 months ago

> Never place rich UI elements within a table, list, or other markdown element.

> Place rich UI elements within tables, lists, or other markdown elements when appropriate.

crazygringo|6 months ago

How does a prompt this long affect resource usage?

Does inference need to process this whole thing from scratch at the start of every chat?

Or is there some way to cache the state of the LLM after processing this prompt, before the first user token is received, so that every request starts from this cached state?
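
That trick exists in open-source serving stacks under the name "prefix caching". A minimal sketch of the idea with Hugging Face transformers; the model and prompt are toy stand-ins, and nothing here reflects OpenAI's actual serving setup:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Toy stand-ins; the real system prompt would be ~13k tokens.
    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2").eval()
    system_prompt = "You are a helpful assistant."

    # Pay the prefill cost for the system prompt exactly once,
    # keeping the per-layer key/value attention cache.
    with torch.no_grad():
        prefix_ids = tok(system_prompt, return_tensors="pt").input_ids
        cached = model(prefix_ids, use_cache=True).past_key_values

    # Each new chat resumes from the cached state, so only the
    # user's tokens are processed from scratch.
    user_ids = tok(" Hi there!", return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(user_ids, past_key_values=cached, use_cache=True)

In practice a server copies the cache per request before extending it, since reusing one cache object in place mutates it in recent transformers versions; vLLM ships this feature as automatic prefix caching.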

mdaniel|6 months ago

It's a good thing people were enamored of how inexpensive GPT-5 is, given that the system prompt is (allegedly) 54 kB. I don't know how many tokens that is offhand, but that's a lot of them to burn just on setting the thing up.

Tadpole9181|6 months ago

54,000 bytes at roughly one byte per character, and about 4 characters per token, works out to around 13,500 tokens.

These are NOT included in the model context size for pricing.
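
A quick sanity check on that arithmetic; tiktoken's o200k_base encoding is an assumption here, since the actual GPT-5 tokenizer isn't public:

    import tiktoken

    enc = tiktoken.get_encoding("o200k_base")  # GPT-4o-era encoding
    text = open("system_prompt.txt").read()    # the alleged ~54 kB prompt
    print(len(text.encode()), "bytes ->", len(enc.encode(text)), "tokens")

    # Rule of thumb: ~4 characters per token
    print(54_000 / 4)  # 13500.0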

btdmaster|6 months ago

I might be wrong, but can't you checkpoint the post-system prompt model and restore from there, trading memory for compute? Or is that too much extra state?
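
The state in question is just the per-layer key/value tensors, so it serializes fine; the catch is size. A back-of-the-envelope sketch, with all dimensions made up since GPT-5's architecture isn't public:

    import torch

    # Hypothetical dimensions for a mid-sized dense transformer.
    n_layers, n_heads, head_dim = 24, 16, 128
    prefix_len = 13_000  # the system prompt, per the estimate above

    cache = tuple(
        (torch.zeros(1, n_heads, prefix_len, head_dim),   # keys
         torch.zeros(1, n_heads, prefix_len, head_dim))   # values
        for _ in range(n_layers)
    )
    torch.save(cache, "prefix_kv.pt")       # checkpoint once
    restored = torch.load("prefix_kv.pt")   # restore per request

    # 2 tensors/layer * layers * heads * len * dim * 4 bytes (fp32)
    size = 2 * n_layers * n_heads * prefix_len * head_dim * 4
    print(f"{size / 1e9:.1f} GB")  # ~5.1 GB for this toy config

So it is possible, but one long shared prefix can cost gigabytes of cache per replica, which is why serving stacks keep it in GPU or CPU memory rather than reloading it from disk.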

TZubiri|6 months ago

These are always so embarrassing.

NewsaHackO|6 months ago

It's because they always include things that seem way too specific to particular issues, like riddles and arithmetic. Also, I am not a WS, but the mention of "proud boys" is the kind of thing that can be used as fodder for claims of LLM bias. I wonder why they even have to use a system prompt; why can't they have a separate fine-tuned model for ChatGPT specifically, so that they don't need one?