top | item 46391473


Valid3840 | 2 months ago

ChatGPT still does not display per-message timestamps (time of day / date) in conversations.

This has been requested consistently since early 2023 on the OpenAI community forum, with hundreds of comments, upvotes, and deleted threads, yet it remains unimplemented.

Can any of you think of a reason (UX-wise) for it not to be displayed?


Workaccount2|2 months ago

Regular people hate numbers.

Not a joke. To capture a wide audience you want to avoid numbers, among other technical niceties.

madeofpalk|2 months ago

Isn't it just simpler to believe that ChatGPT doesn't have timestamps because... they never added them? It wasn't in the original MVP prototype and they've just never gotten around to it?

Surely there's enough people working in product development here to recognise this pattern of never getting around to fixing low-hanging fruit in a product.

johnfn|2 months ago

Do regular people not use any mainstream messaging app - Messenger, iMessage, etc?

dymk|2 months ago

This makes sense only if you don’t think about it at all.

smelendez|2 months ago

Make it a toggle then, like a lot of popular chat apps?

lofaszvanitt|2 months ago

UX/UI research, if it exists at all, is akin to religious healers who touch you on the head and, bam, you can suddenly walk after spending 25 years in a wheelchair.

Hogwash.

littlestymaar|2 months ago

It must be false, because if that were true, marketing people would not be putting numbers everywhere when naming products.

GaryBluto|2 months ago

Makes sense. ChatGPT is the McDonalds of LLMs.

drdaeman|2 months ago

I’m sorry but this really sounds like a made-up idea. Is there any actual repeatable research that could back this claim?

vasco|2 months ago

> Regular people hate numbers

What does this even mean?

bobse|2 months ago

Should they be allowed anywhere near a computer?

qazxcvbnmlp|2 months ago

We humans use timestamps in conversations to reference a person's particular state of mind at a given point in time.

I.e., "remember on Tuesday how you said that you were going to make tacos for dinner".

Would an LLM be able to reason about its internal state? My understanding is that they don't, really. If you correct them, they just go "ah, you're right"; they don't say "oh, I had an incorrect assumption here before, and with this new information I now understand it this way".

If I chatted with an LLM and said "remember on Tuesday when you said X", I suspect it wouldn't really flow.

milowata|2 months ago

It's better for them if you don't know how long you've been talking to the LLM. Timestamps can remind you that it's been 5 hours; without them you'll think less about timing and just keep going.

sh4rks|2 months ago

Ah, the casino tactic

eth0up|2 months ago

My honest opinion, which may be entirely wrong but remains my impression, is:

User Engagement Maximization At Any Cost

Obviously there's a point at which a session becomes too long, but I suspect there's a sweet spot somewhere that is being optimized for.

I often observe (whether it's real or just my perception) that among the multiple indicators I suspect of engagement augmentation is a tendency for vital information to be withheld, while longer, more complex procedures receive higher priority than simpler, cleaner solutions.

Of course, all sorts of emergent behaviors could convey such impressions falsely. But I do believe an awful lot of psychology and clever manipulation has been provided as tooling for the system.

I have a lot of evidence for this and much more, but I realize it may merely be coincidence. That said, many truly fascinating, fully identifiable patterns from pathological psychology can be seen: DARVO, gaslighting, and basically everything one would see with a psychotic interlocutor.

Edit: Much of the above has been observed after putting the system under scrutiny. On one super astonishing and memorable occasion, GPT recommended I call a suicide hotline because I questioned its veracity and logic.

CompuHacker|2 months ago

After whatever quota of free GPT-5 messages is exhausted, `mini` answers most replies, unless they're policy-sensitive, in which case they get full-fat `GPT-5 large` with the Efficient personality applied, regardless of user settings, and without indication. I'm fairly confident that this routing choice, the text of Efficient [1], and the training of the June 2024 base model to the model spec [2] are the source of all the sophistic behavior you observe.

[1] <https://github.com/asgeirtj/system_prompts_leaks/blob/main/O...>

[2] <https://model-spec.openai.com/2025-02-12.html>
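The routing described above could be sketched roughly as follows. This is a hypothetical illustration of the comment's claim only: the model names, the quota field, and the `isPolicySensitive` check are assumptions drawn from the comment, not any documented OpenAI behavior or API.

```javascript
// Placeholder heuristic; the real classifier (if any exists) is unknown.
function isPolicySensitive(message) {
  return /self-harm|violence|medical/i.test(message);
}

// Hypothetical sketch of the routing the comment describes.
function routeMessage(user, message) {
  if (user.freeQuotaRemaining > 0) {
    // Within the free quota: full model, user's own personality setting.
    user.freeQuotaRemaining -= 1;
    return { model: "gpt-5", personality: user.personality };
  }
  if (isPolicySensitive(message)) {
    // Full model, but with the Efficient personality forced on,
    // ignoring the user's setting and with no indication to the user.
    return { model: "gpt-5-large", personality: "Efficient" };
  }
  // Everything else falls through to the cheaper model.
  return { model: "gpt-5-mini", personality: user.personality };
}
```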

intrasight|2 months ago

Sounds like an easy browser extension
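A content-script along these lines could stamp messages as they arrive. This is a sketch under assumptions: the `[data-message-id]` selector is a guess, not ChatGPT's actual DOM structure, and the extension can only record when it *saw* a message, since the page exposes no per-message timestamps.

```javascript
// Format a Date as a short local time string, e.g. "14:05" (locale-dependent).
function formatStamp(date) {
  return date.toLocaleTimeString([], { hour: "2-digit", minute: "2-digit" });
}

// Watch for newly added message nodes and prepend a timestamp to each.
// The default selector is a hypothetical placeholder.
function stampNewMessages(root, selector = "[data-message-id]") {
  const observer = new MutationObserver((mutations) => {
    for (const m of mutations) {
      for (const node of m.addedNodes) {
        if (node.nodeType === 1 && node.matches?.(selector)) {
          const tag = document.createElement("time");
          tag.textContent = formatStamp(new Date());
          node.prepend(tag);
        }
      }
    }
  });
  observer.observe(root, { childList: true, subtree: true });
  return observer; // caller can .disconnect() later
}
```

In an extension this would run on page load, e.g. `stampNewMessages(document.body)` from a content script.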

bloqs|2 months ago

Stop using the product until the product's creators at least demonstrate that they listen. They have never been in a riskier position.