Valid3840 | 2 months ago
This has been requested consistently since early 2023 on the OpenAI community forum, with hundreds of comments and upvotes and deleted threads, yet remains unimplemented.
Can any of you think of a reason (UX-wise) for it not to be displayed?
Workaccount2|2 months ago
Not a joke. To capture a wide audience you want to avoid numbers, among other technical niceties.
madeofpalk|2 months ago
Surely there are enough people working in product development here to recognise this pattern of never getting around to fixing low-hanging fruit in a product.
lofaszvanitt|2 months ago
Hogwash.
vasco|2 months ago
What does this even mean?
Qem|2 months ago
I can imagine a legal one. If the LLM messes big time[1], timestamps could help build the case against it, and make investigation work easier.
[1] https://www.ap.org/news-highlights/spotlights/2025/new-study...
qazxcvbnmlp|2 months ago
I.e. “remember on Tuesday how you said that you were going to make tacos for dinner”.
Would an LLM be able to reason about its internal state? My understanding is that they don't, really. If you correct them they just go “ah, you're right”; they don't say “oh, I had this incorrect assumption before, and with this new information I now understand it this way”.
If I chatted with an LLM and said “remember on Tuesday when you said X”, I suspect it wouldn't really flow.
eth0up|2 months ago
User Engagement Maximization At Any Cost
Obviously there's a point at which a session becomes too long, but I suspect there's a sweet spot somewhere that's being optimized for.
I often observe (whether accurately or not) that among the multiple indicators I suspect of engagement augmentation is a tendency for vital information to be withheld, while longer, more complex procedures receive higher priority than simpler, cleaner solutions.
Of course, all sorts of emergent behaviors could convey such impressions falsely. But I do believe an awful lot of psychology and clever manipulation has been provided as tools for the system.
I have a lot of evidence for this and much more, but I realize it may merely be coincidence. That said, many truly fascinating, fully identifiable patterns from pathological psychology can be seen: DARVO, gaslighting, and basically everything one would see with a psychotic interlocutor.
Edit: Much of the above was observed after putting the system under scrutiny. On one super astonishing and memorable occasion, GPT recommended I call a suicide hotline because I questioned its veracity and logic.
CompuHacker|2 months ago
[1] <https://github.com/asgeirtj/system_prompts_leaks/blob/main/O...>
[2] <https://model-spec.openai.com/2025-02-12.html>
soulofmischief|2 months ago
It's irresponsible for OpenAI to let this issue be solved by extensions.
QuantumNomad_|2 months ago
https://github.com/Hangzhi/chatgpt-timestamp-extension
https://chromewebstore.google.com/detail/kdjfhglijhebcchcfkk...
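Extensions like these inject timestamps client-side. A minimal userscript sketch of the same idea follows; note that the `[data-message-id]` selector and the label styling are assumptions about ChatGPT's markup, not something the page documents, so they would need adjusting:

```javascript
// Sketch: stamp each newly rendered chat message with the local time it
// appeared. This records render time, not the server-side send time.
function formatTimestamp(date) {
  const hh = String(date.getHours()).padStart(2, "0");
  const mm = String(date.getMinutes()).padStart(2, "0");
  const month = date.toLocaleString("en-US", { month: "short" });
  return `${hh}:${mm} · ${month} ${date.getDate()}`;
}

// DOM wiring only runs in a browser; guarded so the sketch is inert elsewhere.
if (typeof document !== "undefined") {
  const stamp = () => {
    // Hypothetical selector for message containers.
    for (const msg of document.querySelectorAll("[data-message-id]")) {
      if (msg.dataset.stamped) continue; // avoid double-stamping
      msg.dataset.stamped = "1";
      const label = document.createElement("div");
      label.textContent = formatTimestamp(new Date());
      label.style.cssText = "font-size:11px;opacity:0.6";
      msg.appendChild(label);
    }
  };
  // Re-run whenever the chat DOM changes (new messages streaming in).
  new MutationObserver(stamp).observe(document.body, {
    childList: true,
    subtree: true,
  });
  stamp();
}
```

One caveat the thread's extensions share: because the page doesn't expose when a message was actually sent, any client-side approach can only label messages from the moment the script starts observing.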