This is a bit of a sidetrack, but in case someone is interested in reading their history more easily: my conversations.html export file was ~200 MiB and I wanted something easier to work with, so I've been working on a project to index it and make it searchable.
It uses the pagefind project so it can be hosted on a static host, and I made a fork of pagefind which encrypts the indexes so you can host your private chats wherever and it will be encrypted at rest and decrypted client-side in the browser.
(You still have to trust the server as the html itself can be modified, but at least your data is encrypted at rest.)
One of the goals is to allow me to delete all my data from chatgpt and claude regularly while still having a private searchable history.
It's early but the basics work, and it can handle both chatgpt and claude (which is another benefit as I don't always remember where I had something).
Do you know if this is available in the actual web interface, and just not displayed, or is it just in the data export? If it is in the web, maybe a browser extension would be worth making.
My guess is that including timestamps in messages to the LLM would bias the LLM's responses in material ways, and in ways they don't want, and that showing timestamps to users but not the LLM would create confusion when the user assumes the LLM is aware of them but it isn't. So the simple product-management decision was to just leave them out.
That's no excuse, imho. I see two different endpoints: one for the LLM stream and one for message history (with timestamps).
New timestamps could be added on the front end as new messages start, without polluting the user input, for example.
I could definitely see that being an issue, but like with so many UX decisions, I wish they would at least hide the option somewhere in a settings menu.
I also don't think it would be impossible to give the LLM access to the timestamps through a tool call, so it's not constantly polluting the chat context.
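As a rough illustration of that idea (the message IDs, store, and function name here are all hypothetical, not any vendor's actual API), a timestamp lookup exposed as a tool might look like:

```python
from datetime import datetime, timezone

# Hypothetical message store: the model never sees these timestamps
# unless it explicitly calls the tool, so the chat context stays clean.
MESSAGE_TIMES = {
    "msg_001": datetime(2025, 12, 16, 9, 3, tzinfo=timezone.utc),
    "msg_002": datetime(2025, 12, 17, 10, 26, tzinfo=timezone.utc),
}

def get_message_timestamp(message_id: str) -> str:
    """Tool the model can call on demand instead of having every
    timestamp injected into the prompt."""
    ts = MESSAGE_TIMES.get(message_id)
    if ts is None:
        return "unknown"
    return ts.isoformat()
```

So `get_message_timestamp("msg_002")` returns `"2025-12-17T10:26:00+00:00"`, and the model only pays the token cost when it actually needs the information.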
ChatGPT still does not display per-message timestamps (time of day / date) in conversations.
This has been requested consistently since early 2023 on the OpenAI community forum, with hundreds of comments and upvotes and deleted threads, yet remains unimplemented.
Can any of you think of a reason (UX-wise) for it not to be displayed?
We humans use timestamps in conversations to reference a person's particular frame of reference at a given point in time.
I.e. “remember on Tuesday how you said that you were going to make tacos for dinner”.
Would an LLM be able to reason about its internal state? My understanding is that they don't really. If you correct them they just go “ah, you're right”; they don't say “oh, I had this incorrect assumption before, and with this new information I now understand it this way”.
If I chatted to an LLM and said “remember on Tuesday when you said X”, I suspect it wouldn't really flow.
It’s better for them if you don’t know how long you’ve been talking to the LLM. Timestamps can remind you that it’s been 5 hours; without them you’ll think less about timing and just keep going.
My honest opinion, which may be entirely wrong but remains my impression, is:
User Engagement Maximization At Any Cost
Obviously there's a point at which a session becomes too long, but I suspect there's a sweet spot somewhere that is being optimized for.
I often observe (whether accurately or not) that among the indicators of engagement maximization is a tendency for vital information to be withheld, while longer, more complex procedures receive higher priority than simpler, cleaner solutions.
Of course, all sorts of emergent behaviors could convey such impressions falsely. But I do believe an awful lot of psychology and clever manipulation have been provided as tools for the system.
I have a lot of evidence for this and much more, but I realize it may merely be coincidence. That said, many truly fascinating, fully identifiable patterns from pathological psychology can be seen: DARVO, gaslighting, and basically everything one would see with a psychotic interlocutor.
Edit
Much of the above was observed after putting the system under scrutiny. On one astonishing and memorable occasion, GPT recommended I call a suicide hotline because I questioned its veracity and logic.
ChatGPT to this day does not have a single simplest feature -- fork chat from message.
That's the thing even the most barebones open-source wrappers have had since 2022. Probably even earlier, because the ERP stuff people played with predates ChatGPT by about two years (even if it was very simple).
This is a constant frustration for me with Gemini, especially since things like Deep Research and Canvas mode lock you in, seemingly arbitrarily. LLMs, to my understanding, are Markovian prompt-to-prompt, so I don't see why this is an issue at all.
Just a note to those adding the time to the personalization response: it’s inaccurate. If you have an existing chat, the time is near the last time you had that chat session active. If you open a new one, it can be off by plus or minus 15 minutes for some reason.
I was using a continuous conversation with ChatGPT to keep track of my lifts, and then I realized it never understands what day I'm talking to it. There's no consistency; it might as well be the date of the first message you sent.
Claude's web interface has an elegant solution. When you roll the mouse over one of your prompts, it has the abbreviated date in the row of Retry/Edit/Copy icons, e.g. "Dec 17". Then if you roll the mouse over that date, you get the full date and time, e.g. "Dec 17, 2025, 10:26 AM".
This keeps the UI clean, but makes it easy to get the timestamp when you want it.
Claude's mobile app doesn't have this feature. But there is a simple, logical place to put it. When you long-press one of your prompts, it pops up a menu and one line could be added to it:
Dec 17, 2025, 10:26 AM [I added this here]
Copy Message
Select Text
Edit
ChatGPT could simply do the same thing for both web and mobile.
Beyond the lack of timestamps, ChatGPT produces oddly formatted text when you copy answers. It’s neither proper markdown nor rich text. The formatting is consistently off: excessive newlines between paragraphs, strangely indented lists, and no markdown support whatsoever.
I regularly use multiple LLM services including Claude, ChatGPT, and Gemini, among others. ChatGPT’s output has the most unusual formatting of them all. I’ve resorted to passing answers through another LLM just to get proper formatting.
My biggest complaint about ChatGPT is how slow their interface is when conversations get long. This is surprising to me given that it's just rendering chats.
It's not enough to turn me off using it, but I do wish they prioritized improving their interface.
They must have a small team for the UI, and probably don't consider it part of their goals for long-term profitability? UI enhancements like this are surprisingly slow for a company with this much funding.
The only (silly) reason I can think of is that a non-trivial number of people copy-paste directly from ChatGPT responses, and having the timestamp there would be annoying.
I built a single-page website that copies the current time to my clipboard, and I paste it into my messages. It's inconvenient, and I don't do it regularly.
I'll have to look into the extension described in the link. Thank you for sharing. It's nice to know it's a shared problem.
Time stamps? lol
They still don’t have the option to search your previous history.
Luckily I built an extension that stores all chats locally in a database, so I can reference and view them offline if I want to. Timestamps included.
The lack of visible timestamps feels small, but it actually creates a subtle fidelity problem. Conversations imply continuity that may not exist. Minutes, hours, or days collapse into the same narrative flow.
When you remove temporal markers, you increase cognitive smoothing and post-hoc rationalization. That’s fine for casual chat, but risky for long-running, reflective, or sensitive threads where timing is part of the meaning.
It’s a minor UI omission with outsized effects on context integrity. In systems that increasingly shape how people think, temporal grounding shouldn’t be optional or hidden in the DOM.
You only need that info if you know you need it in your RAG. Over the last two years of usage I don't recall where I'd have needed those timestamps, but I know there are cases. Still, this would have to be an option, because otherwise it would be a waste of tokens. However, we have to consider that they are competing on both the quality AND length of the response, even when a shorter response is better. There's a pretzel of considerations here.
Imagine you started having back pain months ago and you remember asking ChatGPT questions when it first started.
Now you’re going to the doctor and you forgot exactly when the pain started. You remember that you asked ChatGPT about the pain the day it started.
So you look for the chat, and discover there are no dates. It feels like such an obvious thing that’s missing.
Let’s not overcomplicate things. There aren’t that many considerations. It’s just a date. It doesn’t need to be stuffed into the context of the chat. Not sure why the quality or length of the chat would need to be affected?
It's ugly. The fact that it isn't at least exposed as an option for power users makes me wonder whether timestamps would give some advantage to an inference scraper, or possibly whether their service APIs don't have contemporaneous access to the metadata available from the web interface.
Just like a piece of hardware without an RTC relies on NTP, maybe agents just need an NTP MCP. It looks like there are several open-source projects already, but I'm not linking to them because I can't vouch for their quality or trustworthiness.
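The tool itself would be trivial. Here is a minimal sketch in plain Python (not any specific MCP SDK; the function name and return shape are my assumptions) of what such a time tool could return to an agent:

```python
from datetime import datetime, timezone

def current_time() -> dict:
    """Return the current wall-clock time for an agent to call on
    demand, analogous to an NTP query: the model asks instead of
    guessing from (nonexistent) context."""
    now = datetime.now(timezone.utc)
    return {
        "utc_iso": now.isoformat(timespec="seconds"),
        "unix": int(now.timestamp()),
    }
```

Wrapping a function like this in any tool-calling protocol is then just plumbing; the substance is that the model fetches time explicitly rather than having it injected into every message.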
Other than the potential liability, cost may also be a factor.
Back in April 2025, Altman mentioned people saying "thank you" was adding “tens of millions of dollars” to their infra costs. Wondering if adding per-message timestamps would cost even more.
I think the "thank you" messages are used for inference in follow-up messages, but timestamps wouldn't necessarily be.
I just asked ChatGPT this:
> Suppose ChatGPT does not currently store the timestamp of each message in conversations internally at all. Based on public numbers/estimates, calculate how much money it will cost OpenAI per year to display the timestamp information in every message, considering storage/bandwidth etc
The answer it gave was $40K-$50K. I am too dumb and inexperienced to go through everything and verify if it makes sense, but anyone who knows better is welcome to fact check this.
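The claim is easy to sanity-check with back-of-envelope arithmetic. A sketch with purely illustrative figures (the message volume and storage price below are my assumptions, not OpenAI numbers):

```python
# Assumed figures -- illustrative only, not OpenAI data.
messages_per_day = 2_000_000_000      # hypothetical message volume
bytes_per_timestamp = 8               # one 64-bit unix timestamp per message
storage_cost_per_gb_month = 0.02      # rough cloud object-storage price (USD)

gb_per_year = messages_per_day * 365 * bytes_per_timestamp / 1e9
# Price a full year's worth of timestamps stored for 12 months:
annual_cost = gb_per_year * storage_cost_per_gb_month * 12
print(f"{gb_per_year:.0f} GB/year, ~${annual_cost:,.0f}/year")
```

Under these assumptions the raw storage is a few thousand dollars a year, i.e. rounding error at that scale; the real costs, if any, would be in bandwidth, indexing, and UI work rather than the bytes themselves.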
Altman was being dumb; being polite to LLMs makes them produce higher quality results which results in less back-and-forth, saving money in the long run.
What annoys me even more is that ChatGPT doesn't alert you when you near the context-window limit. I have a chat I've worked on for a year, and I've now hit the context window. I worked around this by doing a GDPR download of all messages, reconstructing the conversation in a markdown file, and then giving that file to Claude to create a summarized/compacted version of the chat...
zbycz|2 months ago
The html file is just a big JSON with some JS rendering, so I wrote this bash script which adds the timestamp before the conversation title:
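The script itself isn't quoted above; as a rough Python equivalent of the same idea, assuming the export bundles a conversations.json that is a list of objects with `create_time` (unix seconds) and `title` fields:

```python
import json
from datetime import datetime, timezone

def prefix_titles_with_dates(path: str) -> list[str]:
    """Return 'YYYY-MM-DD  <title>' lines from a ChatGPT-style export,
    assuming each conversation carries a unix `create_time` and a
    `title` field (field names per the assumed export layout)."""
    with open(path, encoding="utf-8") as f:
        conversations = json.load(f)
    lines = []
    for conv in conversations:
        ts = datetime.fromtimestamp(conv["create_time"], tz=timezone.utc)
        lines.append(f"{ts.date().isoformat()}  {conv.get('title', '(untitled)')}")
    return lines
```

The same transformation in bash would typically shell out to `jq` and `date`; either way, the point is that the timestamps are already sitting in the export, just not rendered.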
gnyman|2 months ago
https://github.com/gnyman/llm-history-search
Workaccount2|2 months ago
Not a joke. To capture a wide audience you want to avoid numbers, among other technical niceties.
Qem|2 months ago
I can imagine a legal one. If the LLM messes big time[1], timestamps could help build the case against it, and make investigation work easier.
[1] https://www.ap.org/news-highlights/spotlights/2025/new-study...
thway15269037|2 months ago
Gemini btw too.
Leynos|2 months ago
https://twitter.com/OpenAI/status/1963697012014215181?lang=e...
vimy|2 months ago
Just edit a message and it’s a new branch.
tomComb|2 months ago
I’m not suggesting this is sufficient, I’m just noting there is somewhere in the user interface that it is displayed.
joquarky|2 months ago
The painful slowness of long chats (especially in thinking mode for some reason) demonstrates this.
cj|2 months ago
I would be very surprised if they don’t already store date/time metadata. If they do, it’s just a matter of exposing it.
stainablesteel|2 months ago
if response == 'thank you': print('your welcome')
serf|2 months ago
As the companies sprint towards AGI as the goal the floor for acceptable customer service has never been lower. These two concepts are not unrelated.