top | item 40082081

dhon_ | 1 year ago

I've noticed Gemini exhibiting similar behaviour. It will start to answer, for example, a programming question, only to delete the answer and replace it with something along the lines of "I'm only a language model, I don't know how to do that."


extraduder_ire | 1 year ago

This seems like a bizarre way to handle this. Unless there's some level of malicious compliance, I don't see why they wouldn't just hide the output until the filtering step is completed. Maybe they're incredibly concerned about it appearing responsive in the average case.

Would not be surprised if there were browser extensions/userscripts to keep a copy of the text when it gets deleted and mark it as such.
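The tradeoff raised above (stream tokens to the user and retract on a failed check, versus buffering until the post-filter passes) can be sketched roughly like this. The keyword blocklist and function names here are hypothetical stand-ins for illustration, not Gemini's actual post-processing step:

```python
REFUSAL = "I'm only a language model, I don't know how to do that"
BLOCKLIST = {"badword"}  # hypothetical; a real system would use a classifier


def flagged(text: str) -> bool:
    """Stand-in for a real post-generation safety check."""
    return any(word in text.lower() for word in BLOCKLIST)


def stream_then_retract(tokens):
    """Show partial output as it arrives; retract it all if the final text trips the filter."""
    shown = []
    for tok in tokens:
        shown.append(tok)
        yield "".join(shown)  # user sees the partial answer immediately
    if flagged("".join(shown)):
        yield REFUSAL  # the answer the user already read gets replaced


def buffer_then_release(tokens):
    """Hold output until the filter passes; the user never sees retracted text."""
    text = "".join(tokens)
    return REFUSAL if flagged(text) else text
```

The second approach avoids the deleted-answer effect entirely, at the cost of no visible progress until generation and filtering both finish, which may be why a latency-sensitive product would prefer the first.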

visarga | 1 year ago

They have both pre and post-LLM filters.

flakiness | 1 year ago

The linked article mentions these safeguards as the post-processing step.

Breza | 1 year ago

I've seen the exact same thing! Gemini put together an impressive bash one-liner, then deleted it.

baby | 1 year ago

Always very frustrating when it happens.