top | item 39575745


dimfeld | 2 years ago

LLMs tend to be pretty bad at answering questions about which model they are, what version, etc. You can put stuff into the system prompt to try to help it answer better, but otherwise the LLM has little to no intrinsic knowledge about itself, and whatever happens to be in the training data shows up instead (which by now includes a lot of ChatGPT output scattered all over the internet).
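The usual workaround is pinning the identity in the system prompt, as the comment says. A minimal sketch of what that looks like (the model name, version, and `build_messages` helper here are all hypothetical, not any vendor's real values):

```python
# Sketch: pin the model's identity in the system prompt so "what are
# you?" questions get answered from the prompt instead of from whatever
# contaminated training data the model absorbed.
# "ExampleModel v1.2" is an assumed placeholder identity.

def build_messages(user_question: str) -> list[dict]:
    system_prompt = (
        "You are ExampleModel v1.2. "
        "When asked which model or version you are, answer with this "
        "information; do not guess based on other chatbots you may "
        "have seen in training data."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_question},
    ]

messages = build_messages("Which model are you?")
```

The resulting `messages` list is the standard chat-completion shape most hosted LLM APIs accept; without that system message, the model falls back on training-data priors.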

daveguy | 2 years ago

LLMs definitely would not pass the mirror test at this point.

electrograv | 2 years ago

I actually fed two GPT-4s into each other as an experiment, and they very quickly devolved into saying things like “It’s clear you’re just feeding my answers into ChatGPT and posting the replies. Is there anything else I can help you with?”
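The plumbing for that experiment is just a loop that alternately feeds each model's reply to the other. A sketch, where `model_a` and `model_b` are hypothetical stand-ins for two separate chat sessions (shown here with toy echo functions rather than real API calls):

```python
# Sketch: pipe two chat models into each other for a fixed number of
# turns and collect the transcript. model_a / model_b are placeholders
# for whatever callable wraps a real chat session.

def run_exchange(model_a, model_b, opener: str, turns: int = 4) -> list[str]:
    transcript = [opener]
    message = opener
    for i in range(turns):
        # Alternate speakers: even turns go to model A, odd to model B.
        model = model_a if i % 2 == 0 else model_b
        message = model(message)
        transcript.append(message)
    return transcript

# Toy stand-ins just to show the plumbing:
echo_a = lambda m: f"A heard: {m}"
echo_b = lambda m: f"B heard: {m}"
log = run_exchange(echo_a, echo_b, "hello", turns=2)
```

With real models in place of the echo functions, the transcript is where you'd see the degeneration the comment describes.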

hanniabu | 2 years ago

I feel any LLM that was trained on data from after GPT's release is unreliable due to contamination.