top | item 42093716

padheyam | 1 year ago

when asked the reason, ChatGPT had this to say- "Actually, the choice of “Elara” wasn’t a result of training on specific copyrighted stories or any prompt to avoid copyright claims. OpenAI models like me are designed to create original content without directly referencing copyrighted characters, and "Elara" is simply a popular-sounding name in many storytelling contexts. I just used it consistently for its versatility, but I’m totally open to switching things up!"

dTal | 1 year ago

ChatGPT's opinion on the matter is completely worthless, unless it was also trained on an accurate description of its own training process (it wasn't). Language models do not even have access to their own "thought process" - if you ask one "why" it said something, you will get a post-hoc rationalization 100 percent of the time, because next-word prediction has access only to the same text that you see. The rationalization might be incidentally correct, or it might not - either way, it contributes no real information about the model's internal state.
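The mechanism described above can be sketched in a few lines: a "why did you say that?" question is answered by the same next-token loop that produced the original output, conditioned only on the visible transcript. This is a toy illustration, not any real API; `ToyModel` and `generate` are hypothetical names.

```python
def generate(model, transcript, max_tokens=50):
    """Sample a continuation conditioned solely on the transcript text.

    Note what is *not* passed in: no activations, no internal state from
    the earlier generation. The "explanation" sees only the same text
    the user sees.
    """
    out = []
    for _ in range(max_tokens):
        token = model.next_token(transcript + "".join(out))
        out.append(token)
    return "".join(out)


class ToyModel:
    """A stand-in model whose next_token is a pure function of text."""

    def next_token(self, text):
        # The continuation depends on the text alone - a post-hoc
        # answer, whether or not it matches what "really" happened.
        return "!" if text.endswith("?") else "."
```

Asking the toy model "why?" yields whatever the text-only function produces, regardless of how the earlier answer was actually computed - which is the commenter's point about rationalization.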

jiggawatts | 1 year ago

There's an interesting theory that this is all consciousness is: one part of the brain explaining the decisions of another part into which it has no special insight.

unparagoned | 1 year ago

So, just like humans.

arka2147483647 | 1 year ago

Why do people post these kinds of "answers" from the model? It's not as if the model knows why it's doing anything.

taytus | 1 year ago

Not understanding how LLMs work.