top | item 46496961

florkbork | 1 month ago

Did you actually argue this?

Or did you place about 2-5 paragraphs per heading, with little connection between the ideas?

For example:

> Perhaps what some users are trying to express with concerns about ‘sycophancy’ is that when they paste information, they'd like to see the AI examine various implications rather than provide an affirming summary.

Did you, you personally, find any evidence of this? Or evidence to the opposite? Or is this just a wild guess?

Wait, never mind that: we're already moving on! No need to offer anything supportive or similar to bolster the claim.

> If so, anti-‘sycophancy’ tuning is ironically a counterproductive response and may result in more terse or less fluent responses. Exploring a topic is an inherently dialogic endeavor.

Is it? Evidence? Counter-evidence? Or is this simply a feelpinion so no one can tell you your feelings are wrong? Or wait, that's "vibes" now!

I put it to you that you are stringing together (to an outside observer, using AI) a series of words in a consecutive order that feels roughly good but lacks any kind of fundamental/logical basis. I put it to you that if your premise is that AI leads to a robust discussion with a back and forth, the one you had that resulted in "product" was severely lacking in any real challenge to your prompts, suggestions, input or viewpoints. I invite you to show me one shred of dialogue where the AI called you out for lacking substance, credibility, authority, research, due diligence or similar. I strongly suspect you can't.

Given that, do you perhaps consider that this might be the problem when people label AI responses as sycophancy?

firasd | 1 month ago

Well, I do have a chat log somewhere where I say potential energy seems like a fake concept, and GPT and/or Gemini got around to explaining that it can actually be expressed in equations reliably... does that count?
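(For what it's worth, the standard textbook formulation the models presumably walked through — this is a sketch of common physics, not quoted from the chat log: near Earth's surface, gravitational potential energy and the force it predicts are

```latex
U(h) = mgh, \qquad F = -\frac{dU}{dh} = -mg,
```

and conservation of mechanical energy, $\tfrac{1}{2}mv^2 + mgh = \text{const}$, lets you compute measurable speeds from heights, which is the sense in which the "fake concept" is expressed reliably in equations.)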

"Called you out for lacking substance, credibility, authority, research, due diligence or similar" seems like a level of emotional angst that LLMs don't usually tend to show.

Actually, amusingly enough, the Gemini/Verhoeven example in my doc is an instance where the AIs seem to have a memorably strong opinion.