FieryTransition | 9 months ago

Turns out tuning LLMs on human preferences leads to sycophantic behavior. They even wrote about it themselves; I guess they wanted to push the model out too fast.

mike_hearn | 9 months ago

I think it was OpenAI that wrote about that.

Most of us here on HN don't like this behaviour, but it's clear that the average user does. If you look at how differently people use AI, that's not a surprise. There's a lot of using it as a life coach out there, and plenty of people who just want validation regardless of the scenario.

tankenmate | 9 months ago

> or people who just want validation regardless of the scenario.

This really worries me. There are many people (even more prevalent in younger generations, if some papers turn out to be valid) who lack resilience and critical self-evaluation, and who may develop narcissistic tendencies with increased use of, or reinforcement from, AIs. The health care costs alone when reality kicks in for these people, let alone the other concomitant social costs, will be substantial at scale. And people think social media algorithms reinforce poor social adaptation and skills; this is a whole new level.

markovs_gun | 9 months ago

This is a problem with these being marketed products. Being popular isn't the same as being good, and being consumer products means they're optimized for what will make them popular rather than what will make them good.