StilesCrisis | 5 days ago

What do you know, the human results line up exactly with ChatGPT. What are the odds! Surely the human responders are highly ethical individuals and they wouldn't even dream of copy-pasting all the questions into ChatGPT without reading them.

Realistically, this mostly tells me that the "human answers" service is dead. People will figure out a way to pass the work off to an AI, regardless of quality, as long as they can still get paid.

felix089|5 days ago

Yeah, funny coincidence, but this is not at all how the human answers were collected.

Rapidata answered this in another comment below. They integrate micro-surveys into mobile apps (like Duolingo, games, etc.) as an optional opt-in instead of watching ads. The users are vetted and there's no incentive to answer correctly.

cortesoft|5 days ago

Yeah, I always intentionally choose a wrong answer when I get one of those kinds of ads. Little acts of rebellion.

schmidtc|4 days ago

But, there is a clear incentive to answer the question incorrectly. The wrong answer is funny and will give the human some level of pleasure thinking about it. I would certainly reply with "walk" just for fun and apparently 28.5% of people agree with me.

Normal_gaussian|5 days ago

In which case the percentage is notable, as it aligns very closely with the effect size on cookie accept/reject ratios. Cookie dialogs tend to fall 70/30 either way.

tantalor|4 days ago

> there's no incentive to answer correctly

Answering correctly is not in question here. This is essentially opinion polling anyway, there is no single correct answer.

The incentive is exactly what you said: to skip ads.

How are the users actually vetted? We have no information on this; we just have to take Rapidata on faith.

raincole|5 days ago

The default model of ChatGPT is GPT 5.2 Instant, not the one that lines up with the human results (which is GPT 5).

However, it does tell us something about human answers: the above commenter confidently reached such a strong but baseless conclusion.

htrp|5 days ago

That's almost always been the case with 3rd-party human task services.

StilesCrisis|5 days ago

Yup. I was surprised that the article author took the results at face value. Having results that perfectly match those of the most widely known AI platform seemed worthy of a mention!