SemioticStandrd|2 years ago
If a researcher comes out and says, “Surveys show that people want X, and they do not like Y,” and then others ask the researcher if they surveyed people, the answer would be “no.”
Fundamentally, people wanting feedback from humans will not get that by using your product.
The best you can say is this: “Our product is guessing people will say X.”
famouswaffles|2 years ago
Out of One, Many: Using Language Models to Simulate Human Samples (https://arxiv.org/abs/2209.06899)
There's been some research in this vein. To answer your question: seemingly, it's very valid.
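For a concrete sense of what that line of work looks like, here's a minimal sketch of the "silicon sampling" idea: condition a language model on a demographic persona, then ask it the survey question and aggregate the simulated answers. The query_llm stub and the persona fields are illustrative placeholders, not the paper's actual code or prompts.

    # Hypothetical stand-in for a real model API call.
    def query_llm(prompt: str) -> str:
        return "X"

    def simulate_respondent(persona: dict, question: str) -> str:
        prompt = (
            f"You are a {persona['age']}-year-old {persona['occupation']} "
            f"living in {persona['region']}.\n"
            f"Survey question: {question}\n"
            "Answer with X or Y only."
        )
        return query_llm(prompt)

    personas = [
        {"age": 34, "occupation": "teacher", "region": "the Midwest"},
        {"age": 61, "occupation": "retired engineer", "region": "the South"},
    ]

    # Aggregate the simulated answers to estimate the survey distribution.
    answers = [simulate_respondent(p, "Do you prefer X or Y?") for p in personas]
    print(answers)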
timshell|2 years ago
Internal purposes include stuff like optimally rewording questions and getting priors.
A hybrid approach would be something like: hey, let's not ask someone 100 questions when we can accurately predict 80% of the answers. Let's just ask them the hard-to-estimate 20.
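A rough sketch of that triage step, assuming a model that returns a predicted answer plus a confidence score. predict_answer and the cutoff below are hypothetical placeholders, not anything from the thread:

    # Hypothetical model call: returns (predicted_answer, confidence in [0, 1]).
    def predict_answer(question: str) -> tuple[str, float]:
        return "X", 0.5

    CONFIDENCE_CUTOFF = 0.9  # illustrative threshold

    def triage(questions: list[str]) -> tuple[dict, list[str]]:
        predicted, ask_humans = {}, []
        for q in questions:
            answer, confidence = predict_answer(q)
            if confidence >= CONFIDENCE_CUTOFF:
                predicted[q] = answer   # confident enough to estimate
            else:
                ask_humans.append(q)    # hard to estimate: survey for real
        return predicted, ask_humans

    predicted, to_survey = triage([f"Q{i}" for i in range(100)])
    # With a real model, the well-predicted ~80 land in predicted
    # and only the hard ~20 go out to human respondents.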
quadrature|2 years ago
This kind of concerns me, because you could use it to bias surveys in different directions. That obviously already happens, so maybe it's just part of the status quo.
tchock23|2 years ago
I suspect people would use this product as a quick gut check to decide whether it's worth spending the time and money on a full-scale quant study.
DriverDaily|2 years ago
This is like a 10/10.
helsinkiandrew|2 years ago
The problem as I see it: although you can produce lots of examples that match real-world opinion, you can never prove that the answer to any particular question does. I'm not sure who would trust the output enough to rely on it for decision-making.
digitcatphd|2 years ago