> ...a lot of the safeguards and policy we have to manage humans' own unreliability may serve us well in managing the unreliability of AI systems too.
It seems like an incredibly bad outcome if we accept "AI" that's fundamentally flawed in a way similar to, if not worse than, humans and try to work around it, rather than relegating it to unimportant tasks while we work towards a standard of intelligence we'd otherwise expect from a computer.
LLMs certainly appear to be the closest to real AI that we've gotten so far. But I think a lot of that is due to the human bias that language is a sign of intelligence, and our measuring stick is unsuited to evaluating software specifically designed to mimic the human ability to string words together. We now have the unreliability of human language processes without most of the benefits that come from actual human-level intelligence. Managing that unreliability with systems designed for humans bakes in all the downsides without further pursuing the potential upsides of legitimate computer intelligence.
I don’t disagree. But I also wonder if there even is an objective “right” answer in a lot of cases. If the goal is for computers to replace humans in a task, then the computer can only get the right answer for that task if humans agree what the right answer is. Outside of STEM, where AI is already having a meaningful impact (at least in my opinion), I’m not sure humans actually agree that there is a right answer in many cases, let alone what the right answer is. From that perspective, correctness is in the eye of the beholder (or the metric), and “correct” AI is somewhere between poorly defined and a contradiction.
Also, I think it’s apparent that the world won’t wait for correct AI, whatever that even is, whether or not it even can exist, before it adopts AI. It sure looks like some employers are hurtling towards replacing (or, at least, reducing) human headcount with AI that performs below average at best, and expecting whoever’s left standing to clean up the mess. This will free up a lot of talent, both the people who are cut and the people who aren’t willing to clean up the resulting mess, for other shops that take a more human-based approach to staffing.
I’m looking forward to seeing which side wins. I don’t expect it to be cut-and-dry. But I do expect it to be interesting.
Perhaps that kind of thing could help us finally move on from the "stupid should hurt" mindset to a real safety culture, where we value fault tolerance.
We like to pretend humans can reliably execute basic tasks like telling left from right, counting to ten, or reading a four-digit number, and we assume that anyone who fails at these tasks is "not even trying".
But people do make these kinds of mistakes all the time, and some of them lead to patients having the wrong leg amputated.
A lot of people seem to see fault tolerance as cheating or relying on crutches; it's almost like they actively want mistakes to result in major problems.
If we make it so that AI failing to count the Rs doesn't kill anyone, that same attitude might help us build our equipment so that connecting the red wire to R2 instead of R3 results in a self test warning instead of a funeral announcement.
Obviously I'm all for improving the underlying AI tech itself ("Maintain Competence" is a rule in crew resource management), but I'm not a super big fan of unnecessary single points of failure.
There has been some good research published on this topic of how RLHF (i.e. aligning to human preferences) easily introduces mode collapse and bias into models. For example, with a prompt like "Choose a random number", the base pretrained model can give relatively random answers, but after fine-tuning to produce responses humans like, models become very biased towards responding with numbers like "7" or "42".
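To make that concrete, here is a minimal sketch of how you might measure this yourself (not from that research; `ask_llm` is a hypothetical stand-in for whatever chat-completion call you use):

```python
# Tally how "random" a model's random numbers actually are.
# ask_llm() is a placeholder, not a real library function: plug in your own client.
import re
from collections import Counter

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your chat-completion call of choice")

def sample_random_numbers(n_samples: int = 200) -> Counter:
    prompt = "Choose a random number between 1 and 100. Reply with only the number."
    counts: Counter = Counter()
    for _ in range(n_samples):
        reply = ask_llm(prompt)
        match = re.search(r"\d+", reply)
        if match:
            counts[int(match.group())] += 1
    return counts

# A base model sampled at a normal temperature tends to give a fairly flat histogram,
# while an RLHF-tuned chat model often piles onto a few favourites (7, 42, ...).
# print(sample_random_numbers().most_common(10))
```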
It's very funny that people hold the autoregressive nature of LLMs against them, while being far more hardline autoregressive themselves. It's just not consciously obvious.
Is my understanding wrong that LLMs are trained to emulate observed human behavior in their training data?
From that it follows that LLMs are fit to produce all kinds of human biases, like preferring the first choice out of many, or the last out of many (primacy and recency biases). Funnily, the LLM might replicate the biases slightly wrong and by doing so produce new, derived biases.
In most cases, the LLM itself is a nameless and egoless clockwork Document-Maker-Bigger. It is being run against a hidden theater-play script. The "AI assistant" (of whatever brand name) is a fictional character seeded into the script, and the human unwittingly provides lines for a "User" character to "speak". Fresh lines for the other character are parsed and "acted out" by conventional computer code.
That character is "helpful and kind and patient" in much the same way that another character named Dracula is a "devious bloodsucker". Even when the form is really good, it isn't quite the same as substance.
The author/character difference may seem subtle, but I believe it's important: We are not training LLMs to be people we like, we are training them to emit text describing characters and lines that we like. It also helps in understanding prompt injection and "hallucinations", which are both much closer to mandatory features than bugs.
This understanding is incomplete in my opinion. LLMs do more than emulate observed behavior. In the pre-training phase, tasks like masked language modeling indeed train the model to mimic what it reads (which of course contains lots of bias); but in the RLHF phase, the model learns to generate the response judged best by human evaluators (who try to eliminate as much bias as possible in the process). In other words, models are trained to meet human expectations in this later phase.
But human expectations are also not bias-free (e.g. from the preferring-the-first-choice phenomenon)
Not only that: if future AI distrusts humanity, it will be because history, literature, and fiction are full of such scenarios, and AI will learn those patterns and the associated emotions from those texts. Humanity together will be responsible for creating a monster (if that scenario happens).
This is the "anyone can be a mathematician meme". People who hang around elite circles have no idea how dumb the average human is. The average human hallucinates constantly.
So if you give a bunch of people a boring task and pay them the same regardless of whether they treat it seriously or not - the end result is they do a bad job!
Hardly a shocker. I think this says more about the experimental design than it does about AI & humans.
The paper basically amounts to suggesting (and analyzing) these options:
* Comparing all possible pair permutations eliminates any bias since all pairs are compared both ways, but is exceedingly computationally expensive.
* Using a sorting algorithm such as Quicksort or Heapsort is more computationally efficient, and in practice doesn't seem to suffer much from bias.
* Sliding window sorting has the lowest computation requirement, but is mildly biased.
The paper doesn't seem to do any exploration of the prompt and whether it has any impact on the input ordering bias. I think that would be nice to know. Maybe assigning the options random names instead of ordinals would reduce the bias. That said, I doubt there's some magic prompt that will reduce the bias to 0. So we're definitely stuck with the options above until the LLM itself gets debiased correctly.
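For illustration, the sorting option above roughly amounts to handing an LLM judge to an ordinary O(n log n) sort. This is only a sketch (not the paper's code); `ask_llm` and the judge prompt are hypothetical placeholders:

```python
# Rank items by letting a standard sort drive the pairwise LLM comparisons.
from functools import cmp_to_key

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your chat-completion call of choice")

def llm_compare(a: str, b: str) -> int:
    """Classic comparator: negative if `a` should rank ahead of `b`."""
    prompt = (
        "Which of these two comments is higher quality?\n"
        f"Option A: {a}\n"
        f"Option B: {b}\n"
        "Answer with exactly 'A', 'B', or 'TIE'."
    )
    verdict = ask_llm(prompt).strip().upper()
    if verdict.startswith("A"):
        return -1
    if verdict.startswith("B"):
        return 1
    return 0

def rank(items: list[str]) -> list[str]:
    # ~n log n comparisons instead of all n*(n-1) ordered pairs, but each comparison
    # still presents the items in a fixed A/B order, so position bias can leak in.
    return sorted(items, key=cmp_to_key(llm_compare))
```

Noisy or non-transitive judgments won't crash the sort, but the resulting order is only as trustworthy as the comparator.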
If the question inherently allows for "no-preference" to be valid but that is not a possible answer, then you've left it to the person or LLM to deal with that. If a human is not allowed to specify no preference, why would you expect uniform results when you don't even ask for it? You only asked them to pick the best. Even if they picked perfectly, it's not defined in the task to make sure you select draws in a random way.
Interleaving a bunch of people's comments and then asking the LLM to sort them out and rank them…seems like a poor method. The whole premise seems silly, actually. I don't think there's any lesson to draw here other than that you need to understand the problem domain in order to get good results from an LLM.
So many articles like this on HN have a catchy title and then a short article that doesn't really back up the title.
The experiment itself is so fundamentally flawed it's hard to begin criticizing it. HN comments as a predictor of good hiring material is just as valid as social media profile artifacts or sleep patterns.
Just because you produce something with statistics (with or without LLMs) and have nice visuals and narratives doesn't mean it's valid or rigorous or "better than nothing" for decision making.
Articles like this keep making it to the top of HN because HN is behaving like Reddit, where the article is read by few and the gist of the title debated by many.
Human-level artificial intelligence has never had much appeal to me. There are enough idiots in the world; why do we need artificial ones? That is, what if average machine intelligence mirrored the human IQ distribution?
Owners would love to be able to convert capital directly into products without any intermediate labor[0]. Fire your buildings full of programmers and replace them with a server farm that only gets faster and more efficient over time? That's a great position to be in, if you own the IP and/or the server farm.
[0] https://qntm.org/mmacevedo
The "person one" vs "person two" bias seems trivially solvable by running each pair evaluation twice with each possible labelling and the averaging the scores.
Although of course that behavior may be a signal that the model is sort of guessing randomly rather than actually producing a signal.
Agreed on the second part. Correcting for bias this way might average out the scores but not in a way that correctly evaluates the HN comments.
The LLM isn't performing the desired task.
It sounds possible to cancel out the comments where reversing the labels swaps the outcome because of bias. That will leave the more "extreme" HN comments that it consistently scored regardless of the label. But that still may not solve for the intended task.
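A sketch of that filtering idea (assuming a judge that returns a bare "A" or "B"; `judge_pair` is a hypothetical placeholder):

```python
# Judge each pair twice with the labels swapped and discard verdicts that flip
# with the labels, since those reflect position bias rather than the content.
from typing import Optional

def judge_pair(first: str, second: str) -> str:
    raise NotImplementedError("ask the LLM whether first ('A') or second ('B') is better")

def debiased_verdict(a: str, b: str) -> Optional[str]:
    forward = judge_pair(a, b)    # a labelled "A", b labelled "B"
    backward = judge_pair(b, a)   # labels swapped
    if forward == "A" and backward == "B":
        return "a"                # a preferred regardless of labelling
    if forward == "B" and backward == "A":
        return "b"                # b preferred regardless of labelling
    return None                   # verdict followed the label: no usable signal
```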
> But an LLM can't be held accountable

...neither can most employees.
Yes and no.
Yes, this is a real problem: at the current level of technology, some things are inexpensive only if done in large numbers (economies of scale), so, for example, there simply cannot be one person accountable for a machine like a Boeing 747 (~500 human-years of work per plane).
Unfortunately, a modern automobile is also considered a large system, made from thousands of parts, so again no one person can know everything.
And no: the Germans say "Ordnung muss sein", which in modern management terms means that the constant, clear organization of the whole team's game is more important than the success of individual players.
Or, in simple words, the right organization, governed by rules, is considered reliable enough to be held accountable.
In the automobile industry, for example, it is now normal to consider the whole organization accountable.
And, for example, Daimler officials said a few years ago that Daimler's safety systems will follow Daimler's own view of robot laws: the priority will be the safety of the people inside the vehicle. As you may know, Lem's robot laws have traditionally been the reference, and they take a totally different view, with no inside-vs-outside separation. Civil aviation takes the approach of just using simple designs, or designs with evidence of reliability.
Sure, government regulators could decide on something even more original; we will see.
Anyway, as the technology emerges, the accountability of machines will surely be the subject of many discussions.
Is this not just because aggressive material was filtered out of training data and the system prompts usually include some preamble about being polite?
"Acknowledging they might be wrong" makes them sound like more than token predictors trained on polite sounding text.
Most of the reason LLMs will "admit they're wrong" is because they've been trained not to argue too hard, and to not hold strong preferences. It's a sort of customer service personality.
When you don't do that sufficiently you run the risk of producing the "Sydney" personality that Bing Chat had, which would argue back, and could go totally feral defending its incorrect beliefs about the world, to the point of insulting and belittling the user.
It's just because people tend to put the "original" result first and the "improved" result second in many scientific studies. LLMs and humans learn that and assume that the second one is the better one.
I know this is only adjacent to OP’s point, but I do find it somewhat ironic that it is easy to find people who are just as unreliable and incompetent at answering questions correctly as a 7b model, but also a lot less knowledgeable.
Also, often less capable of carrying on a decent conversation.
I’ve noticed a periconscious urge when talking to people to judge them against various models and quants, or to decide they are truly SOTA. I need to touch grass a bit more, I think.
Wouldn’t the same outcome be achieved much more simply by giving LLMs two choices (colors, numbers, whatever), asking “pick one”, and assessing the results in the same way?
Kind of an odd metric to try to base this process off of. Are more comments inherently better? Is it responding to buzzwords? It makes sense talking about hiring algos / resume scanners in part one, and if anything this elucidates some of the trouble with them.
No, they are not randomly wrong or right without perspective, unless they have some kind of brain injury. So that goes against the title, but the rest of their point is interesting!
Very nice article. But the title, and the idea, is the very frequent "racist" form of the proper "People [can be] just as bad as my LLMs".
Now: some people can't count. Some people hum between words. Some people set fire to national monuments. Reply: "Yes we knew", and "No, it's not necessary".
And: if people could lift tons, we would not have invented cranes.
Very, very often on these pages I meet people repeating "how bad people are". That is really "how bad people can be", and one would have guessed these pages are especially visited by engineers, who must already be aware of the importance of technical boosts. So, besides the point that the median does not represent the whole set, there is the other point that tools are not measured by whether they reach mediocre results.
dartos|11 months ago
People’s unawareness of their own personification bias with LLMs is wild.
pbreit|11 months ago
Compare that to the weight we place on "experts", many of whom are hopelessly compromised or dragged by mountains of baggage.
robwwilliams|11 months ago
https://en.wikipedia.org/wiki/42_(number)
thechao|11 months ago
https://xkcd.com/221/
MrMcCall|11 months ago
My favorite is:
And they trained their PI* on that giant turd pile.
(*Pseudo Intelligence)
markbergz|11 months ago
The authors discuss the person 1 / doc 1 bias and the need to always evaluate each pair of items twice.
If you want to play around with this method there is a nice python tool here: https://github.com/vagos/llm-sort
switch007|11 months ago
How quickly and easily people are willing to give up first-class sources is quite frightening.
icelancer|11 months ago
Is this a universal phenomenon where you've worked? Consider yourself very lucky.
andrewmcwatters|11 months ago
To me it’s literally the same as testing one Markov chain against another.
megadata|11 months ago
It can be incredibly hard to get a person to acknowledge that they might be remotely wrong on a topic they really care about.
Or, for some people, the thought that they might be wrong about anything at all is just like blasphemy to them.
raincole|11 months ago
TL;DR: the author found a very, very specific bias that is prevalent in both humans and LLMs. That is it.