top | item 44039904

mk_chan|9 months ago

Going by this: https://www.aeaweb.org/conference/2025/program/paper/3Y3SD8T... which states “… founding teams comprised of all men are most common (75% in 2022)…”, it might actually make sense that the LLM is reflecting real-world data: by the time a company starts using an LLM instead of personal-network-based hiring, it is already moving toward a more gender-balanced workforce.


giantg2|9 months ago

Aiming for a gender balanced workforce might be biased if the candidate pool isn't gender balanced as well.

mk_chan|9 months ago

Following the paper: if you end up with a gender-balanced workforce, there must be a bias in at least one of the variables - the candidate pool (as you say), the evaluation of candidates, or something related. However, that bias must either reverse once the balance tips the other way, or disappear entirely once the desired ratio is achieved.

Edit: it should go without saying that once you hire enough people to dwarf the startup's founding population, and once you account for employee churn, the bias should disappear to within the error margin in the real world. This just follows from the originally posted results and the paper.
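The dynamic described above (a corrective bias that reverses when the balance tips the other way and washes out as headcount grows) can be sketched with a toy simulation. Everything here is illustrative: the function name, the starting 3-to-1 founding team, and the `favor_strength` parameter are assumptions for the sketch, not anything measured in the paper or the original post.

```python
import random

def simulate_hiring(initial_men, initial_women, hires, favor_strength=0.7):
    """Toy model: each hire favors the underrepresented gender with
    probability favor_strength; at parity, hiring is 50/50.
    Returns the final (men, women) headcount."""
    men, women = initial_men, initial_women
    for _ in range(hires):
        if men > women:
            p_woman = favor_strength       # bias toward women while men lead
        elif women > men:
            p_woman = 1 - favor_strength   # bias reverses when women lead
        else:
            p_woman = 0.5                  # no bias at parity
        if random.random() < p_woman:
            women += 1
        else:
            men += 1
    return men, women

random.seed(0)
men, women = simulate_hiring(initial_men=3, initial_women=1, hires=1000)
print(men, women, round(women / (men + women), 3))
```

Because the bias always pushes toward whichever group is behind, the headcount behaves like a random walk with a restoring drift: the ratio hovers near 50/50, and the founding team's initial imbalance is quickly dwarfed, matching the "disappears within the error margin" claim.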

billyp-rva|9 months ago

If this were true, the LLMs would favor male candidates in female-dominated professions.

mk_chan|9 months ago

That should happen if the training dataset (which presumably reflects the real world) contains examples of that happening.

darkwater|9 months ago

The bias found by this research is towards females.

xenocratus|9 months ago

And the comment explains that: since companies start out with more males, it would make sense for the model to favour females to steer towards gender balance.

gitremote|9 months ago

An LLM doesn't have any concept of math or statistics. There is no need to defend using a black box like generative AI in hiring decisions.