top | item 40048655

calebrob6 | 1 year ago

It looks like it needs to go through toxicity testing -- https://twitter.com/WizardLM_AI/status/1780101465950105775


haolez | 1 year ago

I can't imagine how toxicity in an LLM can be harmful. Can't we write stories with toxic characters, for example?

It feels like the greatest minds of our era are creating an amazing piece of technology. And we are hindering it in the name of corporate ass-covering and bullshit jobs.

mistrial9 | 1 year ago

If you read the large theory papers on human-computer LLM interfaces, the people who build these are genuinely worried about "harm." Impolitely, it appears from the outside that the kind of researchers attracted over a decade to this fantastically tedious and abstract work are personally covered in symptoms of emotional illness, and have developed a culture of incessantly declaring "harm" in every shadow of every corner.

At the same time, corporate black-hearts have money on the mind and are genuinely worried about "harm" in the form of consumer retaliation in the marketplace, massive legal liability for civil-rights blunders, and losing the sweet spot to a competitor.

Then governments at the executive, nation-state level are obsessing about obtaining and implementing AI for competitive advantage against just about every other group of people you can name -- as long as no one can prove they implemented "harm" while gaining unprecedented competitive advantages at scale over populations of unwitting civilians, their geopolitical rivals, and probably other political types of a different stripe.

So no, it is not "bs jobs" at all... but worse.

95014_refugee | 1 year ago

If you think that you want to live in a world where your life is heavily influenced by machines that were trained on the idea that you don’t deserve to exist, then yes, “toxicity” isn’t a problem.

But you don’t think that. Even if you think you do.

latexr | 1 year ago

> It feels like the greatest minds of our era are

This is the new "we could put a man on the moon, yet...". No, "the greatest minds of our era" are not working in adtech or building LLMs. It's easy to forget, but there is a whole world outside of computers, and being good at computers does not equate to being "a great mind". It is absurd to believe the greatest minds are all working in the same problem spaces.

benterix | 1 year ago

So this is a good thing for people like me who don't care whether some piece of software is "toxic" to me or not.

perrygeo | 1 year ago

It was trained on content from the internet. I'd be massively surprised if it somehow wasn't toxic. Humanity (or a small portion of it) is full of assholes. As much as that sucks, shouldn't the model weights reflect the reality of the training content? If you want fluffy bunnies and flowers and happy people holding hands, shouldn't you just train it on that content?