shadowfacts | 7 months ago
... yes, that's the complaint. The prompt engineering they did made it spew neo-Nazi vitriol. Either they did not adequately test it beforehand and didn't know what would happen, or they did test it and knew the outcome. Either way, it's bad.

  busterarm | 7 months ago
  Long live Tay! https://en.wikipedia.org/wiki/Tay_(chatbot)

    immibis | 7 months ago
    Tay (allegedly) learned from repeated interaction with users; the current generation of LLMs can't do that. It's trained once and then that's it.

  mjmsmith | 7 months ago
  It was an interesting demonstration of the politically-incorrect-to-Nazi pipeline, though.

  Covzire | 7 months ago
  [deleted]

    mingus88 | 7 months ago
    I'm going to say that is also bad. Hot take?