item 46492881 (no title)

dandersch | 1 month ago
> Small quantities of poisoned training data can significantly damage a language model.
Is this still accurate?

embedding-shape | 1 month ago
This will probably always be true, but it is also probably not effective in the wild. Researchers will train a version, see that the results are off, put guards against the poisoned data in place, re-train, and no damage is done to whatever they release.

d-lisp | 1 month ago
How would they put guards against poisoned data? How would they identify poisoned data if there is a lot of it, or if it is obfuscated?
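To make d-lisp's question concrete, here is a minimal toy sketch of one thing "guards against poisoned data" could mean: score each candidate training document under a simple reference model built from trusted text, and drop documents that score far worse than anything in the trusted set. All names here (`make_scorer`, `filter_candidates`, the slack threshold) are illustrative assumptions, not any lab's actual pipeline; real filtering uses learned models, deduplication, and provenance checks rather than a character-bigram model.

```python
# Toy outlier filter for training data: score documents by average
# character-bigram log-probability under a trusted corpus, then drop
# candidates that score more than `slack` nats below the worst trusted doc.
# Hypothetical sketch only; not a production poisoning defense.
import math
from collections import Counter

def make_scorer(trusted_docs):
    """Build an average per-bigram log-probability scorer from trusted text."""
    counts, totals = Counter(), Counter()
    for doc in trusted_docs:
        for a, b in zip(doc, doc[1:]):
            counts[a, b] += 1
            totals[a] += 1

    def score(doc):
        # add-one smoothing over a nominal 256-character alphabet
        pairs = list(zip(doc, doc[1:]))
        if not pairs:
            return 0.0
        total = sum(math.log((counts[a, b] + 1) / (totals[a] + 256))
                    for a, b in pairs)
        return total / len(pairs)

    return score

def filter_candidates(candidates, trusted_docs, slack=1.0):
    """Keep candidates scoring within `slack` nats of the worst trusted doc."""
    score = make_scorer(trusted_docs)
    threshold = min(score(d) for d in trusted_docs) - slack
    return [d for d in candidates if score(d) > threshold]

trusted = ["the cat sat on the mat"] * 20 + ["a dog ran in the park"] * 20
clean = "the dog sat in the park"      # recombines familiar bigrams
poisoned = "zqxj vvkp qqzz xjqv zzqx"  # statistically unlike the trusted set
print(filter_candidates([clean, poisoned], trusted))
# → ['the dog sat in the park']
```

This also illustrates d-lisp's objection: a filter like this only catches data that is statistically *unlike* the trusted corpus, so obfuscated poison crafted to look like normal text would pass straight through.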