fjdjshsh | 1 year ago

I think it's useful because many people consume quantized models (most models that fit on your laptop will be quantized, and not because people want to uncensor or un-unlearn anything). If you're training a model, it makes sense to make the unlearning robust to at least this very common procedure.
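To make the point concrete, here's a toy sketch (my own illustration, not from any paper) of why quantization matters here: even simple symmetric int8 quantization perturbs every weight slightly, so an unlearning intervention that isn't robust to those perturbations can be partially undone by this routine step.

```python
import numpy as np

# Toy weight matrix standing in for a model layer (hypothetical values).
rng = np.random.default_rng(0)
w = rng.normal(0, 0.02, size=(4, 4)).astype(np.float32)

# Symmetric per-tensor int8 quantization: one scale for the whole tensor.
scale = np.abs(w).max() / 127.0
q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)

# Dequantize: every weight comes back slightly perturbed.
w_hat = q.astype(np.float32) * scale

print("max round-trip error:", np.abs(w - w_hat).max())
```

The round-trip error is bounded by half the quantization step, but it touches every parameter, which is exactly the kind of broad, small perturbation that robust unlearning needs to survive.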

This reminds me of a very interesting paper [1] that finds it's fairly "easy" to uncensor a model (modify its refusal behavior).

[1] https://www.reddit.com/r/LocalLLaMA/comments/1cerqd8/refusal...
