(no title)
NumberWangMan | 1 year ago
In the case of these models, you can fine-tune the model all you want to be moral and not do harmful things like scam the elderly, carry out disinformation campaigns, or harass people to the point of suicide. But as soon as you release the model weights, you are giving anyone the ability to fine-tune out all of those restrictions, at orders of magnitude less cost than it took to develop the model in the first place.
Regulating AI, especially as it becomes AGI and beyond, is going to be very tricky, and if everyone has the ability to create their own unrestricted, potentially sociopathic intelligences by tweaking the safe models created under careful conditions by big labs, we're in for a lot of trouble. That assumes we put the proper regulations on the big labs in the first place, and that they even have the ability to make their models "safe", which is hard, yes. But as AI turns into AGI and beyond, things are going to get pretty nuts, so it's important to start laying the groundwork now.