The good news is that it should eventually be technically feasible to build much more powerful and effective guardrails around neural nets. At the end of the day, we have full access to the machine running the code, whereas we can't exactly go around sticking electrodes into everyone's brains, and even "just" constant monitoring is prohibitively expensive for most human work. The bad news is that we may be decades away from understanding how to build useful guardrails around AI, and AI is doing stuff now.