(no title)
azath92 | 1 month ago
I am genuinely curious to understand the incentives for companies who have the power to mitigate risk to actually do so. Are there good examples in the past of companies taking action that is harmful to their bottom line in order to mitigate the societal harm their products cause? My premise is that their primary motive is profit/growth, dictated by revenue for mature companies and investment for growth companies (collectively, "bottom line").
I'm only in my mid 30s, so I don't have much perspective on past examples of voluntary action of this sort, with respect to tech or pre-tech corporates, where there was concern about harm. Probably too late in this thread for replies, but I'll think about it for the next time this comes up.
ACCount37 | 1 month ago
The rest is up to the companies themselves.
Anthropic seems to walk the talk, and has supported some AI regulation in the past. OpenAI and xAI don't want regulation to exist and aren't shy about it. OpenAI tunes very aggressively against PR risks, and xAI barely cares; Google and Anthropic are much more balanced, although they lean towards heavy-handed and loose, respectively.
China is its own basket case, where "alignment" means that what the AI says is aligned with the party line, which is somehow even worse than the US side of things.