michael_nielsen|2 years ago
I'm not sure why they're so often so bad. I wonder if it's the Upton Sinclair effect; to paraphrase slightly: "It is difficult to get a person to understand something, when their hoped-for future wealth depends on not understanding it."
jph00|2 years ago
There are far, far more dollars available to people on the "AI Safety" bandwagon than to those pushing back against it.
The idea that the Upton Sinclair effect is the source of pushback against AI Safety zealotry gets things largely backwards, AFAICT.
Folks who stress the importance of studying the impact of concentrated corporate power, the risks of profit-driven AI deployment, and so forth receive very little financial support.
foo3a9c4|2 years ago
> There are far, far more dollars available to people on the "AI Safety" bandwagon than to those pushing back against it.
> The idea that the Upton Sinclair effect is the source of pushback against AI Safety zealotry gets things largely backwards, AFAICT.
> Folks who stress the importance of studying the impact of concentrated corporate power, the risks of profit-driven AI deployment, and so forth receive very little financial support.
IMO your comment doesn't substantively address michael_nielsen's comment, though I might be wrong. Here is how I understand your exchange with him.
The two of you are talking about three sets of people:
Let A be the AI notkilleveryoneism people (those focused on existential risk from AI).
Let B be the AI capabilities developers/supporters.
Let C be the people concerned with regulatory capture and centralization by AI firms.
A and B are disjoint.
A and C have some overlap.
B and C have considerable overlap.
michael_nielsen is suggesting that the people in B refuse to take AI risk seriously because they are excited about profiting from AI capabilities and its funding. For example, a senior research engineer at OpenAI making $350k/year might be inclined to ignore AI existential risk (AIXR), as might a VC with a portfolio full of AI companies.
And you are pointing out that the people in C are getting less money to investigate AI centralization than the people in A are getting to investigate/propagandize AI notkilleveryoneism.
So, your claim is probably true, but it doesn't rebut what michael_nielsen suggested.
And I believe it's also critical to keep in mind that the actual funding is like this:
capabilities development >>>>>>>>>> ai notkilleveryoneism > ai centralization investigation