hexaga | 1 month ago
It's not a moral argument, but a practical one: agency is being extracted on a massive scale, and being used for what?
Human beings might as well be abstracted away into point sources of agency for all it matters to the argument being made. If you can extract 0.1% of the agency of anyone who looks at a thing, and you show it to 3 billion people, _you have a lot of agency_. If you then sell it to the highest bidder, you quickly find yourself removing "don't be evil" from whatever set of principles you may once have had.
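The arithmetic behind this can be made concrete. A minimal sketch, using the comment's own illustrative figures (these are rhetorical numbers, not measured data):

```python
# Hypothetical figures from the argument above, not measured data:
# extracting 0.1% of the agency of each of 3 billion viewers still
# pools an enormous amount of agency in one place.
audience = 3_000_000_000            # 3 billion people
pooled_agency = audience // 1000    # 0.1% per person, in person-equivalents
print(pooled_agency)                # → 3000000
```

That is, a vanishingly small per-person extraction still aggregates into the equivalent of three million people's worth of agency under one owner.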
My overarching point is that value-as-decision-mediator is meaningless in this calculus. It's the part of the equation that doesn't matter, the part you can't manipulate, the part that _is not a source of manipulable agency_. It's not relevant. I'm not saying it doesn't exist, or that it doesn't affect people's decisions: I'm saying it _doesn't matter_. It can be 99.99% of how you make your decisions, and it _still doesn't matter_. As long as that 0.01% gap exists.
> The only reason the ML model can predict whether someone will buy a product is because people have bought it in the past.
Yes. This is how you gather evidence that something works. It is not the reason it works. The ML model _knows about the spell_ because people have let it affect them in the past. But the spell works because it's magic. It doesn't need anything other than: Y follows X.
> The ML prediction is descriptive, not prescriptive. I can similarly create an ML model to predict the weather, that does not mean my model causes the weather which is basically what you're saying.
Not all models describe actions which are possible for you to take. Weather models are basically not like that. Advertising models _are_.
You aren't in a position to meaningfully manipulate the weather, even if you knew exactly how to manipulate it to maximize your profit. So the comparison is vacuous. Models are just knowledge. Obviously some knowledge is useful, some isn't; some is dangerous, some isn't; some can be used only by specific people, some by anyone, etc.
It's not the model that is causing things to happen. It's a machine that uses the knowledge in the model, where the model describes actions possible for the machine to take. It is automated greed.
The fundamental concern is not that knowledge is bad, or that ML models are bad. It is that someone is in the position of having a tap on vast, diffuse sources of agency, and has automated both the gathering of that knowledge and its use to maximize profit, causing untold damage to everything, with the responsibility laundered through intermediary actors.