hexaga | 1 month ago
It might as well be a magic spell. You show the user the thing, and they buy/subscribe/click-through with some probability according to a massive ML model that knows everything there is to know about them.
Yes - people are capable of making decisions in their own self-interest. But there's a gap: not _all_ of people's decision making works that way. And that gap can be exploited, systematically.
The existence of that gap is the actual problem. At scale, you can own a nontrivial quantity of human agency because that agency is up for grabs. Google / similar make their money by charging rent on that 'freely exploitable agency'. Not by providing value to people. The very idea is ridiculous. Value? How are you going to define a loss function over value?
ML models trained on click-through or whatever else don't figure out how to provide value. They find the gap. The gap is made of things like: 'sharp, contrasting borders _here_ increase P by 0.0003', 'flashing text X when recently viewed links contain Y increases P by 0.031', and so on.
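The gap-finding loop described above can be sketched as a toy epsilon-greedy bandit that allocates ad impressions to whichever UI variant shows the best observed click-through rate. All variant names and probabilities here are hypothetical, and the gaps are exaggerated so the toy converges quickly:

```python
import random

# Hypothetical per-variant click probabilities. The optimizer never sees
# these directly -- it only observes clicks -- and it never asks whether
# any variant provides value.
TRUE_P = {"plain": 0.02, "sharp_border": 0.03, "flashing_text": 0.05}

def choose(counts, clicks, epsilon=0.1):
    """Mostly exploit the best-looking variant; explore 10% of the time."""
    if random.random() < epsilon:
        return random.choice(list(TRUE_P))
    return max(TRUE_P, key=lambda v: clicks[v] / counts[v] if counts[v] else 0.0)

def run(n_impressions=200_000, seed=0):
    random.seed(seed)
    counts = {v: 0 for v in TRUE_P}
    clicks = {v: 0 for v in TRUE_P}
    for _ in range(n_impressions):
        v = choose(counts, clicks)
        counts[v] += 1
        clicks[v] += random.random() < TRUE_P[v]  # simulate a click
    return counts, clicks
```

Note there is no term for "value" anywhere in the loop: the objective is P(click), and whatever raises it - a border, flashing text - is what the system learns to show.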
satvikpendem | 1 month ago
You have cause and effect reversed. The only reason the ML model can predict whether someone will buy a product is because people have bought it in the past. Why did they buy it? Because it provides them value. The ML prediction is descriptive, not prescriptive. I can similarly create an ML model to predict the weather, that does not mean my model causes the weather which is basically what you're saying.
robotpepi | 1 month ago
And ML models are not only based on what you've already bought. On Instagram, for instance, I get ads for bird toys/vets/etc because I follow bird owners.
hexaga | 1 month ago
It's not a moral argument, but a practical one: agency is being extracted on a massive scale, and being used for what?
Human beings might as well be abstracted into point sources of agency for all it matters to the argument being made. If you can extract 0.1% of the agency of anyone who looks at a thing, and you show it to 3 billion people, _you have a lot of agency_. If you then sell it to the highest bidder, you find yourself quickly removing "don't be evil" from the set of any principles you may once have had.
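The arithmetic in that claim is worth making explicit. Using the comment's own hypothetical figures:

```python
# Back-of-the-envelope: tiny per-person influence, multiplied across a
# large audience. Both numbers are the hypotheticals from the text above.
per_person_fraction = 0.001       # "0.1% of the agency" of each viewer
audience = 3_000_000_000          # "3 billion people"
person_equivalents = per_person_fraction * audience
# i.e. the aggregate influence of roughly 3 million people's entire agency
```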
My overarching point is that value-as-decision-mediator is meaningless in this calculus. It's the part of the equation that doesn't matter, the part you can't manipulate, the part that _is not a source of manipulable agency_. It's not relevant. I'm not saying it doesn't exist, or that it doesn't affect people's decisions: I'm saying it _doesn't matter_. It can be 99.99% of how you make your decisions, and it _still doesn't matter_. As long as that 0.01% gap exists.
> The only reason the ML model can predict whether someone will buy a product is because people have bought it in the past.
Yes. This is how you gather evidence that something works. It is not the reason it works. The ML model _knows about the spell_ because people have let it affect them in the past. But the spell works because it's magic. It doesn't need anything other than: Y follows X.
> The ML prediction is descriptive, not prescriptive. I can similarly create an ML model to predict the weather, that does not mean my model causes the weather which is basically what you're saying.
Not all models describe actions which are possible for you to take. Weather models are basically not like that. Advertising models _are_.
You aren't in a position where you could meaningfully manipulate the weather even if you knew exactly how to manipulate it to maximize your profit. It's a vacuous argument in general. Models are just knowledge. Obviously some knowledge is useful, some isn't; some is dangerous, some isn't; some can be used only by specific people, some by anyone; and so on.
It's not the model that is causing things to happen. It's a machine that uses the knowledge in the model, where the model describes actions possible for the machine to take. It is automated greed.
The fundamental concern is not that knowledge is bad, or that ML models are bad. It is that someone is in the position of having a tap on vast, diffuse sources of agency, and has automated both the gathering of that knowledge and its use to maximize profit, causing untold damage to everything, with the responsibility laundered through intermediary actors.