
jesselawson | 3 years ago

I appreciate how optimistic people are about interesting tools like this. Personally, I'm concerned about production use of any models that lack strict oversight rules and accountability for their training data -- especially in digital social spaces.

It feels like we need international, strict, transparent controls over the data used to train ML models and over the algorithms through which content, recommendations, and inferences are delivered to the general public. Otherwise, commercial interests (which a U.S. president has already admitted are more important than peace[1]) are bound to create massive amounts of pseudo-signal in digital spaces: on the one hand capitalizing on the psychological effects of exposure and social proof to sell products, and on the other hand carrying out and exacerbating political disinformation campaigns.

But strict controls and transparency over training data won't be enough, since the general public is unlikely to ever have the time and energy required to inspect the data, recognize when models have been trained for lawful-evil purposes, and then petition their government for a redress of those grievances in a way that leads to positive legislative action for healthy digital communities. (I think this task will be relegated to the fringes of society just as it is now, with journalists from big corporate outlets interested in these topics mainly as a means of capitalizing on controversy.)

So what do we do? How do we prevent information pollution in digital spaces when commercial interests and state actors have both the means and the motive to carry out widespread campaigns of social influence? Would we need to reconsider how we as people, corporations, and governments treat digital spaces -- perhaps treating them as "the means of connectedness" to drive home the distinction between human digital connectedness as a tool for interpersonal communication versus a tool for mass influence? (Is that even possible under our current socioeconomic systems?)

I've always wondered what would be different if we treated online public spaces like national parks. What would we allow and not allow? What could people count on -- and what could they trust (and why) about existing in that space and sharing information with each other?

As these models mature and grow in utility, I'm both excited and hesitant about what is possible -- because I know good people with great imaginations and I also know really bad people with great imaginations.

[1]: https://www.youtube.com/watch?v=CC0VTbGqioM
