top | item 42495681

exprofmaddy | 1 year ago

I'm with you. Interpreting a problem as a problem requires a human (1) to recognize the problem and (2) to convince other humans that it's a problem worth solving. Both involve value, and value has no computational or mechanistic description (other than "given" or "illusion"). Once humans have identified a problem, they might employ a tool to find the solution. The tool has no sense that the problem is important or even hard; such values are imposed by the tool's users.

It's worth considering why "everyone seems all too ready to make ... leaps ...". "Neural", "intelligence", "learning", and the rest are metaphors that have performed very well as marketing slogans. Behind the marketing slogans are deep-pocketed, platformed corporate and government (i.e. socio-rational collective) interests. Educational institutions (another socio-rational collective) and their leaders have on the whole postured as trainers and preparers for the "real world" (i.e. a job), which means they accept, support, and promote the corporate narratives about techno-utopia. Which institutions are left to check those narratives? Who has time to ask questions, given the need to learn all the technobabble (by paying hundreds of thousands for 120 university credits) to become a competitive job candidate?

I've found there are many voices speaking against the hype; indeed, some even (rightly) question the epistemic underpinnings of AI. But they're ignored and out-shouted by tech marketing, fundraising politicians, and engagement-driven media.
