(no title)
orobus | 1 year ago
I've got a background in philosophy and I'm constantly asking myself these questions too. There seems to be a two-way failure: first, the ML folks fail to engage with the extant literature, and second, academic philosophy has failed to produce anything remotely resembling concrete, practical, or really even relevant philosophical work. The former is pretty much par for the course (in academic philosophy you get used to being ignored early on), but it's the latter I find egregious, especially given (as others here have pointed out) the almost universally lazy and magical thinking in the ML space.
For example, there's much hand-wringing about the benefits and perils of "AGI" with barely any attempt to establish that "AGI" is even a coherent concept. I'm skeptical that it is, but I'd be happy to entertain arguments to the contrary---if there were any!
"AI" has become a marketing term for increasingly sophisticated statistical methods for approximating functions. I think some sober discussion about whether such brute-force induction is the right sort of thing to warrant the term "AI" would be a welcome addition.