It's not hard to formulate why "AI" is bad, at least in its current form. It destroys the education system, is dangerous for the environment, things like deepfakes push us further toward post-truth, it decreases product quality, it replaces artists and similar professions rather than technical ones without creating new jobs in the same area, it increases inequality, and so on.

Of course, none of these are caused by the technology itself, but rather by the people driving this cultural shift. The difference in frameworks comes down to people chasing short-term gains (revenue, abusing the novelty factor, etc.) versus those trying to reasonably minimize harm.
reedf1|7 months ago
OK - so your framework is "harm minimization". This is a kind of negative utilitarian philosophy. Not everyone thinks this way, and you cannot really expect them to. But an argument _for_ AI from a negative utilitarian PoV is also easy to construct. What if AI accelerates the discovery of anti-cancer treatments or revolutionizes green tech? What if AI can act as a smart resource allocator and enable small hi-tech sustainable communes? These are not things you can easily prove AI won't enable, even within your framework.