NateEag | 1 month ago
As a programmer with a philosophical bent, I have thought a lot about the implications and ethics of toolmaking.
I concluded long before genAI was available that it is absolutely possible to build tools that dehumanize the users and damage the world around them.
It seems to me that LLMs do that to an unprecedented degree.
Is it possible to use them to help you make worthwhile, human-focused output?
Sure, I'd accept that's possible.
Are the tools inherently inclined in the opposite direction?
It sure looks that way to me.
Should every tool be embraced and accepted?
I don't think so. In the limit, I'm relieved governments keep a monopoly on nuclear weapons.
The people saying "All AI is bad" may not be nuanced or careful in what they say, but in my experience, they've understood rightly that you can't get any of genAI's upsides without the overwhelming flood of horrific downsides, and they think that's a very bad tradeoff.
I agree with them.
echelon | 1 month ago
They [1] have made dozens of video essays and run plenty of experiments showing why they think AI is going to be great for our field:
https://www.youtube.com/watch?v=DSRrSO7QhXY (scrub to the end of each of these videos to see)
https://www.youtube.com/watch?v=iq5JaG53dho
https://www.youtube.com/watch?v=mUFlOynaUyk
https://www.youtube.com/watch?v=GVT3WUa-48Y
Listen to them.
Our entire industry pays attention to them, and they're right!
[1] https://en.wikipedia.org/wiki/Corridor_Digital
CyberDildonics | 1 month ago
They are literally "react" YouTubers who have never worked a single day as professional VFX artists.
This is like saying Jake Paul is the heavyweight boxing champion of the world.