DeepSeaTortoise | 1 month ago
Not from a moral perspective, of course, but as a technical possibility. And the Overton window has already shifted so far that the moral aspect might align soon, too.
IMO there is an entirely different problem, one that's not going to go away just about ever but could be solved easily right now. And whichever AI company solves it first instantly wipes out all competition:
Accept full responsibility and liability for any damages caused by their model making wrong decisions and either failing to meet a minimum quality standard or the agreed-upon quality.
You know, just like the human it'd replace.
rsynnott | 1 month ago
That's not sufficient, at least from the likes of OpenAI, because, realistically, that's a liability that would evaporate in bankruptcy. Companies aren't going to want to depend on it. People _might_ take, say, _Microsoft_ up on such an offer, but Microsoft wouldn't make it.
nicbou | 1 month ago
I call it the banana bread problem.
To curate a list of the best cafés in your city, someone must eventually go out and try a few of them. A human being with taste honed by years of sensory experiences will have to order a coffee, sit down, appreciate the vibe, and taste the banana bread.
At some point, you need someone to go out in the world and feel things. A machine that cannot feel will never be a good curator of human experiences.
senordevnyc | 1 month ago
Granted, there's lots that's dystopian about that picture, and I'm not advocating for it, but it does start to feel like the main value of the "curator" is actually just data capture. They then put their own subjective take on that data, but I'm not totally convinced that's better than something that could tell me a data-driven story: "Here are the top three banana breads in the city that customers keep coming back to have a taste orgasm for."
I don't know though, it's a brave new world and I'm skeptical of anyone who thinks they know how all this will play out.