top | item 47136944

poisonborz | 5 days ago

Read and think about what you wrote. How can an AI completing specific, scoped tasks be in any way comparable to the scale of a human life? Maybe that's the same thing these execs forgot.

palmotea|5 days ago

> Read and think about what you wrote.

A lot of software engineers are bad at value judgements, and often feel smart by confidently taking propaganda at face value.

It's kind of mind-boggling that, if someone genuinely believes what the GP wrote, they don't immediately follow the statement with "smash it!"

password54321|5 days ago

I am comparing competency, not the "scale of a human life," whatever that is supposed to mean. AI still lacks taste, so it is still hard for it to replace human originality or creativity, but that's almost the only exception when it comes to work that can be done on a computer. It will very clearly surpass everyone in verifiable domains, and it has already surpassed most people.

We are already at the point where we don't fully know what to do with what we already have and simply haven't internalised it. All it will take is one economic shakeup to redirect human intelligence away from the work we are familiar with.

fernandopj|5 days ago

That is the crux of the problem we're facing as a society: many, many leaders have the idea that they are better served by an AI that is 70% (?), 80% (?) correct when making decisions about their business than by trusting humans - consultants, employees, pundits - whose judgment, biases, and personal goals they don't trust anyway, much less want to pay for.

For those people, an AI that is better (much better?) than a coin toss is the goal, if it means not relying on people.

Personally, I already deal weekly with people who vehemently antagonize every line of thinking that isn't what ChatGPT told them before a meeting.

ethbr1|5 days ago

The root issue is epistemological.

If one puts one's faith in answers that come out of a black box, then one must justify the black box's omniscience - specifically, by prioritizing it above human intellect and giving up on trying to reason through its logic.

You saw it with older people blindly following sat navs because they'd forgotten how to navigate. And those were much less believable-sounding devices!

It's not going to stop until (or unless) the first execs are thrown in jail because the "I just trusted AI" defense fails.