Archer6621 | 15 hours ago

What makes you think that AI cannot become significantly better than humans at "understanding" and modelling the world? If the AI is always more likely to be right than you or me due to being able to take more variables/knowledge into account by default, then why ever listen to a human, or even to yourself when it comes to an economic decision?

My honest and rather pessimistic take is that in the long-term any craft that purely lives in the abstract is likely to be doomed.


mjdiloreto | 14 hours ago

It's not that it won't be better at understanding; it's that there are too many possibilities to understand. This is true for humans too, but I can use the output to make money in a particular scenario.

Take even one simple example: software applications on a smart watch. How many dimensions of reality are relevant? Maybe I'm a busy person, so I need a personal assistant for my calendar. Maybe my wife needs access too. Maybe I'm a bird watcher and I'd like to track the birds I see. Maybe I'm a bird researcher and those observations need to integrate with my research... ad nauseam, forever.

AI will write all the code, and make all the meaningful decisions, but the backstop of the whole thing has to be some non-virtual reality with a paying user, otherwise there is no value to extract.

I personally only care about the outcome, I don't even really care if I understand how anything else works, or any of the decisions made. My dollars go in, working code comes out to suit me.

Archer6621 | 13 hours ago

I agree with your overall perspective here. You need the human in the loop to ground the request/direction in a reality with human needs, but that's about it.

What I was getting at is that nothing stops you from asking the AI what the next best smartwatch app to build would be; based on all the aggregated knowledge and other inputs (e.g. search) it has, it can potentially make a better estimate than you or any other human of a product that would sell.

Of course, whether that is actually true depends on how well its training data is able to model/mimic reality, and how grounded its inputs (e.g. the internet) are. You can always help it a bit by steering it in the right direction and providing additional grounding. I was mainly wondering how long this "additional" guidance would remain a necessity, fearing that it won't be for as long as we think.