top | item 31359429


Hellicio | 3 years ago

They are black boxes for the normal user in the same way that a smartphone is a black box.

None of my close family understands the technical details, from bits all the way to an image.

There are also plenty of expert systems where many developers see them as black boxes. Even ordinary databases and query optimizers are often enough black boxes.

As long as those systems perform better than existing systems, that's fine by me. Take autopilot: as long as we can show/prove well enough that it drives better than an untrained 18-year-old or an 80-year-old (to take extremes; I'm actually quite an average driver myself), all is good.
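To make "show/prove well enough" concrete, here is a minimal sketch of how such a comparison might look statistically. All numbers are made up for illustration, and treating non-overlapping confidence intervals as evidence is a conservative heuristic, not a formal hypothesis test:

```python
# Hypothetical crash counts, purely illustrative -- compare crash rates
# per million km between an autopilot fleet and a human-driver baseline.
import math

def crash_rate_ci(crashes, million_km, z=1.96):
    """Approximate 95% CI for a Poisson rate (crashes per million km)."""
    rate = crashes / million_km
    half = z * math.sqrt(crashes) / million_km
    return rate - half, rate + half

human = crash_rate_ci(crashes=4000, million_km=1000)      # made-up baseline
autopilot = crash_rate_ci(crashes=2500, million_km=1000)  # made-up fleet
print("human:", human)
print("autopilot:", autopilot)
# If the autopilot interval sits entirely below the human one, the
# fleet's crash rate is credibly lower at these (fictional) numbers.
print(autopilot[1] < human[0])
```

The point of the fleet-mileage framing is exactly this: the width of the interval shrinks with exposure, so a fleet that logs millions of km can make a claim no individual driver ever could.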

And one very, very big factor in my point of view: we have never before had a software equivalent of learning at this scale. When you look at Nvidia Omniverse, we can simulate those real-life situations so well, so often, and in so many different scenarios that we are already out of the loop.

I can't drive 10 million km in my lifetime (I think). The cars from Google and Tesla already did.

Yesterday at Google I/O, they showed the big 50x-billion-parameter network, and for Google this is the perfect excuse to take all the data they have always had and put it into something they can now monetize. No one can ask Google for money now the way the press did (same with DALL-E 2).

I think it's much more important that we force corporations to make/keep those models free for everyone to use. Unfortunately, I have no clue how much hardware you need to run those huge models.



esjeon | 3 years ago

> They are blackboxes for the normal user the same way as a smartphone is a blackbox.

You can't take that approach. Current NN techniques are black-box by nature: they are black boxes to everyone, including the devs. Proprietary software is only a black box to consumers, and even large, complex software still has internals that can be inspected when things go wrong. For NNs, nothing can describe exactly how they work, and each network has to be reverse engineered independently, which is, AFAIK, a separate research field.
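A toy sketch of what "black box to the devs" means in practice (entirely my own illustration, not taken from any production system): even a tiny hand-rolled 2-4-1 network trained on XOR ends up as matrices of floats. The function it computes cannot be read off the weights the way behavior can be read off source code:

```python
# Tiny 2-layer sigmoid network trained on XOR with plain batch gradient
# descent. After training, the weights are just numbers -- inspecting
# them tells you almost nothing about what the network does.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # input -> 4 hidden units
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # hidden -> output

mse = lambda p: float(np.mean((p - y) ** 2))
loss_before = mse(sig(sig(X @ W1 + b1) @ W2 + b2))

for _ in range(20000):
    h = sig(X @ W1 + b1)
    out = sig(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)          # gradient at output layer
    d_h = (d_out @ W2.T) * h * (1 - h)           # backpropagated to hidden
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0)

loss_after = mse(out)
print("loss:", loss_before, "->", loss_after)
print(W1)   # the learned weights: floats, not an explanation
```

Scale that opacity up from 13 parameters to billions and you get the situation described above, where each trained network is its own reverse-engineering project.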

> I can't drive 10 Million KM in my lifetime (i think). The cars from Google and Tesla already did.

Neither mileage nor the amount of data alone decides the quality of an AI system. In fact, returns diminish rather quickly after the early stage of development; the rest is about picking up corner cases. They could drive a parsec and still not perform better than the systems we have now.

Also, again, because an NN is a complete black box, even the devs can't be sure whether those corner cases are properly reflected in a newly trained network, or whether the new training data degraded performance in other corners. We just don't know for sure; we take chances. That's the limitation.
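The standard mitigation for that risk is behavioral, not structural: since you can't read the network, you gate every retrained model on a fixed suite of known corner cases. A minimal sketch (function names and the stand-in "models" are my own, not any vendor's API):

```python
# Regression gate for retrained models: accept a candidate only if it
# fails no corner case that the current model already handles.
def failing_cases(model, corner_cases):
    """Return the (input, expected) pairs the model gets wrong."""
    return [(x, want) for x, want in corner_cases if model(x) != want]

def accept_new_model(old_model, new_model, corner_cases):
    """True iff the new model's failures are a subset of the old one's."""
    old_fail = {x for x, _ in failing_cases(old_model, corner_cases)}
    new_fail = {x for x, _ in failing_cases(new_model, corner_cases)}
    return new_fail <= old_fail

# Tiny demo with plain functions standing in for trained networks:
corner_cases = [(-1, "brake"), (0, "coast"), (1, "go")]
old = lambda x: "brake" if x < 0 else "go"                 # wrong on 0
new = lambda x: {0: "coast"}.get(x, "brake" if x < 0 else "go")
print(accept_new_model(old, new, corner_cases))            # True
```

This only checks the corners you already know about, which is exactly the limitation in the comment above: a finite test suite can never certify what a black box does on the corners nobody has collected yet.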