top | item 40625335

philbin | 1 year ago

We might develop a selective corpus with, say, the contents of the "Great Books" of Western civilization, and add the record of scientific history.

But the underlying problem here is politics: everyone has a different idea of what belongs in that corpus (and consequently in the resulting AI). Ergo the various brouhahas about "safety" etc. Indeed, one assumption of these discussions may be correct: NN AIs may be as malleable, hard-headed, or gullible as any human intelligence [and I don't know whether that is good or bad]. So many questions arise: "Should we let it read Karl Marx?", "What about St. Augustine?", etc.

Presumably we're modeling an intelligence akin to ourselves. We each occupy a single mind but the difference between minds can be great. The most familiar approach is therefore to develop an AI that is as much like us as possible.

We could also model many single minds with different corpora and let them communicate and discuss, as humans do. Maybe they would let us interact too.
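A toy sketch of that idea (entirely hypothetical, and nothing like a real NN: each "mind" here just retrieves from its own corpus by keyword overlap, but the shape of the architecture is the point — separate corpora, shared conversation):

```python
# Toy simulation of several "minds" trained on different corpora that
# answer the same prompt and hear one another's answers. "Training" is
# reduced to naive keyword lookup purely for illustration.

class Mind:
    def __init__(self, name, corpus):
        self.name = name
        self.corpus = corpus   # sentences this mind has "read"
        self.heard = []        # utterances received from other minds

    def respond(self, prompt):
        # Return the first corpus sentence sharing a word with the prompt.
        words = set(prompt.lower().split())
        for sentence in self.corpus:
            if words & set(sentence.lower().split()):
                return f"{self.name}: {sentence}"
        return f"{self.name}: I have not read about that."

    def listen(self, utterance):
        self.heard.append(utterance)

def discussion(minds, prompt):
    # Each mind answers in turn; every other mind hears the answer.
    transcript = []
    for speaker in minds:
        utterance = speaker.respond(prompt)
        transcript.append(utterance)
        for listener in minds:
            if listener is not speaker:
                listener.listen(utterance)
    return transcript

# Two minds with deliberately different reading lists.
marxist = Mind("A", ["Capital is accumulated labour."])
augustinian = Mind("B", ["Our hearts are restless."])
transcript = discussion([marxist, augustinian], "what is capital")
```

Letting a human join would just mean adding a participant whose `respond` asks for input instead of searching a corpus.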

FWIW I think you should be happy that any "intelligence" shown so far is of "Average Redditor" value. What would you do if you scattered some holy water on a pentagram in your upstairs living room, hurled out a diabolical incantation calling forth spirits and something akin to Satan himself appeared? That's (kind of) where we are with GPT.
