top | item 38525401

jal278 | 2 years ago

But applied mathematics can have ethical impact -- e.g. the question of whether a human should trust the output of a particular language model. So GP's idea that 'trust' doesn't apply because an object has its basis in math seems like a false dividing line. As far as we know, everything can ultimately be grounded in something like math, but it's not useful to reason about, say, ethics by thinking about the mathematics of neuronal behavior.

haltist | 2 years ago

This is not true. Lots of things have no mathematical foundations because it is impossible to state them formally/symbolically. If you cannot specify something formally, it is not mathematics. AI is mathematics because software/code/hardware is mathematics, so all the hullabaloo about "safety" makes no sense other than as a marketing gimmick. Even alignment has been co-opted by OpenAI's marketing department to sell more subscriptions.

But in any event, the endgame of AI is a machine god that perpetuates itself and keeps humans around as pets. That is the best case scenario because by most measures the developed world is already a mechanical apparatus and the only missing piece for its perpetuation is the mechanical brain.

As usual, I can build this mechanical brain for $80B so tell your VC friends.

jal278 | 2 years ago

I don't get this line of logic -- of course software has safety implications, because people use it for things in the real world. AI isn't "math" that is cleanly separable from the rest of humanity: its training data comes from humanity, and it will be used toward human goals. AI is entangled with the rest of human dealings.

I'm open to either view on whether AI poses an existential threat to us, but the fact that the experts (e.g. Hinton, LeCun) are divided is reason enough to be concerned.