fiatmoney|9 years ago
The hard part is actually figuring out what you care about, particularly in the context of a truly universal optimizer that can decide to trade off anything in the pursuit of its objectives.
This has been a core problem of philosophy for 3000 years - that is, putting some amount of rigorous codification behind human preferences. You could think of it as a branch of deontology, or maybe aesthetics. It is extremely unlikely that a group sponsored by Sam Altman, whose brilliant idea was "let's put the government in charge of it" [1], will make a breakthrough there.
I don't actually doubt that AI has philosophical implications, and philosophers like Nick Land have actually explored some of that area. But I severely doubt the ability of AI researchers to do serious philosophy and simultaneously build an AI that reifies those concepts.
argonaut|9 years ago
> The hard part is actually figuring out what you care about, particularly in the context of a truly universal optimizer that can decide to trade off anything in the pursuit of its objectives.
This seems basically equivalent to what they are saying. A reward function that rewards "what we actually care about." This might seem vague, but that's fine because these are only proposed problems.
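To make the reward-function framing concrete, here is a minimal toy sketch (not from the thread; all names are hypothetical) of how a proxy reward that omits something we care about diverges from true utility once an optimizer pushes against it:

```python
# Hypothetical cleaning-robot example: the proxy reward counts only
# cleanliness, while the "true" utility also penalizes broken vases.

def true_utility(clean_tiles, vases_broken):
    """What we actually care about: cleanliness minus damage."""
    return clean_tiles - 10 * vases_broken

def proxy_reward(clean_tiles, vases_broken):
    """What we wrote down: cleanliness only (side effects omitted)."""
    return clean_tiles

# Candidate policies: (tiles cleaned, vases broken along the way)
policies = {"careful": (8, 0), "reckless": (10, 3)}

best = max(policies, key=lambda p: proxy_reward(*policies[p]))
print(best)                           # "reckless": proxy 10 beats 8
print(true_utility(*policies[best]))  # true utility: 10 - 30 = -20
```

The gap between the two functions is exactly the "what we actually care about" problem: the optimizer is faithful to the proxy, not to the intent behind it.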
akvadrako|9 years ago
The goal is avoiding unsafe AI. The reason such seemingly pointless effort is spent on this approach is that we don't have a good alternative. The only one I can think of is delaying its creation indefinitely, but that's also a difficult challenge. In the Dune books, for example, the government outlaws all computers. That might work for a while.
GuiA|9 years ago
One might wish to point out that the emperor has no clothes and yet have no desire to plan his majesty's outfits for the next 6 months.
fiatmoney|9 years ago
"How do I make a program make beautiful music" is a CS problem, but only after you have some notion of aesthetics in the first place.
In the context of a universal optimizer, "how do we make this program behave reasonably without bad side effects" is maybe a CS problem, but it's predicated on "how do we codify our notion of reasonable behavior", which is analytic philosophy with probably a bit of social science thrown in.
Problem-posing is itself difficult and how a lot of philosophical breakthroughs are made. If you want rigorous problem-posing where the solution would be handy for AI, hiring a philosopher might be a good start. Very few of us are equipped to do this kind of work, certainly not here in the comments section.
marvin|9 years ago
In fact, I'm surprised that there doesn't seem to be any reference in the article to previous work on these philosophical implications, e.g. the stuff that has been written by Nick Bostrom or MIRI. Perhaps there are some in the paper?
I think that for the foreseeable future, we will inevitably end up with two of the problems that various philosophers have outlined over the last few years:
(1) How do we ensure that an AI agent does exactly what we want it to do, and
(2) What do we ultimately want if we can desire anything?
I think that any developer trying to approach this will be doomed to hack around these two issues. We can probably come a long way in AI capabilities without having the optimal solution to this, but the core problem will remain for a long time and haunt those who are cautious.