top | item 39045339

foo3a9c4 | 2 years ago

> 1. I am saying that the claim "it is easy to find goals that are extinction-level bad" with regards to the AI tech that we can see today is incorrect. LLMs can understand context, and seem to generally understand that when you give them a goal of e.g. "increase revenue," that also includes various sub-goals like "don't kill everyone" that are implicit and don't need stating. Scaling LLMs to be smarter, to me, does not seem like it would reduce their ability to implicitly understand sub-goals like that.

I agree with both of these claims: (A) it is hard to find goals that are extinction-level bad for current SOTA LLMs, and (B) current SOTA LLMs understand at least some of the important context around the requests made to them.

But I'm also skeptical that they understand _all_ of the important context around requests made to them. Do you believe that they understand _all_ of the important context? If so, why?

> P2 states that "superhuman" AI will be uncontrollable — once again, I do not think that is obvious, and depends on your definition of superhuman. Does "superhuman" mean dramatically better at every mental task, e.g. a human compared to a slug? Does it mean "average at most tasks, but much better at a few?" Well, then it depends what few tasks it's better at.

I take "superhuman" to mean dramatically better than humans at every mental task.

> Similarly, it anthropomorphizes these systems and assumes they want to "escape" or not be controlled; it is not obvious that a superhumanly-intelligent system will "want" anything; Stockfish is superhuman at chess, but does not "want" to escape or do anything at all: it simply analyzes and predicts the best next chess move. The idea of "desire" on the part of the programs is a large unstated assumption that I think does not necessarily hold.

Would you have less of a problem with this premise if it instead talked about "superhuman AI agents"? I agree that some systems seem more like oracles than agents; that is, they just answer questions rather than pursuing goals in the world.

Consider self-driving cars: regardless of whether or not they 'really want' to avoid hitting pedestrians, they do in fact avoid hitting pedestrians. P2 is then roughly the analogous assertion: regardless of whether or not a superhuman AI agent 'really wants' to escape control by humans, it will in fact not be controllable by humans.

> Finally, P3 asserts that AI will be "misaligned by default" and that "misaligned" means that it will produce extinction or extinction-level results, which to me feels like a very large assumption. How much misalignment is required for extinction? Yud has previously made very off-base claims on this, e.g. believing that instruction-following would mean that an AI would kill your grandmother when tasked with getting a strawberry (if your grandmother had a strawberry), whereas current tech can already implicitly understand your various unstated goals in strawberry-fetching like "don't kill grandma." The idea that any degree of "misalignment" will be so destructive that it would cause extinction-level events is a) a stretch to me, and b) not supported by the evidence we have today.

I'm often unsure whether you are making claims about all future AI systems or just future LLMs.

> In fact a pretty simple thought experiment in the converse is: a superhumanly-intelligent system that is misaligned on many important values, but is aligned on creating AI that aligns with human values, might help produce more-intelligent and better-aligned systems that would filter out the misaligned goals — so even a fair degree of misalignment doesn't seem obviously extinction-creating.

Maybe. Or the misaligned system will just indifferently, and incidentally, kill everyone by repurposing the Earth's surface into a giant lab and factory for making the aligned AI.

> Furthermore, it is not obvious that we will produce misaligned AI by default. If we're training AI by giving it large corpuses of human text (or images, etc), and evaluating success by the model producing human-like output that matches the corpus, that... is already a form of an alignment process: how well does the model align to human thought and values in the training corpus?

I believe it is likely that this process does some small amount of alignment work. But I would still expect the system to be mostly confused about what humans want.

Is this roughly the argument that you are making?

  (P1) Current SOTA LLMs are good at understanding implicit context.
  (P2) A system must be extremely misaligned in order to cause a catastrophe.
  (C) So, it will be easy to sufficiently align future more powerful LLMs.

reissbaker | 2 years ago

My arguments are:

(P1) Current SOTA AI is good at understanding implicit context, and improved versions will likely be better at understanding implicit context (much like gpt-4 is better at understanding context than gpt-3, and llama2 is better than llama1, and mixtral is better than gpt-3 and better than claude, etc).

(P2) Most misalignments in the observable behavior of current AI do not produce extinction-level goals, and given (P1), it is unclear why anyone would expect future systems to produce them, since those systems will be even better at understanding the implicit human context of goals (e.g. implicit sub-goals like "do not make humanity extinct" and "don't turn the entire surface of the planet into an AI lab").

(C) Future AI will not likely be extinction-level misaligned with human goals.

I think there are several other arguments, though, e.g.:

(P1) Progress on AI capabilities is evolutionary: dumber models are slowly replaced by derivative-but-better models, via architectural improvements (e.g. new attention variants), dataset improvements (larger corpora, higher-quality finetuning sets), and improvements on benchmarks and alignment.

(P2) Evolutionary steps towards evil-AI will likely be filtered out during training, since it will not yet be generalized superhuman intelligence and will give away its misalignment during training, whereas legitimately-aligned AI model evolutions will be rewarded for better performance.

(P3) Generalized superhuman intelligence will likely be an evolutionary step from a well-aligned ordinary intelligence, which will be an evolutionary step from sub-human intelligence that is reasonably well aligned.

(C) Superhuman intelligence will have been evolutionarily refined to be reasonably well-aligned.

Or:

(P1) LLMs have architectural issues that will prevent them from quickly becoming generalized superintelligence of the "human vs slug" variety (bad/inefficient at math, tokenization issues, likelihood of hallucinations, limited ability to learn new facts without expensive and slow training runs, difficulty backtracking from incorrect chains of reasoning, etc).

(C) LLM research is not likely to soon produce a superhuman AI able to cause an extinction event for humanity, and should not be illegal.

However, ultimately my most strongly-believed personal argument is:

(P1) The burden of proof for making something illegal due to apocalyptic predictions lies on the prognosticator.

(P2) There is not much hard evidence of an impending apocalypse due to LLMs, and philosophical arguments for it are either self-referential and require belief in the apocalypse as a prerequisite, or are highly speculative, or both.

(C) LLM research should not be illegal.

foo3a9c4 | 2 years ago

(I don't currently have the energy to engage with each argument, so I'm just responding to the first.)

> (P1) Current SOTA AI is good at understanding implicit context, and improved versions will likely be better at understanding implicit context (much like gpt-4 is better at understanding context than gpt-3, and llama2 is better than llama1, and mixtral is better than gpt-3 and better than claude, etc).

I believe that (P1) is probably true.

> (P2) Most misalignments in the observable behavior of current AI do not produce extinction-level goals, and given (P1), it is unclear why anyone would expect future systems to produce them, since those systems will be even better at understanding the implicit human context of goals (e.g. implicit sub-goals like "do not make humanity extinct" and "don't turn the entire surface of the planet into an AI lab").

I'm confused about what exactly you mean by "goals" in (P2). Are you referring to (I) the loss function used by the algorithm that trained GPT4, or (II) goals and sub-goals which are internal parts of the GPT4 model, or (III) the sub-goals that GPT4 writes into a response when a user asks it "What is the best way to do X?"
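The three senses can be made concrete with a sketch (hypothetical names; this is not GPT4's actual code). Only sense (I) is an explicit object in the training code; sense (II) would be implicit in opaque weights; sense (III) is a property of generated text:

```python
# Sense (I): the loss function is an explicit, inspectable piece of the
# training code -- here, negative log-probability of the true next token.
import math

def cross_entropy_loss(predicted_probs, target_index):
    """Loss the training algorithm minimizes (sense I of 'goal')."""
    return -math.log(predicted_probs[target_index])

# Sense (II): any goals *inside* the trained model would be encoded only
# implicitly in its weights -- no line of code states them.
weights = [0.1, -0.3, 0.7]  # stand-in for billions of opaque parameters

# Sense (III): sub-goals the model writes into its *output text* when asked
# "What is the best way to do X?" -- properties of a generated string.
response = "Step 1: survey options. Step 2: pick the cheapest one."

print(round(cross_entropy_loss([0.1, 0.2, 0.7], 2), 4))
```

The disagreement above may turn on which sense is meant: evidence about (III) (polite, context-aware responses) does not straightforwardly settle questions about (I) or (II).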