airgapstopgap | 2 years ago

Tegmark's thinking here is extremely shallow: it discards the costs (opportunity costs, plus the risk of a stable dystopia) of this grandiose global project of dubious feasibility, and indeed I suspect he does not so much believe his own arguments as simply prefer global technocratic regulation "for good measure". We don't have good reasons to expect uncontrollable AGI in the way they posit (and fewer still the further we advance down this path of ML model scaling), but we are well acquainted with unaccountable human power structures.

antonkar | 2 years ago

He proposes making things provably secure, plus some basic regulation to promote that. It’s a straw man to dismiss this as him proposing “unaccountable human power structures”.

In my opinion governments don’t want security (the UK’s recent crackdown on encryption being the latest example); they want backdoors to eavesdrop on you. Why would governments promote provably secure systems? And how would such systems help them with their evil plans?

He does address the costs: "The 2023 global nominal GDP is estimated to be $105 trillion. How much is it worth to ensure human survival? $1 trillion? $50 trillion?"

airgapstopgap | 2 years ago

Provable safety (not to be confused with security in the ordinary sense of vulnerabilities) for general intelligence is a pipe dream because, to put it simply, "undesirable reasoning" in full generality is not a meaningful class of computations. The end result of this line of thinking is the centralization of AI development in a state-approved, military-associated facility.
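A minimal formal sketch of that claim (on the assumption that "undesirable reasoning" cashes out as a nontrivial behavioral property of programs): this is just Rice's theorem, which in the usual notation says

    \{\, e \in \mathbb{N} \mid \varphi_e \in P \,\} \text{ is undecidable}

for every nontrivial property P of the partial computable functions \varphi_e. So no verifier can soundly and completely pick out "programs that reason undesirably" by what they would do; provable guarantees exist only for narrow, precisely delimited properties.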

> Why would governments promote provably secure systems?

Promote? The state demonstrably wants provably secure systems for itself, in the military but also in the civilian sphere: see Matrix/Element, see the DoD, see the massive state interest in cryptography. This is an incredibly disingenuous argument; you talk as if people were discussing the tuning of one generalized Safety Dial, with no distinctions drawn anywhere down the line.