shubhamjain | 3 days ago

I was wondering if it was because of heavy-handedness of the administration, but apparently:

> The policy change is separate and unrelated to Anthropic’s discussions with the Pentagon, according to a source familiar with the matter.

Their core argument is that if we have guardrails that others don't, they would be left behind in controlling the technology, and they are the "responsible ones." I honestly can't comprehend the timeline we are living in. Every frontier tech company is convinced that the tech they are working towards is as humanity-useful as a cure for cancer, and yet as dangerous as nuclear weapons.

ACCount37|3 days ago

That's because it is.

AI is powerful and AI is perilous. Those two aren't mutually exclusive. Those follow directly from the same premise.

If AI tech goes very well, it can be the greatest invention of all human history. If AI tech goes very poorly, it can be the end of human history.

observationist|3 days ago

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.

-Irving John Good, 1965

If you want a short, easy way to know what AGI means, it's this: Anything we can do, they can do better. They can do anything better than us.

If we screw it up, everyone dies. Yudkowsky et al. are silly, though; it's not a certain thing, and there's no stopping it at this point, so we should push for and support people and groups who are planning, modeling, and preparing for the future in a legitimate way.

joshribakoff|3 days ago

You wouldn’t say that rolling dice is dangerous. You would say that the human who decides to take an action depending on the value of the dice is the danger. I don’t think AI is dangerous. I think people are dangerous.

cael450|3 days ago

Tbh, I find this argument really stupid. The word prediction machine isn’t going to destroy humanity. Sure, humans can do some dumb stuff with it, but that’s about it.

Stop mistaking science fiction for science.

overgard|3 days ago

True of AGI, but what we have right now doesn't fit that bill. (I would encourage people that disagree with this to go talk to ChatGPT about how LLMs and reasoning models work. Seriously! I'm not being snarky. It's very good at explaining itself. If you understand how reasoning works and what an LLM is actually doing it's hard to believe that our current models are going to do much more than become iteratively more precise at mimicking their training datasets.)

paradox242|3 days ago

It needs to go well every single day, and only needs to go very poorly once. Not to conflate LLMs with actual superintelligence, but for this (and many other reasons related to basic human dignity), this is not a technology that a responsible society should be attempting to build. We need our very own Butlerian Jihad.

PowerElectronix|3 days ago

Same with everything, right? You could say the same about nukes, electricity, the internet, the computer, etc. But if you look at it without paying attention to the "ultimate tool for humanity" hype, it doesn't really look like that much of a threat or a salvation.

Dropping the guardrails won't end civilization, but it will surely enable bad actors to do more damage than before (mass scams, blackmail, deepfake nudes, etc.).

There are companies that don't feel the pressure to make their models play fast and loose, so I don't buy Anthropic's excuse for doing so.

tokyobreakfast|3 days ago

> If AI tech goes very poorly, it can be the end of human history.

"Just unplug the goddamn thing!"

Also consider: if something is so bad it makes you wince or cringe, then your adversaries are prepared to use it.

SecretDreams|3 days ago

> If AI tech goes very well

The IF here is doing some very heavy lifting. Last I checked, for-profit companies don't have a good track record of doing what's best for humanity.

HardCodedBias|3 days ago

"If AI tech goes very well, it can be the greatest invention of all human history"

As has been said at many all hands:

Let's all work on the last invention needed by humans.

tyre|3 days ago

“A source familiar with the matter” is almost certainly a company spokesperson.

If they were unrelated, Anthropic wouldn’t be doing this this week, because obviously everyone will conflate the two.

Rapzid|3 days ago

Well, before this, Anthropic thought they were God's gift to AI: the chosen ones protecting humanity.

With the latest competing models, they are now realizing they are an "also-ran" provider.

Sobering up fast with an ice bucket of 5.3-codex, Copilot, and OpenCode dumped on their head.

tumdum_|3 days ago

Hello sama

tenthirtyam|3 days ago

I always enjoyed the Terminator movie series, but I always struggled to suspend my disbelief that any humans would give an AI such power without having the ability to override or pull the plug at multiple levels. How wrong I was.

N.B. the time travel aspect also required suspension of disbelief, but somehow that was easier :-)

zerkten|3 days ago

We delegate power already. Is unleashing AI in some place different from unleashing JSOC on an insurgency in a particular place? One is code and the other is a bunch of humans.

You expect the humans to follow laws, follow orders, apply ethics, look for opportunities, etc. That said, you very quickly have people circling the wagons and protecting the autonomy of JSOC when there is some problem. In my mind it's similar with AI because the point is serving someone. As soon as that power is undermined, they start to push back. Similarly, they aren't motivated to constrain their power on their own. It needs external forces.

edit: missed word.

tim333|3 days ago

We are currently giving them similar power to the average human idiot because I figure they won't do much worse than those. Letting either launch nukes is different.

jdross|3 days ago

Would nuclear energy research be a good analogy, then? Seems like a path we should have kept running down, but we stopped because of the weapons. So we got the weapons but not the humanity-saving parts (infinite clean energy).

DoughnutHole|3 days ago

Nuclear advancements slowed down due to PR problems from clear and sometimes catastrophic failure of commercial power plants (Three Mile Island, Chernobyl, Fukushima) and the vastly higher costs associated with building safer plants.

If anything the weapons kept the industry trucking on - if you want to develop and maintain a nuclear weapons arsenal then a commercial nuclear power industry is very helpful.

raincole|3 days ago

Nuclear energy hasn't been slowed down much, let alone stopped. China has been building new reactors every year for more than a decade, and there are more than 30 under construction.

The same will go for AI, btw. Westerners' pearl-clutching about AI guardrails won't stop China from doing anything.

turtlesdown11|3 days ago

> Seems like a path we should have kept running down, but stopped bc of the weapons.

you mean like the tens of billions poured into fusion research?

shafyy|3 days ago

It's a path we should have never started going down.

whywhywhywhy|3 days ago

> Every frontier tech company is convinced that the tech they are working towards is as humanity-useful as a cure for cancer, and yet as dangerous as nuclear weapons

They're not, really; it's always been a form of PR, both to hype their research and to make sure it's locked away to be monetized.

whatshisface|3 days ago

Shouldn't we be a little more skeptical about these abstract arguments when a very concrete sale is on the line?

goodmythical|3 days ago

Isn't curing cancer just as dangerous as a nuclear bomb? Especially considering some of the gene therapies under consideration? Because you can bet that a non-negligible portion of research in this space is being funded by governments and groups interested in applications beyond curing cancer. (Autism? Whiteness? Jewishness? Race in general? Faith in general? Could China finally cure western greed? Maybe we can slip some extra compliance in there so that the plebia- ah- population is easier to contr- ah- protect.)

Curing all cancers would increase population growth by more than 10% (9.7-10m cancer-related deaths vs. the current 70-80m annual growth), and it would age the population on average, since curing cancer would raise general life expectancy and a majority of the lives saved would be those of older people.
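
Back-of-envelope, taking those figures at face value (and assuming the avoided deaths translate directly into extra net growth):

    # rough check of the ">10%" claim using the numbers quoted above
    cancer_deaths = 9.7e6   # low end of the 9.7-10 million/year range
    net_growth = 80e6       # high end of the 70-80 million/year range
    print(cancer_deaths / net_growth)   # ~0.12 even in the least favorable case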

We'd even see a jobs and resources shock (though likely dissimilar in scale) as billions in funding shifted away from oncologists, oncology departments, oncology wards, etc. Billions of dollars, millions of hospital beds, and countless specialized professionals all suddenly re-assigned, just as with AI.

Honestly, the cancer/nuclear/tech comparison is rather apt. All of them are, or could be, disruptive and a net negative to society, while posing the possibility of the greatest revolution we've seen in generations.

mikkupikku|3 days ago

To paraphrase a deleted comment that I thought was actually making a good point, nuclear medicine and nuclear weapons are both fruit from the same tree.

scottLobster|3 days ago

> Every frontier tech company is convinced that the tech they are working towards is as humanity-useful as a cure for cancer, and yet as dangerous as nuclear weapons.

Maybe some of the more naive engineers think that. At this point any big tech business or SV startup saying they're in it to usher in some piece of the Star Trek utopia deserves to be smacked in the face for insulting the rest of us like that. The argument is always "well, the economic incentive structure forces us to do this bad thing, and if we don't we're screwed!" Oh, so your ideals are so shallow you aren't willing to risk a tiny fraction of your billions to meet them. Cool.

Every AI company/product in particular is the smarmiest version of this. "We told all the blue collar workers to go white collar for decades, and now we're coming for all the white collar jobs! Not ours though, ours will be fine, just yours. That's progress, what are you going to do? You'll have to renegotiate the entire civilizational social contract. No we aren't going to help. No we aren't going to sacrifice an ounce of profit. This is a you problem, but we're being so nice by warning you! Why do you want to stand in the way of progress? What are you a Luddite? We're just saying we're going to take away your ability to pay your mortgage/rent, deny any kids you have a future, and there's nothing you can do about it, why are you anti-progress?"

Cynicism aside, I use LLMs to the marginal degree that they actually help me be more productive at work. But at best this is Web 3.0. The broader "AI vision" really needs to die.

coffeefirst|3 days ago

Let's suppose I believe them, that's still a bad idea.

The reason Claude became popular is that it made shit up less often than other models and was better at saying "I can't answer that question." The guardrails are quality control.

I would rather have more reliable models than more powerful models that screw up all the time.

toss1|3 days ago

Excellent news. I was seriously worried they would cave when I saw the earlier news they'd dropped their core safety pledge [0].

It is entirely reasonable to refuse to provide tools to break the law by doing mass surveillance on civilians, and to insist the tool not be used to automatically kill a human without a human in the loop. Only an unreasonable regime would treat those as unreasonable demands.

[0] https://news.ycombinator.com/item?id=47145963

kelnos|3 days ago

"It's not because of the Pentagon deal", says company that has just greased the wheels for said Pentagon deal to move forward.

Riiiiiight.

francisofascii|3 days ago

It is a "reasonable" argument to keep yourself in the game, but it is sad nonetheless. You sacrifice your morals and do bad things so that, if things get way worse, maybe you will be in a position to stop something really bad from happening. Of course, you might just end up participating in the really bad thing.

nextaccountic|3 days ago

> The policy change is separate and unrelated to Anthropic’s discussions with the Pentagon, according to a source familiar with the matter.

This sounds like a lie. But if they are telling the truth, it's terrible timing nonetheless.

austinjp|3 days ago

> Every frontier tech company is convinced that the tech they are working towards is as humanity-useful as a cure for cancer, and yet as dangerous as nuclear weapons.

And they alone are responsible enough to govern it.

cmrdporcupine|3 days ago

We all made fun of Blake Lemoine and others for spending too many late nights up chatting with (ridiculously primitive by this year's standards) LLM chat bots and deciding they were sentient and trapped.

But frankly, I feel like the founders of Anthropic and others are victims of the same hallucination.

LLMs are amazing tools. They play back & generate what we prompt them to play back, and more.

Anybody who mistakes this for SkyNet -- an independent consciousness with instant, permanent learning, adaptation, and self-awareness -- is just huffing the fumes, and just as delusional as Lemoine was 4 years ago.

Every one of us should spend some time writing an agentic tool and managing context and the agentic conversation loop. These things are primitive as hell still. I still have to “compact my context” every N tokens, and “thinking” is repeating the same conversational chain over and over and jamming words in.
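
For anyone who hasn't tried it, the core loop is tiny. A minimal sketch (call_model, run_tool, summarize, and count_tokens here are hypothetical stand-ins for whatever API client and tooling you wire up):

    # Minimal agent-loop sketch: "memory" is just the message list we keep
    # re-sending; the model itself is stateless between calls.
    MAX_TOKENS = 100_000  # arbitrary compaction threshold

    def run_agent(task, call_model, run_tool, summarize, count_tokens):
        messages = [{"role": "system", "content": "You are a coding agent."},
                    {"role": "user", "content": task}]
        while True:
            reply = call_model(messages)              # one stateless completion
            messages.append({"role": "assistant", "content": reply["text"]})
            if reply.get("tool_call") is None:        # no tool requested: done
                return reply["text"]
            result = run_tool(reply["tool_call"])     # execute the requested tool
            messages.append({"role": "tool", "content": result})
            if count_tokens(messages) > MAX_TOKENS:
                # "compacting context": replace history with a summary of itself
                messages = [messages[0],
                            {"role": "user", "content": summarize(messages)}]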

Turns out this is useful stuff. In some domains.

It ain't SkyNet.

I don't know if Anthropic is truly high on their own supply, or just taking us all for fools so that they can pilfer investor money and push for regulatory capture.

There's also a bad trait among engineers, deeply reinforced by survivor bias, to assume that every technological trend follows Moore's law and exponential growth. But that applie[s|d] to transistors, not everything.

I see no evidence that LLMs + exponential growth in parameters + context windows = SkyNet or any other kind of independent consciousness.

overgard|3 days ago

I think playing with the APIs is something I'd encourage people excited about these technologies to do. I think it'll lead to the "magic" wearing off, but more appreciation for what they can actually accomplish.

austinjp|3 days ago

I always feel this argument misses a point. SkyNet may still be a long way off, but autonomous killer drones are here. That is a bad situation my dudes.

Every step on the journey towards SkyNet is worse than the preceding step. Let's not split hairs about which step we're on: it's getting worse, and we should stop that.

sonusario|3 days ago

I wonder if it stems from any of the "AI uprising" stories where humanity is viewed as the cancer to be eradicated.

ajross|3 days ago

It's absolutely wild that the Big Moral Question of our time is informed as much by mid-20th-century pop science fiction as it is by an existing paradigm from academia or a genuine reckoning with the technology itself.

If anything that makes me more hopeful and not less. It's asking too much that major decisionmakers, even expert/technical/SV-backed ones, really understand the risks with any new technology, and it always has been.

To take an example: our current mostly-secure internet authentication and commerce world was won as a hard-fought battle in the trenches. The Tech CEOs rushed ahead into the brave new world and dropped the ball, because while "people" were telling them the risks they couldn't really understand them.

But now? Well, they all saw War Games growing up. They kinda get it, in a way they were never going to grok SQL injection or phishing.

amelius|3 days ago

> Their core argument is that if we have guardrails that others don't, they would be left behind in controlling the technology, and they are the "responsible" ones.

Reminds me of:

https://en.wikipedia.org/wiki/Paradox_of_tolerance

which has the same kind of shitty conclusion.

skeptic_ai|3 days ago

OpenAI never open-sourced anything relevant, or in time. Internal email leaks show they only cared about becoming billionaires.

Claude only talks about safety, but never released anything open source.

All this said, I'm surprised China actually delivered so many open-source alternatives, which are decent.

Why didn't Westerners (who are supposed to be the good guys) release anything open source to help humanity? And why always claim they don't release because of safety, and then give unlimited AI to the military? Just bullshit.

Let’s all be honest and just say you only care about the money, and whomever pays you take.

They are businesses after all, so their goal is to make money. But please don't claim you want to save the world or help humans. You just want to get rich at others' expense. Which is totally fair: you make a good product and you sell it.

tehjoker|3 days ago

> Claude only talks about safety, but never released anything open source.

I'm still working through this issue myself, but Hinton said releasing weights for frontier models was "crazy" because they can be retrained to do anything. I can see corporate interest and safety converging on that point.

From the point of view of diminishing corporate power, I do think it is essential to have open weights. If not that, then the companies should be publicly owned to avoid concentration of unaccountable power.

https://www.youtube.com/watch?v=66WiF8fXL0k&t=544s

motbus3|3 days ago

It is hard to understand why other AI companies are still providing model weights at this point.

My guess is that they know they are not competitors, so they make it cheaper or free to hinder the surge of a super-competitor.

pixl97|3 days ago

I mean, if you have a bunch of guns, it's not really helpful for humanity to dump them on the street, but it does bring up the question of what you're doing building guns in the first place.

oatmeal1|3 days ago

90% of the people cancer kills are over 50. Old people who start believing everything they see on Facebook, but continue voting, with even greater confidence in their opinions. Old people who voted in Trump. Curing cancer would be just about the worst thing AI could do.

cnd78A|3 days ago

Unless AI could cure the Flynn effect you are talking about; it results from cultural evolution. Natural evolution is dumb, unlike the kind AI could create (I bet it will either destroy us or make us smarter).

afavour|3 days ago

It's exhausting to keep up with mainstream AI news because of this. I can never work out if the companies are deluded and truly believe they're about to create a singularity or just claiming they are to reassure investors/convince the public of their inevitability.

ACCount37|3 days ago

It's a fairly mainstream position among the actual AI researchers in the frontier labs.

They disagree on the timelines, the architectures, the exact steps to get there, the severity of risks. Can you get there with modified LLMs by 2030, or would you need to develop novel systems and ride all the way to 2050? Is there a 5% chance of an AI oopsie ending humankind, or a 25% chance? No agreement on that.

But a short line "AGI is possible, powerful and perilous" is something 9 out of 10 of frontier AI researchers at the frontier labs would agree upon.

At which point the question becomes: is it them who are deluded, or is it you?

grayhatter|3 days ago

> I can never work out if the companies are deluded and truly believe they're about to create a singularity or just claiming they are to reassure investors/convince the public of their inevitability.

You can never figure out if the people selling something are lying about its capabilities, or if they've actually invented a new form of intelligence that can rival or surpass billions of years of evolution?

I'd like to introduce you to Occam's razor.

api|3 days ago

The fear mongering always struck me as mostly a bid for regulatory capture and a moat, because without that the moat is small and transient.

moogly|3 days ago

"Those other companies are totally going to build the Torment Nexus, so we have no choice but to also build the Torment Nexus."