shubhamjain | 3 days ago
> The policy change is separate and unrelated to Anthropic’s discussions with the Pentagon, according to a source familiar with the matter.
Their core argument is that if they adopt guardrails that others don't, they will be left behind in controlling the technology, even though they are the "responsible ones." I honestly can't comprehend the timeline we are living in. Every frontier tech company is convinced that the tech it is building is as beneficial to humanity as a cure for cancer, and yet as dangerous as nuclear weapons.
ACCount37|3 days ago
AI is powerful and AI is perilous. Those two aren't mutually exclusive; both follow directly from the same premise.
If AI tech goes very well, it can be the greatest invention of all human history. If AI tech goes very poorly, it can be the end of human history.
observationist|3 days ago
> Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.

-Irving John Good, 1965
If you want a short, easy way to know what AGI means, it's this: Anything we can do, they can do better. They can do anything better than us.
If we screw it up, everyone dies. Yudkowsky et al. are being silly; it's not a certain thing, and there's no stopping it at this point, so we should push for and support people and groups who are planning, modeling, and preparing for the future in a legitimate way.
cael450|3 days ago
Stop mistaking science fiction for science.
PowerElectronix|3 days ago
Dropping the guardrails won't end civilization, but it will surely enable bad actors to do more damage than before (mass scams, blackmail, deepfake nudes, etc.)
There are companies that don't feel pressure to make their models play fast and loose, so I don't buy Anthropic's excuse for doing so.
tokyobreakfast|3 days ago
"Just unplug the goddamn thing!"
Also consider if something is so bad it makes you wince or cringe, then your adversaries are prepared to use it.
SecretDreams|3 days ago
The IF here is doing some very heavy lifting. Last I checked, for-profit companies don't have a good track record of doing what's best for humanity.
HardCodedBias|3 days ago
As has been said at many all-hands:
Let's all work on the last invention needed by humans.
tyre|3 days ago
If they were unrelated, Anthropic wouldn't be doing this the same week, because obviously everyone will conflate the two.
Rapzid|3 days ago
With the latest competing models, they are now realizing they are an "also-ran" provider.
Sobering up fast, with the ice bucket of 5.3-codex, Copilot, and OpenCode dumped over their head.
tenthirtyam|3 days ago
N.B. the time travel aspect also required suspension of disbelief, but somehow that was easier :-)
zerkten|3 days ago
You expect the humans to follow laws, follow orders, apply ethics, look for opportunities, etc. That said, you very quickly have people circling the wagons and protecting the autonomy of JSOC when there is some problem. In my mind it's similar with AI because the point is serving someone. As soon as that power is undermined, they start to push back. Similarly, they aren't motivated to constrain their power on their own. It needs external forces.
edit: missed word.
DoughnutHole|3 days ago
If anything the weapons kept the industry trucking on - if you want to develop and maintain a nuclear weapons arsenal then a commercial nuclear power industry is very helpful.
raincole|3 days ago
The same goes for AI, btw. Westerners' pearl-clutching about AI guardrails won't stop China from doing anything.
turtlesdown11|3 days ago
you mean like the tens of billions poured into fusion research?
whywhywhywhy|3 days ago
They're not, really; it's always been a form of PR, both to hype their research and to make sure it's locked away to be monetized.
goodmythical|3 days ago
Curing all cancers would increase population growth by more than 10% (9.7-10 million cancer-related deaths per year vs. a current net growth of 70-80 million), and it would raise the average age of the population, since curing cancer would increase general life expectancy and a majority of the lives saved would be older people.
We'd even see a jobs-and-resources shock (though likely dissimilar in scale) as billions in funding shifted away from oncologists, oncology departments, oncology wards, etc. Billions of dollars, millions of hospital beds, and countless specialized professionals all suddenly reassigned, just as with AI.
Honestly the cancer/nuclear/tech comparison is rather apt. Each either is or could be disruptive, and either is or could be a net negative to society, while posing the possibility of the greatest revolution we've seen in generations.
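The back-of-envelope arithmetic above checks out under the comment's own figures (order-of-magnitude assumptions, not sourced data):

```python
# Rough check of the comment's figures (illustrative assumptions, not sourced).
cancer_deaths_per_year = 9.7e6   # ~9.7-10 million cancer deaths per year
net_population_growth = 75e6     # ~70-80 million net population growth per year

# If all cancer deaths were averted, they would add to net growth:
relative_increase = cancer_deaths_per_year / net_population_growth
print(f"{relative_increase:.0%}")  # roughly 13%, i.e. "more than 10%"
```

So under these assumptions, the "more than 10%" claim holds, landing around 13%.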
scottLobster|3 days ago
Maybe some of the more naive engineers think that. At this point, any big tech business or SV startup saying they're in it to usher in some piece of the Star Trek utopia deserves to be smacked in the face for insulting the rest of us like that. The argument is always "well, the economic incentive structure forces us to do this bad thing, and if we don't, we're screwed!" Oh, so your ideals are so shallow that you aren't willing to risk a tiny fraction of your billions to meet them. Cool.
Every AI company/product in particular is the smarmiest version of this. "We told all the blue collar workers to go white collar for decades, and now we're coming for all the white collar jobs! Not ours though, ours will be fine, just yours. That's progress, what are you going to do? You'll have to renegotiate the entire civilizational social contract. No we aren't going to help. No we aren't going to sacrifice an ounce of profit. This is a you problem, but we're being so nice by warning you! Why do you want to stand in the way of progress? What are you a Luddite? We're just saying we're going to take away your ability to pay your mortgage/rent, deny any kids you have a future, and there's nothing you can do about it, why are you anti-progress?"
Cynicism aside, I use LLMs to the marginal degree that they actually help me be more productive at work. But at best this is Web 3.0. The broader "AI vision" really needs to die.
coffeefirst|3 days ago
The reason Claude became popular is because it made shit up less often than other models, and was better at saying "I can't answer that question." The guardrails are quality control.
I would rather have more reliable models than more powerful models that screw up all the time.
toss1|3 days ago
It is entirely reasonable to refuse to provide tools that break the law by doing mass surveillance on civilian citizens, and to insist the tool not be used to kill a human automatically, without a human in the loop. Demands to drop those restrictions are unreasonable demands by an unreasonable regime.
[0] https://news.ycombinator.com/item?id=47145963
kelnos|3 days ago
Riiiiiight.
nextaccountic|3 days ago
This sounds like a lie. But if they are telling the truth, the timing is terrible nonetheless.
austinjp|3 days ago
And they alone are responsible enough to govern it.
cmrdporcupine|3 days ago
But frankly I feel like the founders of Anthropic and others are victim of the same hallucination.
LLMs are amazing tools. They play back & generate what we prompt them to play back, and more.
Anybody who mistakes this for SkyNet -- an independent consciousness with instant, permanent learning, adaptation, and self-awareness -- is just huffing the fumes, and just as delusional as Lemoine was 4 years ago.
Every one of us should spend some time writing an agentic tool and managing context and the agentic conversation loop. These things are still primitive as hell. I still have to "compact my context" every N tokens, and "thinking" is just repeating the same conversational chain over and over and jamming words in.
Turns out this is useful stuff. In some domains.
It ain't SkyNet.
I don't know whether Anthropic is truly high on their own supply, or just taking us all for fools so they can pilfer investor money and push for regulatory capture.
There's also a bad trait among engineers, deeply reinforced by survivorship bias: assuming that every technological trend follows Moore's law and exponential growth. But that applied to transistors, not to everything.
I see no evidence that LLMs + exponential growth in parameters + context windows = SkyNet or any other kind of independent consciousness.
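To make the "primitive as hell" point concrete, here is a minimal sketch of an agentic conversation loop with naive context compaction. All names are illustrative; a real tool would call an LLM API where call_model() is stubbed out:

```python
# Minimal agentic loop sketch (hypothetical names, no real API).
MAX_TOKENS = 50  # tiny budget so compaction triggers in this demo

def estimate_tokens(messages):
    # Crude token estimate: whitespace-split word count.
    return sum(len(m["content"].split()) for m in messages)

def compact(messages):
    # "Compact the context": keep the system prompt and the last two
    # messages, replacing everything in between with a summary placeholder.
    head, tail = messages[0], messages[-2:]
    summary = {"role": "system",
               "content": f"[summary of {len(messages) - 3} earlier messages]"}
    return [head, summary] + tail

def call_model(messages):
    # Stub for the model call; just echoes the last user message.
    return "ack: " + messages[-1]["content"]

def agent_loop(user_inputs):
    messages = [{"role": "system", "content": "You are a coding agent."}]
    for text in user_inputs:
        messages.append({"role": "user", "content": text})
        if estimate_tokens(messages) > MAX_TOKENS:
            messages = compact(messages)  # the "every N tokens" chore
        messages.append({"role": "assistant", "content": call_model(messages)})
    return messages

history = agent_loop(["open the file"] * 20)
```

The point of the sketch: the "memory" here is nothing but a list of strings that gets lossily squashed whenever it grows too long, which is a long way from instant, permanent learning.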
austinjp|3 days ago
Every step on the journey towards SkyNet is worse than the preceding step. Let's not split hairs about which step we're on: it's getting worse, and we should stop that.
ajross|3 days ago
If anything that makes me more hopeful and not less. It's asking too much that major decisionmakers, even expert/technical/SV-backed ones, really understand the risks with any new technology, and it always has been.
To take an example: our current mostly-secure internet authentication and commerce world was won as a hard-fought battle in the trenches. The Tech CEOs rushed ahead into the brave new world and dropped the ball, because while "people" were telling them the risks they couldn't really understand them.
But now? Well, they all saw War Games growing up. They kinda get it, in a way they were never going to grok SQL injection or phishing.
amelius|3 days ago
Reminds me of:
https://en.wikipedia.org/wiki/Paradox_of_tolerance
which has the same kind of shitty conclusion.
skeptic_ai|3 days ago
Anthropic only talks about safety, but has never released anything open source.
All that said, I'm surprised China actually delivered so many open-source alternatives, which are decent.
Why haven't Westerners (who are supposed to be the good guys) released anything open source to help humanity? They always claim they can't release because of safety, and then hand unlimited AI to the military. Just bullshit.
Let's all be honest and just say you only care about the money, and you take from whoever pays.
They are businesses, after all, so their goal is to make money. But please don't claim you want to save the world or help humans. You just want to get rich at others' expense. Which is totally fair: you make a good product and you sell it.
tehjoker|3 days ago
I'm still working through this issue myself, but Hinton said releasing weights for frontier models was "crazy" because they can be retrained to do anything. I can see the alignment of corporate interest and safety converging on that point.
From the point of view of diminishing corporate power, I do think it is essential to have open weights. If not that, then the companies should be publicly owned, to avoid concentration of unaccountable power.
https://www.youtube.com/watch?v=66WiF8fXL0k&t=544s
motbus3|3 days ago
My guess is that they know they are not competitors so they make it cheaper or free to hinder the surge of a super competitor.
ACCount37|3 days ago
They disagree on the timelines, the architectures, the exact steps to get there, the severity of risks. Can you get there with modified LLMs by 2030, or would you need to develop novel systems and ride all the way to 2050? Is there a 5% chance of an AI oopsie ending humankind, or a 25% chance? No agreement on that.
But a short line like "AGI is possible, powerful, and perilous" is something 9 out of 10 AI researchers at the frontier labs would agree upon.
At which point the question becomes: is it them who are deluded, or is it you?
grayhatter|3 days ago
You can never figure out whether the people selling something are lying about its capabilities, or whether they've actually invented a new form of intelligence that can rival or surpass billions of years of evolution?
I'd like to introduce you to Occam's Razor.