I would honestly guess that this is just a small amount of tweaking on top of the Sonnet 4.x models. It seems like providers are rarely training new 'base' models anymore; we're at a point where the gains come more from modifying the model's architecture and doing post-training refinement. That's what we've been seeing for the past 12-18 months, iirc.
> Claude Sonnet 4.6 was trained on a proprietary mix of publicly available information from the internet up to May 2025, non-public data from third parties, data provided by data-labeling services and paid contractors, data from Claude users who have opted in to have their data used for training, and data generated internally at Anthropic. Throughout the training process we used several data cleaning and filtering methods including deduplication and classification. ... After the pretraining process, Claude Sonnet 4.6 underwent substantial post-training and fine-tuning, with the intention of making it a helpful, honest, and harmless assistant.
I think it does matter how much power it takes, but in the context of a power-to-"benefit to humanity" ratio. Things that significantly reduce human suffering or improve human life are probably worth spending energy on.
However, if we frame the question this way, I would imagine there is plenty of lower-hanging fruit to pick before we question the utility of LLMs. For example, should some humans be dumping 5-10 kWh/day into things like hot tubs or pools? That's just the most absurd example I could come up with off the top of my head; I'm sure we could find many others.

It's a tough thought experiment to carry further, though. Ultimately, one could argue we shouldn't spend any more energy than is absolutely necessary to live (food, minimal shelter, water, etc.). Personally, I would not find that an enjoyable way to live.
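For scale, a back-of-envelope comparison: the 5-10 kWh/day figure is from the comment above, while the ~3 Wh per chatbot query is an assumed round number (published estimates vary widely), so treat the result as order-of-magnitude only.

```python
# Rough comparison: one day of hot-tub energy vs. LLM inference queries.
HOT_TUB_KWH_PER_DAY = 7.5   # midpoint of the 5-10 kWh/day range cited above
WH_PER_LLM_QUERY = 3.0      # assumed average energy per query (rough estimate)

# Convert kWh to Wh, then divide by per-query energy.
queries_equivalent = HOT_TUB_KWH_PER_DAY * 1000 / WH_PER_LLM_QUERY
print(f"One hot-tub day is roughly {queries_equivalent:,.0f} LLM queries")
# roughly 2,500 queries
```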
Of course it matters. Who pays for the power? Do the AI companies pay for the data or the power they use for training? Nope, they don't.

Consumers pay for the power through rising energy bills, while the AI datacenters get huge government subsidies. At the same time, people get booted because some CTO has gone full-blown AI-blind.
The biggest issue is that the US simply Does Not Have Enough Power. We are flying blind into a serious energy crisis because the current administration has an obsession with "clean coal".
freeqaz|12 days ago
squidbeak|12 days ago
phplovesong|11 days ago
Stuff from last year will be outdated today.
brutalc|12 days ago
neural_thing|12 days ago
bronco21016|12 days ago
It's a bad situation for the consumer.
vablings|12 days ago