top | item 41441111

atty | 1 year ago

Rather cynical comments so far. I personally am very interested to see how this line of chips does, both in terms of performance (really efficiency, for this sort of chip) and market performance. Hopefully things like Lunar Lake, Arrow Lake, etc, and their 18A node all turn out to be as good as some of the early leaks and press releases would indicate, because Intel needs some big wins to get back on track.

akira2501|1 year ago

We spent decades taking leaps and bounds with every chip release. We've now seemingly settled into the incremental-improvement phase. The chip makers have responded by burning tons of transistors on extra crap that spends most of its life powered down.

It's hard not to be cynical.

doublepg23|1 year ago

? Since 2016, Ryzen has forced Intel to compete again, and the 2020 M-series from Apple made day-long battery life a reality.

CPUs have been very interesting the past 8 years or so.

Sohcahtoa82|1 year ago

I think it's all about being realistic.

We made leaps and bounds before because clock speeds were going up 50% or more between generations. Add in architecture improvements and it was easy to see actual performance double from one generation to the next.

But we're struggling to get clocks faster now, and I've always imagined it's because signal propagation just isn't fast enough. At 6 GHz, in one clock cycle, light travels only about 5 cm (roughly 2 inches), and electrical signals move slower than light, depending on the medium they travel through. At the frequencies we're operating at, I figure transistor switching speed and clock skew just within the CPU start to be an issue.
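The back-of-the-envelope math here is easy to check. A minimal sketch, using the vacuum speed of light as an upper bound (real on-chip signals propagate at some fraction of it):

```python
# How far light travels in one clock cycle, as an upper bound on
# how far any on-chip signal can get between ticks.
C = 299_792_458  # speed of light in vacuum, m/s

def distance_per_cycle_cm(freq_hz):
    """Distance light covers in one clock period, in centimeters."""
    return C / freq_hz * 100

print(f"{distance_per_cycle_cm(6e9):.1f} cm")  # ~5 cm at 6 GHz
print(f"{distance_per_cycle_cm(3e9):.1f} cm")  # ~10 cm at 3 GHz
```

Since actual signals are slower than c, the usable distance per cycle is smaller still, which is part of why clock skew across a large die matters.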

We already have tons of CPU optimizations. Out-of-order execution, branch prediction, register renaming, I could go on. There's probably not much more we can do to improve single-threaded performance. Every avenue for optimizing x86 has been taken.

And so we go multi-core, but that makes heat a primary concern. It also only helps if your task is parallelizable.
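The "relies on your task being parallel" point is usually formalized as Amdahl's law: the serial fraction of a workload caps the speedup no matter how many cores you add. A quick sketch (the 90%-parallel figure is just an illustrative assumption, not a claim about any particular workload):

```python
# Amdahl's law: overall speedup on N cores when only a fraction of
# the work can run in parallel.
def amdahl_speedup(parallel_fraction, cores):
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# A task that is 90% parallel can never exceed 10x, however many cores:
print(round(amdahl_speedup(0.9, 8), 2))     # 4.71
print(round(amdahl_speedup(0.9, 1024), 2))  # 9.91
```

So even generous parallelism hits diminishing returns fast, which is why more cores alone don't substitute for single-threaded gains.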

Or we go ARM, but now any of your software with x86-specific optimizations, like AVX-512 code paths, has to be rewritten.

dangus|1 year ago

Being realistic doesn’t mean you need to be cynical. Leaps-and-bounds progress never lasts forever, and incremental improvement is still worth celebrating.

I totally disagree that specialized processing units are wasteful because they spend most of their life powered down. Your iPhone uses the Neural Engine every time you open the camera app. The AI features announced for the next iOS version will run on-device much of the time you use Siri - and Siri gets used a lot, by a lot of people.

The old-school version of this would be dissing multimedia hardware like dedicated encoders/decoders. How do you think your laptop so effortlessly plays back 4K video and somehow gets better battery life than when you’re working on a Word document? It’s that part of your processor that usually “sits there doing nothing.”

You just don’t realize how much these segments of the chip are accelerating your experience.

knowitnone|1 year ago

You want a chip that never powers down? Boy, have I got a deal for you: zero transistor waste, zero extra crap, just like you asked for. It's a 286. Limited availability, so I'm gonna have to ask $5,000 per chip.