Delivering double-digit IPC improvements (looks like the industry is still competitive).
> The Arm C1 Ultra CPU aims for +25% single-threaded performance and double-digit IPC gains
The new Mali GPUs don't look bad either, with +20% performance while being 9% more power efficient.
And SME2-enabled Armv9.3 cores for on-device AI don't sound bad either.
Curious to see how much of this new architecture will actually be adopted by Qualcomm, or whether they will diverge further with their (Nuvia-acquired) designs.
Either way, I hope the result isn't fragmentation in the market (e.g. developers not making use of next-gen Arm features because Qualcomm doesn't support them).
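One hedged illustration of the fragmentation worry: software that wants to use next-gen Arm features has to detect them at runtime and fall back when a given SoC (or OS) doesn't expose them. A minimal Linux-only sketch, reading the hwcap names the kernel publishes in /proc/cpuinfo (`has_arm_feature` is a hypothetical helper for this example, not a real API):

```python
import platform

def has_arm_feature(feature: str) -> bool:
    """Check the /proc/cpuinfo 'Features' line for an arm64 hwcap name (Linux only)."""
    if platform.machine() != "aarch64":
        return False
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("Features"):
                    return feature in line.split(":", 1)[1].split()
    except OSError:
        pass
    return False

# Pick a code path based on what the silicon actually supports, instead of
# assuming every Armv9 core ships the same extensions.
kernel = "sme" if has_arm_feature("sme") else "neon" if has_arm_feature("asimd") else "scalar"
print(f"dispatching to {kernel} kernel")
```

On a chip (or a Qualcomm part) that omits an extension, the same binary quietly takes the slower branch — which is exactly how features end up unused in practice.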
Given the litigation, I don't see Qualcomm adopting any new cores while continuing to develop their own. It would be too risky: regardless of how many firewalls they put in place, Arm could claim that its IP spilled over.
I can only reach the "meh" level of enthusiasm. The RK3588 was released in 2022, and AFAIK it still doesn't have video decoding acceleration in the mainline kernel/Mesa/FFmpeg.
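For context on what "video decoding acceleration in mainline" means in practice: on mainline kernels, decoders like Rockchip's show up as V4L2 memory-to-memory codec devices, which userspace (FFmpeg, GStreamer) can then drive. A rough Linux-only sketch of probing for such a node — the ioctl number and capability bits come from the v4l2 uapi headers, and the function simply returns an empty list on machines without any codec devices:

```python
import fcntl
import glob
import struct

# From the Linux v4l2 uapi: _IOR('V', 0, struct v4l2_capability), sizeof == 104
VIDIOC_QUERYCAP = 0x80685600
V4L2_CAP_VIDEO_M2M = 0x00008000         # mem2mem codec device (single-plane)
V4L2_CAP_VIDEO_M2M_MPLANE = 0x00004000  # mem2mem codec device (multi-plane)

def probe_v4l2_codecs():
    """Return (device, driver) pairs for V4L2 mem2mem codec nodes, if any."""
    found = []
    for dev in sorted(glob.glob("/dev/video*")):
        try:
            with open(dev, "rb") as f:
                buf = bytearray(104)  # struct v4l2_capability
                fcntl.ioctl(f, VIDIOC_QUERYCAP, buf)
                driver = bytes(buf[:16]).split(b"\0")[0].decode()
                # device_caps: the u32 at offset 88 (after driver/card/bus_info/version/capabilities)
                device_caps = struct.unpack_from("<I", buf, 88)[0]
                if device_caps & (V4L2_CAP_VIDEO_M2M | V4L2_CAP_VIDEO_M2M_MPLANE):
                    found.append((dev, driver))
        except OSError:
            continue  # not a V4L2 device, or no permission
    return found

print(probe_v4l2_codecs())
```

Until a driver like this actually lands and is wired up in Mesa/FFmpeg, you're stuck with the vendor kernel, which is the complaint here.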
I generally find Arm's non-delivery and the subsequent lack of drivers super grating as well. That said, I believe this video block isn't Arm's; it's third-party IP.
Maybe also worth mentioning that the RK3588 uses Cortex-A76 cores, which Arm announced in 2018, so this was a four-year-old design at time of release. At this pace it seems to take the better part of a decade to get an Arm core out and generally usable.
I really, really hope some of this video encoding work lays a foundation that makes further mainline VPU support easier. I bought a cute small RK3566 board hoping to make a cheap low-power WiFi video transmitter, and of course it requires a truly prehistoric vendor-provided kernel to take advantage of the VPU, alas. Scant hope of this ever improving, but maybe some decade drivers won't be a Scythian nightmare.
It's nice seeing a second player come to the GPU/video space at least. Imagination GPUs are in the new Pixel phone! And a bunch of various designs here and there. Maybe they can get religion and work a little harder than others have at upstreaming. There were some promising early mainlinings, but I've not seen much in the kernelnewbies release logs for a while now: troubling silence.
What exactly is this on-device AI stuff that everybody is talking about? I'm a mere sysadmin, so I'm probably missing something here.
The last time I tried to run local LLMs on my 7900 XT via LM Studio, even with 20GB of VRAM, they were borderline usable. Fast enough, but the quality of the answers and generated code was complete and utter crap. Not even in the same ballpark as Claude Code or GPT-4/5. I'd love to run some kind of supercharged command-line completion on there, though.
Edit: I guess my question is: what exactly justifies the extra transistors that Arm here, and also AMD with their "AI MAX" parts, keep stuffing onto their chips?
I guess AI is not just LLMs. Image processing, speech-to-text, etc. would fall under the use case.
Regarding GenAI, Pixel phones already run the Gemini Nano model on-device with decent performance and utility.
This feels more like Arm giving its partners the homework to catch up with Apple, rather than a true innovation leap. Apple integrates hardware and software seamlessly; this just provides the raw ingredients.
0points|5 months ago
Arm doesn't build operating systems. But you already knew that. So your post is merely troll bait.