top | item 46343412

animal531 | 2 months ago

It's funny how ideas come and go. I made this very comment here on Hacker News probably 4-5 years ago and received a few downvotes for it at the time (though I was thinking of computers in general).

It would take a lot of work to make a GPU do current CPU type tasks, but it would be interesting to see how it changes parallelism and our approach to logic in code.

goku12|2 months ago

> I made this very comment here on Hacker News probably 4-5 years ago and received a few down votes for it at the time

HN isn't always very rational about voting. It would be a loss to judge any idea on that basis.

> It would take a lot of work to make a GPU do current CPU type tasks

In my opinion, that would be counterproductive. The advantage of GPUs is that they have a large number of very simple cores. Instead, just put a few separate CPU cores on the same die, or on a separate die. Or you could even have a forest of GPU cores with a few CPU cores interspersed among them - sort of like how modern FPGAs have logic tiles, memory tiles and CPU tiles spread across the die. I doubt it would be called a GPU at that point.

zozbot234|2 months ago

GPU compute units are not that simple; the main difference from a CPU is that they generally use a combination of wide SIMD and wide SMT to hide latency, as opposed to the power-intensive out-of-order execution used by CPUs. Running tasks that can't take advantage of either SIMD or SMT on GPU compute units might be a bit wasteful.

Also, you'd need to add extra hardware for various OS support functions (privilege levels, address space translation/MMU) that are currently missing from GPUs. But the idea is otherwise sound; you can think of the proposed 'Mill' CPU architecture as one variety of it.
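The latency-hiding point above can be made concrete with a back-of-envelope estimate: a GPU compute unit keeps many threads resident so that while one stalls on a memory load, the others keep the ALUs busy. A minimal sketch (the cycle counts below are illustrative assumptions, not specs of any real GPU):

```python
import math

def threads_to_hide_latency(mem_latency_cycles: int,
                            compute_cycles_per_op: int) -> int:
    """Rough occupancy estimate: how many SMT threads must be in
    flight so that the ALUs stay busy for the full duration of one
    thread's memory stall (a Little's-law-style approximation)."""
    return math.ceil(mem_latency_cycles / compute_cycles_per_op)

# Assuming ~400 cycles of DRAM latency and ~4 cycles of arithmetic
# per memory access, a compute unit wants on the order of 100
# threads resident - which is why a single-threaded, branchy CPU
# workload leaves a GPU core mostly idle.
print(threads_to_hide_latency(400, 4))  # 100
```

An out-of-order CPU attacks the same stall with speculation and a deep reorder buffer instead of extra threads, which is where the power cost mentioned above comes from.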

Den_VR|2 months ago

As I recall, Gartner made the outrageous claim that upwards of 70% of all computing will be "AI" within some number of years - nearly the end of CPU workloads.

sharpneli|2 months ago

Is there any need for that? Just have a few good CPUs there and you’re good to go.

As for what the HW looks like, we already know. Look at Strix Halo as an example. We are just getting bigger and bigger integrated GPUs. Most of the flops on that chip are in the GPU part.

amelius|2 months ago

I still would like to see a general GPU back end for LLVM just for fun.

PunchyHamster|2 months ago

It would just make everything worse. Some (if not most) tasks are far less parallelisable than typical GPU loads.

deliciousturkey|2 months ago

HN in general is quite clueless about topics like hardware, high-performance computing, graphics, and AI performance. So you probably shouldn't care if you are downvoted, especially if you know you're right.

Also, I'd say that if you buy, for example, a MacBook with an M4 Pro chip, it already is a big GPU attached to a small CPU.

philistine|2 months ago

People on here tend to act as if 20% of all computers sold were laptops, when it’s the reverse.