billconan | 10 days ago
I'm curious how a GPU language's syntax design can differ from a CUDA kernel's, because I think there is no way to avoid concepts like thread_id. How can GPU programming be made (a lot) simpler than CUDA?
mr_octopus | 10 days ago
Most GPU work boils down to a few patterns: map, reduce, scan. Each one has a known way to assign threads. So instead of writing a kernel with thread_id:

    let c = gpu_add(a, b)
    let total = gpu_sum(c)

The thread indexing is still there, just handled by the runtime, like how Python hides pointer math.
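A minimal CPU-side sketch of the idea in that reply, assuming the hypothetical `gpu_add` / `gpu_sum` names from the comment. The `gpu_map` / `gpu_reduce` helpers below are illustrative stand-ins, not a real GPU API: the point is that the "runtime" owns the element indexing, so the caller never sees a thread_id.

```python
# CPU sketch: the runtime assigns each element an index internally,
# so user code never touches thread_id. (Names are hypothetical.)

def gpu_map(fn, *arrays):
    # A real runtime would launch one thread per element;
    # here the same per-element indexing is done serially.
    n = len(arrays[0])
    return [fn(*(a[i] for a in arrays)) for i in range(n)]

def gpu_reduce(fn, array, init):
    # A real runtime would use a tree reduction across threads.
    acc = init
    for x in array:
        acc = fn(acc, x)
    return acc

def gpu_add(a, b):
    return gpu_map(lambda x, y: x + y, a, b)

def gpu_sum(a):
    return gpu_reduce(lambda acc, x: acc + x, a, 0)

c = gpu_add([1, 2, 3], [4, 5, 6])
total = gpu_sum(c)
```

This is roughly the design of array-combinator systems: the user composes whole-array operations, and thread assignment is a detail of each pattern's implementation.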