sidkshatriya | 4 months ago

> Why does AMD come across as so generally clueless when it comes to giving developers what they want, compared to Nvidia?

I have some theories. Firstly, Nvidia was smart enough to use a unified compute architecture across all its GPUs -- consumer and commercial. AMD has an awkward split between CDNA and RDNA. So while AMD is scrambling to get CDNA competitive, RDNA is not getting as much attention as it should. I'm pretty sure the ROCm stack has all kinds of hacks trying to get things working across consumer Radeon devices (which internally are probably not well suited or tuned for compute anyway). AMD is hamstrung by its consumer hardware for now in the AI space.

Secondly, AMD is trying to be "compatible" with Nvidia (via HIP). Sadly, this is the same thing AMD did with Intel in the past. Being compatible is a bad idea when the market leader (Nvidia) is not interested in standardising and actively pursues optimisations and extensions. AMD will always be playing catch-up.

TL;DR: AMD made some bad bets on what future hardware would look like and never treated software as critical the way Nvidia did.

AMD now realizes that software is critical and knows what future hardware should look like. However, it is difficult to catch up with Nvidia, the most valuable company in the world, with almost limitless resources to invest in further improving its hardware and software. Even as AMD improves, it will continue to look bad in comparison as Nvidia keeps pushing the state of the art forward.

alessandru | 4 months ago

radeon historically gimped double precision less badly than nvidia; one might say radeons were more suited for scientific compute. actual scientific compute that cares about numbers and precision.

idk about bad bets, they were just slow to release rdna for desktop when they already had it for consoles. there wasn't a conflict between cdna and rdna; cdna was the product for their data center. they slow-walked rdna chips because they were busy selling them to consoles. and they never invested in software like nvidia did. they wanted outside people to make openCL work while nvidia was investing directly.

these kinds of amateur takes are like a poor distillation of whatever you read in the hardware news. sorta muddying the waters a bit with your confusion.

z3ratul163071 | 4 months ago

we can blame individual bad decisions, but imo it all stems from a culture of viewing software as a cost center, and everything gets messed up from there.

positron26 | 4 months ago

While Nvidia's strategic foresight explains why Nvidia is ahead, it doesn't capture why the challenge isn't something AMD can or should tackle alone.

The 7,484+ companies who stand to benefit have no good way to split the bill and dogpile a problem that is nearly impossible to make progress on without lots of partners contributing their perspectives via a breadth of use cases. This is why I'm building https://prizeforge.com.

Nvidia didn't do it alone. Industry should not expect or wait on AMD to do it alone; waiting just means lighting money on fire right now. In return for its support, industry can demand that more open technology be used across AMD's stack, making competition better overall in exchange for making AMD competitive.

alessandru | 4 months ago

who is waiting? amd and apple were part of the opencl consortium; cuda simply ran away with the prize. amd needs to match nvidia on software spend. that is/was the difference.

FuriouslyAdrift | 4 months ago

AMD has announced that it is unifying its data-center compute (Instinct) and consumer (RDNA) GPU architectures in the next generation.