rdos|5 months ago
There is no point in using a low-bandwidth card like the B50 for AI. Attempting to use 2x or 4x cards to load a real model will result in poor performance and low generation speed. If you don't need a larger model, use a 3060 or 2x 3060, and you'll get significantly better performance than the B50, so much better that the higher power consumption won't matter (70W vs. 170W for a single card). Higher VRAM won't make the card 'better for AI'.
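For rough intuition: single-batch token generation is close to memory-bandwidth-bound, since each generated token reads the full set of weights, so tokens/s tops out near bandwidth divided by model size. A minimal sketch in Python, assuming published bandwidth specs (~224 GB/s for the B50, ~360 GB/s for a 3060) and an 8 GB quantized model as an illustrative figure:

    # Upper-bound decode speed: bandwidth / bytes read per token.
    # Bandwidth numbers are published specs; the model size is an assumption.
    model_bytes = 8e9  # e.g. a ~14B model at 4-bit quantization

    cards = {"Arc Pro B50": 224e9, "RTX 3060": 360e9}  # bytes/s

    for name, bw in cards.items():
        print(f"{name}: ~{bw / model_bytes:.0f} tokens/s upper bound")

On those numbers the 3060's roughly 60% bandwidth advantage translates directly into generation speed, which is the point above.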
bsder|5 months ago
People actually use loaded-out M-series Macs for some forms of AI training, so total memory does seem to matter in certain cases.
robotnikman|5 months ago
Are there any performance bottlenecks with using 2 cards instead of a single card? I don't think any of the consumer Nvidia cards use NVLink anymore, or at least they haven't for a while now.
vid|5 months ago
Plenty of people use, e.g., 2, 4, or 6 3090s to run large models at acceptable speeds.
Higher VRAM at decent (much faster than DDR5) speeds will make cards better for AI.
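The capacity side of that is simple arithmetic; a minimal sketch, where the model size, quantization overhead, and KV-cache figures are all assumptions for illustration:

    # Does a quantized model fit across N cards? Figures are illustrative.
    params = 70e9            # 70B-parameter model (assumption)
    bytes_per_weight = 0.55  # ~4-bit quant plus per-group scales (assumption)
    overhead = 4e9           # KV cache + buffers, rough guess

    need = params * bytes_per_weight + overhead  # ~42.5 GB
    have = 2 * 24e9                              # two 24 GB 3090s
    print(f"need ~{need/1e9:.0f} GB of {have/1e9:.0f} GB -> "
          f"{'fits' if need <= have else 'does not fit'}")

And the 3090's ~936 GB/s GDDR6X is indeed far faster than dual-channel DDR5, which is why the same model generates much faster split across GPUs than out of system RAM.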
wqaatwt|5 months ago
Intel and even AMD can't compete or aren't bothering. I guess we'll see how the glued 48GB B60 will do, but that's still a relatively slow GPU regardless of memory. Might be quite competitive with Macs, though.