m15i|4 years ago
It’s all about the RAM. 64GB would allow larger input image sizes and/or nets with more parameters. Right now, the consumer card with the most RAM is the RTX 3090, which has only 24GB and is, in my opinion, overpriced and power-inefficient (~350W). Even the ~$6000 RTX A6000 cards are only 48GB.
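To make the "more parameters needs more RAM" point concrete, here is a rough back-of-envelope sketch (assumptions mine: fp32 values and roughly four tensors per parameter for weights, gradients, and two Adam moment buffers; activations and framework overhead are ignored, so real usage is higher):

```python
def training_memory_gb(n_params: int,
                       bytes_per_value: int = 4,
                       tensors_per_param: int = 4) -> float:
    """Rough lower bound on training memory in GB.

    Counts weights + gradients + two Adam moment buffers (4 tensors per
    parameter at fp32). Activations and the input batch are not included.
    """
    return n_params * bytes_per_value * tensors_per_param / 1024**3

for n in (125_000_000, 1_000_000_000, 3_000_000_000):
    print(f"{n/1e6:>6.0f}M params -> ~{training_memory_gb(n):.1f} GB")
```

On this crude estimate, a 1B-parameter model already needs roughly 15GB before activations, and around 3B parameters you blow past the 3090's 24GB but could still fit in 48GB or 64GB.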
cschmid|4 years ago
Also, software support for accelerated training on Apple hardware is extremely limited: of the main frameworks, only TensorFlow seems to target it, and even there, the issues you'll face won't be high on the priority list (a quick sanity check of that support is sketched below).
I know that Nvidia GPUs are very expensive, but if you're really serious about training a large model, the only alternative would be renting compute from Google's cloud.
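For context on what "limited support" looks like in practice: TensorFlow on Apple silicon goes through Apple's Metal plugin rather than the mainline CUDA path. A minimal sanity check, assuming the tensorflow-macos and tensorflow-metal packages are installed per Apple's instructions (package names and setup are assumptions, not something the commenters specify):

```python
import tensorflow as tf

# If the Metal plugin is installed and working, the Apple GPU shows up as a 'GPU' device.
gpus = tf.config.list_physical_devices("GPU")
print("Visible GPUs:", gpus)

if gpus:
    # Run a tiny matmul on the GPU to confirm work is actually dispatched to it.
    with tf.device("/GPU:0"):
        a = tf.random.normal((1024, 1024))
        b = tf.random.normal((1024, 1024))
        c = tf.matmul(a, b)
    print("Matmul OK, result shape:", c.shape)
else:
    print("No GPU visible -> training will fall back to the CPU.")
```

If the device list comes back empty, training silently runs on the CPU, which is the kind of issue that tends to sit low on the framework's priority list.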
m15i|4 years ago