sdlion | 3 years ago
For training, more GPU RAM lets you train at higher resolutions, in less time, and with better final performance.
Before being fed to the model, every image is resized to a "network dimension" (the YOLOv4 default is 416x416 px, if I recall correctly). During training, several samples are grouped together and trained on at the same time, in "batches". For better generalization you want bigger batches, so that more different images are fed through at the same time.
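To illustrate the resize-then-batch step, here is a minimal sketch. It uses nearest-neighbor resizing in plain NumPy just to stay self-contained; a real pipeline would typically use OpenCV or letterbox resizing, and 416 is assumed as the network dimension:

```python
import numpy as np

NET_SIZE = 416  # assumed YOLOv4 network dimension

def resize_nearest(img, size=NET_SIZE):
    """Resize an HxWxC image to size x size with nearest-neighbor sampling."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size  # source row for each output row
    cols = np.arange(size) * w // size  # source column for each output column
    return img[rows][:, cols]

# Two images of different shapes become one uniform batch after resizing.
imgs = [
    np.zeros((480, 640, 3), dtype=np.uint8),
    np.zeros((720, 1280, 3), dtype=np.uint8),
]
batch = np.stack([resize_nearest(im) for im in imgs])
print(batch.shape)  # (2, 416, 416, 3)
```

The point is simply that images of arbitrary sizes all end up at the same network dimension, which is what allows them to be stacked into a single batch tensor.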
With a 3060 (non-Ti) you'll have 12GB of GPU RAM; with that I think you can run the default settings (network size, batch size, and subdivision of batches) for the YOLOv4 model. If you want to go up to 512px, you might have to increase the subdivisions (creating more sub-batches) or reduce the batch size.
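To make those knobs concrete, they live in the `[net]` section of a darknet `.cfg` file; the values below are the common defaults, shown purely for illustration:

```
[net]
# network dimension: every image is resized to width x height
width=416
height=416
# batch: images per training iteration
# subdivisions: how many sub-batches the batch is split into,
# so each sub-batch fits in GPU RAM one at a time
batch=64
subdivisions=16
```

Raising subdivisions lowers peak GPU memory use without changing the effective batch size, which is why it's the first thing to try when a larger network dimension runs out of memory.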
If I recall correctly, you can find 3070 cards with less than 12GB of RAM, so in pursuing faster training times (I'm not talking about inference, i.e. using the model to actually recognize something) you might lose the ability to train with the broader range of options that can improve your accuracy.