top | item 25326072

somurzakov | 5 years ago

Not sure I get your point: both DNNs and SVMs require one forward pass for inference, so there is no difference there. If an SVM can converge in one epoch, how is that less efficient than the status quo with DNNs?


eugenhotaj | 5 years ago

For kernel SVMs, one needs to keep around part of the training data (the support vectors), right? With DNNs, after training, all you need are the model parameters. For very large datasets, keeping around even a small part of your training data may not be feasible.
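To make the point concrete, here is a minimal sketch of a kernel SVM's decision function. The support vectors, dual coefficients, bias, and kernel width below are made-up illustrative values, not a fitted model; the structure of the formula is what matters: every support vector (a row of the training data) is touched on every prediction.

```python
import numpy as np

# Hypothetical "fitted" RBF kernel SVM: after training you must keep the
# dual coefficients (alpha_i * y_i), the bias, AND the support vectors,
# which are literal rows of the training set.
support_vectors = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
dual_coef = np.array([0.5, -0.5, 0.25])  # alpha_i * y_i (made-up values)
bias = 0.1
gamma = 1.0  # RBF kernel width

def rbf_kernel(a, b, gamma):
    """K(a, b) = exp(-gamma * ||a - b||^2), broadcast over rows of a."""
    return np.exp(-gamma * np.sum((a - b) ** 2, axis=-1))

def decision(x):
    """f(x) = sum_i dual_coef_i * K(sv_i, x) + bias.

    Inference cost and model size both scale with the number of
    support vectors, i.e. with (part of) the training data."""
    return np.dot(dual_coef, rbf_kernel(support_vectors, x, gamma)) + bias

print(decision(np.array([1.0, 1.0])))
```

A DNN's forward pass, by contrast, only reads the weight matrices; no training example appears in the inference formula.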

Furthermore, the number of parameters does not (necessarily) grow with the size of the training data; the parameters can be reused if you get more data, and they can be quantized/pruned/etc. There's no easy way to do these things with SVMs, as far as I understand.