995533 | 7 years ago
Besides that, the accuracy gains are not marginal anymore (BoW can't compete like it used to, especially with pre-trained models).
Cybiote|7 years ago
This isn't true. It depends on your priorities and goals. Machine learning that spends most of its time unable to learn is not real AI. Some of us are interested in sample and energy efficient learning capable of on-line incremental updates immune to catastrophic forgetting. Not just because this is truer to actual learning but because it moves away from being dependent on a handful of companies to do the actual training.
Anticipating some replies: no, transfer learning or meta-learning methods don't really avoid this. In the case of transfer learning, you still have that high coupling to a handful of sources. The downsides of this are a discussion of their own. In addition, there are times when the ability to extract local relations can be dulled by the dominant Wikipedia and Common Crawl representations. Meta-learning gets you fast updates, but you still cannot stray too far from the domains that were met at training time.
> What matters is prediction speeds
I'm not a fan of bag-of-words models either, but a simple dot product is always going to be faster than many matrix multiplies and/or convolutions. The implementor should always try these as a baseline and decide whether the speed/accuracy trade-off is worth it for them.
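To make the baseline point concrete, here is a minimal sketch of what "a simple dot product" means for a BoW linear classifier. The weights are hypothetical (stand-ins for whatever a trained linear model would produce); prediction is a single sparse dot product over the tokens that actually occur.

```python
from collections import Counter

def bow_vector(text):
    # Bag-of-words: token counts, ignoring word order
    return Counter(text.lower().split())

def score(weights, text):
    # Linear-model prediction is one sparse dot product:
    # sum of weight * count over only the tokens present
    vec = bow_vector(text)
    return sum(weights.get(tok, 0.0) * cnt for tok, cnt in vec.items())

# Hypothetical learned weights for a sentiment-style classifier
weights = {"great": 1.5, "terrible": -2.0, "movie": 0.1}
s = score(weights, "a great great movie")  # 1.5*2 + 0.1*1 = 3.1
```

The cost is proportional to the number of tokens in the input, which is why this baseline is hard to beat on prediction speed.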
unknown|7 years ago
[deleted]
995533|7 years ago
Online learning, sample efficiency, and energy efficiency are unrelated to training times. As I said: nobody cares if you ran Vowpal Wabbit for 1 hour or 100 hours, as long as you are not constantly babysitting it and calling that paid work (or have the unusual requirement of daily retraining while using an online model).
> simple dot product is always going to be faster than many matrix multiplies
If you care about this (because it is profitable), you rewrite it in a lower-level language or predict on a cloud GPU (which will be at least comparable in speed to a simple dot product, while adding predictive performance).
rundigen12|7 years ago
LOTS of people care how long it takes to train a model. A few minutes, vs. a day, vs. a week, vs. a month? Yea, that matters.
Think about how long it takes to try out different hyperparameters or make other adjustments while conducting research...
If you're Google maybe you don't care as much because you can fire off a hundred different jobs at once, but if you're a resource-limited mere mortal, yea, that wait time adds up.
sandeepeecs|7 years ago
Another important aspect is training, and incremental training, on edge devices.
At a time when privacy is becoming very important and you cannot export data from mobile devices etc., training time on mobile is an important factor.
995533|7 years ago
If we are talking days or hours: start parameter search on Friday and return best parameters on Monday.
Do research and iteration on heavily subsampled datasets.
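The subsampling suggestion can be sketched in a few lines; the fraction and seed here are arbitrary choices, not a recommendation from the thread.

```python
import random

def subsample(rows, frac, seed=0):
    # Iterate on a small random fraction of the data; run the
    # full set only once the pipeline has settled down
    rng = random.Random(seed)  # fixed seed so experiments are comparable
    k = max(1, int(len(rows) * frac))
    return rng.sample(rows, k)

rows = list(range(100_000))
dev_rows = subsample(rows, 0.01)  # 1% sample for fast experiments
```

A fixed seed matters here: it keeps the development subset stable across runs, so changes in results come from your model tweaks rather than from the sample.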
If you are building models for yourself, or for Kaggle, you may care in as much as your laptop gets uncomfortably hot.
darkpuma|7 years ago
Consider for instance an RSS reader that classifies articles to determine whether or not to interrupt the user with a notification. This should be fast to train and update the model on the fly every time the user enters a correction (e.g. 'this article actually isn't interesting', or 'interrupt me with articles like this in the future'.)
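A toy sketch of that RSS use case, assuming a perceptron-style update over bag-of-words features (one of the simplest online learners; the class and its method names are made up for illustration). Each user correction is a single cheap update, so the model adjusts on the fly without retraining.

```python
from collections import Counter, defaultdict

class OnlineBowClassifier:
    """Tiny perceptron over bag-of-words features. Each correction
    is one O(number-of-tokens) update, so the model can be refreshed
    immediately after the user says 'not interesting' / 'more like this'."""

    def __init__(self, lr=1.0):
        self.w = defaultdict(float)  # one weight per token
        self.lr = lr

    def predict(self, text):
        feats = Counter(text.lower().split())
        s = sum(self.w[t] * c for t, c in feats.items())
        return 1 if s > 0 else 0  # 1 = interrupt the user

    def update(self, text, label):
        # Perceptron rule: adjust weights only on a mistake
        if self.predict(text) != label:
            sign = 1 if label == 1 else -1
            for t, c in Counter(text.lower().split()).items():
                self.w[t] += self.lr * sign * c

clf = OnlineBowClassifier()
clf.update("new kernel release deep dive", 1)  # "more like this"
clf.update("boring press release", 0)          # "not interesting"
```

After those two corrections the classifier already separates the two examples, which is the point: the feedback loop is immediate, not a batch retrain.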
995533|7 years ago
If you are deploying on resource-constrained devices (e.g. low-end PCs without a GPU), it is not unusual to spend a lot of time training a model on a very powerful computer (which nobody cares about), then distilling or transferring the result for test time.
Fomite|7 years ago
995533|7 years ago
digitalzombie|7 years ago
That's a reckless generalization. I care.
My thesis would take forever if I didn't do any optimization. Also my data is 20 rows with ~6000 predictors.
There are models out there that can take months! I worked on one that took months. We had to tweak it and optimize it to see if we could get it to an acceptable training time.
autokad|7 years ago
In some Kaggle competitions it takes over 7 hours to train a model, and I can generally think of 10 things a day to try. Prediction only takes about a minute.
> "especially with pre-trained models" if the corpus are different, pre-trained models do not help much, if not hurt.