top | item 21696039


fiddlewin | 6 years ago

You will need very powerful hardware to deploy the deep learning models on-prem for incremental learning.

And most of the time, while not indexing, the hardware would be sitting there sleeping. Probably not very cost-effective for enterprises.


hueving|6 years ago

> the hardware would be sitting there sleeping. Probably not very cost-effective for enterprises.

Not to be condescending, but idle hardware isn't even on the radar as far as waste goes in enterprises. An on-prem solution that is idle for 364 days of the year is completely fine for most of these companies.

For the ones that do care, that's what virtual machines and over-subscription are for.

crawdog|6 years ago

Also, see the rise in popularity of OpenShift/PCS/PKS: flexible infrastructure is catching on.

nl|6 years ago

> You will need very powerful hardware to deploy the deep learning models on-prem for incremental learning.

This isn't true.

I've built (neural-network) vector-based search extensions for search engines. You don't train the model; you use a pretrained model (one that understands English in your domain) as an encoder.

Sometimes there is a once-off pretraining step for domain adaptation, but honestly this isn't a big deal. Even on a CPU-only machine you could do this overnight or over a weekend, and since it is once-off that time doesn't really matter.
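To make the encoder-only approach concrete, here is a minimal sketch of vector search with a frozen encoder. The `encode` function below is a hypothetical stand-in (a hashed bag-of-words vectorizer), standing in for a pretrained neural encoder that would be used purely for inference, never trained:

```python
import math
import zlib
from collections import Counter

DIM = 64  # embedding dimensionality (small for illustration)

def encode(text):
    # Hypothetical stand-in for a pretrained sentence encoder:
    # hash each token into a fixed-size bag-of-words vector, then
    # L2-normalize so dot product equals cosine similarity.
    vec = [0.0] * DIM
    for token, count in Counter(text.lower().split()).items():
        vec[zlib.crc32(token.encode()) % DIM] += count
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def search(query, documents, top_k=3):
    """Rank documents by cosine similarity to the query embedding."""
    q = encode(query)
    scored = [(sum(a * b for a, b in zip(q, encode(d))), d)
              for d in documents]
    scored.sort(reverse=True)
    return [doc for _, doc in scored[:top_k]]
```

In a real deployment the document embeddings would be computed once at index time and stored, so query-time work is just one encoder forward pass plus a nearest-neighbor lookup, which is why no training hardware is needed on-prem.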

crawdog|6 years ago

For large (mature) enterprises, I believe at this point it's safe to expect some level of hybrid cloud architecture. I also agree it would be very difficult/impossible to support this for "realtime" indexing.