Sorry for being a nay-sayer here, but I'm failing to see the value in this service. I've been doing machine learning for about a year, and in every case my algorithms have had to be uniquely tied to my data. The performance of a given algorithm depends on the dataset, and that dependence shows up in the code through (sometimes many) parameters. It's a symbiotic relationship. So I'm not sure how you can offer this as a general service, unless you expect the client to upload both the dataset and the code, and then offer flexible compute akin to EC2. I'd be careful, though, because judging the performance of algorithms on datasets that don't fit the model can be a little disingenuous.
I think you would use MLComp as a baseline. You have a set of data that you can run generic algorithms on to show you what the best results are without any further considerations.
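The baseline idea above can be sketched with off-the-shelf components. This is a minimal sketch, assuming scikit-learn and using its bundled iris data to stand in for an uploaded dataset; the particular models chosen here are just illustrative "generic algorithms":

```python
# Sketch: run a few generic algorithms on one dataset to get baseline scores.
# scikit-learn's iris data stands in for a user-uploaded dataset.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

baselines = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "knn": KNeighborsClassifier(),
}

# 5-fold cross-validated accuracy for each baseline model.
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in baselines.items()}

for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.3f}")
```

The point is not that any of these models is right for the data, only that a table of such scores gives a provider a no-effort reference point before investing in custom work.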
It is also useful for researchers who develop a new algorithm for a specific dataset and then generalize it. No researcher has the time to sit there manually testing every dataset they can find to see which ones their algorithm works well on.

So for dataset providers, it gives a quick look at what machine learning can offer without a lot of development.

For researchers, it gives a chance to see a surprising result: maybe their algorithm works well on a dataset they never considered.
There are many applications that do not require unique algorithms and can be solved with simple regression. Also, not everybody has access to an ML expert to code a solution (even when it only requires a simple ANN built with an open-source library).
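To make the "simple regression" case concrete: with an open-source library it really is only a few lines. A minimal sketch, assuming scikit-learn and using synthetic data in place of a real problem:

```python
# Minimal sketch: ordinary linear regression with an open-source library
# (scikit-learn). Synthetic data stands in for a real application.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 1))
# A known linear trend (slope 3, intercept 2) plus a little noise.
y = 3.0 * X[:, 0] + 2.0 + rng.normal(scale=0.5, size=200)

model = LinearRegression().fit(X, y)
print(f"slope={model.coef_[0]:.2f}, intercept={model.intercept_:.2f}")
```

No expert tuning is involved: fit, read off the coefficients, done. That is exactly the class of problem a generic service could handle.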
Machine learning as a service — very interesting. I found a similar site some months ago; I looked through my bookmarks but could not find it. The idea was similar: you upload a dataset and they run different learning algorithms over it. I do not recall whether you also had to provide an evaluation set or whether they automatically split the uploaded dataset.
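The automatic split mentioned above is standard practice: hold out part of the uploaded data for evaluation so the service can score models without a separate test set. A minimal sketch, assuming scikit-learn with its iris data standing in for user data:

```python
# Sketch of an automatic evaluation split: hold out 25% of an uploaded
# dataset for testing. scikit-learn's iris data stands in for user data.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)

# stratify=y keeps class proportions the same in both halves.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

print(f"train={len(X_train)}, test={len(X_test)}")
```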
physcab | 16 years ago
xel02 | 16 years ago
the_real_r2d2 | 16 years ago