machineman44
|
8 years ago
|
on: Gluon – Deep Learning API from AWS and Microsoft
Why AWS and Microsoft?
Why not Amazon and Microsoft?
machineman44
|
8 years ago
|
on: Big brother is here, and his name is Facebook
I know right... Everywhere I turn, I hear people preaching about "blockchain technology" and its ability to "increase privacy"... I die a little inside every time...
machineman44
|
9 years ago
|
on: Beringei: A high-performance time series storage engine
Haha, I thought jdonaldson was making a joke :P
machineman44
|
9 years ago
|
on: Introduction to Machine Learning for Developers
Hi Stephanie. Sorry if my comment sounded harsh and nitpicky. I actually passed it along to a fellow software engineer at work and he found it really insightful and useful for the work he is doing. Not everybody makes the effort to share their knowledge, and I really appreciate you doing so. Have a good day :)
machineman44
|
9 years ago
|
on: Introduction to Machine Learning for Developers
Honestly, this is a good run-through of resources and examples of different machine learning algorithms/techniques, covering supervised learning, unsupervised learning, and model validation... However, the wording used and the mistakes made when describing supervised learning and Naive Bayes suggest this is an attempt to summarize an O'Reilly book in a short article... while making errors along the way... How did it get so many points on ycombinator?
machineman44
|
9 years ago
|
on: Introduction to Machine Learning for Developers
I have only seen a maximum entropy model in the supervised realm, where it is a discriminative model. In other words, given some labeled data, we can draw a decision boundary. Maximum entropy in this context is almost certainly the information-theoretic definition, where we measure the entropy of a collection of data based on its class distribution: high entropy if each class is equally probable, lower entropy otherwise.
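For what it's worth, that entropy measure is easy to compute directly. A minimal sketch (the `class_entropy` helper and the toy label lists are my own, not from the article):

```python
import math
from collections import Counter

def class_entropy(labels):
    """Shannon entropy (in bits) of a collection's class distribution."""
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Two equally probable classes -> maximum entropy of 1 bit
print(class_entropy(["a", "b", "a", "b"]))  # 1.0

# Skewed distribution -> lower entropy
print(class_entropy(["a", "a", "a", "b"]))  # ~0.811
```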
machineman44
|
9 years ago
|
on: Introduction to Machine Learning for Developers
I agree. Most supervised learning classifiers are derived under the assumption that the (x, y) pairs are independent and identically distributed.
To be more specific about the Naive Bayes assumption: the features of a data point are conditionally independent rather than simply independent. That is, given a particular label, the features are independent of one another.
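Concretely, conditional independence means P(x1, ..., xn | y) factors into a product of per-feature terms P(xi | y). A toy sketch of that factorization (the dataset, feature names, and Laplace smoothing here are my own illustration, not from the article):

```python
import math
from collections import Counter, defaultdict

# Toy labeled data: each row is (features, label)
data = [
    ({"outlook": "sunny", "windy": "no"}, "play"),
    ({"outlook": "sunny", "windy": "yes"}, "stay"),
    ({"outlook": "rainy", "windy": "yes"}, "stay"),
    ({"outlook": "sunny", "windy": "no"}, "play"),
]

# Estimate P(y) and P(x_i | y) from counts
label_counts = Counter(label for _, label in data)
feat_counts = defaultdict(Counter)  # (feature, label) -> value counts
for feats, label in data:
    for f, v in feats.items():
        feat_counts[(f, label)][v] += 1

def log_posterior(feats, label):
    """log P(y) + sum_i log P(x_i | y): the naive factorization."""
    lp = math.log(label_counts[label] / len(data))
    for f, v in feats.items():
        counts = feat_counts[(f, label)]
        # add-one (Laplace) smoothing over the 2 values each feature takes
        lp += math.log((counts[v] + 1) / (label_counts[label] + 2))
    return lp

x = {"outlook": "sunny", "windy": "no"}
pred = max(label_counts, key=lambda y: log_posterior(x, y))
print(pred)  # "play" — sunny & not windy matches the "play" examples
```

The key line is the sum inside `log_posterior`: each feature contributes its own conditional term independently once the label is fixed, which is exactly the Naive Bayes assumption.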