top | item 40855090

tschumacher | 1 year ago

I also burned myself on an overambitious machine learning project in the past. I had, and still have, little practical experience, but I think I learned a common beginner lesson: existing ML architectures transfer to new problems worse than we expect. The only sane way to do ML is to reproduce something that works and then make small incremental changes.

somenameforme | 1 year ago

I think it's because most people's mental models for machine learning aren't great. These systems are not like brains or neurons; they're like a really convoluted quadratic regression calculator over an arbitrary number of variables. So if you stick to domains where that is appropriate, you can actually spin together some pretty neat stuff. I think once one understands the XOR Problem [1], it all starts to mentally come together pretty quickly.

[1] - https://www.educative.io/answers/xor-problem-in-neural-netwo...
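The XOR problem mentioned above can be demonstrated in a few lines. The sketch below (my own illustration, not from the linked article) brute-forces a grid of weights to show that no single linear threshold unit can fit XOR, while adding one hand-crafted hidden feature (the AND of the inputs) makes the data linearly separable:

```python
from itertools import product

# XOR truth table: inputs -> label
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def fits(points, weight_candidates):
    """Return True if some (w..., b) makes step(w.x + b) match every label."""
    for *w, b in weight_candidates:
        if all((sum(wi * xi for wi, xi in zip(w, x)) + b > 0) == bool(y)
               for x, y in points):
            return True
    return False

# Coarse weight grid from -4.0 to 4.0 in steps of 0.5
grid = [i / 2 for i in range(-8, 9)]

# A single linear unit cannot separate XOR, no matter the weights.
linear_ok = fits(data, product(grid, repeat=3))   # 2 weights + bias

# Adding one hidden feature (x1 AND x2) makes it linearly separable,
# e.g. w = (1, 1, -2), b = -0.5 works.
data3 = [((x1, x2, x1 & x2), y) for (x1, x2), y in data]
hidden_ok = fits(data3, product(grid, repeat=4))  # 3 weights + bias

print(linear_ok, hidden_ok)  # False True
```

A trained multilayer network does essentially the same thing, except it learns the hidden feature from data instead of having it hand-specified.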

TheCog | 1 year ago

I think one of the general takeaways about ML is that, barring a few experts, it's really challenging to reason about your system. Yes, you might expect that a convolutional layer will behave in a specific way under ideal conditions, but how that behavior actually manifests is often wildly hard to predict in the early days of a project.

I agree that step 1 for most beginner projects should be to start with something that works and then tweak.