yldedly|4 years ago
Right, I see. That's not really possible imo. For things like MLOps, sure. But model development, selection, evaluation? From what I've seen, it's exactly when engineers reach for standard tools without giving thought to how they apply to the given problem that they do a bad job.
version_five|4 years ago
Intuition is always important, but it shouldn't be the last word in an engineering problem. I think there is room for a lot more rigour in how we build, optimize, and validate ML model performance, so that less of it gets left to intuition. The discipline is becoming mature enough that this is possible: there is a lot of room to build out "standards" and a "body of knowledge" that can be applied to building ML models. We're seeing it in pockets, but in so many cases it is still a dark art.
And then from an actual software engineering perspective, so much ML code is just run-once Jupyter notebook stuff... there is a lot we can do. I need to give this more thought, but I think it's acknowledged that there is a big opportunity here.
jacobr1|4 years ago
Even in non-ML software engineering you still have architectural tradeoffs that, while you can partly make reasoned arguments about them, ultimately rely on the intuition of your technical leadership.