Ask HN: How do you handle logging and evaluation when training ML models?
3 points| calepayson | 3 months ago
One friction point I keep running into is how to handle logging and evaluation of the models. Right now I'm using a Jupyter Notebook: I train the model, then produce a few graphs of different metrics on the test set.
This whole workflow seems to be the standard among the folks in my program, but I can't shake the feeling that it's vibes-based and suboptimal.
I've got a few projects coming up and I want to use them as a chance to improve my approach to training models. What method works for you? Are there any articles or libraries that you would recommend? What do you wish Jr. Engineers knew about this?
Thanks!
-1|3 months ago
Two resources that might be useful are AWS' SageMaker documentation and the book Machine Learning Engineering by Andriy Burkov (though the book doesn't go into much detail on logging). One way to evaluate a model is to run a SageMaker processing job that saves the performance metrics as a JSON file somewhere in S3. More info on processing jobs: https://docs.aws.amazon.com/sagemaker/latest/dg/processing-j... . AWS also has various services for logging which you can look into. This mostly applies to orgs using AWS, but it might give a sense of how things can be done more generally.
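The pattern above boils down to: compute your metrics in a script, write them to a JSON file, and let the job upload that file to S3. A minimal sketch of what such an evaluation script might look like (the metric set, the helper name, and the local output path are illustrative assumptions, not SageMaker API; a real processing job would upload the file to its configured S3 output location):

```python
# Sketch of an evaluation script a processing job might run.
# Assumptions: binary classification labels; metrics.json stands in
# for the S3 output path the job would actually be configured with.
import json

def evaluate(y_true, y_pred):
    """Compute a few basic classification metrics from label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall}

# Hypothetical test-set predictions, just for illustration.
metrics = evaluate([1, 0, 1, 1, 0], [1, 0, 0, 1, 1])

# Persist the metrics; the job would then sync this file to S3.
with open("metrics.json", "w") as f:
    json.dump(metrics, f, indent=2)
```

The nice side effect is that every training run leaves behind a machine-readable metrics artifact you can diff across runs, instead of graphs that live and die inside a notebook session.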
calepayson|3 months ago
I'm hoping the text editor + project directory approach helps force ML projects away from a single file and toward some sort of codified project structure. Sometimes it feels like there's too much information in one file, and it becomes hard to mentally assign any piece of it to a location (a bit like reading a tough book as a physical copy vs. on a Kindle). Any advice or thoughts on this would be appreciated!