top | item 23763578


rhythmvertigo | 5 years ago

Yep, it will work with your GPUs! You just set up self-hosted runners as usual https://docs.gitlab.com/runner/

GitHub & GitLab have both made it quite easy to use your own resources as runners. I recently met someone who was doing Actions with a Jetson Nano on their dresser :)
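For GitLab, registering a machine as a runner is a one-time CLI step. A minimal sketch, assuming a Docker executor on a GPU box; the URL, token, and image are placeholders:

```shell
# Hypothetical sketch: register this machine as a self-hosted GitLab runner.
# Replace the registration token and adjust the image for your CUDA version.
gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.com/" \
  --registration-token "<YOUR_TOKEN>" \
  --executor docker \
  --docker-image "nvidia/cuda:11.8.0-base-ubuntu22.04" \
  --docker-gpus all
```

The `--docker-gpus all` flag passes the host GPUs through to job containers (requires a recent GitLab Runner and the NVIDIA container toolkit installed on the host).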


gravypod | 5 years ago

That's really cool. I might have to play around with this. Do you have any docs on what you do to deploy a model? Something I've been doing at work is dealing with the output of some ML code we have. We end up with ~150GB of data that needs to be synced to a file share in prod. I'm assuming DVC can be used for this.

After the run, output files, upload as a data set in DVC or something?

Documenting this full workflow would help a lot of confused devopsy people (like myself) survive in the world of ML. Thanks for all the hard work you've put into this!

calebkaiser | 5 years ago

Not to be too self-promotional here, but I'm a maintainer of Cortex, a model deployment platform that sounds like it might be useful: https://github.com/cortexlabs/cortex

With DVC/Cortex, you can set things up so that all you have to do is run `dvc push` to update your model and `cortex deploy` to deploy it.
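The loop above might look something like this; the model path and config file name are placeholders, not specifics from the thread:

```shell
# Hypothetical sketch of the DVC + Cortex update loop described above.
dvc add models/model.pkl   # track the newly trained model with DVC
dvc push                   # upload it to the configured DVC remote
cortex deploy              # roll out the API defined in your cortex.yaml
```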

dmpetrov | 5 years ago

150GB of ML output files? That's cool!

Yes, DVC can help with that. Where does the data live? S3/GCS, or just a server with SSH?
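A minimal sketch of that sync, assuming the data should land in S3; the bucket name and output directory are placeholders:

```shell
# Hypothetical sketch: track large ML outputs with DVC and push to S3.
dvc remote add -d prodshare s3://my-bucket/ml-outputs   # set the default remote
dvc add output/            # track the ~150GB of generated files
dvc push                   # upload the data to the S3 remote
git add output.dvc .gitignore
git commit -m "Track ML outputs with DVC"
```

Only the small `.dvc` pointer file goes into git; the heavy data lives in the remote.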

Disclaimer: I'm a creator of DVC.