top | item 40757023

alexeldeib | 1 year ago

great talk! I’m curious about an approach like this combined with CUDA checkpoint for GPU workloads https://github.com/NVIDIA/cuda-checkpoint

Animats | 1 year ago

This makes sense for checkpointing and restoring long ML training runs.

Doing this with a networked application is going to be iffy. The restored program sees a time jump, and if the restore point predates a later crash, the outside world sees a replay of actions the program already performed once.
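One heuristic sketch of the time-jump problem (not from the talk, just an illustration): a process can watch the offset between the wall clock and the monotonic clock. During normal execution that offset is nearly constant, but after a checkpoint/restore the wall clock has advanced while the process was frozen, so the offset shifts. Exact clock behavior on restore depends on the checkpoint tool and kernel (CRIU can use time namespaces, for example), so treat this as a rough detector only.

```python
import time

def clock_offset() -> float:
    # Wall-clock minus monotonic time. Stable while the process runs
    # normally; shifts after a checkpoint/restore because wall time kept
    # advancing while the process was suspended.
    return time.time() - time.monotonic()

BASELINE = clock_offset()

def detect_time_jump(threshold_s: float = 1.0) -> float:
    """Return the apparent jump in seconds if the offset has drifted by
    more than threshold_s since startup, else 0.0."""
    jump = clock_offset() - BASELINE
    return jump if abs(jump) > threshold_s else 0.0
```

A restored program could call `detect_time_jump()` on wakeup and, say, re-handshake its network sessions instead of replaying stale ones.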

If you just want to migrate jobs within a cluster, there's Xen.