egeres | 2 months ago
It stays around 26 GB at 512x512. I still haven't profiled the execution or looked much into the details of the architecture, but I would assume it trades off memory for speed by creating caches for each inference step.
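For scale, a back-of-the-envelope split of the numbers mentioned above (the step count is an assumption, purely illustrative):

```python
# Rough memory accounting -- all numbers are assumptions for illustration.
model_gb = 11           # reported model size
observed_gb = 26        # observed resident memory at 512x512
cache_gb = observed_gb - model_gb  # memory not explained by the weights alone

# If that gap really is per-inference-step caches, spread it over an
# assumed number of denoising steps to get a per-step cost:
steps = 20              # hypothetical step count, not from the thread
per_step_gb = cache_gb / steps

print(f"unaccounted memory: {cache_gb} GB")
print(f"per-step cache at {steps} steps: {per_step_gb:.2f} GB")
```

That leaves roughly 15 GB unexplained, which is consistent with sizable cached activations rather than the weights themselves.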
SV_BubbleTime | 2 months ago
IDK. Seems odd. It's an 11 GB model; I don't know what it could be caching in RAM.