schopra909 | 1 month ago

Should be fixed now! Thanks again for the heads up


streamer45 | 1 month ago

All good, cheers!

schopra909 | 1 month ago

Per the RAM comment, you may be able to get it running locally with two tweaks:

https://github.com/Linum-AI/linum-v2/blob/298b1bb9186b5b9ff6...

1) Free the T5 encoder as soon as the text is encoded, so you reclaim its GPU RAM

2) Manual layer offloading: move each layer off the GPU once it's done being used, freeing space for the remaining layers and activations
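The two tweaks above can be sketched as follows. This is a dependency-free illustration of the pattern, not the repo's actual code: `Module` here is a stand-in for a `torch.nn.Module` that only records which device it lives on, and the `run_pipeline` structure is assumed. In real PyTorch you'd call `.to("cuda")`/`.to("cpu")` on the modules, `del` the encoder, and follow up with `torch.cuda.empty_cache()`.

```python
class Module:
    """Stand-in for a torch.nn.Module that just tracks its device."""
    def __init__(self, name):
        self.name = name
        self.device = "cpu"

    def to(self, device):
        self.device = device
        return self


def run_pipeline(prompt, num_layers=4):
    # Tweak 1: free the text encoder right after encoding.
    t5 = Module("t5").to("cuda")
    hidden = f"encoded({prompt})"  # pretend forward pass
    del t5                          # in torch: del t5; torch.cuda.empty_cache()

    # Tweak 2: manual layer offloading -- only the active layer is on GPU.
    layers = [Module(f"layer{i}") for i in range(num_layers)]
    peak_on_gpu = 0
    for layer in layers:
        layer.to("cuda")  # bring one layer onto the GPU
        peak_on_gpu = max(peak_on_gpu,
                          sum(l.device == "cuda" for l in layers))
        hidden = f"{layer.name}({hidden})"  # pretend forward pass
        layer.to("cpu")   # offload as soon as it's done
    return hidden, peak_on_gpu


out, peak = run_pipeline("a cat")
print(peak)  # only one transformer layer is ever resident at a time
```

The trade-off is speed: every layer is transferred across the PCIe bus once per forward pass, so this is slower than keeping the whole model resident, but it caps peak GPU memory at roughly one layer plus activations.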