
1.8-3.3x faster Embedding finetuning now in Unsloth

3 points | electroglyph | 1 month ago | unsloth.ai

3 comments


storystarling | 1 month ago

Do the memory savings carry over to inference or is this strictly optimizing the backward pass? I'm running embedding pipelines via Celery and being able to squeeze this into lower VRAM would help the margins quite a bit.
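The distinction this question turns on can be sketched in plain PyTorch (a toy linear layer standing in for a real embedding model; this is not Unsloth's API): a training-mode forward records an autograd graph so the backward pass can run, which is where the extra activation and gradient memory goes, while an inference-mode forward never allocates that graph in the first place, so backward-pass-specific savings would not carry over.

```python
import torch

# Hypothetical tiny "embedding" layer; real models differ only in scale.
layer = torch.nn.Linear(16, 8)
x = torch.randn(4, 16)

# Training-mode forward: autograd tracks the graph for a later backward(),
# which is what costs the extra activation/gradient memory.
y_train = layer(x)
assert y_train.requires_grad  # graph is being recorded

# Inference-mode forward: no graph is recorded, so the backward-pass
# memory is never allocated at all.
with torch.inference_mode():
    y_infer = layer(x)
assert not y_infer.requires_grad  # nothing to backpropagate through
```

In other words, an inference pipeline already skips the memory the backward pass needs, provided the forward runs under `torch.inference_mode()` (or `torch.no_grad()`).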

danielhanchen | 1 month ago

Excited to have collabed on this! Thanks, electroglyph, for the contrib!