jerryliu12 | 5 months ago

Yep! Have tested it out on Qwen2.5-VL 3B and it works reasonably well on my 16GB MacBook Air. The only thing I will say is that I don't think it's a great idea to run local models on laptop battery, since inference is quite compute-intensive and drains the battery quickly. Have tested with Ollama and LM Studio, but you should be able to use any OpenAI-compatible local server.
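For example, against Ollama it's just the stock openai client pointed at a different base_url. A rough sketch (the model tag is whatever you pulled locally, and LM Studio listens on localhost:1234/v1 instead):

    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
        api_key="ollama",  # the client insists on a key; local servers ignore it
    )

    reply = client.chat.completions.create(
        model="qwen2.5vl:3b",  # example tag; use whatever model you pulled
        messages=[{"role": "user", "content": "Say hello."}],
    )
    print(reply.choices[0].message.content)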

deanputney | 5 months ago

Would it be possible to check for the power adapter and run processing then? These are the types of things I've been thinking about for my own app: https://stardateapp.com
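Roughly what I have in mind, as a Python sketch (psutil reads the power source; the processing function is just a placeholder):

    import time
    import psutil

    def on_ac_power() -> bool:
        battery = psutil.sensors_battery()
        # Machines with no battery report None; treat that as plugged in.
        return battery is None or battery.power_plugged

    def process_pending_videos():
        pass  # placeholder: drain whatever work has queued up

    while True:
        if on_ac_power():
            process_pending_videos()
        time.sleep(60)  # re-check once a minute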

jerryliu12 | 5 months ago

Wow, yeah, that's clever; I hadn't thought of that. Will add it as an advanced setting.

jastuk | 5 months ago

You've mentioned in the docs that:

> Gemini leverages native video understanding for direct analysis, while Local models reconstruct understanding from individual frame descriptions - resulting in dramatically different processing complexity.
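If I'm reading that right, the local path boils down to: pull frames out of the video at some interval, caption each one with the VLM, then have a model stitch the captions back into a single summary. My naive mental model, as a sketch (endpoint, model tag, and sampling interval are guesses on my part, not from the app):

    import base64
    import cv2  # pip install opencv-python
    from openai import OpenAI

    # Assumes an Ollama-style OpenAI-compatible server.
    client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")
    MODEL = "qwen2.5vl:3b"

    def sample_frames(path, every_n_seconds=2.0):
        """Grab one JPEG every N seconds of video."""
        cap = cv2.VideoCapture(path)
        step = max(1, int((cap.get(cv2.CAP_PROP_FPS) or 30) * every_n_seconds))
        frames, i = [], 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if i % step == 0:
                ok, jpg = cv2.imencode(".jpg", frame)
                if ok:
                    frames.append(jpg.tobytes())
            i += 1
        cap.release()
        return frames

    def caption(jpeg):
        """One VLM call per frame -- this is where the cost piles up."""
        b64 = base64.b64encode(jpeg).decode()
        resp = client.chat.completions.create(model=MODEL, messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this frame in one sentence."},
                {"type": "image_url",
                 "image_url": {"url": "data:image/jpeg;base64," + b64}},
            ]}])
        return resp.choices[0].message.content

    captions = [caption(f) for f in sample_frames("clip.mp4")]
    summary = client.chat.completions.create(model=MODEL, messages=[{
        "role": "user",
        "content": "These are per-frame descriptions of a video, in order. "
                   "Summarize what happens:\n" + "\n".join(captions),
    }]).choices[0].message.content
    print(summary)

Whereas, as I understand it, Gemini ingests the video itself in one request, so it can see motion and ordering that per-frame captions throw away.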

For people like me who haven't dabbled much with AI video processing and have no intuition for it, could you clarify the drawbacks of such a local-only approach vs. what Gemini offers? I don't mean the performance or power/battery impact (that part is clear), just what the practical differences are in terms of end result and quality.

I'm in the only-100%-offline camp here, but I'd like to know what I'm missing out on, since I won't even try Gemini.

smcleod | 5 months ago

Nice, that's great. I have a 96GB M2 Max that's plugged in 99.9% of the time, so that's not an issue for me. Cheers for the response!