canvascritic | 6 months ago
Organizations operating in high stakes environments
Organizations with restrictive IT policies
To name just a few -- though really, the first two are special cases of the last one
Re: your hallucination concerns: the issue is overly broad ambitions. Local LLMs are not general purpose -- if what you want is a local ChatGPT, you will have a bad time. You need a highly focused use case, like "classify this free text as A or B" or "clean this up to conform to this standard" -- that is the sweet spot for a local model
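A minimal sketch of that "narrow use case" pattern: constrain the prompt to a fixed label set and reject any output outside it, so a hallucination becomes a visible failure instead of bad data. The prompt wording, label names, and `complete` callable here are all hypothetical -- `complete` stands in for whatever invokes your local model (llama.cpp, Ollama, etc.):

```python
# Narrow-task pattern for a local LLM: binary classification with a
# constrained prompt and strict output validation.

PROMPT_TEMPLATE = (
    "Classify the following free text as exactly one of: {labels}. "
    "Reply with the label only, nothing else.\n\n"
    "Text: {text}\nLabel:"
)

def classify(text, labels, complete):
    """`complete` is whatever calls your local model and returns raw text."""
    prompt = PROMPT_TEMPLATE.format(labels=" or ".join(labels), text=text)
    raw = complete(prompt).strip()
    # Anything outside the allowed label set is rejected outright,
    # so a rambling or hallucinated answer surfaces as None.
    return raw if raw in labels else None

# Stand-in for a real local model call, just to show the shape:
def fake_model(prompt):
    return "URGENT"

print(classify("Pt c/o chest pain, onset 2h ago", ["URGENT", "ROUTINE"], fake_model))
# -> URGENT
```

The point is the shape, not the stub: the model only ever sees one tightly scoped question, and the calling code never trusts free-form output.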
nojito | 6 months ago
canvascritic | 6 months ago
Your typical non-coastal state health system has no model access beyond people using their own unsanctioned/personal ChatGPT or Claude accounts. And even where there is model access, API access doesn't come with it: maybe your request for an API key is sitting in security review, or in the queue of some committee that will get to it in 6 months. That's the reality for my local health system. Local models have been a massive boon here, enabling this kind of powerful automation at a fraction of the cost, without the usual process required to send data over the wire to a third party
ptero | 6 months ago
Running a local model is often much easier: if the data is already on a machine and you can run the model there without anything crossing the network boundary, you can often do it without any new approvals.
captainregex | 6 months ago
coredog64 | 6 months ago
edm0nd | 6 months ago
https://i.pinimg.com/474x/4c/4c/7f/4c4c7fb0d52b21fe118d998a8...