top | item 42543268


splintercell | 1 year ago

Even if it's not cost-effective, or you're just running worse models, you're learning an important skill.

Take self-hosting your website, for instance: it may come with all these considerations, but if you're getting information from LLMs, it's helpful to know that the LLM is in your control.

Oras | 1 year ago

Self-hosting a website on a local server running in your room? What's the point?

Same with LLMs: you can use providers who don't log requests and are SOC 2 compliant.

Small models that run locally are a waste of time, as they don't deliver adequate value compared to larger models.

KetoManx64 | 1 year ago

My self-hosted website, with a bunch of articles documenting my self-hosted setup, runs from a server in my living room and got me my six-figure DevOps job.

troyvit | 1 year ago

> Self-hosting a website on a local server running in your room? What's the point?

It depends on how important the website is and what its purpose is. Personal blog (Ghost, WordPress)? File sharing (Nextcloud)? Document collaboration (Etherpad)? Media (Jellyfin)? Why _not_ run it from your room with a reverse proxy? You're paying for internet anyway.
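For the curious, a reverse-proxy setup like that can be quite small. Here's a minimal sketch with nginx, assuming a Ghost blog on its default port; the hostname and certificate paths are placeholders, not anything from this thread:

```nginx
# Hypothetical nginx reverse proxy for a self-hosted service at home.
# Replace blog.example.com and the cert paths with your own.
server {
    listen 443 ssl;
    server_name blog.example.com;

    # TLS certs, e.g. issued by Let's Encrypt / certbot
    ssl_certificate     /etc/letsencrypt/live/blog.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/blog.example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:2368;  # Ghost's default port
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Each additional service (Nextcloud, Jellyfin, etc.) is just another `server` block pointing at a different local port, all sharing the one home IP.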

> Same with LLMs: you can use providers who don't log requests and are SOC 2 compliant.

Sure. Until they change their minds and decide they want to, or until they go belly-up because they didn't monetize you.

> Small models that run locally are a waste of time, as they don't deliver adequate value compared to larger models.

A small model won't get you GPT-4o value, but for coding, simple image generation, story prompts, "how long should I boil an egg" questions, etc., it'll do just fine, and it's yours. As a bonus, while a lot of energy went into creating the models you'd use, you save a lot of energy running them compared to asking the giant models simple questions.