top | item 37573539


ormax3 | 2 years ago

Sounds like something LLMs could help with: sifting through huge amounts of documents to summarize them and highlight the interesting ones.


jstarfish | 2 years ago

If only. The biggest problems right now are limited context size and basic security, including having to share such documents with God-knows-how-many third parties.

Tangent, but we use Azure instead of OpenAI due to data-retention concerns. To ensure nobody's inputting anything classified or proprietary, Legal demanded implementation of an "AI safety" tool...so we demoed one that ships all prompts to a third party's regex-redaction API.

So you never know who ends up the recipient of your LLM prompt, where it's getting logged to, who's reviewing those logs, etc. Even some local models require execution of arbitrary code, and Gradio ships telemetry data. Uploading Snowden's docs into a black box is a good way to catch a ride in a black van.
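For what it's worth, the regex-redaction step such tools perform doesn't need a third party at all; it can run locally before the prompt ever leaves the machine. A minimal sketch (the patterns and placeholder labels here are illustrative, not from any specific product):

```python
import re

# Hypothetical patterns for sensitive content; a real deployment
# would tune these to its own data-classification policy.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
}

def redact(prompt: str) -> str:
    """Replace sensitive spans with placeholders before the prompt is sent anywhere."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt
```

Running the filter in-process like this means the only thing a log reviewer or upstream API ever sees is the already-redacted text.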

ormax3 | 2 years ago

Nowadays even consumer-level hardware can run some decent local LLMs, completely offline.

You might want to browse /r/LocalLLaMA/ if "security" is an issue for you.