top | item 47042808


bpolania | 14 days ago

Fair question. Yes, you can absolutely generate a basic proxy with an LLM; the gap is in the stuff that's hard to get right and boring to maintain. Policy hot-reload without dropping in-flight requests (ArcSwap, not "restart the process"). Tamper-evident audit with blake3 hash chains, not just append-only logs. Credential injection where the agent process literally never sees the secret, not env vars. Content inspection that runs bidirectionally without buffering entire responses into memory. Correct TLS MITM for the HTTP proxy mode with dynamic per-host certs.

An LLM will generate something that works for a demo. We wrote 409 tests, including property-based tests with proptest, because the failure modes in a security proxy are subtle: off-by-one errors in glob matching, race conditions in policy reload, Content-Length mismatches after redaction. It's the same reason you use nginx instead of asking your AI to write an HTTP server. The first 80% is easy. The last 20% is where credentials leak.
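To make the "tamper-evident audit" point concrete, here is a minimal sketch of a hash-chained log in Rust: each record's hash covers the previous record's hash, so rewriting any entry invalidates every later link. This is an illustration, not the project's actual code; it uses std's `DefaultHasher` as a stand-in for blake3 so it runs with no external crates, and the `AuditLog` type and its methods are hypothetical names.

```rust
// Sketch of a tamper-evident audit chain (assumed design, not the repo's code).
// Real code would use a cryptographic hash like blake3; DefaultHasher is a
// std-only stand-in for demonstration.
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

struct AuditLog {
    // Each entry stores (message, chained hash of prev_hash + message).
    entries: Vec<(String, u64)>,
}

impl AuditLog {
    fn new() -> Self {
        AuditLog { entries: Vec::new() }
    }

    fn append(&mut self, msg: &str) {
        // Chain: hash the previous entry's hash together with this message.
        let prev = self.entries.last().map(|e| e.1).unwrap_or(0);
        let mut h = DefaultHasher::new();
        prev.hash(&mut h);
        msg.hash(&mut h);
        self.entries.push((msg.to_string(), h.finish()));
    }

    // Recompute every link from the start; false means some entry was altered.
    fn verify(&self) -> bool {
        let mut prev = 0u64;
        for (msg, stored) in &self.entries {
            let mut h = DefaultHasher::new();
            prev.hash(&mut h);
            msg.hash(&mut h);
            if h.finish() != *stored {
                return false;
            }
            prev = *stored;
        }
        true
    }
}

fn main() {
    let mut log = AuditLog::new();
    log.append("agent requested https://api.example.com");
    log.append("policy allowed request");
    assert!(log.verify());

    // Tamper with an earlier entry: verification now fails.
    log.entries[0].0 = "agent requested somewhere else".into();
    assert!(!log.verify());
    println!("tamper detected");
}
```

The difference from a plain append-only log is that an attacker who can edit the file cannot silently rewrite history: fixing one entry's hash requires recomputing every hash after it, which an external verifier holding the latest hash will catch.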


verdverm|14 days ago

This is AI slop, likely automated, which is against HN rules.

If your AI can do all of this, so can mine.

bpolania|14 days ago

Not sure why you say this. If you don't like it, don't use it; if you can create your own, do it. Or just check the repo and try it, you might find it useful.