
entrustai | 4 hours ago

VSDD is the most rigorous development pipeline I've seen articulated for AI-native engineering. The purity boundary map in Phase 1b is particularly sharp — making verifiability an architectural constraint rather than an afterthought is exactly right.

But there's a boundary VSDD doesn't cross: the commit boundary into production runtime.

VSDD verifies that the code does what the spec says. It says nothing about whether the output that code generates — at runtime, from a live LLM — is admissible. A formally verified inference pipeline can still produce a clinical summary that omits a contraindication, or a financial disclosure that drifts outside regulatory bounds. The code is correct. The output is not.
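To make the gap concrete, here is a minimal sketch of a post-inference admissibility guard (all names and the contraindication data are hypothetical, not from VSDD): the inference pipeline that produced the text may be fully verified, yet the generated summary still has to pass this check before release.

```python
# Hypothetical runtime output guard. The code that generated `summary`
# can be formally verified end to end; this check is about the output,
# not the code.

# Illustrative data only: required contraindication mentions per drug.
REQUIRED_CONTRAINDICATIONS = {
    "warfarin": ["active bleeding", "pregnancy"],
}

def admissible(summary: str, prescribed: list[str]) -> tuple[bool, list[str]]:
    """Return (ok, missing): ok is False if any contraindication on
    record for a prescribed drug is absent from the summary text."""
    text = summary.lower()
    missing = [
        f"{drug}: {contra}"
        for drug in prescribed
        for contra in REQUIRED_CONTRAINDICATIONS.get(drug, [])
        if contra not in text
    ]
    return (not missing, missing)

# A correct pipeline can still emit this:
llm_output = "Warfarin prescribed. Contraindicated in active bleeding."
ok, missing = admissible(llm_output, ["warfarin"])
# ok is False: the summary omits the pregnancy contraindication, so the
# guard blocks release even though the producing code is spec-correct.
```

Keyword matching is deliberately crude here; the point is architectural: this check runs after deployment, on live output, which is exactly the boundary VSDD's verification stops at.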

The verification architecture ends at deployment. The governance problem begins there.

VSDD and runtime output enforcement aren't competing — they're sequential. You need both. But most teams treat deployment as the finish line when it's actually where the second problem starts.
