Toward Guarantees for Clinical Reasoning in Vision Language Models

5 points | barthelomew | 20 hours ago | arxiv.org

3 comments

barthelomew | 20 hours ago

VLM-based radiology models can sound confident and still be wrong, hallucinating diagnoses that their own findings don't support. This is a silent and dangerous failure mode.

Our new paper introduces a verification layer that checks every diagnostic claim an AI makes before it reaches a clinician. When our system says a diagnosis is supported, that support has been mathematically proven, not just guessed. Every model we tested improved significantly after verification, with our best result reaching 99% soundness.
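To make the idea concrete, here is a minimal toy sketch of such a verification layer, not the paper's actual method: a diagnosis passes only if the model's own reported findings entail it under an explicit rule table. All rule contents and names here are hypothetical.

```python
# Hypothetical sketch: a verifier that releases a diagnosis to the clinician
# only when it is entailed by findings the model itself reported.

# Toy entailment rules (illustrative only): diagnosis -> findings that
# must ALL be present for the claim to count as supported.
RULES = {
    "pneumonia": {"consolidation", "air bronchograms"},
    "pneumothorax": {"pleural line", "absent lung markings"},
}

def verify(diagnosis: str, reported_findings: list[str]) -> bool:
    """Return True only if every required finding backs the diagnosis."""
    required = RULES.get(diagnosis)
    if required is None:
        return False  # unknown claim: refuse rather than guess
    return required <= set(reported_findings)

# A diagnosis unsupported by the model's own findings is flagged, not shown.
print(verify("pneumonia", ["consolidation", "air bronchograms"]))  # True
print(verify("pneumonia", ["cardiomegaly"]))                       # False
```

The key design point is soundness by construction: the checker can reject a correct diagnosis (incompleteness), but it never certifies a claim the findings don't support.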

We're excited about what comes next in building verifiably correct AI systems.