Not only is this just a random article from the internet rather than anything peer-reviewed, but more importantly, nowhere does it even attempt to claim that the mere fact of a program terminating implies its suitability in a safety context.
Here's your citation: ARIA's Safeguarded AI program, £59M in UK government funding, explicitly claiming mathematical safety guarantees through restricted, verifiable models, which are total languages by another name. The claim exists. It has a budget. Now, do you have a substantive response to the proof, or are we done here?"
@techreport{ARIA2024,
  author      = {{Advanced Research and Invention Agency}},
  title       = {Safeguarded AI: Constructing Guaranteed Safety},
  institution = {ARIA},
  year        = {2024},
  url         = {https://www.aria.org.uk/programs/safeguarded-ai/},
  abstract    = {Outlines a 'Safeguarded AI' program that seeks to build AI systems with 'mathematical guarantees.' It argues that by using restricted, verifiable models—effectively total languages—one can avoid the 'impossible' task of verifying general AI and instead produce 'quantitative safety guarantees.'}
}
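For anyone unclear on what "total languages" means in this exchange: a total language is one in which every well-typed program is proven to terminate, typically by restricting recursion to structurally decreasing forms. A minimal sketch in Lean 4 (my own illustration, not code from the ARIA program):

```lean
-- Structural recursion on the list: the termination checker
-- accepts this, so totality is guaranteed by construction.
def sumList : List Nat → Nat
  | []      => 0
  | x :: xs => x + sumList xs

-- General recursion with no obvious decreasing measure must be
-- marked `partial`, which forfeits the termination guarantee.
partial def collatzSteps (n : Nat) : Nat :=
  if n ≤ 1 then 0
  else if n % 2 == 0 then 1 + collatzSteps (n / 2)
  else 1 + collatzSteps (3 * n + 1)
```

Note that the guarantee in the first definition is exactly "this function terminates", nothing more, which is the gap the parent comment is pointing at: termination is a property of the program's execution, not a claim about the safety of what it computes.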
WCSTombs|7 days ago
user1138|7 days ago