Author here. Nope, that is not my concern about humans in the loop. My concern is that any human in the loop has to reconstruct for themselves, from the inputs alone, whether the output makes sense; the system provides no significant help with that. LLMs that "explain their reasoning" are potentially a step forward here.
You are right that I'm not talking about AGI here, but rather about safe systems.