Not necessarily. Interpretability of a system used to make decisions matters more in some contexts than others. For example, a black-box AI used to make judicial decisions would completely remove transparency from a system that requires careful oversight. It seems to me that the intent of the legislation is to prevent such cases from arising, so that people can contest decisions that have a material impact on them, and so that organisations can provide traceable reasoning.
kenjackson|1 year ago
It would seem accountability would only be higher in systems where humans were not part of the decision-making process.