NIST Publishes Proposed Principles for “Explainable” AI Systems

On August 18, 2020, the U.S. National Institute of Standards and Technology (“NIST”) published a draft report, Four Principles of Explainable Artificial Intelligence (Draft NISTIR 8312 or the “Draft Report”), which sets forth four proposed principles regarding the “explainability” of decisions made by Artificial Intelligence (“AI”) systems.

Explainability refers to the idea that the reasons behind an AI system's output should be understandable. According to the NIST press release, AI must be explainable if society is to understand, trust and adopt new AI technologies and the decisions and guidance they produce.

The proposed principles are:

  • Explanation: AI systems should deliver accompanying evidence or reasons for all outputs.
  • Meaningful: Systems should provide explanations that are understandable to individual users.
  • Explanation Accuracy: The explanation should correctly reflect the system’s process for generating the output.
  • Knowledge Limits: The system only operates under conditions for which it was designed, or when it reaches a sufficient level of confidence in its output.

The Draft Report is part of a broader NIST effort to develop trustworthy AI systems. In publishing the Draft Report, NIST indicated that it hopes to start a conversation about the expectations to which decision-making devices should be held. NIST is accepting comments on the Draft Report until October 15, 2020.
