ACT-IAC ETHICAL APPLICATION OF ARTIFICIAL INTELLIGENCE FRAMEWORK

    Posted Oct 09, 2020 04:06:00 AM
    Hi All,

    The ACT-IAC has just published the "Ethical Application of Artificial Intelligence Framework."

    This paper and its index are intended as an advisory framework to highlight that humans are
    ultimately responsible for the ethical application of Artificial Intelligence (AI) solutions. By
    monitoring and measuring critical elements of AI throughout the lifecycle of development,
    implementation, and operations, one can assess an AI application's level of credibility and,
    thus, the level of confidence to place in that instance of this rapidly evolving technology.
    This confidence can be expressed through an index built on five core parameters that underpin
    the impact of AI systems: Bias, Fairness, Transparency, Responsibility, and Interpretability
    (a simple illustrative scoring sketch follows the list below).
    1. Bias: AI algorithms learn from large quantities of data, and the machine learning models
    built from that data can amplify biases inherently present in it. Accountable owners of AI
    systems should identify and address bias in AI to prevent negative impacts on desired mission
    outcomes or on individuals in protected classes or statuses.
    2. Fairness: AI systems should be designed to avoid the risk of unfair impact within the
    context of use, whether intentional or unintentional.
    3. Transparency: AI systems should be developed so that models, data, and results are
    auditable and explainable to decision-makers and the general population, to the extent and
    in the manner appropriate or possible.
    4. Responsibility: The implementation of an AI solution must be relevant to the purpose of
    the task, must ensure that both data and model sources are uncompromised, and must produce
    repeatable, legal, authentic, auditable, and effective results.
    5. Interpretability: Stakeholders should thoroughly understand what the AI has been asked to
    provide, and should be able to confirm that both data and model sources are credible and
    will produce repeatable, trustworthy, and effective results.
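
    To make the index concrete, here is a minimal illustrative Python sketch (my own, not part of
    the ACT-IAC paper). It assumes each of the five parameters has already been scored on a 0-to-1
    scale, shows one common way a bias signal might be measured (a demographic parity difference on
    model outcomes), and combines the scores into a single confidence index using assumed equal
    weights. The class name, parameter weights, and choice of bias metric are illustrative
    assumptions, not the framework's prescribed method.

        # Illustrative sketch only -- not the ACT-IAC framework's prescribed scoring method.
        # Parameter names, weights, and the bias check are assumptions for demonstration.
        from dataclasses import dataclass
        from typing import Dict, Optional, Sequence


        def demographic_parity_difference(outcomes: Sequence[int], groups: Sequence[str]) -> float:
            """Absolute gap in positive-outcome rates between the best- and worst-treated
            groups (0.0 means no measured disparity)."""
            rates = {}
            for g in set(groups):
                group_outcomes = [o for o, grp in zip(outcomes, groups) if grp == g]
                rates[g] = sum(group_outcomes) / len(group_outcomes)
            return max(rates.values()) - min(rates.values())


        @dataclass
        class EthicalAIIndex:
            """Hypothetical 0-1 scores for the five parameters described above."""
            bias: float              # 1.0 = no measured bias (e.g., 1 - demographic parity difference)
            fairness: float
            transparency: float
            responsibility: float
            interpretability: float

            def confidence_index(self, weights: Optional[Dict[str, float]] = None) -> float:
                """Weighted average of the five scores; equal weights assumed by default."""
                scores = {
                    "bias": self.bias,
                    "fairness": self.fairness,
                    "transparency": self.transparency,
                    "responsibility": self.responsibility,
                    "interpretability": self.interpretability,
                }
                weights = weights or {name: 1.0 for name in scores}
                return sum(scores[k] * weights[k] for k in scores) / sum(weights.values())


        if __name__ == "__main__":
            # Toy example: binary approval outcomes for two demographic groups.
            outcomes = [1, 1, 0, 1, 0, 0, 1, 0]
            groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
            dpd = demographic_parity_difference(outcomes, groups)  # 0.75 - 0.25 = 0.50

            index = EthicalAIIndex(
                bias=1.0 - dpd,        # a large disparity lowers the bias score
                fairness=0.8,          # the remaining scores are assumed to come from
                transparency=0.7,      # governance reviews or checklists, not computed here
                responsibility=0.9,
                interpretability=0.6,
            )
            print(f"Demographic parity difference: {dpd:.2f}")                  # 0.50
            print(f"Overall confidence index: {index.confidence_index():.2f}")  # 0.70

    In practice, each score would be fed by the monitoring and measurement activities the paper
    describes across development, implementation, and operations, rather than by a single
    point-in-time calculation like this one.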

    ------------------------------
    Michael Roza CPA, CISA, CIA, MBA, Exec MBA
    ------------------------------