AI Lens: Enterprise AI Governance Purpose, Scope and Rationale

Governing the use of AI, maximising its business potential, managing risks and ensuring regulatory compliance.

Written by Simon Field
Updated this week

Purpose and Value

Introduction

Artificial Intelligence is transforming how almost every part of many organizations works. Its nature means that it cannot simply be treated as “just another application”. In a growing number of regions across the globe, it falls under specific regulatory control. It both consumes and disseminates huge quantities of information, which may conflict with laws and policies associated with data privacy, intellectual property and commercial sensitivity. It can interact directly with customers and machines, affecting health, safety, reputation and ethics. As a result, it has distinct architectural and security considerations that demand management attention.

Purpose

This solution provides a place where organizations can oversee how AI is being deployed and used. They can use it to identify opportunities, monitor progress, track risks, record compliance, implement controls and measure success.

Delivering value with this Solution

Ardoq’s Enterprise AI Governance Solution gives organizations the ability to address the following key business questions:

  • Which of our systems are subject to specific AI laws, regulations and policies?

  • Which applications are integrated with our AI systems, and so may fall within the scope of AI governance?

  • What is the compliance status of each of our AI systems?

  • Which compliance assessments are shortly due for renewal?

  • What technical debt have we accumulated that needs addressing? *

  • What are the AI compliance principles that guide our adoption and use of AI?

  • How well does each AI system meet the demands of our AI principles, policies or regulatory requirements and what, if anything, needs to be improved?

Scope and Rationale

Two complementary component types, Compliance Assessment and Solution Health Check, are used in this Solution: the former records the results of AI compliance assessments, and the latter records the details of supporting evaluations of alignment with AI policies, principles or regulatory requirements. For a more detailed discussion of how these two component types can be used in different situations, and in some cases together, see Assessing Applications with Ardoq.

AI Compliance Assessments

The Compliance Assessment component type

At the heart of this Solution is the ability to record the outcome of compliance assessments. The Solution does not impose any constraints on the criteria against which compliance is being assessed. In some jurisdictions this may be local legislation that imposes some regulatory requirements on organizations that are operating AI Systems (such as the EU AI Act in the European Union). And many organizations are likely to impose their own internal controls to manage their use and operation of AI, irrespective of whether there is a legal requirement to do so. These may take the form, for example, of internal policies or a set of guiding principles. The Regulatory Compliance Solution includes representation of regulations, and the same metamodel can be used to model policies and principles, as described in How to represent Policies, Principles, Standards and Frameworks in Ardoq.

Recording an AI Compliance Assessment involves adding a new Compliance Assessment Component to the Compliance Assessments workspace. This should be a child of the AI Compliance Category component in that workspace, to identify it as a compliance assessment that relates specifically to AI compliance. The same component type can also be used to represent compliance assessments that relate to other regulatory or policy requirements. For more information about these wider uses of compliance assessments, see the Architecture Records Solution.

To support the potential use of this component type in different situations, there are just five simple fields:

  • Pass? records the outcome: did the assessment result in a Pass or Fail decision?

  • Rationale, a Text paragraph field, records an explanation for the outcome.

  • Review Date records the date of the assessment. If the component is created from a survey, this can be automatically populated using a Hidden Field.

  • Live Date Range indicates the period during which the assessment outcomes remain valid.

  • Approval Status records the state of the record.
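As an illustration, the five fields can be thought of as one simple record per assessment. The sketch below is a hypothetical Python model of such a record; the class, field and component names are assumptions for illustration only, not Ardoq’s actual API or data format.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical illustration: field names mirror the Solution's five fields,
# not Ardoq's internal representation.
@dataclass
class ComplianceAssessment:
    name: str             # e.g. subject system name plus assessment date
    passed: bool          # the "Pass?" field: Pass (True) or Fail (False)
    rationale: str        # free-text explanation of the outcome
    review_date: date     # when the assessment was carried out
    live_start: date      # "Live Date Range": start of validity period
    live_end: date        # "Live Date Range": end of validity period
    approval_status: str  # e.g. "Draft", "Approved"

    def is_valid_on(self, day: date) -> bool:
        """True if the Live date range covers the given day."""
        return self.live_start <= day <= self.live_end

assessment = ComplianceAssessment(
    name="Claims Chatbot - 2025-01-15",
    passed=True,
    rationale="Meets internal AI policy controls.",
    review_date=date(2025, 1, 15),
    live_start=date(2025, 1, 15),
    live_end=date(2026, 1, 14),
    approval_status="Approved",
)
print(assessment.is_valid_on(date(2025, 6, 1)))  # True
```

The validity check here anticipates the renewal process described later: an assessment whose Live date range has ended no longer demonstrates compliance on its own.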

A Compliance Assessment component has a Has Subject reference to each component representing an AI System that is the subject of that particular compliance assessment, and approval of the assessment by individuals can be recorded with Approved By references.

A Survey, Record an AI Compliance Assessment, is provided with the Solution for creating and maintaining AI Compliance Assessments, and the combination of AI Systems and their connected Compliance Assessment components (or absence of them) is used in the Solution’s reports and dashboard.

Renewing Compliance Assessments

The Compliance Assessment component type contains a Live Date Range field that represents the period during which it remains valid. You will likely want an automated process that ensures expiring compliance assessments are renewed, so that live AI Systems remain in compliance with policy or regulatory requirements, with an audit trail that demonstrates the fact.

A Broadcast, AI Compliance Assessments expiring, is included with the Solution. If launched, it will send a message to the owners of AI Systems that have a Live end date within the coming month advising them to conduct a new compliance assessment, and record the results by creating a new Compliance Assessment component using the Record an AI Compliance Assessment Survey.

We chose not to use a Survey Broadcast, because it would overwrite the previous Compliance Assessment component with new field values. Such a record should be retained for audit purposes even after it has expired, so the Broadcast will facilitate the creation of a new component, with new values and a new Live date range, that can coexist with past ones. Since there will be multiple components representing different assessments of the same AI Systems, it is a good idea to include both the name of the application and the date of the assessment in the component name to ensure that the most relevant one can be easily found. The need to create a new component means that the message sent to the AI System owners must include a link to the Survey for the creation of new assessment components.
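The Broadcast’s selection logic, as described above, amounts to finding assessments whose Live end date falls within the coming month. The sketch below illustrates that selection with plain Python data; the real Broadcast is configured inside Ardoq, not written as code, and the record names here are invented examples.

```python
from datetime import date, timedelta

# Illustrative sketch only: assessments held as (component_name, live_end)
# pairs, following the recommended "name plus date" naming convention.
def expiring_within(assessments, today, days=30):
    """Return names of assessments whose Live end date falls between
    today and the cutoff (today + days)."""
    cutoff = today + timedelta(days=days)
    return [name for name, live_end in assessments
            if today <= live_end <= cutoff]

records = [
    ("Claims Chatbot - 2024-02-01", date(2025, 2, 10)),
    ("Fraud Model - 2024-06-01", date(2025, 8, 1)),
]
print(expiring_within(records, today=date(2025, 2, 1)))
# ['Claims Chatbot - 2024-02-01']
```

In the real Solution, owners of the matching AI Systems would then receive a message linking to the Record an AI Compliance Assessment Survey so that a new, coexisting component is created rather than the old one overwritten.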

You need to configure the Broadcast before you can launch and use it. See Getting Started with AI Lens: Enterprise AI Governance for details.

Using alternative Compliance Assessment records

You may wish to record more specific details for your AI compliance assessments than are provided for with the Compliance Assessment component type. Adding further fields to that component type would undermine its general reusability. If you wish to do this, we recommend you create your own AI compliance assessment type. By adhering to the following rules, your custom AI compliance assessment components will be included in the Solution’s reports and dashboard widgets that refer to compliance assessments and AI Systems’ compliance status.

  1. Place your AI Assessment components under a Category component that contains the term “AI” in the Compliance Assessments workspace.

  2. Include the word “Assessment” in the name of your component type.

  3. Ensure that your custom component type contains the following fields:

    1. Pass?

    2. Review Date

    3. Live

    4. Approval Status

  4. Ensure that your custom component type uses the Has Subject reference type to point to the component or components that are the subject of the assessment (Applications or Technology Services).
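The four rules above can be checked mechanically. The sketch below is a hypothetical validation function expressing them in Python; the function name and the way metadata is passed in are assumptions for illustration, not part of Ardoq.

```python
# Hypothetical check of the four rules for custom AI assessment types.
REQUIRED_FIELDS = {"Pass?", "Review Date", "Live", "Approval Status"}

def follows_solution_rules(type_name, parent_category, fields, reference_types):
    """True if a custom component type would be picked up by the Solution's
    reports and dashboard widgets, per the four rules above."""
    return (
        "AI" in parent_category               # rule 1: under a Category containing "AI"
        and "Assessment" in type_name         # rule 2: type name contains "Assessment"
        and REQUIRED_FIELDS <= set(fields)    # rule 3: required fields all present
        and "Has Subject" in reference_types  # rule 4: Has Subject reference used
    )

print(follows_solution_rules(
    "EU AI Act Assessment",
    "EU AI Act Compliance",
    ["Pass?", "Review Date", "Live", "Approval Status", "Risk Tier"],
    ["Has Subject", "Refers To"],
))  # True
```

Note that extra fields (such as the invented "Risk Tier" above) do not break recognition; only the required names must be present.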

EU AI Act Compliance Assessments

The EU AI Act Solution has been designed to work with this Solution, and you will note that the EU AI Act Assessment component type adheres to the above rules for alternative compliance assessment records. This means that EU AI Act Assessment components that are kept under a Category called EU AI Act Compliance will be recognized as AI compliance assessment components by this Solution’s reports and dashboard widgets.

AI Health Checks

The Solution provides support for conducting and recording AI Health Checks of your AI Systems. These consider how well a given system meets a set of criteria, recording an evaluation’s scores and recommendations. Such a Health Check can be recorded as evidence using a Refers To reference to it from its corresponding Compliance Assessment record (creating this link is supported in the Record an AI Compliance Assessment Survey). The process of conducting AI Health Checks uses the Conduct AI Health Check Survey.

Example AI Principles are provided, and you can import them to your Ardoq organization; see Getting Started with AI Lens: Enterprise AI Governance for details of how to do this. You can view them in Ardoq with the Solution’s AI Principles Viewpoint.

The Solution Health Check component type

An AI Health Check is an assessment of one or more AI Systems against the criteria set out in the AI Principles. When conducting an AI Health Check, each principle is considered and scored on a scale of 1 to 5 from two perspectives: the importance of that principle in the context of the system(s) under review, and the level of concern regarding how that principle is upheld by the system(s) under review. The reviewer may also record comments and recommendations against each principle, and a summary set of recommendations, along with the underlying rationale, is recorded.
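A per-principle score pair on the 1-to-5 scale can be pictured as a small record. The sketch below is purely illustrative Python, with invented names; it simply enforces the scale described above.

```python
from dataclasses import dataclass

# Hypothetical illustration of a single principle's scores in a Health Check.
@dataclass
class PrincipleScore:
    principle: str
    importance: int   # 1 (low) to 5 (high), in this system's context
    concern: int      # 1 (low) to 5 (high) level of concern
    comments: str = ""

    def __post_init__(self):
        # Both scores must be on the 1-5 scale described in the text.
        for value in (self.importance, self.concern):
            if not 1 <= value <= 5:
                raise ValueError("scores must be on a 1-5 scale")

score = PrincipleScore("Transparency", importance=5, concern=2,
                       comments="Model cards published for all releases.")
print(score.concern)  # 2
```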

The AI Health Check is represented with a Solution Health Check component in the Solution Health Checks workspace. For more information about the Solution Health Check Solution and its wider architectural use, see Solution Health Check: Purpose, Scope and Rationale. A Survey, Conduct AI Health Check, supports the Health Check process described above, and a Broadcast, AI Health Checks expiring, supports the scheduling of further Health Checks prior to the expiry of existing ones.

The Viewpoint, AI Health Check details, generates a visualization of AI Health Checks including the highlighting of principles which have high levels of concern.

It is also possible to display this information in a Bubble Chart, an example of which is included in the Presentation Enterprise AI Governance.

Analyzing Health Check outcomes

The AI Governance Dashboard contains an AI Health Checks section which includes an analysis of all Health Checks that are current or only recently expired. This explores the Health Checks from the perspective of each AI Principle, showing the minimum, average and maximum scores for both their Importance and Level of Concern. It allows you to see, for example, which principles are proving to be the hardest to live up to, and which are the most, or least, important across your AI Systems. It might help you to refine your AI Principles, and to improve the way in which you deploy AI Systems, such as developing and publishing some design patterns or reference architectures.
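The dashboard-style aggregation described above can be sketched as follows. This is an illustrative re-implementation in plain Python, assuming scores are gathered as (principle, importance, concern) tuples across multiple Health Checks; it is not how the Ardoq dashboard itself computes its widgets.

```python
from statistics import mean

# Illustrative aggregation: per principle, (min, avg, max) for both
# Importance and Level of Concern across all Health Checks.
def summarize(scores):
    by_principle = {}
    for principle, importance, concern in scores:
        by_principle.setdefault(principle, ([], []))
        by_principle[principle][0].append(importance)
        by_principle[principle][1].append(concern)
    return {
        p: {"importance": (min(i), mean(i), max(i)),
            "concern": (min(c), mean(c), max(c))}
        for p, (i, c) in by_principle.items()
    }

scores = [
    ("Transparency", 5, 2), ("Transparency", 4, 4),
    ("Fairness", 3, 1),
]
summary = summarize(scores)
print(summary["Transparency"]["concern"])  # (2, 3, 4)
```

A principle with a high maximum concern but low average, for instance, points at one outlier system rather than a systemic problem.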

Renewing AI Health Checks

A Broadcast, AI Health Checks expiring, is included with the Solution. If launched, it will look for any AI Health Checks that have a Live End Date within the next month, and send a message to the Application or Technology Service owners recommending that they run a further Health Check to replace the expiring one.

You will note that we chose not to implement this Broadcast as a simple Survey Broadcast. Had we done so, the expiring Health Check component would be overwritten with the data from the new Survey. We took the decision that the expiring Health Check component should remain, as a record of the previous Health Check. With this approach, each annual Health Check will create a new record, with new values and its own Live date range. Previous Health Checks will be preserved to create a growing auditable history. As with Compliance Assessments, we recommend including both the name of the subject AI System and the date of the Health Check in the component name to ensure that the right one can be easily found.
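The recommended naming convention, combining the subject system’s name with the Health Check date, could be applied consistently with a small helper. The sketch below is a hypothetical convention in Python; the exact separator and date format are assumptions, not prescribed by the Solution.

```python
from datetime import date

# Hypothetical naming helper: subject AI System name plus Health Check date,
# so coexisting records for the same system remain easy to tell apart.
def health_check_name(system_name: str, check_date: date) -> str:
    return f"{system_name} Health Check - {check_date.isoformat()}"

print(health_check_name("Claims Chatbot", date(2025, 3, 1)))
# Claims Chatbot Health Check - 2025-03-01
```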

Documenting Technical Debt (optional)

Examining an AI System for compliance, or assessing its conformance with AI Principles, presents a great opportunity to identify and document technical debt. Like any other IT System, AI Systems have the potential to acquire technical debt which, if left unaddressed, can become a substantial drain on an organization’s resources. Indeed, AI Systems are sometimes rushed into production in order to achieve perceived market advantages, and so they may be particularly vulnerable to being launched with an existing burden of technical debt (corners having been cut to achieve early adoption).

Ardoq’s Technical Debt Management Solution provides a comprehensive approach to recording, quantifying and prioritizing technical debt. If you’ve deployed this solution, you might wish to insert a link from your AI Health Check components to Debt Items that were newly identified during the Health Check process. To make the recording of new Debt Items easy, you might choose to add a reference question to the Conduct AI Health Check Survey. This will allow you to name and create new Debt Item components while you are running AI Health Checks. See Getting Started with AI Lens: Enterprise AI Governance for details of how to do this.
