
How to Use Ardoq to Track Compliance with the EU AI Act

Ardoq’s guidance on how to implement the EU AI Act

Written by Sean Gibson
Updated this week

Before You Begin

Familiarize yourself with these traditional Ardoq solutions and best practices; note that a number of these solutions are now provided as an integrated Foundation solution for new customers:

Purpose

Organizations can leverage Ardoq to address the regulatory obligations of the European Union’s Artificial Intelligence Act (EU AI Act). The EU AI Act, which entered into force in August 2024 and phases in through 2026, establishes a comprehensive risk-based framework governing the design, development, deployment, and ongoing management of artificial intelligence systems.

Delivering Value

Ardoq’s guidance on how to implement the EU AI Act gives organizations the ability to address the following key business questions:

  1. Which of our existing AI systems are classified as "High-Risk" or "Unacceptable" under the EU AI Act, and what is their compliance status?

  2. How are the specific EU AI Act requirements linked to our existing internal policies, technical controls, and governance frameworks?

  3. For our high-risk AI systems, who are the documented third-party vendors and suppliers, and what is the evidence of their regulatory compliance?

  4. Are we effectively documenting the required processes and personnel responsible for human oversight and transparency for our impactful AI systems?

  5. What is the current implementation progress and monitoring status for all our high-risk AI systems across the enterprise?

Scope and Rationale

At a high level, the EU AI Act establishes a comprehensive legal and operational framework that organizations developing, deploying, or distributing AI systems must follow.

This requires organizations to respond in the following manner:

  1. Classify AI Systems by Risk Level

  2. Establish a Risk Management System

  3. Ensure Data Quality and Governance

  4. Maintain Technical Documentation and Transparency

  5. Implement Human Oversight and Accountability

  6. Conduct Conformity Assessments and Display CE Marking

  7. Monitor and Report Risks

Organizations face severe penalties for non-conformity, with fines of up to €35 million or 7% of global annual turnover, whichever is higher.

To summarize, the EU AI Act requires organizations to treat AI governance like product safety management—integrating risk-based validation, auditability, and human accountability into every stage of an AI system’s life cycle.

This guide provides a structured approach for assessing, documenting, and demonstrating compliance with the EU AI Act’s requirements by integrating them into enterprise architecture components, including business capabilities, processes, applications, information assets, and organizational units.

Using Ardoq, organizations can:

  • Model and classify AI systems according to the Act’s risk categories (unacceptable, high-risk, limited, minimal) and systematically map these systems to business functions and processes.

  • Assess and document technical, data governance, risk management, human oversight, transparency, and post-market monitoring requirements for high-risk and general-purpose AI systems.

  • Establish relationships between regulatory requirements, organizational policies, technical controls, and existing frameworks to support ongoing compliance.

  • Track conformity assessments, CE marking status, incident reporting, audits, and the roles of internal and external stakeholders in the governance of AI systems.

The approach outlined in this guide enables organizations to embed EU AI Act compliance into their architecture, reporting, and assurance processes, providing clear evidence to regulators, auditors, and stakeholders of effective governance and responsible AI practices.

Prerequisites

To effectively implement the EU AI Act in Ardoq, develop a solid understanding of several key concepts and practices:

Essential Knowledge Areas:

  • Application Lifecycle Management - Oversee your organization's development, deployment, and ongoing support of applications and capabilities

  • Business Process Management - Manage operational flows and activities within your organization

  • Business Capability Realization - Focus on practical implementation and achievement of business goals

  • Organizational Patterns - Structure and manage enterprise components using proven frameworks

  • Enterprise AI Management - Plan and track the adoption of AI across an organization's Application Portfolio.

  • Enterprise AI Governance - Govern the use of AI, maximizing its business potential, managing risks, and ensuring regulatory compliance.

  • Application Risk Management - Mitigate risks associated with critical applications (optional but recommended)

These expertise areas form a comprehensive toolkit for successfully delivering an EU AI Act implementation in Ardoq.

Process to track compliance with the EU AI Act

This section summarizes the steps to demonstrate compliance with the EU AI Act. The sections below explain in detail how to implement each step.

Approach Summary

Follow this high-level process to implement EU AI Act compliance in your organization using Ardoq:

  1. Model EU AI Act Requirements

    • Catalogue and structure all requirements from the EU AI Act, organizing them by key articles, risk categories (unacceptable, high-risk, limited, minimal), and obligations for providers and deployers.

  2. Classify and Inventory AI Systems

    • Identify all AI systems in the organization and classify them according to the EU AI Act’s risk levels.

    • Record the ownership, deployment status, use cases, and interactions with individuals.

  3. Map Regulatory Relationships

    • Link EU AI Act requirements to existing internal policies, controls, technical standards, governance frameworks, and external regulations (e.g., GDPR, cybersecurity standards).

  4. Assess and Document System Compliance

    • Perform detailed assessments for each AI system, capturing evidence of compliance (technical documentation, data governance, risk controls, human oversight).

    • Use fields and surveys to track system status, conformity assessments, CE markings, and incident reporting obligations.

  5. Establish Human Oversight and Transparency

    • Document the processes and personnel responsible for human oversight, decision support, and remediation actions for high-risk and potentially impactful AI systems.

    • Record measures ensuring end-user awareness of AI interactions where relevant.

  6. Monitor and Maintain Compliance Over Time

    • Implement maintenance processes in Ardoq for ongoing monitoring, post-market surveillance, compliance audits, and reporting of serious incidents to regulators.

    • Schedule periodic reviews and updates to system documentation and regulatory mappings.

  7. Document Third-Party and Supplier Relationships

    • Track all external vendors and suppliers contributing to AI systems (especially high-risk systems), recording evidence of their regulatory compliance and data provenance.

  8. Generate Compliance Reports and Visualizations

    • Use Ardoq’s reporting and visualization features to build dashboards, heatmaps, and audit trails of system compliance status, risk levels, and implementation progress.

Note: This approach enables organizations to systematically model, manage, and demonstrate compliance with the EU AI Act, facilitating robust oversight, stakeholder assurance, and regulatory engagement.

Metamodel Elements of the EU AI Act Solution

Image: Regulation Compliance Metamodel

Information Artifact

Regulation Workspace: Create a separate workspace for each regulation implemented in Ardoq. Use the Information Artifact component type template to represent the EU AI Act regulation in this workspace.

Assessment Workspace: To implement the EU AI Act assessment, leverage the Compliance Assessments workspace included in the Enterprise AI Governance solution.

The EU AI Act Assessment is included in the sample materials to track EU AI Act Criticality. Use its fields to generate reports, copy them to the subject component, and identify further actions. If necessary, you can create additional fields on the survey to support additional data collection.

Reference: Information Artifact component types are introduced and defined in the Architecture Records Ardoq Solution. See also How to represent Policies, Principles, Standards and Frameworks in Ardoq.

Category

Use Category components to structure the Requirements that make up the EU AI Act and to categorize assessments. Category components organize EU AI Act requirements in a hierarchy of arbitrary depth, with the leaf nodes being components of the type organized within the hierarchy.

Reference: The Category component type is described in How to use Categories in Ardoq.

Requirement

A Requirement represents a specific obligation your organization must meet in addressing the EU AI Act. It may also belong to a broader collection of requirements or characteristics attached to an Information Artifact, such as a policy, a set of principles, or a compliance framework (e.g., the NIST AI Risk Management Framework or an ISO framework).

Business Capability

Ardoq defines a business capability as a logical activity or group of activities your organization performs. Unlike a business process, we define business capabilities by grouping activities that access or utilize a shared resource (like customer information) rather than in response to a particular trigger or event.

Capability Instance: Apply this to Business and Technical Capabilities using the existing Capability component type with a field indicating Atomic or Instance. Use a brief naming convention and an instance workspace to signal that the component is an instance.

AI Systems as an Application

The prerequisite foundational Enterprise AI Management solution leverages the EU AI Act definition of an AI system: a machine-based system that is designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

This definition highlights seven key elements:

  • It’s machine-based.

  • It can operate with different levels of autonomy.

  • It may adapt after it’s been deployed.

  • It has explicit or implicit objectives.

  • It infers from input data.

  • It generates outputs like predictions or decisions.

  • These outputs can influence physical or virtual environments.

Ardoq models an AI System as an Application, Application Module, or Technology Service component type rather than introducing a new component type.

An Application is a deployed and running software solution that provides a specific business or technology capability, performs a defined task, or analyzes specific information, regardless of its deployment environment (SaaS, local, etc.). It may be part of (i.e., a child component of) a larger Application.

Organization Unit

An Organization represents the top-level component of your organization, as well as other organizations forming part of your broader ecosystem, such as partners.

Organizational Units decompose into business units, departments, sub-organizations, committees, teams, and groups to enable organizational hierarchy creation. Document both structural (hierarchical) and functional entities within your organization using the Organizational Unit component.

External Organization

Create a separate workspace to hold External Organizations, particularly the vendors and suppliers contributing to AI systems (especially high-risk systems) as defined in the EU AI Act.

Implementation

  1. Modeling EU AI Act Requirements

To model regulatory requirements for the EU AI Act, we build on the concepts developed as part of the Generic Regulatory Compliance Solution. Specifically, we recommend following the Information Artifact and Requirement component guidance.

The Framework Workspace is composed of three component types:

  1. Category - The Category component organizes requirements in a categorized hierarchy according to the respective regulation (EU AI Act, NIS2, etc.). The hierarchy may be of arbitrary depth, with the leaf nodes being components of the type organized within it. Each category represents an area of focus within the EU AI Act.

  2. Information Artifact - The Information Artifact represents the EU AI Act regulatory framework itself.

  3. Requirement - In this context, the component type represents a specific requirement from the regulation (e.g., the EU AI Act includes a requirement to “Ensure the testing program covers all critical systems, applications, and processes, including third-party services.”)

The diagram above shows the hierarchy of Information Artifact, Category, and Requirement.
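
To make this hierarchy queryable, a minimal Gremlin sketch is shown below. It counts Requirements under each Category by walking up the hierarchy. The child-to-parent edge label ('ardoq_parent') is an assumption about how the hierarchy is exposed in your Ardoq Gremlin graph; verify it against your metamodel before use.

    // Count Requirements per Category by walking up the hierarchy.
    // Assumption: the workspace hierarchy is exposed as child-to-parent
    // edges labeled 'ardoq_parent'; adjust the label to match your graph.
    g.V().hasLabel('Requirement')
      .repeat(__.out('ardoq_parent'))
      .until(__.hasLabel('Category'))
      .groupCount()
      .by('name')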

2. Classify and Inventory AI Systems

2a. Inventory AI Systems

Here we summarize the process for identifying and classifying AI systems within an organization's deployed systems using a Technical Capability Model, directly leveraging the Enterprise AI Management solution's method. The approach uses a flexible, "soft-coded" classification system to accurately tag systems as AI, and as specific types of AI capability (e.g., LLMs, computer vision), for effective governance and management, while avoiding automatic propagation of the AI classification to connected systems that don't actually utilize the embedded AI functionality.

Key Steps for Identifying and Classifying AI Systems

  1. Define the AI Capability Model: Place the "Artificial Intelligence" technical capability component (following a recognized definition like the EU AI Act) and its hierarchical descendants, which represent specific AI technology types, within your existing Technical Capabilities Workspace. A minimum of just the top-level "Artificial Intelligence" component can be used for a simple "AI or not AI" classification.

  2. Ensure Component Types Have the AI Fields: Verify that the relevant system component types—Applications, Application Modules, Technology Services, and Technology Products—contain the calculated field, AI System, which will be used to flag the component as an AI System.

  3. Establish the Realization Reference: Create an "Is Realized By" reference from an AI technical capability component (from the model created in step 1) to a system component (Application, Module, Service, or Product). This action will set the system component's AI System field to true.

  4. Propagate AI Classification (Where Applicable):

    1. If a Technology Product is classified as an AI System, the classification will automatically propagate to any components that are the destination of a "Deploys To" reference from that product or its children.

    2. If an Application Module is classified as an AI System, its parent Application will also automatically be classified as an AI System.

  5. Manually Classify Embedded AI Systems: Identify systems that connect to or are supported by a classified AI System (using the provided Viewpoint for help) and manually check whether they are leveraging or embedding the external AI capability. If they are, manually record them as AI Systems by creating an "Is Realized By" reference to the specific AI capability they have embedded, as the classification is not automatically propagated via Connects To or Is Supported By references. A Gremlin sketch of this check follows this list.
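
As referenced in step 5, the sketch below is one hedged way to find candidates for that manual review in Gremlin: it lists systems that point at a flagged AI System via Connects To or Is Supported By references but are not flagged themselves. The field's API name ('ai_system') is an assumption; adjust the type and reference names to your metamodel.

    // Candidate systems for manual review: they connect to, or are
    // supported by, a flagged AI System, but are not flagged themselves.
    // Assumption: the calculated field's API name is 'ai_system'.
    g.V().hasLabel('Application', 'Application Module',
                   'Technology Service', 'Technology Product')
      .has('ai_system', true)
      .in('Connects To', 'Is Supported By')  // systems referencing the AI system
      .not(__.has('ai_system', true))        // not yet classified as AI
      .dedup()
      .values('name')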

3. Map Regulatory Requirements to Compliance Efforts

Prior to the enforcement of the EU AI Act, organizations may already have implemented a common compliance or control framework such as NIST AI RMF, ISO 42001, ISO 23894, ISO 23053, or other frameworks relevant to their business operating model. These frameworks often deploy controls into the organization that satisfy requirements of the EU AI Act.

From a compliance perspective, Ardoq recommends establishing an 'Is Realized By' relationship from the EU AI Act requirements to the relevant control framework. This way, organizations can report on EU AI Act compliance through the realization of the control framework's controls.

The diagram above shows the dependency between an EU AI Act requirement and a corresponding NIST AI RMF requirement.

Integration Steps:

  1. Link baseline requirements - Once you have the EU AI Act baseline requirements in Ardoq, link them to existing standards or frameworks you have in place

  2. Create relationships - Establish links between individual EU AI Act requirements and the specific parts of the frameworks you've implemented that support them

  3. Generate compliance reports - Create reports demonstrating compliance and showing your progress in addressing EU AI Act requirements through implemented standards, frameworks, and policies (see the sketch below)

  4. Connect internal controls - Link internal controls and policies implemented as part of your risk management or IT service management practices directly to the EU AI Act requirements they address

For more information regarding the implementation of Control Frameworks and controls please see the guidance in the Compliance Assurance metamodel.
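
As a sketch of the gap reporting behind step 3, the Gremlin query below lists requirements that no control yet realizes. It assumes requirements use the Requirement component type and the 'Is Realized By' reference recommended above; scope the starting set to your EU AI Act requirements workspace as needed.

    // Compliance-gap report: Requirements with no realizing control.
    // Scope to the EU AI Act requirements workspace before running.
    g.V().hasLabel('Requirement')
      .not(__.out('Is Realized By'))  // no link to a control framework component
      .values('name')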

4. Assess and Document AI System Compliance

Assessing and documenting AI system compliance is a two-part process.

The first part is to determine the intended purpose of the AI system. This differs conceptually from using technical capabilities to determine a system's technical characteristics, or business capabilities to determine what it delivers to the organization or its customers. The intended purpose is what the organization plans to do with the AI functionality and is more conceptual. It is this purpose that determines the risk-based classification, or risk level, associated with a given AI implementation.

This comes from understanding the EU AI Act and how it categorizes AI use into the following risk levels:

  • Unacceptable Risk

  • High Risk

  • Limited Risk

  • Minimal Risk

Understanding the Risk-Based Classification of AI Systems

The AI Act categorizes AI systems into four levels of risk, each with a different set of obligations:

  • Unacceptable Risk: These AI systems are outright banned as they are considered a clear threat to fundamental rights. Prohibited practices include:

    • Harmful AI-based manipulation or deception.

    • Exploiting vulnerabilities due to age, disability, or socio-economic situation.

    • Social scoring by public authorities.

    • Real-time remote biometric identification in public spaces for law enforcement, with limited exceptions.

    • Emotion recognition in the workplace or educational institutions.

    • Untargeted scraping of facial images from the internet or CCTV to create or expand facial recognition databases.

  • High Risk: This is the most regulated category. AI systems are classified as high-risk if they are a safety component of a product or a product itself that is covered by EU legislation, or if they fall into specific use-case categories listed in Annex III of the Act. These use cases include:

    • AI in critical infrastructure (e.g., transport, energy).

    • AI in education (e.g., for evaluating student performance).

    • AI in employment, worker management, and access to self-employment (e.g., CV-sorting software).

    • AI for access to essential public and private services (e.g., credit scoring).

    • AI in law enforcement and migration.

    • AI in the administration of justice and democratic processes.

    • Biometric identification and categorization systems.

  • Limited Risk: These AI systems are subject to specific transparency requirements. For instance, providers of chatbots must inform users that they are interacting with a machine. Similarly, providers of generative AI must ensure that AI-generated content (like deepfakes) is identifiable as such.

  • Minimal or No Risk: The vast majority of AI systems fall into this category and are subject to minimal, if any, regulation under the Act. Organizations are encouraged to adopt voluntary codes of conduct.

To fully understand the risk-based structure, consult the following EU AI Act sections: Title II (Prohibitions), Title III (High-Risk), and Title IV (Transparency/Limited Risk). The Annex III list is indispensable for identifying high-risk AI.

If you need a quick guide to compliance by system type, this article provides a summary table.

EU AI Act Compliance Assessment

The EU AI Act Solution has been designed to work with the alternative approach outlined in the Enterprise AI Governance solution. This means that EU AI Act Assessment components that are kept under a Category called EU AI Act Compliance in the Compliance Assessments workspace will be recognized as AI compliance assessment components by this Solution’s reports and dashboard widgets.

Place your AI Assessment components under a Category component that contains the term “AI” in the Compliance Assessments workspace. Include the word “Assessment” in the name of your component type.

Ensure that your custom component type contains the following fields:

  • Pass

  • Review Date

  • Live

  • Approval Status

Ensure that your custom component type uses the Has Subject reference type to point to the component or components that are the subject of the assessment (Applications or Technology Services).

The specific questions in the EU AI Act Assessment are broken down by section and derived from the EU AI Act itself. Whether all of these questions need to be answered depends on the system's risk-based classification.
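
As a hedged illustration of how these assessment components can be queried once populated, the Gremlin sketch below surfaces systems whose assessment did not pass. The component type name ('EU AI Act Assessment') and the 'pass' field API name are assumptions based on the conventions above.

    // List subject systems whose EU AI Act assessment did not pass.
    // Assumptions: assessment type 'EU AI Act Assessment', boolean field
    // 'pass', and a 'Has Subject' reference to the assessed system.
    g.V().hasLabel('EU AI Act Assessment')
      .has('pass', false)
      .out('Has Subject')
      .dedup()
      .values('name')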

AI System Identification

This section uniquely identifies the system and establishes its purpose, the role the organization plays (e.g., Provider, Deployer), and its jurisdictional scope (i.e., whether it operates within the EU).

Unacceptable Risk Assessment

This section determines whether the AI system is prohibited and must be withdrawn from the market. A "Yes" answer to any question in this section likely indicates a violation of the Act.

High-Risk AI System Assessment

This section determines whether the system is classified as High-Risk. The questions ask whether the system is used in fields such as Biometrics, Critical Infrastructure, Education, Employment, or Law Enforcement. A "Yes" answer triggers the comprehensive compliance obligations required for High-Risk AI.

Limited Risk AI System Assessment

This section determines whether the system falls into the Limited Risk category, which mandates specific transparency obligations such as informing users that they are interacting with an AI (e.g., a chatbot) or labeling AI-generated content (e.g., deepfakes).

Minimal Risk AI System Assessment

This section confirms that the system poses a Minimal Risk and is therefore subject to no specific legal obligations under the Act, allowing for free development and deployment.

5. Establish Human Oversight and Transparency

To establish human oversight and transparency in addressing the EU AI Act, we leverage the Enterprise AI Governance and Enterprise AI Management solutions. These solutions provide a practical framework for operationalizing key requirements mirrored in the EU AI Act, specifically Human Oversight and Transparency.

These two governance pillars are established through a set of core AI Principles, such as "Human-in-the-Loop" (mandating capacity for human intervention in high-stakes situations) and "Transparent by Design" (requiring understandable and explainable decision-making). The Enterprise AI Governance solution then enforces these requirements by mandating human accountability through the "Approved By" references on Compliance Assessments, ensuring a human reviews and signs off on an AI system’s status.

Crucially, Transparency is secured through technical measures: all Compliance Assessments require a written "Rationale" for the outcome, and periodic Health Checks are preserved as a distinct, un-overwritten historical record to ensure an auditable history of the system’s performance over time. This structured approach, built on the foundation of AI system classification provided by Enterprise AI Management, directly addresses the EU AI Act's focus on structured governance and accountability.

Establishing Human Oversight

The concept of human control and review is established through a dedicated AI Principle and specific governance processes:

Human-in-the-Loop Principle: This is one of the core AI Principles that organizations can adopt. Its definition explicitly states its purpose is to maintain "the capacity for oversight and intervention in all AI-driven processes, particularly in high-stakes situations". This principle ensures that human judgment is supplemented, rather than replaced, by the AI system, and it provides a clear point of accountability.

Compliance Approval: The Compliance Assessment component type includes an "Approved By" reference that connects the assessment record to specific individuals (People components). This formal mechanism ensures that a human must review and sign off on the compliance status of an AI system.

Recommendation Review: EU AI Act Compliance Assessments require the reviewer to record comments, recommendations, and the underlying rationale for the assessment of each AI principle. This ensures a human reviewer provides actionable input on the system's performance and alignment with principles.

Process for Improvement: The analysis of EU AI Act Compliance Assessment outcomes is designed to help organizations refine their AI principles and improve the deployment of AI systems, such as by developing design patterns or reference architectures.
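
Because the "Approved By" reference is the formal sign-off mechanism, a simple audit query can reveal assessments that still lack one. This Gremlin sketch assumes the assessment component type name used earlier and that "Approved By" references point from the assessment to a Person component.

    // Audit: compliance assessments with no human sign-off yet.
    // Assumption: 'Approved By' points from the assessment to a Person.
    g.V().hasLabel('EU AI Act Assessment')
      .not(__.out('Approved By'))
      .values('name')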

Establishing Transparency

Transparency is established through mandatory data fields, auditable records, and a specific AI Principle:

"Transparent by Design" Principle: This core principle guides that AI solutions should be designed to make their decision-making processes understandable and explainable to both technical and non-technical stakeholders. This is intended to build trust, facilitate auditing, and comply with regulatory requirements for explaining automated decisions.

Auditable Compliance Records:

Rationale Field: The Compliance Assessment component type includes a "Rationale" text field to facilitate recording an explanation for the outcome (Pass or Fail) of the assessment.

Preserved History: The process for renewing Compliance Assessments and EU AI Act Compliance Assessments is specifically designed not to overwrite the previous records. Instead, a new component is created for each new assessment, preserving a growing, auditable history of the AI system's compliance and health status over time.

Clear Classification: The solution clearly identifies and classifies systems as AI Systems based on a Technical Capability Model, often leveraging the definition from the EU AI Act. This provides transparency to the organization about which systems fall under AI governance and what kind of AI capability they realize (e.g., LLMs, Facial Recognition).

Evaluation Scores: The EU AI Act Compliance Assessment process documents and records the data required for submission to the regulator for conformity. This also provides clear, quantifiable data on how the system aligns with its guiding principles for each risk-based classification level.
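
Because renewals create new assessment components rather than overwriting old ones, reports usually need only the most recent assessment per system. The Gremlin sketch below is one way to do this; the 'review_date' field API name is an assumption, as is its holding a sortable date value.

    // For each assessed system, keep only the most recent assessment.
    // Assumption: 'review_date' holds a sortable date value.
    g.V().hasLabel('EU AI Act Assessment')
      .order().by('review_date', Order.desc)             // newest first
      .dedup().by(__.out('Has Subject').values('name'))  // first hit per subject
      .valueMap('name', 'pass', 'review_date')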

6. Monitor and Maintain Compliance Over Time

Ardoq assists organizations in monitoring and maintaining compliance with the EU AI Act over time through the following mechanisms:

Continuous Monitoring and Auditable History:

Recurring Compliance Assessments: The Enterprise AI Governance solution features the EU AI Act Compliance Assessment component type. The process for renewing EU AI Act Compliance Assessments is specifically designed not to overwrite previous records, but rather to create a new component for each assessment. This preserves an auditable, time-stamped history of the AI system's health and compliance status over its entire lifecycle, which is crucial for demonstrating continuous compliance and post-market monitoring as required by the EU AI Act.

Log and Traceability: The requirement for Record-keeping (including logging system events) under the EU AI Act is supported by providing a structural place to store and link this historical assessment data, ensuring traceability of the system's compliance evolution.

Systematic Compliance Assessment and Remediation:

Formal Compliance Assessments: The Compliance Assessment component type allows organizations to formally assess an AI system against specific regulatory frameworks, including the EU AI Act. This ensures compliance is not a one-time event but a formalized, repeatable check.

Organizational Scope and Risk Prioritization:

Holistic AI Identification: The Enterprise AI Management solution ensures every system within the enterprise is identified, classified, and tracked (even those embedding AI via support or integration). This provides the necessary scope to ensure all relevant High-Risk systems are brought under the EU AI Act's compliance regime.

Vendor and Third-Party Management: The approach outlined in step seven (Document Third Party and Supplier Relationships) requires creating specific Vendor Assessment components and running periodic surveys. This ensures continuous monitoring of the security, quality, and compliance standards of external suppliers, which is vital since the Provider organization remains responsible for the compliance of its High-Risk AI system even if components are outsourced.

Data-Driven Review and Visualization:

Analytical Review: By recording compliance data structurally, Ardoq can generate visualizations (e.g., heatmaps, dashboards) that allow leaders to monitor the overall risk and compliance posture of their entire AI portfolio at a glance, facilitating ongoing governance and decision-making by human oversight bodies.

7. Document Third Party and Supplier Relationships

To adhere to the EU AI Act, organizations must manage external AI dependencies by establishing an External Organizations Workspace for Material Service Providers. This involves configuring provider fields on the Organization component to capture EU AI Act-specific information, such as an EU AI Act Third Party Vendor designation. Organizations identify critical providers for High-Risk AI Systems and enforce governance through EU AI Act Vendor Assessments. These assessments involve creating dedicated components for each critical provider, establishing relationships, and deploying vendor surveys via Ardoq's broadcast functionality.

Finally, a Gremlin script transfers collected assessment data, including EU AI Act Criticality fields, back to Organization components for reporting and risk visualization.

Ardoq provides a methodology for an organization to manage its Provider or Deployer obligations under the EU AI Act, ensuring that systems and components sourced from third-party vendors comply with the high standards required for High-Risk AI.

  1. Create external organization workspace - Use or create an External Organizations Workspace using the Organization component type to hold Material Service Providers

  2. Configure provider fields - Create an additional field to identify an EU AI Act Third Party Vendor and add fields to capture all relevant EU AI Act-specific information (address, business operation numbers, company numbers, contact information, support details)

  3. Identify critical providers - Start by identifying support/vendor organizations that provide AI systems critical under the EU AI Act

Conducting EU AI Act Vendor Assessments:

  1. Create vendor assessments - Create an 'EU AI Act Vendor Assessment' as a 1:1 relationship for each organization identified as a Third Party Service Provider and update the relevant fields

  2. Establish assessment relationships - Create an 'is subject of' reference from each individual EU AI Act Vendor Assessment to the related organization component that is the subject of the assessment

  3. Deploy vendor surveys - Use survey functionality to have application, process, or information asset owners, or relevant third-party contacts, respond to surveys completing the assessment fields. Automate regular sending through Ardoq's broadcast functionality

  4. Transfer vendor data - After collecting vendor assessment data, create a Gremlin script to copy EU AI Act Criticality field values to the relevant organization component, enabling heat map generation and reporting on suppliers connected to EU AI Act-critical applications (a sketch of such a script follows this list)
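
A minimal sketch of such a script is shown below. The field API name ('eu_ai_act_criticality') and the direction of the 'Is Subject Of' reference (from the assessment to the organization, per step 2) are assumptions to verify against your metamodel before running.

    // Copy EU AI Act Criticality from each vendor assessment onto the
    // Organization it assesses, so heat maps can be driven from Organizations.
    // Assumptions: field API name 'eu_ai_act_criticality'; the reference
    // 'Is Subject Of' runs from the assessment to the organization.
    g.V().hasLabel('EU AI Act Vendor Assessment')
      .has('eu_ai_act_criticality')
      .as('assessment')
      .out('Is Subject Of')
      .property('eu_ai_act_criticality',
                __.select('assessment').values('eu_ai_act_criticality'))
      .iterate()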

8. Generate Compliance Reports and Visualizations

Overall Status in Addressing the EU AI Act

Ardoq provides an out-of-the-box dashboard that shows how well the EU AI Act requirements are being addressed in the organization through assessments, compliance frameworks, or relationships to internal controls.

Image: EU AI Act Dashboard

Reporting on Risk-Based Classifications and Conformity

Ardoq helps organizations create EU AI Act-relevant reports and supporting visualizations through the use of assessments to collect structured data. These assessments capture specific compliance attributes, risk ratings, and detailed textual rationales for assessment outcomes.

By linking these assessment components to the subject AI system and the relevant regulatory requirements (e.g., DORA's requirements, or the EU AI Act's High-Risk categories), Ardoq creates a comprehensive, interconnected data model of the organization's compliance landscape.
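
As one concrete example, a Gremlin query like the sketch below can feed a dashboard widget with the distribution of AI systems per risk level. The field API name ('eu_ai_act_risk_classification') is an assumption, as is the practice of copying the classification from the latest assessment onto the system itself.

    // Count flagged AI systems by EU AI Act risk classification to drive
    // a dashboard widget or heat map.
    // Assumption: field API name 'eu_ai_act_risk_classification'.
    g.V().hasLabel('Application', 'Technology Service')
      .has('ai_system', true)
      .has('eu_ai_act_risk_classification')  // only systems with a recorded level
      .groupCount()
      .by('eu_ai_act_risk_classification')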
