
Introducing Ardoq Foundation Insights Agents

A key benefit of these Agents is their ability to look through large volumes of data and pick out the most valuable insights, giving you and your team more time to focus on taking action and delivering value to your stakeholders.

Written by Mario Aparicio
Updated today

The Foundation Insights Agents are currently in Open Beta

Access is currently limited to customers who have the Foundation Solution and associated reports

We will soon provide a guide for all customers to get access to the Foundation Solution reports

Please provide feedback through the Product Portal

Introduction

Ardoq Insights Agents are AI Agents that explore your architecture repository and look for a variety of architectural insights. You can think of them as architectural assistants, expanding your ability to track all the details in your repository. They will look for potential data quality issues, such as gaps or inconsistencies in your model, and can also find anomalies or insights that indicate architectural stories that could benefit from deeper investigation.

A key benefit of these Agents is their ability to look through large volumes of data and pick out the most valuable insights, giving you and your team more time to focus on taking action and delivering value to your stakeholders. We anticipate future versions being able to undertake follow-up action subject to user approval.

Model and Metamodel Agents

In this beta release, we are introducing initial versions of two complementary types of Insights Agent:

  1. Metamodel Insights Agent: this agent examines the repository’s metamodel and identifies patterns that reveal problems and gaps which can emerge as the repository is populated. In this first release, the agent itself is not visible to users of the Ardoq platform. It is run by the Ardoq product team, and only the resulting insights produced by this agent, which take the form of AI generated Reports and an associated Dashboard, are deployed to your Ardoq organization.

  2. Model Insights Agent: this agent examines your data - the components, references and all associated field values - looking for insights that can help you in your work as an Enterprise Architect. In this first release, these Agents are attached to specific Reports and run in their associated AI Assistant.

For this beta release, the two types of Insights Agents work independently, and both are limited to the metamodel that underpins Ardoq Foundation and models that conform to it. See Future Evolution of Insights Agents at the end of this article for details of how we intend these AI capabilities to grow.

Prerequisites

To use the beta release of Foundation Insights Agents your Ardoq organization must have the FND - Foundation Solution deployed. For new Ardoq customers, this is the Solution that is already deployed when you start working with Ardoq. For older Ardoq customers who do not have the Foundation Solution deployed, you need to follow these instructions to add the Foundation Solution to your Ardoq organization.

Access to the Foundation installation instructions will be provided soon

This beta release requires that the Foundation Metamodel remains unaltered. This means that the following must not have been changed:

  • Component type names in the Foundation metamodel

  • Reference type names in the Foundation metamodel

  • Field names in the Foundation metamodel

  • Report names (of those linked to Insights Agents)

Future releases will provide more robust support for adjusted metamodels.

To use the Model Insights Agents you must have access to the Report AI Assistant. To check this, open any report and confirm that the AI Assistant button is visible in the top right corner.

If you don’t see this button, you should first ensure that you have AI Features enabled. If you don’t, see How do I enable AI features in my Ardoq instance? for details. Once AI Features are enabled, you can turn on the Report AI Assistant in Organization Settings → Features (you will need to be an Admin user). After this, the AI Assistant button will be available in all your reports.

Deploying the Foundation Insights Agents

This beta release is restricted to Insights Agents that work with the metamodel and selected reports that belong to the Foundation solution.

Foundation Metamodel Insights Agent

To access the reports and dashboard created by the Foundation Metamodel Insights Agent, you need to request that they be deployed into your Ardoq organization. This will soon be possible by contacting Technical Support or your Customer Success Manager to arrange deployment. Successful deployment will add new assets to your existing asset folders:

  • In Solution Materials/Solution Assets/Foundation/Reports you will see a number of new reports. They will contain the suffix AIA, indicating that they were generated by an Ardoq Insights Agent. Their names indicate the nature of the patterns they focus on.

  • In Solution Materials/Solution Assets/Foundation/Dashboards you will see one new Dashboard named Foundation Insights - AIA. This dashboard presents the output of all the reports generated by the insights agent in a single view.

As a reminder, in this beta release the Metamodel Insights Agent itself cannot be configured or run by end users. What you see is the output of the agent, which is, for now, managed and run by Ardoq internally: a collection of AIA-labeled Reports and an AIA-labeled Dashboard. Note that the AIA Reports contain Gremlin code generated by the Metamodel Insights Agent. You should not attempt to modify this code, as it cannot be restored without redeploying the solution.

If you have changed the workspace names where Foundation components are held, you may update the list of a Report’s source workspaces by editing the report setup to ensure that the output contains your live data.

The Foundation Insights - AIA Dashboard contains a widget for each pattern identified by the agent following its analysis of the metamodel. This is a large set of possible insights, and it is likely that some are not relevant for your organization; for example, there may be component fields in the metamodel that you have decided not to populate for the time being. You can edit the Dashboard to rearrange the widgets, delete those that are not relevant, and change the coloring or type of widget to suit your needs. In this way, you tailor your use of the agent's insights. You should not amend the Gremlin code of the underlying reports.

Foundation Model Insights Agents

You do not need to take any action to gain access to the Model Insights Agents provided you have the Foundation solution deployed and the Report AI Assistant enabled (as described in Prerequisites above). In this initial release, the Model Insights Agents are automatically embedded within the Report AI Assistant associated with selected Foundation reports. In the list of Reports, you can identify those that contain Model Insights Agents by the agent icon that appears after the report name.

Note: in this first beta release, renaming the Report will cause the loss of this identifying icon.

Using the Foundation Insights Agents

Foundation Metamodel Insights Agent

For this beta release, the Metamodel Insights Agent is only represented by its output, which takes the form of a collection of generated Reports and an associated Dashboard. We recommend accessing these via the Dashboard Foundation Insights - AIA. This is divided into the following sections, reflecting the different underlying AI-generated Reports:

  • Ownership gaps

  • Coverage gaps (Applications)

  • Coverage gaps (Business Capabilities)

  • Strategic Coherence

  • Application Lifecycle

  • Organizational Structure

  • People Structure

Each section contains a collection of Pie Chart Widgets, each of which focuses on a pattern that might be worthy of deeper investigation. These patterns are ones identified by the Foundation Metamodel Insights Agent as being of potential architectural significance. Each one corresponds to a column in a report that was generated by the agent, containing the value True if a given component has data that matches the pattern.

The True slice is colored red and represents the proportion of components that match the pattern (i.e. those that require investigation). As with all dashboard widgets, clicking on a pie segment opens the widget's underlying report, filtered to show just those components represented in the selected segment. For example, clicking on the True segment of the pie titled Unowned Critical Applications opens a filtered view of the AI-generated report Application ownership gaps - AIA.

The user can now see the full list of all unowned applications that have a high score (4 or 5) for business criticality. Note that the AI Assistant button remains available in the top right-hand corner should the user wish to explore the content of the report conversationally in more detail.
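To illustrate how such a pattern column works, the flag can be thought of as a simple boolean check over each component's data. The sketch below is purely illustrative - the field names (business_criticality, owner) and the data shape are assumptions, not Ardoq's actual schema or the agent's generated Gremlin:

```python
# Hypothetical sketch of a pattern column like "Unowned Critical
# Applications". Field names are illustrative assumptions only.

def unowned_critical(app: dict) -> bool:
    """True if the application is highly business-critical (4 or 5)
    but has no owner assigned."""
    critical = app.get("business_criticality", 0) >= 4
    unowned = not app.get("owner")
    return critical and unowned

apps = [
    {"name": "CRM", "business_criticality": 5, "owner": "Jane Doe"},
    {"name": "Legacy ERP", "business_criticality": 4, "owner": None},
    {"name": "Wiki", "business_criticality": 2, "owner": None},
]

# The dashboard's True slice corresponds to the components this flags:
flagged = [a["name"] for a in apps if unowned_critical(a)]
print(flagged)  # ['Legacy ERP']
```

Clicking the True segment is, in effect, applying this filter to the report's rows.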

Tailoring the Foundation Insights - AIA Dashboard

The Foundation Metamodel Insights Agent has identified a large number of patterns: 30 in all, across 7 different AI-generated Reports.

Not all of these will be interesting to all customers. For example, if you have not made use of the Reports To reference that allows you to show who reports to whom, four of the Dashboard widgets, under the heading People Structure, will be of no interest to you. We’ve chosen to leave all of the insights identified by the Agent in the Dashboard rather than impose our own preferences and limit your choice. The consequence of that decision is a large Dashboard with widgets that will vary in interest depending on how much of the Foundation metamodel you are using, and which of the patterns might be a useful “red flag” for you, given your organizational context.

In the future, the Agent will become part of the Ardoq platform, at which point we anticipate giving you the opportunity to configure its output. For this release, we recommend two ways that you can safely change the output of the Agent (the generated Reports and their accompanying Dashboard):

  1. If a Pie Chart widget is of no interest to you, and never will be, you can edit the Dashboard and delete the widget. You are also, of course, free to reorganize the Dashboard, change the type of widget for any of the patterns, or even create additional Dashboards to “feed off” the AIA Reports.

  2. If the underlying Reports contain columns that are of no interest to you, and never will be (e.g. because you will not populate the fields or references they depend upon), you can edit the Report and remove unwanted columns, or rearrange their order, using the Data presentation section of the Report Builder (see below).

You should not, however, amend the Report's agent-generated Gremlin code.

Foundation Model Insights Agent

Model Insights Agents are accessed via the AI Assistant attached to individual Reports. For this beta release, Model Insights Agents are only available for the following reports:

  • Applications - FND

Agents linked to further Foundation reports are under development and will be released in the coming weeks.

When triggered, each Model Insights Agent examines the data behind the report, looking for evidence of identified patterns. These patterns may correspond to gaps or errors in the data, or to anomalies worthy of further investigation. An anomaly may turn out to be a data error, or it may simply be explained by your knowledge of the distinctive nature of your organization. Alternatively, it may indicate an issue in your organization, such as conflicting data from different sources, that represents an opportunity for the architect to add value: bringing stakeholders together to build better understanding, leading to better business decisions. For example, an application may be considered of the highest business criticality by its business owner, yet receive only a low level of support from the IT Service team. Some of these patterns reveal opportunities for new architectural initiatives - potential stories worth developing into action that can lead to tangible business value.

Taken together, these aspects that are investigated by the Model Insights Agent are often referred to by architects as “architectural smells” (Garcia et al. 2009). The value of the Insights Agents is that they have the potential to spot valuable insights among a sea of report data that would likely be missed by a human expert architect.

Each of the Agents is accessed and triggered in the same way. Open the report that contains a Model Insights Agent and open the AI Assistant by clicking on its button in the top right corner of the Report.

The AI Assistant panel will open on the right-hand side of the Report. Towards the bottom of this panel, just above the prompt input field, you will see a button inviting you to run the Insights Agent.

Click on this button to invoke the Insights Agent. It will take a minute or two to fetch the data associated with the Report, perform some calculations, and analyze it from a number of different perspectives before populating the upper portion of the AI Assistant window with the output. At the end of the output, you can ask to see the rationale behind the agent's recommendations.

The insights remain in the agent context, so you can continue to discuss the results and recommendations with the AI Assistant.

Like all chats conducted with the AI Assistant, you can recall past conversations, including invocations of the Insights Agent, by clicking on the history icon at the top of the AI Assistant window.

How Ardoq Insights Agents work

Metamodel Insights Agent

This AI agent examines a metamodel and identifies a set of metamodel policies for healthy EA data that can be monitored with rules. For this beta release, the agent is not visible to users of the Ardoq platform. Instead, it has been run independently with the Foundation metamodel, and its output is being made available via a set of AIA Reports that encode the identified rules, and an AIA Dashboard from which their application to your data can be visualized.

To analyse the metamodel, the AI Agent needs a definition of the metamodel in all its richness of expression. We know that using a simple description, as one does in a text prompt to a Large Language Model, results in knowledge getting "lost in translation": parts of the metamodel are misunderstood, while other parts are hallucinated, resulting in inaccurate and inconsistent recommendations. To overcome this, we use an ontological representation that provides far greater information density about the metamodel. Combined with the EA reasoning rubrics, the AI Agent is able to produce much higher quality policies and rules than we've seen using manual development.

The result is a set of recommended patterns that are genuine insights, which are then translated automatically into a set of reports ready for consumption and visualization by Ardoq users.
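To make the contrast with a free-text prompt concrete, the sketch below shows a structured, machine-readable slice of a metamodel. It is a minimal illustration only - the type, reference, and field names are assumptions loosely based on the concepts mentioned in this article, not the actual ontological format Ardoq uses:

```python
# A machine-readable slice of a metamodel, as opposed to a free-text
# prose description. All names here are illustrative assumptions.
metamodel = {
    "component_types": {
        "Application": {"fields": ["Criticality", "Service Level", "Live"]},
        "Business Capability": {"fields": []},
    },
    "reference_types": [
        # (source component type, reference type, target component type)
        ("Application", "Realizes", "Business Capability"),
    ],
}

def policy_fields_exist(component_type: str, fields: list) -> bool:
    """Check mechanically that every field a candidate policy refers to
    actually exists on the given component type - the kind of grounding
    a structured representation makes possible, and a prose prompt does not."""
    known = metamodel["component_types"][component_type]["fields"]
    return all(f in known for f in fields)

print(policy_fields_exist("Application", ["Criticality", "Service Level"]))  # True
print(policy_fields_exist("Application", ["Hallucinated Field"]))  # False
```

Because every type, reference, and field is explicit, a generated rule that mentions a non-existent field can be rejected before it ever reaches a report.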

Model Insights Agents

Each Model Insights Agent analyses a Report’s underlying data from the perspectives of multiple policies. Below is a summary, by Report, of the policies considered, and the corresponding questions addressed by the Report’s agent.

Report: Applications - FND

Policy: The distribution of values of Criticality across the portfolio should broadly match an expected pattern. This is close to a normal distribution, but with potentially more applications at the lower end of the scale than the upper.
Question: Does the distribution of Criticality values (i.e. the business criticality of each application) across the application portfolio fit an expected pattern, or are there anomalies in the distribution that should be brought to the attention of an expert architect?

Policy: The distribution of values of Service Level across the portfolio should broadly match an expected pattern. This should contain a relatively small number of applications given the highest levels of service, with more applications at the middle and lower parts of the scale.
Question: Does the distribution of Service Level values (i.e. the service level provided by the IT service team for each application) across the application portfolio fit an expected pattern, or are there anomalies in the distribution that should be brought to the attention of an expert architect?

Policy: The distribution of Lifecycle Phase values across the portfolio should broadly match an expected pattern. This is related to the maturity of the organization's IT management and the degree of change being pursued by the organization at the present time. A stable and mature organization will have the majority of its portfolio live, with a small proportion both in development and phasing out. Lower proportions of "Live" indicate a greater degree of turbulence for the organization. If there is a mismatch between the data and the organization's current state, there are likely some errors or gaps in the data.
Question: Does the distribution of Lifecycle Phase values (i.e. the current lifecycle phase of each application) across the application portfolio fit an expected pattern, or are there anomalies in the distribution that should be brought to the attention of an expert architect?

Policy: For each Application, the value assigned to Criticality should be consistent with the nature of the Application in question.
Question: Is the value of Criticality for each application consistent with the type of application and how it is used in the organization (the capabilities it realizes, and the org units that consume it)?

Policy: For each Application, the value assigned to Service Level should be consistent with the nature and use of the Application in question.
Question: Is the value of Service Level for each application consistent with the type of application and how it is used in the organization (the capabilities it realizes, the org units that consume it, and its stated business criticality)?

Policy: The Live date range should be consistent with other information about the application (e.g. it should not be currently live whilst also being marked as "retired").
Question: For each application in the portfolio, is the value of Live (the date range recording the period during which the application was, is or will be deployed and actively used) consistent with the rest of the information provided about that application?

Policy: Multiple applications that are similar, and realize the same business capabilities, might present opportunities to consolidate and rationalize the application estate.
Question: Are there groups of applications that offer similar functionality and might, therefore, represent candidates for consolidation? If so, to help prioritization, give a very rough estimate of the likely level of savings that might be achieved.
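As a rough sketch of how a distribution policy like the first one might be checked mechanically, the snippet below compares observed Criticality shares against an expected shape. The expected distribution and tolerance are illustrative assumptions for this sketch, not Ardoq's actual values:

```python
from collections import Counter

# Hypothetical expected shape for Criticality (scale 1-5): roughly
# normal, skewed slightly toward the lower end. Illustrative only.
expected_share = {1: 0.15, 2: 0.25, 3: 0.30, 4: 0.20, 5: 0.10}

def anomalies(criticalities, tolerance=0.15):
    """Return the scale values whose observed share of the portfolio
    deviates from the expected share by more than `tolerance`."""
    counts = Counter(criticalities)
    total = len(criticalities)
    return [
        value for value, share in expected_share.items()
        if abs(counts.get(value, 0) / total - share) > tolerance
    ]

# A portfolio where nearly everything is marked "5" should be flagged:
portfolio = [5] * 8 + [1, 3]
print(anomalies(portfolio))  # [2, 3, 4, 5]
```

Deterministic checks like this are the sort of quantitative question the agents answer with generated code rather than relying on an LLM's arithmetic.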

As with the Metamodel Insights Agent, we’ve developed unique ways to give the Model Insights Agents detailed context to be considered alongside the data that lies behind the report. This gives the AI agent detailed, relevant and accurate information, pertinent to each policy, to work on.

Could this functionality be replicated using the Ardoq MCP Server?

Something similar to these Agents can indeed be accomplished with suitable prompts in an MCP client (such as Microsoft Copilot) attached to Ardoq's MCP Server. Depending on the prompts used, however, such an approach is likely to give less accurate and less consistent responses, and will likely degrade or fail with larger reports. This is where the design of Ardoq's Insights Agents has a number of distinct advantages:

  • The use of an ontological description ensures that the metamodel is accurately represented to the AI agent;

  • For quantitative analysis, we use code generation techniques to ensure correct answers to questions that GenAI is generally poor at addressing;

  • EA reasoning rubrics are provided that improve the quality of AI analysis beyond what can be achieved with regular AI model training.

Future Evolution of Insights Agents

This beta release of Foundation Insights Agents is just the first fruits of our research and development of insights agents. We anticipate further evolution of Insights Agents along the following lines:

  • This initial release has focused on the metamodel that lies behind the Foundation solution, and its resulting populated model. Future versions of both Metamodel and Model Insights Agents will be developed to support analysis of the metamodels and populated models from other Ardoq Solutions (such as Application Rationalization), and potentially metamodels that have been extended beyond them.

  • The Metamodel Insights Agent is not currently embedded in the Ardoq platform. In this beta release, Ardoq users only see its output. A future version will incorporate the Metamodel Insights Agent in the Ardoq platform, allowing it to be triggered by admin users.

  • Model Insights Agents will in future not be constrained to viewing model data only via Reports. A Report is, in effect, a limited selection from the overall model - a sub-graph. Future versions of Model Insights Agents will be able to analyse a selection of components from the perspective of Ardoq Viewpoints, enabling them to analyze any selection of data from Ardoq's knowledge graph.

  • We continue to look at ways to give the AI Agents improved context knowledge, which will lead to even more valuable insights among the Agents’ recommendations.

  • An Insights Agent Management Framework will give users control over how Insights Agents are run. Some agents are primarily focused on data quality issues. Since a repository undergoes continuous evolution, it makes sense for these agents to be run regularly and to alert administrators or interested parties only when something requiring investigation or action has been found. Other agents may be run less frequently or as one-off exercises. The Insights Agent Management Framework will facilitate the configuration and operation of Insights Agents, including selection of the policies and rules that each agent should monitor.

  • At present, the Metamodel Insights Agent and Model Insights Agents are independent of each other. There is exciting potential for these to come together: the Metamodel Insights Agent identifies patterns worth looking out for from its examination of the metamodel, and after highlighting these to an expert architect, can offer to automatically generate a Model Insights Agent to look for these patterns on behalf of the expert architect.

With this beta release of Ardoq Insights Agents, we are keen to learn from the experiences and ideas of early adopters. If you have suggestions for future iterations of Ardoq Insights Agents, please visit Ardoq AI Labs and submit your ideas.
