
AI Experiment: Digital Twin - Analyzing Operational KPIs with the Architecture

Ardoq Labs experiment to explore possibilities with digital twin functionality when combining process mining information with enterprise architecture information in Ardoq

Written by Jason Baragry
Updated over 5 months ago

Objective

The objective is to explore digital twin of the organization (DTO) possibilities by performing AI analysis of architecture information together with operational KPIs, such as those available from process mining tools (e.g., Celonis) or other analytics environments.

Experiment 1: Analysing the Correlation Between KPI Degradation and IT Root Causes

Problem:

IT always faces more demand than it has capacity to deliver improvements. There are many requests for process optimization and transformation, and each requires capacity just to identify what the problem is, the potential root cause in IT, and the scope of the improvement needed. It is manually intensive work that requires time from architects, process owners, system owners, and integration experts, amongst others.

Prioritising that time and capacity is hard. The result is that IT is perceived as slow and reactive, and struggles to act as a strategic partner for business change.

Hypothesis

This experiment tests whether AI can be used to perform parts of that manual analysis and save time for those people.

Specifically to:

  • Automatically detect KPI degradation

  • Identify the IT components needed to support the Process measured by that KPI, i.e., Applications, relevant integrated Applications, and their required Infrastructure

  • Assess the correlation between the KPI degradation and the relevant Application quality assessments in Ardoq. These include ongoing Incidents, Incident trends over time, Solution Health Checks (architecture reviews), Technical Debt, and IT Risks.

  • Based on these correlation assessments, recommend next steps and the people to involve in making the decision
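
The detection step above can be sketched with a simple statistical baseline. This is a hypothetical illustration, not the AI agents the experiment used: the function name, window size, and threshold are all assumptions, and it treats lower values as worse.

```python
# Hypothetical sketch: flag KPI degradation against a rolling baseline.
# Names (detect_degradation, window, threshold) are illustrative, not Ardoq APIs.
from statistics import mean, stdev

def detect_degradation(values, window=6, threshold=2.0):
    """Return True if the latest KPI value falls below the baseline of the
    preceding `window` observations by more than `threshold` standard
    deviations (this sketch assumes lower = worse)."""
    if len(values) < window + 1:
        return False  # not enough history to judge
    baseline = values[-window - 1:-1]
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return values[-1] < mu
    return (mu - values[-1]) / sigma > threshold

# A stable KPI followed by a sharp drop in the latest observation
history = [95, 96, 94, 95, 96, 95, 80]
print(detect_degradation(history))  # → True
```

In the experiment this classification was performed by an AI model with process context rather than a fixed threshold, which is what allows it to handle the varied trend shapes mentioned in the results below.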

Benefits

AI can take over these manual analysis tasks, freeing the architect to spend more capacity on delivering business-improving changes.

This also has the potential for IT to proactively suggest improvements and become an enabler of the agile enterprise.

Experiment Execution

The experiment focuses on identifying KPI degradation and correlating it with incidents, incident trends, and solution health checks.

A separate AI agent performs each of these analyses.

Part of the demo shows "AI manipulating Ardoq". To be clear, the AI test harness accesses data directly through the Ardoq internal API. The UI manipulation shown is done to generate screenshots for the final report. This usually happens in the background but is performed in the foreground in the demo to visualize the work being performed by the agents.

Results

KPI degradation detection

  • Observed positive results for detecting KPI degradation based on the KPI details and details of the process being measured.

  • Tested a number of different KPI trend variations beyond those shown in the demo.

  • Some false positives and false negatives were observed.

  • Standard foundation AI models, together with multi-shot prompting and other context information, improved detection results.
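
The multi-shot prompting mentioned above can be sketched as prepending labelled example trends to the prompt so the model can pattern-match the new series. The prompt wording, labels, and function name below are illustrative assumptions, not the experiment's actual prompts.

```python
# Hypothetical sketch of multi-shot (few-shot) prompting for trend
# classification. Example data and prompt text are illustrative.
FEW_SHOT_EXAMPLES = [
    ("100, 101, 99, 100, 98, 97, 95, 92", "degrading"),
    ("100, 99, 101, 100, 102, 100, 101, 99", "stable"),
    ("90, 91, 93, 95, 96, 98, 99, 101", "improving"),
]

def build_prompt(kpi_name, series):
    """Assemble a prompt with labelled examples followed by the new series."""
    lines = ["Classify the KPI trend as degrading, stable, or improving.", ""]
    for values, label in FEW_SHOT_EXAMPLES:
        lines.append(f"KPI values: {values}")
        lines.append(f"Trend: {label}")
        lines.append("")
    lines.append(f"KPI: {kpi_name}")
    lines.append(f"KPI values: {', '.join(str(v) for v in series)}")
    lines.append("Trend:")  # the model completes this line
    return "\n".join(lines)

print(build_prompt("Invoice cycle time", [12, 12, 13, 15, 18, 22]))
```

The "other context information" in the results would be appended similarly, e.g. a description of the process the KPI measures.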

Incident and Incident trend correlation

  • Observed positive results for correlating both ongoing Incidents and historical Incident trends. This was based on the KPI and process details and details of the recorded incidents.

  • Tested a number of different incident types and trends matching KPI degradation trends.

  • Some false positives and false negatives were observed.

  • Standard foundation AI models, together with multi-shot prompting and other context information, improved detection results.

Solution Health Check (architecture review) correlation

  • Observed positive results for correlating solution health check information with KPI degradation. This was based on the KPI and process details together with details of solution health checks.

  • Tested a number of different types of quality criteria and textual descriptions of issues.

  • Some false positives and false negatives were observed.

  • Standard foundation AI models, together with multi-shot prompting and other context information, improved detection results.

Conclusion

The positive results show this approach could form the basis for a production product implementation with significant customer benefits.


Experiment 2: Analysing Benefit Realization Using KPIs and Strategy-to-Execution Changes

Problem:

Empirical research shows actively tracking benefit realization strongly correlates with successful organizations. For example:

  • 59% of successful strategic execution is connected to predictive measures

    • Tomas Nielsen. “Turning Strategy Into Reality — How to Successfully Execute on Your Strategic Goals.” Gartner IT Symposium / Xpo, Barcelona, Spain, November 2022.

  • 40%-52% of IT spend is waste or value leakage

    • IBM study of CIOs, early 2000s.

    • Enterprise Value: Governance of IT Investments : Getting Started with Value Management. IT Governance Institute, 2008.

  • The existence of a benefits management plan has a strong impact: 48% better client benefit

    • Holgeid, Knut Kjetil, Magne Jørgensen, Dag I. K. Sjøberg, and John Krogstie. “Benefits Management in Software Development: A Systematic Review of Empirical Studies.” IET Software 15, no. 1 (2021): 1–24. https://doi.org/10.1049/sfw2.12007.

But the work needed to track objectives, benefits, and completed initiatives is time-consuming and difficult to prioritize.

Hypothesis

This experiment tests whether AI can be used to perform aspects of benefit realization analysis and save the manual effort currently needed.

Specifically to:

  • Automatically detect KPI improvement on Processes

  • Identify whether IT components that support the Process were improved by recently completed IT initiatives.

  • Determine whether the business cases for those initiatives are measured using the same KPIs.

  • And, finally, determine whether the proposed scope of the IT initiative was likely to alleviate known IT problems that would result in the observed improvement.
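
The chain of checks above only supports a benefit-realization claim when every link holds. As a minimal sketch (the function and argument names are illustrative, and in the experiment the non-deterministic checks were performed by AI agents):

```python
# Hypothetical sketch: a benefit is considered plausibly realized only if
# every check in the chain holds. All names are illustrative.
def benefit_plausibly_realized(kpi_improved, components_changed,
                               same_kpi_in_business_case,
                               scope_addresses_known_issues):
    """Combine the four hypothesis checks into a single verdict."""
    return all([kpi_improved, components_changed,
                same_kpi_in_business_case, scope_addresses_known_issues])

# KPI improved and components changed, but the initiative's scope did not
# address the known issues, so the improvement cannot be attributed to it.
print(benefit_plausibly_realized(True, True, True, False))  # → False
```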

Benefits

  • AI can be used to perform aspects of benefit realization, allowing the company to obtain the known benefits without the existing manual effort

Experiment Execution

The experiment focuses on identifying KPI improvement and correlating it with IT projects, business cases, and solution health checks.

Separate AI agents are executed to perform each of these analyses.

Part of the demo shows "AI manipulating Ardoq". To be clear, the AI test harness accesses data directly through the Ardoq internal API. The UI manipulation shown is done to generate screenshots for the final report. This usually happens in the background but is performed in the foreground in the demo to visualize the work being performed by the agents.

Results

Note: We don't publish detailed results on Ardoq Labs.

KPI improvement detection

  • Observed positive results for detecting KPI improvement based on the KPI details and details of the process being measured.

  • Tested a number of different KPI trend variations beyond those shown in the demo.

  • Some false positives and false negatives were observed.

  • Standard foundation AI models, together with multi-shot prompting and other context information, improved detection results.

Identification of recently completed change initiatives and business cases measured using the same KPIs.

  • This was deterministic functionality, performed without AI.
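
This deterministic step amounts to a simple filter-and-match over initiative records. A minimal sketch, assuming illustrative data shapes and field names (none of these are Ardoq data structures):

```python
# Hypothetical sketch of the deterministic step: find recently completed
# initiatives whose business case is measured by the same KPI as the
# improved process. Field names and data are illustrative.
from datetime import date

initiatives = [
    {"name": "ERP upgrade", "completed": date(2024, 11, 1),
     "business_case_kpis": {"invoice_cycle_time"}},
    {"name": "CRM rollout", "completed": date(2023, 2, 1),
     "business_case_kpis": {"lead_conversion_rate"}},
]

def matching_initiatives(improved_kpi, since, items):
    """Initiatives completed on or after `since` whose business case
    references the improved KPI."""
    return [i["name"] for i in items
            if i["completed"] >= since
            and improved_kpi in i["business_case_kpis"]]

print(matching_initiatives("invoice_cycle_time", date(2024, 6, 1), initiatives))
# → ['ERP upgrade']
```

Because the match is an exact join on KPI identity and completion date, no inference is needed, which is why this part required no AI.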

Solution Health Check (architecture review) correlation

  • Observed positive results for correlating solution health check information with KPI improvement and change initiative descriptions. This was based on the KPI and project details together with details of solution health checks.

  • Tested a number of different types of quality criteria and textual descriptions of issues.

  • Some false positives and false negatives were observed.

  • The correlation was not as strong as in Experiment 1. This analysis depended heavily on the quality of the solution health check and project details, with less information available as a basis for inference compared to Experiment 1.

  • Standard foundation AI models, together with multi-shot prompting and other context information, improved detection results.

Conclusion

  • The encouraging results show this approach should be explored further, testing other aspects of benefits management and realization to see whether the correlations can be strengthened with more information.
