
Capability Based Planning: Purpose, Scope, & Rationale

Capability-Based Planning enables you to systematically evaluate business capability quality across People, Process, Technology, and Data dimensions to identify gaps, prioritize improvements, and drive targeted capability enhancements.

Written by Adam Walls
Updated over 2 weeks ago

Capability Based Planning (CBP) enables organizations to systematically translate strategic objectives into targeted capability investments by evaluating business capabilities across four critical dimensions: People, Process, Technology, and Data. Through structured assessment, collaborative prioritization, and evidence-based decision-making, CBP identifies which capabilities require enhancement and what specific improvements will deliver the greatest strategic value.

We have created a new component and workspace called the Capability Health Check to serve as the diagnostic foundation of this planning approach, providing organizations with flexible assessment options matched to their needs. Organizations can choose between Lite assessments (evaluating 4 dimension-level requirements) for rapid screening, portfolio-level prioritization, and ongoing monitoring, or Full assessments (evaluating 16 specific quality requirements) for detailed diagnostic analysis, root cause identification, and targeted improvement planning. Both assessment modes generate quantified exposure scores that prioritize capability gaps and inform the creation of capability deltas for strategic realization.

More information can be found in the Capability Based Planning (CBP): Metamodel document.


Introduction

Organizations often struggle to understand what specific factors are hindering their business capability performance. Without a structured approach to evaluating capability quality across multiple dimensions, improvement efforts become scattered, resources are misallocated, and strategic initiatives fail to address the underlying dynamics and constraints that shape performance.

Capability Based Planning provides organizations with a systematic methodology for translating strategic objectives into targeted capability improvements. At its core, CBP answers a fundamental question: Which capabilities require investment, and what specific enhancements will deliver the greatest strategic value?

The approach recognizes that capability effectiveness depends on the quality and alignment of four fundamental dimensions working together: People, Process, Technology, and Data (PPTD). By establishing a clear understanding of capability health across these dimensions, organizations can make evidence-based investment decisions that align resources with strategic priorities.

The Capability Health Check serves as the diagnostic mechanism within this planning framework: a structured, collaborative assessment that evaluates business capabilities across all four PPTD dimensions. Through systematic evaluation of quality requirements, stakeholder workshops, and quantified prioritization, the Health Check transforms subjective capability concerns into objective evidence about where quality gaps exist and which improvements will deliver the greatest impact.

This diagnostic insight becomes the foundation for creating Capability Deltas: representations of a gap in functionality or execution from the business perspective that lead to actionable improvements addressing the identified quality gaps. Capability Deltas first appeared in the Strategy to Execution solution described here. These deltas then flow into the broader Strategy to Execution framework, ensuring that capability investments are:

  • Evidence-based: Grounded in systematic assessment rather than opinion

  • Strategically aligned: Connected to organizational objectives

  • Holistically designed: Addressing all relevant PPTD dimensions

  • Prioritized: Focused on highest-impact opportunities

  • Trackable: Linked to initiatives with measurable outcomes

By integrating capability assessment, Capability Delta identification, and strategic realization, Capability Based Planning (CBP) creates a complete pathway from understanding capability performance to delivering targeted improvements that advance strategic goals.

Organizations can choose between two assessment approaches based on their needs:

  • Lite Capability Health Check: Evaluates overall quality across the four dimensions (People, Process, Technology, and Data), providing a rapid assessment suitable for capability portfolio screening, regular monitoring, and identifying which capabilities warrant deeper investigation.

  • Full Capability Health Check: Evaluates 16 specific quality requirements (across the four dimensions), providing detailed diagnostic insights for comprehensive improvement planning and targeted capability enhancement.

Businesses find Capability Health Checks useful for systematically identifying quality gaps, building stakeholder consensus on priorities, and creating targeted improvement initiatives that address root causes rather than symptoms. The Lite assessment enables quick identification of problem areas, while the Full assessment pinpoints specific quality deficiencies requiring attention.

PPTD Framework Overview - showing four pillars: Data, Process, People, Technology together with their respective quality requirements beneath each

Using the PPTD framework, we can see how the four dimensions provide a comprehensive perspective on capability quality, with each dimension containing specific quality requirements that collectively determine capability health.

Purpose

Ardoq's Capability Based Planning Solution enables organizations to deliver maximum capability performance by systematically evaluating, prioritizing, and addressing quality gaps across the People, Process, Technology, and Data dimensions. It aligns stakeholder perspectives, creates objective prioritization through exposure scoring, and connects quality concerns directly to improvement initiatives, answering critical questions about what is affecting capability performance and which improvements will deliver the greatest impact.

Questions Solution Addresses

1. What specific quality issues are affecting this capability's performance?

  • Which quality requirements matter most to our stakeholders (importance)?

  • Where are we falling short in meeting these requirements (concern)?

  • Which dimension (People, Process, Technology, or Data) has the most critical quality gaps?

Bubble chart showing the distribution and prioritization of the quality requirements

2. How should we prioritize improvements across multiple quality concerns?

  • Which quality gaps have the highest exposure (importance × concern)?

  • Where will improvements deliver the greatest impact on capability performance?

  • How should improvements be ranked by importance?

Dependency map showing the stakeholder’s level of importance and concern with each quality requirement and the deltas produced to address the highest level of exposure (Importance × Concern)

3. How do we connect quality concerns to actionable improvements?

  • Which business gaps exist and need to be addressed to enable the company to achieve its strategic objectives in the coming time period?

These questions can be answered by:

  • Conducting collaborative workshop assessments with cross-functional stakeholders

  • Scoring each quality requirement on both Importance (1-5) and Level of Concern (1-5)

  • Looking at the calculated Exposure scores (Importance × Concern) to objectively prioritize quality gaps

  • Creating capability deltas that address the highest-priority quality concerns

  • Connecting deltas to initiatives through Strategy to Execution for implementation tracking

Assessing capability quality using the PPTD framework provides a clear, holistic view of what factors are affecting capability performance across all critical dimensions.

Key benefits of this practice

  • Enables holistic quality assessment: Evaluating capabilities across People, Process, Technology, and Data ensures comprehensive coverage of all factors affecting performance. By examining all four dimensions systematically, this approach prevents the common mistake of technology-only assessments that fail to address process inefficiencies or skills gaps, fostering a complete understanding of capability health.

  • Creates objective, data-driven prioritization: The Exposure calculation (Importance × Concern) transforms subjective opinions into quantifiable priorities, eliminating debate about which concerns matter most. This mathematical approach creates transparency and builds stakeholder trust in resource allocation decisions, ensuring investments target the highest-impact improvements.

  • Builds stakeholder consensus and ownership: Workshop-based assessment brings together diverse perspectives from across the organization, creating shared understanding of quality gaps and collective commitment to improvement priorities. This collaborative approach increases buy-in and implementation success rates compared to top-down assessment approaches.

  • Facilitates actionable, trackable improvements: Direct integration between assessment findings and capability deltas ensures quality concerns immediately translate into improvement actions. Connection to Strategy to Execution enables tracking from identified concern through delta creation to realized improvement, closing the assessment-to-action gap and demonstrating return on capability investments.

Scope and Rationale

Several distinct approaches to Capability Based Planning have evolved across different contexts and applications. These include:

  • The Military/Defense CBP (original approach from US DoD in the 1990s) focuses on defense capabilities across diverse threat scenarios with highly structured processes and long time horizons of 10-20 years.

  • Business Architecture CBP centers on business capabilities as stable abstractions, using capability heat maps and maturity assessments to identify gaps and prioritize improvements within 3-5 year planning horizons.

  • Portfolio Management CBP views capabilities through an investment lens, balancing portfolios across run/grow/transform categories with emphasis on ROI and benefits realization over 1-3 years.

  • Scenario-Based Strategic Planning CBP emphasizes developing multiple future scenarios to test capability requirements, focusing on strategic flexibility and options thinking across 5-15 year horizons.

  • Agile/Adaptive CBP applies iterative development cycles with rapid prototyping and continuous adaptation based on feedback.

  • Risk-Based CBP prioritizes capabilities based on risk exposure and focuses on resilience and business continuity.

  • Outcomes-Based CBP (common in the public sector) defines capabilities in terms of outcomes achieved and stakeholder value delivery.

  • Technology-Centric CBP focuses on technical capabilities and digital transformation through technology roadmaps and architecture.

Most organizations in practice use hybrid approaches that combine elements from multiple methodologies, with the most common being Business Architecture + Portfolio Management for enterprises, Scenario-Based + Risk-Based for uncertain environments, and Agile + Business Architecture for digital transformation. The right approach depends on organizational context (industry, size, maturity), planning objectives (transformation vs. optimization), environmental conditions (uncertainty, rate of change), and available resources (expertise, data, time).

Our goal with the Capability Based Planning solution is to initially provide a structured assessment for understanding any Business Capability’s quality across the four dimensions that determine performance. Capability Health Checks provide valuable insights from collaborative stakeholder engagement that identify specific improvement opportunities and create consensus on priorities. This assessment follows the same approach as other assessments in Ardoq such as the Solution Health Check. We may extend this in a later version to incorporate other aspects.

What do we mean by Quality?

Quality in capability assessments is about how well the capability meets stakeholder expectations and requirements across all dimensions that affect its performance. It is fitness for purpose measured against specific, observable criteria.

Quality is evaluated across multiple perspectives because capabilities are complex systems. A capability might have excellent technology but poor process efficiency, or highly skilled people but inadequate data quality. Only by examining all dimensions can we understand true capability health.

"Building effective data management capabilities requires solving a complex, multi-dimensional problem. Breaking down the problem into its components makes it easier to pinpoint an organization's weaknesses, and be specific when building a roadmap for improvement. Looking at them under the length of people, process, and technology ensures an exhaustive analysis of factors that make an organization successful." https://www.researchgate.net/publication/283226690_Three_New_Dimensions_to_People_Process_Technology_Improvement_Model

In Full Capability Health Checks, quality is measured through 16 specific requirements spanning the four dimensions. Each requirement represents a measurable aspect of capability performance that stakeholders can evaluate based on both its importance to success and their level of concern about current performance. Quality improvements are realized through targeted capability deltas that address the highest-exposure gaps identified in the assessment.

What is the PPTD Framework?

The PPTD Framework is a structured approach to capability assessment that evaluates quality across four interdependent dimensions: People, Process, Technology, and Data. Each dimension contains specific quality requirements that collectively determine capability effectiveness. The framework recognizes that capability performance depends on all four dimensions working in harmony.

Excellence in one dimension cannot compensate for deficiencies in others; technology alone cannot overcome process inefficiency, just as highly skilled people cannot succeed with poor data quality. A PPTD assessment provides a systematic method for identifying which specific quality requirements need attention and objectively prioritizing improvements based on their exposure (importance to success multiplied by level of concern about current performance).

Key Characteristics of Capability Health Checks

Stakeholder-Focused:

Capability Health Checks are centered on gathering diverse stakeholder perspectives about capability quality. They bring together people from across the organization who realize, implement, interact with, or depend on the capability to build a shared understanding of quality gaps.

Dimension-Balanced:

The framework ensures equal attention to People, Process, Technology, and Data dimensions. Full assessments systematically evaluate all 16 quality requirements rather than focusing only on technical or operational concerns.

Collaborative and Consensus-Driven:

Assessments are conducted in workshop settings where stakeholders collectively discuss and agree on scoring. This collaborative approach builds ownership and ensures priorities reflect diverse perspectives rather than individual opinions.

Quantitatively Prioritized:

The Exposure calculation (Importance × Concern) creates objective, transparent prioritization of quality gaps. This mathematical approach eliminates subjective debate about which concerns should receive resources and attention.

Action-Oriented:

Assessment findings directly inform capability delta creation, ensuring quality concerns translate immediately into improvement plans. Integration with Strategy to Execution connects deltas to initiatives for implementation tracking.

Choosing Between Lite and Full Assessments

The Capability Health Check solution offers two assessment formats, allowing organizations to match their assessment approach to their specific context, time constraints, and improvement objectives.

Lite Assessment (4 Requirements)

The Lite Capability Health Check evaluates overall quality across the four PPTD dimensions without drilling into specific characteristics within each dimension.

What It Assesses:

  • People: Overall assessment of workforce diversity, skills, and availability

  • Process: Overall assessment of process effectiveness, efficiency, reliability, and completeness

  • Technology: Overall assessment of technology strategic alignment, functionality, security, reliability, and adaptability

  • Data: Overall assessment of data accuracy, availability, relevance, and completeness

Recommended For:

  • Initial capability portfolio screening to identify which capabilities need deeper attention

  • Regular monitoring and health checks (quarterly or bi-annual reviews)

  • Quick prioritization across multiple capabilities to inform investment decisions

  • Organizations new to structured capability assessment and seeking to build momentum

  • Rapid assessment when time or stakeholder availability is constrained

  • Validation checks after implementing improvements to confirm dimensional health

Typical Duration: 1-2 hours per capability

Output: Directional guidance identifying which dimensions (People, Process, Technology, Data) have quality concerns requiring attention, with high-level exposure scores guiding prioritization.

Capability Deltas Created: Dimension-level improvement initiatives (e.g., "Improve Process quality for Order Management capability")

Full Assessment (16 Requirements)

The Full Capability Health Check evaluates specific quality characteristics within each dimension, providing detailed diagnostic precision.

What It Assesses:

People Dimension (3 specific requirements):

  • Diversity: Variety of perspectives, backgrounds, and approaches

  • Skills: Competencies, expertise, and capabilities of individuals

  • Availability: Sufficiency of staffing levels and time allocation

Process Dimension (4 specific requirements):

  • Effectiveness: Degree to which processes achieve intended outcomes

  • Efficiency: Optimization of resource utilization, time, and effort

  • Reliability: Consistency, predictability, and dependability of execution

  • Completeness: Comprehensiveness of documented processes and procedures

Technology Dimension (5 specific requirements):

  • Strategic Alignment: Support for strategic objectives and business direction

  • Functionality: Completeness and appropriateness of features and capabilities

  • Security: Protection of systems, data, and processes from threats

  • Reliability: Stability, uptime, and dependability of technology systems

  • Adaptability: Flexibility and ease of modification to meet changing requirements

Data Dimension (4 specific requirements):

  • Accuracy: Correctness, precision, and reliability of data

  • Availability: Accessibility and timeliness when needed

  • Relevance: Appropriateness and applicability to capability decisions

  • Completeness: Extent to which all necessary data elements are present

Recommended For:

  • Detailed diagnostic assessment of underperforming or strategic capabilities

  • Capabilities targeted for significant investment or transformation initiatives

  • Creating specific, actionable capability deltas with clear improvement targets

  • Organizations with mature capability management practices

  • Follow-up to Lite assessments that identified dimensional concerns

  • Capabilities where stakeholders have conflicting views about quality issues

Typical Duration: Half-day workshop per capability (3-4 hours)

Output: Precise identification of specific quality gaps with detailed exposure scores enabling targeted improvement planning and resource allocation.

Capability Deltas Created: Specific, requirement-level improvement initiatives (e.g., "Improve Process Efficiency for Order Management capability" or "Address Data Accuracy gaps in Customer Master")

Progressive Assessment Approach

Many organizations adopt a progressive approach to capability assessment:

Phase 1 - Portfolio Screening (Lite):

  • Conduct Lite assessments across all or most capabilities in the portfolio

  • Identify capabilities with highest exposure scores in any dimension

  • Prioritize which capabilities warrant deeper investigation

Phase 2 - Detailed Diagnosis (Full):

  • Conduct Full assessments on the highest-priority capabilities identified in Phase 1

  • Create specific capability deltas addressing the most critical quality requirements

  • Develop detailed improvement initiatives through Strategy to Execution

Phase 3 - Monitoring (Lite):

  • Use Lite assessments for regular monitoring (quarterly/bi-annual) of all capabilities

  • Use Full assessments periodically (annually) for strategic capabilities or when Lite assessments reveal emerging concerns

This progressive approach maximizes assessment efficiency while ensuring detailed attention where it matters most.

Decision Criteria Summary

| Consider Lite Assessment When: | Consider Full Assessment When: |
| --- | --- |
| Assessing multiple capabilities to prioritize attention | Detailed diagnosis of specific capability needed |
| Time or stakeholder availability is limited | Half-day workshop time is available |
| Initial screening or regular monitoring | Preparing for significant investment or transformation |
| Building assessment capability/momentum | Specific, actionable deltas are required |
| Quick directional guidance is sufficient | Stakeholders need precise understanding of gaps |
| Starting capability assessment practice | Mature capability management practice exists |

Important Note: Organizations can transition from Lite to Full assessment for any capability at any time. The Lite assessment creates a foundation that can be expanded into Full assessment when deeper insights become necessary.

Repeatable for Progress Tracking:

Organizations can repeat assessments over time to measure improvement, creating baselines and demonstrating ROI on capability investments. Trend analysis reveals whether quality gaps are closing and whether interventions are effective.

There is an excellent piece on the relationship between capabilities, processes, and improvement approaches in the document What Are the Differences Between Business Capabilities, Processes, and Value Streams?

Ardoq's Modeling Approach

Modeling Capability Health Checks

Capability Based Planning Metamodel showing Capability Health Check component connected to Business Capability and Quality Model Requirements

The Capability Health Check consists of a few key components. The detailed document can be found here.

Firstly, you will set up a Capability Health Check workspace where you will capture your Health Checks using the Capability Health Check component type. The Health Check component consists of several fields that calculate the stakeholder’s feedback scores from the assessment. The feedback is captured in fields on the Depends on reference which connects the Health Check component to the individual quality requirement being assessed.

When a new Capability Health Check is created, a Has Subject reference connects the health check to the target business capability component. This allows you to repeat the same Capability Health Check survey as often as necessary.

After the health check process, you will be invited to identify a Capability Delta. Capability Deltas are an important part of realizing health checks: they take the output of the health check and connect it to an actual change within the organization, letting you see how the assessment will improve the capabilities. If a Business Capability delta is selected, an Is Impacted By reference is created from the business capability to that delta when it is submitted. To learn more about how capability deltas are realized, read the Strategy To Execution: Purpose, Scope and Rationale.

For a more detailed explanation of the Capability Health Check component, reference, and field types, please review the Metamodel document here
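
To make the structure above easier to picture, here is a minimal, illustrative sketch of the components and references named in this section. It is not Ardoq's API or metamodel definition; the Python class and field names are assumptions chosen for illustration, while the component, reference, and score names come from the description above.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class QualityRequirement:
    """A quality requirement from the PPTD Quality model."""
    name: str    # e.g. "Process" (dimension-level) or "Process Efficiency" (requirement-level)
    level: int   # 2 = dimension-level (Lite), 3 = requirement-level (Full)

@dataclass
class DependsOn:
    """'Depends on' reference from a Health Check to one Quality Requirement.
    The stakeholder feedback is captured in fields on this reference."""
    requirement: QualityRequirement
    importance: int  # 1-5
    concern: int     # 1-5

    @property
    def exposure(self) -> int:
        return self.importance * self.concern  # 1-25

@dataclass
class CapabilityHealthCheck:
    """Health Check component; 'Has Subject' connects it to the assessed Business Capability."""
    subject_capability: str   # name of the Business Capability being assessed
    mode: str                 # "Lite" or "Full"
    scores: List[DependsOn] = field(default_factory=list)

@dataclass
class CapabilityDelta:
    """Created after the assessment; the Business Capability 'Is Impacted By' this delta."""
    name: str
    addresses: QualityRequirement
```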

For Lite Assessments:

  • Identify the capability to assess (the capability becomes the subject)

  • Select the New Capability Health Check (Lite) - CBP survey

  • In a workshop setting, stakeholders score each dimension on Importance (1-5) and Level of Concern (1-5)

  • The system calculates Exposure scores (Importance × Concern) for each dimension

  • High-exposure dimensions inform which areas need attention

For Full Assessments:

  • Identify the capability to assess (the capability becomes the subject)

  • Select the New Capability Health Check (Full) - CBP survey

  • In a workshop setting, stakeholders score each specific requirement on Importance (1-5) and Level of Concern (1-5)

  • The system calculates Exposure scores (Importance × Concern) for each of the 16 requirements

  • High-exposure requirements inform specific capability delta creation

When assessment findings identify quality gaps requiring action, capability deltas can be created using the Create Capability Deltas - CBP survey. These deltas become inputs to the Strategy to Execution solution for realization through initiatives.

Progressive Modeling Pattern: Organizations often:

  1. Create Lite assessments for multiple capabilities to build portfolio view

  2. Identify highest-exposure capabilities from Lite assessment results

  3. Create Full assessments for those priority capabilities

  4. Link specific capability deltas to Full assessment findings

  5. Use Lite assessments for ongoing monitoring

The full definition of the model, including the relationships between Lite and Full assessments, is detailed in the Capability Based Planning Metamodel document. Both assessments use Quality Requirements from the PPTD Quality model.
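
As a rough illustration of steps 1 and 2 of the progressive pattern above, the sketch below ranks capabilities by their highest dimension-level exposure from Lite assessments and flags candidates for a Full assessment. The example data and the cut-off of 15 (the High Priority threshold described later in this article) are assumptions for illustration only.

```python
# Lite assessment results: capability -> {dimension: (importance, concern)}
lite_results = {
    "Order Management":    {"People": (3, 2), "Process": (5, 4), "Technology": (4, 3), "Data": (3, 3)},
    "Customer Onboarding": {"People": (4, 4), "Process": (3, 2), "Technology": (2, 2), "Data": (5, 5)},
    "Demand Forecasting":  {"People": (3, 3), "Process": (3, 3), "Technology": (3, 2), "Data": (4, 2)},
}

def max_exposure(dimension_scores):
    # Exposure = Importance x Concern; the worst dimension drives the capability's priority
    return max(imp * con for imp, con in dimension_scores.values())

FULL_ASSESSMENT_THRESHOLD = 15  # assumed cut-off: High/Critical exposure warrants deeper diagnosis

# Rank capabilities by their highest dimension-level exposure (step 2 of the pattern)
for capability, scores in sorted(lite_results.items(), key=lambda kv: max_exposure(kv[1]), reverse=True):
    exposure = max_exposure(scores)
    note = "-> schedule Full assessment" if exposure >= FULL_ASSESSMENT_THRESHOLD else "-> monitor with Lite"
    print(f"{capability}: max exposure {exposure} {note}")
```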

What are Quality Requirements?

A Quality Requirement is a specific, measurable aspect of capability performance that stakeholders can evaluate. Quality Requirements define observable characteristics that determine whether a capability is functioning effectively across People, Process, Technology, and Data dimensions.

Quality Requirements provide:

  1. Standardized evaluation criteria - Consistent language for discussing capability quality

  2. Comprehensive coverage - Systematic examination of all factors affecting performance

  3. Stakeholder alignment - Common framework for diverse perspectives to assess capability health

  4. Actionable insights - Specific gaps that inform targeted improvement initiatives

Quality Requirements are organized in a two-level hierarchy:

  • Level 2 (Dimension-Level): Four high-level quality assessments (People, Process, Technology, and Data), used in Lite assessments

  • Level 3 (Requirement-Level): Sixteen specific quality characteristics (three to five per dimension), used in Full assessments

The Lite assessment evaluates broad dimensional health, while the Full assessment examines specific quality characteristics within each dimension.
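
For reference, the two-level hierarchy can be written out as a simple mapping from each dimension-level assessment to its specific requirements. The mapping format is illustrative; the requirement names are the ones described later in this article.

```python
# Two-level quality requirement hierarchy used by the Capability Health Check
# (Level 2 keys are used by Lite assessments, Level 3 values by Full assessments)
pptd_requirements = {
    "People":     ["Diversity", "Skills", "Availability"],
    "Process":    ["Effectiveness", "Efficiency", "Reliability", "Completeness"],
    "Technology": ["Strategic Alignment", "Functionality", "Security", "Reliability", "Adaptability"],
    "Data":       ["Accuracy", "Availability", "Relevance", "Completeness"],
}

assert sum(len(reqs) for reqs in pptd_requirements.values()) == 16  # 3 + 4 + 5 + 4
```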

Key Components of PPTD Quality Requirements

The Full Capability Health Check assesses 16 specific quality requirements organized across four dimensions. Each dimension contains three to five specific requirements that collectively determine quality within that dimension.

For Lite assessments, stakeholders evaluate overall dimensional quality without breaking down into specific requirements. For Full assessments, stakeholders evaluate each of the 16 specific requirements listed below:


People

Emphasizes the roles, skills, and competencies necessary for achieving quality in processes and deliverables. Central to this is the effective management, development, and engagement of personnel resources to optimize performance and output. This requirement covers aspects such as leadership, training, collaboration, and communication strategies within teams. Ensuring alignment of personnel capabilities with organizational objectives is crucial for fostering a culture of continuous improvement and quality enhancement.

Diversity: Focuses on ensuring a diverse workforce encompassing various characteristics such as age, gender, and background. Diversity is critical for delivering consistent and high-quality services and processes, as it brings different perspectives and ideas to the organization. Implementing diversity initiatives helps to prevent groupthink, encourages innovation, and contributes to adaptability in dynamic environments. Assessing diversity metrics is essential for understanding and improving the impact on service quality.

Skills: Assesses the competency level and adequacy of skills within the workforce to accomplish designated tasks effectively. It involves evaluating the alignment of employees' skills with organizational goals and ensuring continuous skill development through training programs. Measurement of this requirement includes analyzing skill gaps, training effectiveness, and workforce adaptability to evolving job demands. Success is indicated by a well-equipped workforce capable of meeting operational expectations and driving business growth.

Availability: Pertains to ensuring that suitably skilled personnel are accessible to carry out business activities efficiently. The availability of these resources is essential for maintaining operational productivity and meeting business objectives. It involves strategic planning around staffing, training, and scheduling to align resources with organizational needs. Proper management of people availability is crucial for optimizing resource allocation and minimizing downtime.

Process

Refers to the structured set of activities designed to achieve specific objectives efficiently and effectively. It encompasses methodologies that streamline tasks, enhance productivity, and ensure consistency in outcomes. The approach involves the optimization of workflow, alignment with organizational goals, and adherence to best practice standards.

Effectiveness: Focuses on ensuring that business processes achieve their intended results efficiently and consistently. Key performance metrics include time, quality, and cost measures, aiming for optimal resource utilization and minimal waste. Effective processes should be adaptable to changes and capable of continuous improvement, thus contributing to overall organizational agility and competitiveness. Evaluation criteria include achieving predefined benchmarks and demonstrating measurable improvements over time.

Efficiency: Refers to a process's capability to optimize the use of resources while ensuring timely completion. Efficient processes minimize waste and maximize output, which contributes to overall operational performance. Assessing efficiency involves evaluating resource allocation, time management, and the achievement of desired outcomes. The requirement prioritizes streamlined operations that achieve goals with minimal resource expenditure.

Reliability: Focuses on ensuring processes are executed consistently, minimizing failures and delays. This requirement emphasizes robustness in execution, aiming for high dependability in all operations. Reliable processes seek to achieve predictable outcomes and effective risk management. Key aspects include adherence to standards, accurate execution, and timely completion of tasks.

Completeness: Evaluates how thoroughly a business function is fulfilled by a streamlined set of high-quality processes. It emphasizes the importance of efficiency, ensuring all necessary aspects of the function are covered without excess redundancy. Achieving Process Completeness requires identifying and optimizing the essential processes to deliver comprehensive service. This category targets maximizing operational effectiveness while minimizing complexity.

Technology

Technology serves as a critical component within the PPTD framework, providing the tools and systems essential for achieving organizational goals and objectives. It encompasses hardware, software, infrastructure, and digital platforms that enable efficient processes and innovation. The effective integration of technology ensures data accuracy, enhances productivity, and supports decision-making. Continuous evaluation and upgrading of technology is vital to maintain competitive advantage and compliance with evolving industry standards.

Strategic Alignment: Alignment requires the technology to be consistent with the organization's IT Strategy, ensuring that it supports corporate objectives. The technology must utilize approved standard technologies and operate on platforms that are officially supported by the IT department. This alignment is essential for achieving operational efficiency and fostering innovation within the organization. Compliance ensures reduced risk and streamlined integration into existing systems.

Functionality: Demands that a technology product or system must efficiently perform its intended tasks and operations. This includes meeting specified performance metrics, enabling user-related objectives, and providing reliability in its functional capabilities. Adequate functionality ensures that the technology integrates seamlessly within a defined ecosystem and supports expected user interactions. Key measures of success involve user satisfaction, system stability, and compliance with operational standards.

Security: Assesses whether the technology complies with established security standards. It ensures that systems and applications have adequate protections against unauthorized access, data breaches, and other security vulnerabilities. Auditing and validation processes are integral to confirming adherence to security protocols. Continuous updates and monitoring are essential to maintain compliance over time.

Reliability: Refers to the consistent performance and dependability of a technology or system, ensuring it functions correctly under predefined conditions over time. This requirement involves assessing the system's ability to perform tasks without failure, accounting for factors such as fault tolerance, error handling, and uptime. Evaluating reliability includes scrutinizing the frequency and impact of failures, adherence to standards, and maintenance practices. A high reliability score indicates robust technology capable of meeting service obligations consistently.

Adaptability: Refers to the ability of a system or process to efficiently adjust or evolve with emerging technologies and industry trends. Key considerations include modular architecture, scalable infrastructure, and support for interoperability standards. Success metrics include rapid integration of new technologies, minimal disruption during updates, and sustained competitive advantage. Effective governance, continuous monitoring, and proactive change management are essential to meet this requirement.

Data

Data within the PPTD framework refers to the structured, relevant, and comprehensive data necessary for effective decision-making and process optimization. This encompasses the accuracy, accessibility, and timeliness of data inputs used in the framework. Information must support the objectives of the PPTD framework by providing insights and evidence for continuous improvement and performance measurement. Ensuring data integrity and secure management are critical components of this requirement.

Accuracy: Ensures that the data utilized in executing the value stream stage meets the necessary precision and correctness for optimal performance. Accurate information is critical for maintaining process efficiency, reducing errors, and achieving desired outcomes. This requirement involves verifying data validity, consistency, and reliability across all relevant operations. Proper adherence to accuracy standards enhances overall decision-making quality and operational efficiency.

Availability: Specifies the need for data to be accessible when required by authorized users, systems or processes within defined parameters. It emphasizes ensuring minimal downtime and includes strategies for redundancy, failover, and continuity to maintain consistent access. Benchmarks for data availability include uptime percentages and recovery time objectives. Implementing robust infrastructure and monitoring is key to achieving this requirement.

Relevance: Ensures that the data utilized within the systems meets the criteria of being pertinent to the objectives and operations. This involves validating the data's applicability, accuracy, and alignment with the current business needs and processes. Key metrics include assessing context, usability, timeliness, and accuracy to drive effective decision-making and operational efficiency. Achieving data relevance is crucial for maintaining the integrity and value of analytical insights and derived outcomes.

Completeness: Refers to ensuring that all necessary data attributes for a given process, product, or system are present and accounted for. This requirement emphasizes the importance of having comprehensive and unbroken datasets to support business operations, analytical needs, and decision-making processes. Achieving data completeness involves rigorous data collection, validation, and maintenance practices to prevent data gaps or missing information. The effectiveness of this requirement can be measured through audits and assessments of data availability and integrity.

Understanding Exposure Scoring

Exposure scoring is the mathematical approach that creates objective prioritization in both Lite and Full Capability Health Checks. The calculation method is identical for both assessment types; only the granularity differs.

For Lite assessments, exposure scores identify which dimensions (People, Process, Technology, or Data) have the most critical quality concerns requiring attention or further investigation through Full assessment.

For Full assessments, exposure scores identify which specific quality requirements within those dimensions need immediate improvement action.

For each quality requirement (whether dimension-level or specific), stakeholders assign two scores:

Importance (1-5)

Use this rubric to assess how important each quality characteristic is to the capability.

Assessment Tip: Consider "If this characteristic were completely absent or failing, could we still realize benefits effectively?" The harder it is to answer "yes," the higher the importance score should be.

  1. Minimal Importance
    This characteristic has negligible impact on capability effectiveness and makes little practical difference to the capability's ability to deliver value.

  2. Low Importance
    This characteristic has limited impact on core capability: improvements might increase efficiency but gaps don't significantly prevent capability execution.

  3. Moderate Importance
    This characteristic has noticeable impact and contributes meaningfully to effectiveness, though the capability can still function with workarounds or compensating strengths in other areas.

  4. High Importance
    This characteristic significantly impacts capability effectiveness, and major gaps would seriously undermine the capability though some limited function might still occur.

  5. Critical
    This characteristic is absolutely essential to the capability: without it functioning well, the entire capability fails or is fundamentally compromised.

Level of Concern (1-5)

Use this rubric to assess your level of concern about the current state of each quality characteristic.

Assessment Tip: Consider "How much is the current state of this characteristic hindering our capability performance?" The more it's actively causing problems or creating barriers, the higher the concern score should be.

  1. Minimal Concern
    This characteristic is performing strongly with no significant issues: it represents a capability strength that we can confidently rely upon.

  2. Low Concern
    This characteristic is performing reasonably well with only minor issues or gaps that have minimal impact on capability effectiveness.

  3. Moderate Concern
    This characteristic has noticeable issues or inconsistencies that create some friction or inefficiency, though they're not fundamentally preventing the capability from functioning.

  4. High Concern
    This characteristic has significant weaknesses or gaps that are noticeably impeding capability effectiveness and creating substantial friction or value leakage.

  5. Critical Concern
    This characteristic is performing very poorly with severe, pervasive issues that are actively blocking capability performance and causing significant problems right now.

Exposure Score Framework

Exposure Calculation:

Exposure = Importance × Level of Concern

This multiplication creates a 1-25 scale that quantifies priority. The Level of Exposure identifies which quality requirements need attention:

20-25: Critical Priority

  • Action Required: Immediate intervention needed. These represent severe capability weaknesses in areas critical to performance that are actively preventing value delivery.

  • Response: Assign dedicated resources, executive sponsorship, and urgent action plans. Address within 0-3 months.

15-19: High Priority

  • Action Required: Significant attention and investment needed. These are important capability gaps causing substantial problems that must be addressed to improve effectiveness.

  • Response: Develop detailed improvement plans with clear ownership and timelines. Address within 3-6 months.

10-14: Medium Priority

  • Action Required: Planned improvement needed. These gaps are creating noticeable friction or inefficiency that warrants structured improvement efforts.

  • Response: Include in capability improvement roadmap with defined initiatives. Address within 6-12 months.

5-9: Low Priority

  • Action Required: Monitor and opportunistic improvement. These represent minor gaps or less critical areas where low-effort improvements may be beneficial.

  • Response: Address through business-as-usual activities or when resources permit. No urgent timeline required.

1-4: Minimal Priority

  • Action Required: Maintain current state. These represent either strengths to preserve or characteristics with minimal impact on capability effectiveness.

  • Response: No immediate action required. Monitor to ensure performance doesn't degrade.

Prioritization Note: Focus improvement resources on Critical and High Priority items first as these offer the greatest impact on capability effectiveness and represent the most significant risks to value delivery.

The exposure score creates transparency and objectivity in prioritization, eliminating subjective debates about which quality gaps should receive resources and attention. Stakeholders can clearly see why certain improvements take priority based on mathematical calculation rather than opinion.
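
A minimal sketch of the calculation and the bands above, assuming whole-number scores from 1 to 5; it simply restates the thresholds from this section in code.

```python
def exposure(importance: int, concern: int) -> int:
    """Exposure = Importance (1-5) x Level of Concern (1-5), giving a score from 1 to 25."""
    assert 1 <= importance <= 5 and 1 <= concern <= 5
    return importance * concern

def priority_band(score: int) -> str:
    """Map an exposure score to the priority bands defined above."""
    if score >= 20:
        return "Critical Priority (address within 0-3 months)"
    if score >= 15:
        return "High Priority (address within 3-6 months)"
    if score >= 10:
        return "Medium Priority (address within 6-12 months)"
    if score >= 5:
        return "Low Priority (business-as-usual)"
    return "Minimal Priority (maintain current state)"

# Example: a requirement rated Importance 4 with Concern 5 scores 20, a Critical Priority gap
print(priority_band(exposure(4, 5)))
```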

Lite vs. Full Assessment Comparison

| Aspect | Lite Assessment | Full Assessment |
| --- | --- | --- |
| Requirements Evaluated | 4 dimensions (People, Process, Technology, Data) | 16 specific quality characteristics |
| Assessment Granularity | Dimensional overview | Detailed diagnostic |
| Workshop Duration | 1-2 hours | Half day (3-4 hours) |
| Stakeholder Commitment | Lower time requirement | Higher time investment |
| Output Precision | Directional guidance | Precise gap identification |
| Exposure Scores | 4 scores (one per dimension) | 16 scores (one per requirement) |
| Best For | Screening, monitoring, prioritization | Detailed improvement planning |
| Capability Deltas | Dimension-level initiatives | Requirement-specific initiatives |
| Assessment Frequency | Quarterly/bi-annual | Annual or as-needed |
| Stakeholder Preparation | Minimal | Moderate (review requirements) |
| Facilitator Expertise | Basic PPTD understanding | Deep PPTD framework knowledge |
| Follow-up Actions | Identify capabilities needing Full assessment | Create specific improvement initiatives |
| Typical Use Case | "Which of our 50 capabilities need attention?" | "Exactly what is wrong with this capability?" |

Exclusions

Roadmap and Timeline Considerations

The Capability Based Planning solution does not include a dedicated roadmap view. Instead, roadmap visualization is delivered through the Strategy to Execution solution, where Initiatives carry the date fields necessary for timeline planning and sequencing. Check out the Strategy to Execution Capability Delta Presentation for examples of capability-based roadmaps.

Capability Deltas include a Calculated Active Period field that is calculated from the connected Initiative component in Strategy to Execution. This calculation flows through the Realizes reference from the Initiative to the Capability Delta, ensuring that delta timelines remain synchronized with strategic planning cycles.

For this calculation to function correctly, each Capability Delta should be connected to an Initiative using the Realizes reference. This connection ensures that:

  • Delta timelines reflect strategic planning horizons

  • Roadmap views in Strategy to Execution accurately represent capability improvements

  • Progress tracking aligns with Initiative review cycles

Organizations requiring roadmap visualization of capability improvements should use the Strategy to Execution timeline and Gantt views, filtering to show Capability Deltas and the Initiatives that realize them.
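
Conceptually, the Calculated Active Period works as sketched below: the delta carries no dates of its own and simply inherits the planning dates of the Initiative that Realizes it. This is only an illustrative Python sketch of the idea, not how the calculated field is configured inside Ardoq; the class and field names here are assumptions.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional, Tuple

@dataclass
class Initiative:
    """Strategy to Execution Initiative carrying the date fields used for roadmaps."""
    name: str
    start: date
    end: date

@dataclass
class CapabilityDelta:
    """Capability Delta; the 'Realizes' reference points from an Initiative to this delta."""
    name: str
    realized_by: Optional[Initiative] = None

    def calculated_active_period(self) -> Optional[Tuple[date, date]]:
        # Without a realizing Initiative the delta has no active period to show on a roadmap
        if self.realized_by is None:
            return None
        return (self.realized_by.start, self.realized_by.end)

delta = CapabilityDelta(
    "Improve Process Efficiency for Order Management",
    realized_by=Initiative("Order-to-Cash Automation", date(2025, 1, 1), date(2025, 9, 30)),
)
print(delta.calculated_active_period())  # the period mirrors the Initiative's start and end dates
```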


Key Elements and Evidence for our Approach

1. Flexible Assessment Granularity

Description: The Capability Health Check provides two levels of assessment granularity, Lite and Full, enabling organizations to match assessment depth to their specific context, time constraints, and improvement objectives. This flexibility allows efficient portfolio screening through Lite assessments while reserving detailed diagnosis for capabilities that warrant comprehensive evaluation.

Supporting Reference: Maturity and assessment frameworks emphasize the importance of proportionate evaluation approaches that match assessment effort to organizational needs and readiness. Flexible assessment strategies enable organizations to build assessment capabilities progressively while maximizing the ratio of insight to effort (CMMI Institute, 2018; Paulk et al., 1993).

2. Multi-Dimensional Capability Assessment

Description: Capability Health Checks recognize that capability effectiveness depends on the quality of four interdependent dimensions working together. Technology alone cannot compensate for process inefficiency, just as highly skilled people cannot overcome poor data quality. Only by examining all dimensions can organizations develop complete understanding of capability health.

Supporting Reference: Research on capability maturity models demonstrates that single-dimension assessments, such as a traditional 1-5 maturity value (particularly technology-focused approaches), consistently fail to predict capability performance. Organizations must evaluate People, Process, Technology, and Data holistically to understand true capability health (Paulk et al., 1993; Rosemann & de Bruin, 2005).

3. Collaborative Stakeholder Engagement

Description: Workshop-based assessments bring together diverse expertise from across the organization, creating shared understanding of quality gaps and collective ownership of improvement priorities. This collaborative approach ensures assessments reflect multiple perspectives rather than individual opinions.

Supporting Reference: Research on participatory decision-making demonstrates that involving stakeholders in assessment and decision processes improves both the quality of decisions and their implementation. Contingency models of leadership show that participatory approaches are particularly effective when subordinate acceptance is critical to implementation and when subordinates share organizational goals (Vroom & Yetton, 1973). Implementation research confirms that participation tactics achieve substantially higher success rates (75%) compared to directive approaches (43%), with collaborative engagement creating psychological ownership that drives follow-through (Nutt, 1986).

4. Quantified, Objective Prioritization

Description: The Exposure calculation (Importance × Concern) creates transparent, mathematical prioritization that eliminates subjective debate about which quality gaps deserve resources and attention. Stakeholders can see exactly why certain improvements take priority based on objective scoring.

Supporting Reference: Quantified prioritization approaches align with established risk assessment methodologies that combine likelihood and impact dimensions to determine priority. Organizations using structured prioritization frameworks show higher stakeholder satisfaction with resource allocation decisions and demonstrate more effective investment of improvement resources (Kaplan & Garrick, 1981; Keeney & Raiffa, 1976).

5. Action-Oriented Integration

Description: Direct linkage from assessment findings to capability deltas to Strategy to Execution initiatives ensures quality concerns immediately translate into tracked improvement actions. This integration eliminates the common gap between assessment and action.

Supporting Reference: Research demonstrates that integrated assessment-action systems reduce time-to-improvement by an average of 50% compared to disconnected assessment approaches. Organizations that directly connect assessment findings to tracked initiatives show significantly higher ROI on capability investments (Hammer, 2007).

Strengths of our Approach

1. Comprehensive, Balanced Evaluation

Strength: The PPTD framework ensures equal attention to all dimensions affecting capability performance, preventing the common mistake of technology-only assessments that miss critical gaps in process, people, or data quality.

Supporting Reference: Research demonstrates that single-dimension assessment approaches can lead to imprecise or incorrect identification of root causes. Multi-dimensional frameworks that examine interconnected organizational factors (such as people, process, technology, and governance) enable more accurate diagnosis and comprehensive improvement strategies (Lin et al., 2022; Rosemann & de Bruin, 2005).

2. Stakeholder Consensus and Ownership

Strength: Collaborative workshop assessment builds shared understanding and collective commitment to improvement priorities. Participants become invested in both assessment accuracy and improvement success.

Supporting Reference: Research demonstrates that participatory approaches to assessment and decision-making lead to higher implementation success rates than top-down methods. Stakeholder involvement in assessment processes improves data quality, enhances understanding of findings, and increases adoption of recommendations (Guijt, 2014). Participative implementation tactics show substantially higher success rates compared to directive approaches, with stakeholder engagement creating psychological ownership that drives follow-through on improvement actions (Nutt, 1986).

3. Transparent, Data-Driven Prioritization

Strength: Exposure scoring creates objective, mathematically-based prioritization that builds stakeholder trust in resource allocation decisions. This transparency eliminates political debates about which improvements deserve attention.

Supporting Reference: Quantified prioritization approaches provide transparent, mathematical foundations for resource allocation decisions. Multi-criteria frameworks that systematically evaluate and prioritize objectives enable more structured decision-making processes, reducing reliance on subjective judgment alone (Keeney & Raiffa, 1976). The exposure calculation (Importance × Concern) creates visible, defensible rankings that stakeholders can understand and validate.

4. Repeatable Progress Measurement

Strength: Organizations can repeat assessments over time to measure improvement, creating baselines and demonstrating ROI on capability investments. Trend analysis reveals whether quality gaps are closing and interventions are effective.

Supporting Reference: Longitudinal maturity assessment approaches enable organizations to track progress over time, compare results against improvement goals, and demonstrate the value of capability investments. Continuous assessment methods provide organizations with trend-based views that reveal whether interventions are effective and support data-driven improvement planning (ITIL, 2024; Keeping Your Maturity Assessment Alive, Springer, 2023).

5. Seamless Integration with Capability Management

Strength: Direct connection between PPTD assessments, capability deltas, and Strategy to Execution ensures findings drive action. This integration closes the assessment-action gap that plagues many improvement approaches.

Supporting Reference: Integrated approaches that directly connect assessment findings to improvement initiatives enhance organizational effectiveness. Process-based frameworks that link assessment to action enable organizations to comprehensively formulate and track transformation efforts, reducing the common problem where organizations make little progress despite investment in assessment (Hammer, 2007).

6. Holistic Systems Perspective

Strength: PPTD assessments recognize that capabilities are complex systems where dimensions interact and affect each other. This systems thinking prevents solving symptoms while missing root causes.

Supporting Reference: Systems thinking emphasizes seeing interrelationships rather than linear cause and effect chains, enabling organizations to understand how different elements of a system interact. Organizations that adopt systems perspectives can better identify leverage points for change and avoid fragmented approaches that address symptoms rather than underlying structures (Senge, 1990).

Further Reading

Check out the following documents for more information about [Business Capability Modeling](https://help.ardoq.com/en/articles/44051-getting-started-with-business-capability-modeling), [Strategy to Execution](https://app.eu.intercom.com/a/apps/e2bo1r54/knowledge-hub/article/66321), [Business Health Check](https://help.ardoq.com/en/articles/251443-business-health-check-purpose-scope-rationale-closed-beta), and [Capability Based Planning]().

For further reading about capability quality assessment:

Capability Maturity Model Integration (CMMI)

- Foundational framework for multi-dimensional capability assessment that influenced the PPTD approach

- Reference: Chrissis, M. B., Konrad, M., & Shrum, S. (2003). CMMI: Guidelines for Process Integration and Product Improvement

Business Capability Assessment Methods

- Research on comprehensive approaches to evaluating organizational capabilities across multiple dimensions

- Reference: Rosemann, M., & de Bruin, T. (2005). Proposal for a Competency Center for BPM

Competing on the Eight Dimensions of Quality

- Seminal work on quality dimensions that informed the PPTD quality requirement framework

- Reference: Garvin, D. A. (1987). Harvard Business Review

Decisions with Multiple Objectives

- Framework for multi-criteria decision-making that supports the Exposure scoring approach

- Reference: Keeney, R. L., & Raiffa, H. (1976)
