1. Which AI Models does Ardoq use? Do you name specific models and versions?
Short Answer:
Ardoq initially deployed Azure OpenAI as part of its early AI rollout. We are now transitioning to an Amazon-hosted AI architecture, including a combination of self-hosted AI models and selected managed models via Amazon Bedrock.
Ardoq intentionally follows a multi-model strategy to ensure we can always select the most appropriate AI model for each specific use case, rather than being tied to a single provider or model version. This approach allows us to:
optimize performance by use case
evolve models as AI capabilities improve
avoid vendor lock-in
manage upgrades in a controlled and deliberate way
Our approach is not about obscuring model usage, but about preserving the flexibility to continuously improve outcomes for customers as the AI landscape evolves. Data processed with AI remains in the same region / datacenter as the customer instance.
Both Microsoft Azure and Amazon AWS are approved sub-processors in the DPA entered with customers. For customers who require model transparency for compliance, security, or risk assessments, Ardoq can provide model disclosures under NDA.
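Conceptually, the multi-model strategy described above can be thought of as a routing table that maps each use case to a model. The sketch below is purely illustrative: the route names are hypothetical and the model identifiers are examples of Amazon Bedrock model IDs, not Ardoq's actual selections.

```python
# Illustrative routing table for a multi-model strategy. Use-case names
# and model choices are hypothetical examples, not Ardoq's actual routes.
MODEL_ROUTES = {
    "chat_assistant": "anthropic.claude-3-5-sonnet-20240620-v1:0",  # example Bedrock ID
    "summarization": "self-hosted/example-open-weight-model",       # example self-hosted model
    "classification": "amazon.titan-text-express-v1",               # example Bedrock ID
}

def select_model(use_case: str, default: str = "chat_assistant") -> str:
    """Pick the model for a use case, falling back to a default route."""
    return MODEL_ROUTES.get(use_case, MODEL_ROUTES[default])
```

A central table like this is what makes controlled upgrades possible: swapping the model behind one use case is a single-line change that does not affect the others.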
2. Does Ardoq perform its own AI validation or testing? Or is that handled by the model provider?
Yes, Ardoq performs systematic, ongoing AI evaluations.
This complements the testing performed by our model providers.
Ardoq uses an internal AI evaluation framework for all feature development. This is an automated suite of accuracy, stability, and consistency checks applied to AI-powered features.
Think of it like software QA testing, but for generative AI.
What we evaluate:
Contextual grounding: AI outputs are stress-tested against real architecture use cases to ensure that outputs are grounded in Ardoq data and the architecture practices we provide as guidelines for the AI.
Consistency: Results remain stable as the data in the customer’s workspace evolves.
Responsible AI: We evaluate that the AI produces outputs that are safe, unbiased, and ethical. We apply instructions and guardrails to avoid irresponsible AI output.
This makes Ardoq’s AI enterprise-grade, predictable, and transparent, not a black box.
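To make the "QA testing for generative AI" analogy concrete, a minimal grounding check might look like the sketch below. All names and thresholds are hypothetical illustrations, not Ardoq's actual evaluation framework.

```python
# Hypothetical sketch of an automated grounding evaluation: verify that
# entities named in an AI answer actually exist in the customer's
# workspace data, and fail the check if too many are hallucinated.

def grounding_score(answer_entities: list[str], workspace_entities: set[str]) -> float:
    """Fraction of entities cited in the AI answer that exist in workspace data."""
    if not answer_entities:
        return 1.0  # an answer citing nothing cannot be ungrounded
    grounded = sum(1 for e in answer_entities if e in workspace_entities)
    return grounded / len(answer_entities)

def check_grounding(answer_entities: list[str],
                    workspace_entities: set[str],
                    threshold: float = 0.9) -> bool:
    """Pass only if at least `threshold` of cited entities are grounded."""
    return grounding_score(answer_entities, workspace_entities) >= threshold
```

Run as a regression suite on every release, checks like this catch drift over time in the same way conventional unit tests catch functional regressions.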
3. Where are Ardoq’s AI models hosted?
Ardoq has transitioned to an Amazon AWS infrastructure, which includes:
Amazon Bedrock-hosted models
The possibility for self-hosted AI models within our AWS infrastructure
Support for a mix of open-weight and proprietary models
Key benefits:
Stronger data isolation
Better compliance fit for regulated industries
Ability to run models without sending customer data to third-party APIs
Flexibility for Ardoq to select the best-performing models per use case (note: model selection is made internally by Ardoq; customers cannot customize model selection)
We do not expose customer data to public model training.
4. Does customer data ever get used to train internal or external models?
No. Customer data may inform improvements to our AI features, but it is never used to train AI models.
Ardoq’s AI architecture is designed so that:
customer content is processed within Ardoq-controlled infrastructure
although prompts and outputs may be stored, they are not used to train any AI models
no customer data is shared with model providers for training purposes
Processing is stateless and scoped to each request. No data is retained by model providers as part of training pipelines.
5. How does Ardoq ensure privacy and data isolation for AI processing?
AI requests are processed inside Ardoq's secure cloud environment. All AI-generated content is clearly marked as such within the platform and is never inserted automatically without human oversight, avoiding automated decision-making.
Customer data is:
logically isolated per tenant
protected by the same access controls as the core platform
permission-aware at query time
AI outputs only include data the requesting user is authorized to access. AI does not bypass role-based access, workspace boundaries, or view permissions.
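The principle above, filtering data by permission before it ever reaches the model, can be sketched as follows. The types and function names are hypothetical, chosen only to illustrate the pattern.

```python
# Illustrative sketch (hypothetical names): enforce access control by
# filtering workspace data *before* building the LLM context, so AI
# output can only contain what the requesting user may already see.
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    workspace: str

def build_ai_context(components: list[Component],
                     user_workspaces: set[str]) -> list[Component]:
    """Return only components from workspaces the user is authorized to read."""
    return [c for c in components if c.workspace in user_workspaces]
```

Because filtering happens at query time, before prompt construction, the model never receives unauthorized data, so it cannot leak it regardless of how it is prompted.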
6. What security controls does Ardoq use to govern AI interactions?
AI Evaluations and AI quality metrics
Ardoq has implemented processes and tools for quality control of AI features, which we refer to as AI Evaluations. Evaluations ensure that AI-generated responses are relevant, accurate, and grounded in real Ardoq data rather than the AI model’s training data. They also check for bias and unwanted behavior. We have the following AI Evals capabilities:
Human Evaluations when implementing new AI features
Automated Regression Evaluations that ensure consistent AI performance over time
Alerts for unexpected behavior and errors
AI quality metrics
Guardrails
Ardoq uses system prompts and guardrails to restrict the behavior of AI models. For example, guardrails stop dangerous and unwanted conversations, and ensure that the AI responds politely and stays on topic.
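As a hypothetical illustration of prompt-level guardrails (not Ardoq's actual prompts or denylist), a fixed system prompt can constrain model behavior while a lightweight pre-check rejects clearly disallowed requests before they reach the model:

```python
# Illustrative guardrail sketch: a fixed system prompt plus a simple
# input pre-check. Prompt text and blocked topics are hypothetical.
SYSTEM_PROMPT = (
    "You are an enterprise-architecture assistant. Answer only questions "
    "about the user's architecture data. Be polite and stay on topic. "
    "Refuse harmful or dangerous requests."
)

BLOCKED_TOPICS = {"malware", "weapons"}  # illustrative denylist only

def prepare_request(user_message: str) -> list[dict]:
    """Reject disallowed input; otherwise wrap it with the system prompt."""
    lowered = user_message.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        raise ValueError("Request blocked by guardrail")
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_message},
    ]
```

Production guardrails are typically layered, combining input checks like this with model-side instructions and output filtering, so no single layer is the only defense.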
In-app feedback Loops
Many Ardoq AI features have in-app feedback loops where users can flag poor AI performance. We typically provide a “report problem” feedback form with pre-selected categories of AI failure states (such as “incomplete”, “incorrect”, or “inaccurate”). We also sometimes let customers provide written feedback. We never collect information about the users in these feedback loops.
Turn off AI Features
Customers can turn all AI features off entirely. AI features are opt-in, entirely at the customer's discretion.
7. What is AI Gateway (MCP Server), and does it require REST API licensing?
Short answer
AI Gateway (MCP Server) uses Ardoq's REST APIs and authentication mechanisms under the hood.
Customers must have API access enabled to use MCP.
Why?
MCP is essentially a secure gateway for external AI tools (such as Claude) to query Ardoq safely, and it depends on the same access controls as our REST API.
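Because MCP delegates to the REST API, a tool call inherits the caller's own API authentication. The sketch below illustrates that delegation pattern; the endpoint, tool name, and host are hypothetical, not Ardoq's actual API surface.

```python
# Conceptual sketch only: an MCP-style tool handler that translates a
# tool call into an authenticated REST request using the caller's own
# token, so the gateway inherits the API's access controls.
# Endpoint and tool names are hypothetical.
import urllib.request

API_BASE = "https://example.invalid/api"  # placeholder host, not a real endpoint

def handle_tool_call(tool: str, args: dict, api_token: str) -> urllib.request.Request:
    """Map a known tool call to an authenticated REST API request."""
    if tool != "search_components":
        raise ValueError(f"Unknown tool: {tool}")
    req = urllib.request.Request(f"{API_BASE}/components?name={args['name']}")
    req.add_header("Authorization", f"Bearer {api_token}")
    return req
```

This is why API access must be enabled: without a valid API token, the gateway has nothing to attach to the request, and the call fails exactly as a direct REST call would.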
Footnote: A customer's agreement with an external MCP tool does not extend Ardoq's obligations and role as a Data Processor. The customer's agreement with that tool governs the data once it leaves Ardoq.
8. Does Ardoq store AI outputs?
We store AI outputs alongside other Ardoq data and in accordance with standard contract provisions. Outputs are subject to the data retention policies defined in the DPA: for the duration of the customer contract and up to 90 days after termination.
Some AI features, such as Chat Assistant, store sessions to give users access to historical conversations.
Ardoq may analyze anonymized usage signals and metadata from AI interactions to evaluate and improve AI feature performance. This data is never used to train AI models and is only processed in accordance with customer contracts and the DPA.
AI conversations (i.e., inputs and outputs) are not shared with model providers or any other third parties.
AI-generated content requires human approval before it is added to Ardoq.
9. How is Ardoq AI infrastructure hosted?
Ardoq’s AI is currently offered as SaaS in secure AWS regions.
We support:
Strict data residency options
EU-only or region-locked data processing on request
11. How does Ardoq prevent hallucinations or incorrect results?
Through four mechanisms:
Context grounding: AI uses Ardoq’s structured, graph-based relationships and metadata for context grounding, not just prompts and foundational training.
Permission-aware access: AI cannot use data the human user does not have access to
Evaluation system: Automated tests catch inconsistencies over time.
Human-in-the-loop: AI never modifies data without explicit user approval. This protects against automated decision-making, which is strictly regulated under the GDPR when it significantly affects an individual.
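The human-in-the-loop mechanism can be sketched as a staging queue: AI-generated changes are held as suggestions and written only on explicit approval. The class below is a minimal hypothetical illustration, not Ardoq's implementation.

```python
# Minimal sketch (hypothetical names) of a human-in-the-loop gate:
# AI-generated changes are staged as suggestions and applied to the
# workspace only after an explicit user approval.
class SuggestionQueue:
    def __init__(self):
        self.pending: list[str] = []   # AI suggestions awaiting review
        self.applied: list[str] = []   # changes a human has approved

    def suggest(self, change: str) -> int:
        """Stage an AI-generated change; nothing is written yet."""
        self.pending.append(change)
        return len(self.pending) - 1

    def approve(self, index: int) -> None:
        """A human explicitly approves; only now does the change apply."""
        self.applied.append(self.pending.pop(index))
```

Keeping approval as a separate, user-initiated step is what ensures the AI can propose but never autonomously commit changes.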
12. How does Ardoq secure conversational AI (example: AI Chat Assistant, MCP)?
No user conversations are shared with model providers
Chat results are constrained to only the data the user can access
All API interactions are logged for audit
No customer data is used to retrain the LLM
13. What Ardoq AI does NOT do
Ardoq does not:
train AI models on customer data
expose tenant data between customers
allow AI to modify data without human action
share prompts externally for model improvement
bypass access controls
operate shadow infrastructure outside contracted cloud environments
