Frequently Asked Questions: AI

Find answers to the most common questions about AI functionality, language models, security, and privacy.

Written by Diana Nechita

Questions related to AI functionality, language models, security, and privacy

  • Which AI service powers Ardoq's AI features?

Ardoq's AI features are powered by Microsoft's Azure OpenAI Service, which provides secure and efficient AI capabilities. Your input data is processed within the same Microsoft Azure instance as your backup data in your ongoing Ardoq subscription, and it is not sent to any third-party entities other than those authorized in your agreement. For more details, please visit Microsoft's Azure OpenAI Data Privacy webpage.

  • Where is the AI language model hosted geographically and where is my data processed?

The AI language model is hosted in the same region as the base instance specified in your Ardoq subscription, and your data is processed there.

  • What data is processed by the AI service?

Only the data essential for a particular request is processed by the AI service. This includes relevant information from your Ardoq instance, but no user account information is shared.

  • Does anyone outside of our company have access to the processed data?

No, access to the processed data is not granted to any third party other than those specified in your subscription agreement.

  • Is any of my data used to train or improve Microsoft's or any other third-party products or services?

No, your data is not used to train or improve any third-party products or services.

  • Do Ardoq’s generative AI supported features impact my compliance with GDPR?

We are committed to helping our customers stay compliant with GDPR and their local requirements. As we do today for all of our features, we will process and transmit data for Ardoq’s generative AI supported features in accordance with the terms of your Ardoq subscription.

Ardoq's Generative AI Features

  • Which AI features are available in Ardoq?

The AI beta features currently available for you to test are:

AI-Powered Description in Surveys

Increase survey completion with less effort through auto-generated descriptions. Save time for survey respondents and boost completion rates with a single click.

Best Practice Assistant

Get actionable advice on leveraging Ardoq’s best practices and use cases instantly with our AI-powered assistant. Ask any question for quick, comprehensive answers based on extensive training from the Ardoq Help Center.

Stay tuned for more exciting AI features coming soon.

  • How can our organization enable Ardoq's generative AI features?

At Ardoq, we prioritize providing our customers with control over their platform settings. Currently, to enable AI-powered features, we'll need your consent for a modification to your existing subscription agreement. This modification includes the addition of a new data processing purpose by Microsoft. Contact your customer success manager to sign the updated Data Processing Agreement (DPA).

As soon as the amendment to your DPA is signed, you'll gain complete control and the ability to manage AI features directly within Ardoq via our in-platform feature toggles. This setting is designed to give admins the flexibility to activate AI functionalities quickly and efficiently, ensuring that your team can leverage the full potential of Ardoq’s platform whenever needed. Simply navigate to Preferences > Organization settings > Feature settings and choose the AI-powered features you want to enable in your organization.

  • What if my organization wants to disable Ardoq’s generative AI features?

AI-powered features within your organization can only be managed by admin users. To adjust these settings, go to Preferences > Organization settings > Feature settings and disable the AI-powered features you no longer wish to be active in your organization. Deactivating these features removes access to the AI functionalities for all users in your organization, including admins, writers, readers, and contributors.

  • How does Ardoq tackle the challenge of hallucinations in Large Language Models (LLMs) to ensure data quality?

At Ardoq, we recognize that while not all our AI features are dependent on Large Language Models (LLMs), addressing the issue of "hallucinations" or inaccurate outputs from generative AI is crucial for maintaining high data quality. Our approach emphasizes enhancing the efficiency of user workflows without aiming to fully automate them, ensuring that AI acts as a support rather than a replacement.

To mitigate the risk of hallucinations, we incorporate a human validation step in our process. This ensures that data inputs generated by AI are reviewed and validated by a human, maintaining a high level of accuracy and reliability. We're also exploring future enhancements such as change approvals, to add an additional layer of scrutiny to AI-generated changes, and data lineage to allow for tagging and easier identification of AI-generated information.

Moreover, Ardoq employs Retrieval Augmented Generation (RAG) technology wherever possible. This technique allows our AI to ground the generated text in factual information, relying on known truths to provide accurate and reliable outputs. Through these measures, we aim to harness the benefits of AI while minimizing the risks associated with LLM hallucinations, ensuring our platform remains a reliable and efficient tool for our users.
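The grounding idea behind RAG can be illustrated with a minimal sketch: retrieve the documents most relevant to a question, then build a prompt that instructs the model to answer only from that retrieved context. This is a simplified, hypothetical illustration of the general RAG pattern, not Ardoq's actual implementation; the keyword-overlap scoring and the sample documents are assumptions made for the example.

```python
import string

def _words(text: str) -> set[str]:
    """Lowercase a text and split it into a set of punctuation-stripped words."""
    return {w.strip(string.punctuation) for w in text.lower().split()}

def score(question: str, document: str) -> int:
    """Relevance score: how many question words also appear in the document.
    A real system would use embeddings; keyword overlap keeps the sketch simple."""
    return len(_words(question) & _words(document))

def retrieve(question: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Return the top_k documents most relevant to the question."""
    ranked = sorted(documents, key=lambda d: score(question, d), reverse=True)
    return ranked[:top_k]

def build_grounded_prompt(question: str, documents: list[str]) -> str:
    """Assemble a prompt that grounds the model's answer in retrieved context,
    reducing the chance of hallucinated answers."""
    context = retrieve(question, documents)
    return (
        "Answer using ONLY the context below. "
        "If the answer is not in the context, say you do not know.\n\n"
        "Context:\n- " + "\n- ".join(context) +
        f"\n\nQuestion: {question}"
    )

# Hypothetical document store for illustration only.
docs = [
    "Surveys let respondents update component descriptions.",
    "Admins enable features under Organization settings.",
    "Dashboards visualize metrics across workspaces.",
]
prompt = build_grounded_prompt("How do admins enable features?", docs)
```

Because the prompt explicitly restricts the model to the retrieved context, the generated text stays anchored to known facts instead of the model's unconstrained guesses.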
