Explore how Alteryx integrates Responsible AI practices across its suite of products. Our detailed AI Fact Sheets provide a thorough overview of our AI models, including transparency, data handling, and accountability measures. The FAQs address common questions about data usage, encryption methods, and user controls. Refer to the legend for explanations of key terms used throughout our AI documentation. Together, these resources show how Alteryx ensures trust and reliability in its AI-driven solutions.
General Background | |
Description | Allows customers to embed requests to their own LLMs directly into their Designer workflows, helping to solve advanced data problems such as mapping data from one format to another and cleaning up messy categorical data. |
Is PII used in the training or operation of this model? | PII data is not used in the operation of this model. |
Base Model | Customers can use any model they choose from a list of supported model providers |
Model Type | LLM (Large Language Model) |
Model Customization | No |
Third-Party LLM Responsibility | |
To the extent that this product or feature utilizes a third-party LLM, please refer to the respective provider’s documentation for information on their data handling practices. This document describes how Alteryx’s product interacts with and uses the LLM, but the model’s management of data is governed by the third-party provider. | |
Transparency and Explainability | |
Model Outputs Explained | Yes |
Human Agency and Oversight | |
Is the Feature Optional? | Yes |
Human in the Loop? | Yes |
Trust and Accountability | |
Base model trained with customer data? | No |
Training data anonymized? | This is managed by the model provider chosen by the customer and may vary based on which model provider is selected. |
Customer data shared with model vendor? | This is managed by users who can elect to send raw data or metadata to the model. No data is required to be shared with the model vendor. |
Data deletion? | Gen AI Tools does not store any data sent to the LLM or any responses received from the LLM. That data only exists at rest in the user’s workflow on their machine. Users can choose to delete data from their local machine. |
Data retention? | Prompts and generated content are retained by the model vendor for 72 hours; this period can vary depending on the LLM provider chosen by the customer. No data is retained by Alteryx. |
Data processing location? | Data is processed in the region where the Analytics Cloud environment is deployed (e.g., US East, EU Central). |
Data storage details? | Gen AI Tools does not store any data sent to the LLM or any responses received from the LLM. That data only exists at rest in the user’s workflow on their machine. LLM connection information and credentials are stored in AAC. |
Data encrypted in transit and at rest? | Yes |
Reliability and Safety | |
Logging and Auditing Mechanisms Available? | Yes |
Guardrails? | Yes |
Impact Assessment Conducted? | Yes |
Compliant with Applicable Regulations? | Yes |
Input/Output Consistency? | Yes |
Fairness and Inclusivity | |
Data Sources? | The feature compiles prompts for the LLM using user-provided inputs and metadata from the uploaded data, such as column names and sample values. See ‘Customer data shared with model vendor’ for more details. |
Bias detection and mitigation in place? | Yes |
Empower Social Good | |
Designed for Ethical Use? | Yes |
What is the AI feature, and what is its intended use and purpose?
The Gen AI Tools Palette enables customers to connect to a large language model (LLM) directly within a Designer workflow. This allows users to quickly prepare and transform data with advanced capabilities such as auto-mapping different datasets, identifying variations of the same value in a column, and creating streamlined workflows with minimal manual effort.
What data does the AI system require?
Personally Identifiable Information (PII) is not used in the operation of these models. Only metadata, such as column names and sample values, or the prompts provided by the customer, are sent to the model. Raw underlying data is not included. The shared data is only used during the operation of the feature and is not retained for future fine-tuning. Customers do not need to provide any additional information.
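The metadata-only pattern described above can be sketched as follows. This is an illustrative assumption of how such a prompt might be compiled, not Alteryx's actual implementation; the function name and prompt layout are invented for the example:

```python
# Illustrative sketch only -- Alteryx has not published its internal prompt
# format. This shows the general pattern the fact sheet describes: only
# metadata (column names and a few sample values) is sent, never full raw rows.

def build_metadata_prompt(columns, rows, task, sample_size=3):
    """Compile an LLM prompt from column names and a few sample values.

    `columns` is a list of column names; `rows` is the raw data as a list of
    dicts. Only the first `sample_size` values per column are included, so the
    bulk of the underlying data never leaves the user's machine.
    """
    lines = [f"Task: {task}", "Columns and sample values:"]
    for col in columns:
        samples = [str(r[col]) for r in rows[:sample_size]]
        lines.append(f"- {col}: {', '.join(samples)}")
    return "\n".join(lines)

rows = [
    {"cust_id": 101, "country": "US"},
    {"cust_id": 102, "country": "DE"},
]
prompt = build_metadata_prompt(
    ["cust_id", "country"], rows,
    task="Map these columns to the target schema",
)
```

The resulting prompt contains only the task description, column names, and a handful of sample values, matching the data-sharing behavior described above.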
Can users disable the AI features?
Yes, the Gen AI Tools Palette is an optional feature that can be disabled by the Organization Admin. It is provided as a separate installer package, similar to the Alteryx Intelligence Suite, giving customers full control over enabling or disabling its functionality.
How is the data processed and stored as it flows through the AI system?
Gen AI Tools do not store any data sent to the LLM or the responses received from the LLM. Data only exists within the user’s workflow on their machine. LLM connection information and credentials are securely stored in Alteryx Analytics Cloud (AAC). Data processing occurs in the region where the AAC environment is deployed (e.g., US East, EU Central).
What encryption methods are used to protect data at rest and in transit?
What testing and validation are performed throughout the AI model’s lifecycle?
Gen AI Tools are designed to connect customers to their own provisioned AI models. Lifecycle management, testing, and validation of those models are managed by the customer. Alteryx verifies tool compatibility and thoroughly tests the integration to ensure its quality and reliability.
For more AI FAQs, please visit the Alteryx Artificial Intelligence FAQ page.
General Background | |
Description | Allows customers to quickly translate the findings of their Report into actions by summarizing, rephrasing, translating, or creating an executive summary. |
Is PII used in the training or operation of this model? | PII data is not used in the operation of this model. |
Base Model | Azure OpenAI – GPT-4 and GPT-3.5. |
Model Type | LLM (Large Language Model) |
Model Customization | No |
Third-Party LLM Responsibility | |
To the extent that this product or feature utilizes a third-party LLM, please refer to the respective provider’s documentation for information on their data handling practices. This document describes how Alteryx’s product interacts with and uses the LLM, but the model’s management of data is governed by the third-party provider. | |
Transparency and Explainability | |
Model Outputs Explained | Yes |
Human Agency and Oversight | |
Is the Feature Optional? | Yes |
Human in the Loop? | Yes |
Trust and Accountability | |
Base model trained with customer data? | No |
Training data anonymized? | Yes. Anonymization is managed by Azure OpenAI for the underlying GPT-4 and GPT-3.5 models. |
Customer data shared with model vendor? | On the request of the user, all content and insights displayed in the Report are shared with the model vendor to facilitate summarization, translation, or rephrasing. Raw underlying data is not included. However, if the user opts to display row-level data within the Report, such as in a table, this data will also be shared with the model vendor. |
Data deletion? | Users can choose to delete this generated content. Additionally, it is deleted when the report is deleted. All reports are removed when the user account is removed. |
Data retention? | Prompts and generated content are retained by the model vendor for 30 days to detect and mitigate abuse. |
Data processing location? | Data is processed in the region where the Analytics Cloud environment is deployed (e.g., US East, EU Central). |
Data storage details? | Data is stored within the same regional infrastructure as processing. For more details, refer to the Data Storage and Residency Documentation. |
Data encrypted in transit and at rest? | Yes |
Reliability and Safety | |
Logging and Auditing Mechanisms Available? | Yes |
Guardrails? | Yes (e.g., data curation, content filtering, bias mitigation, ethical guidelines, user feedback loops, continuous monitoring, collaborative research etc.). |
Impact Assessment Conducted? | Yes |
Compliant with Applicable Regulations? | Yes |
Input/Output Consistency? | Yes |
Fairness and Inclusivity | |
Data Sources? | The feature compiles prompts for the LLM using user-provided inputs and metadata from the uploaded data, such as column names and sample values. See ‘Customer data shared with model vendor’ for more details. |
Bias detection and mitigation in place? | Yes |
Empower Social Good | |
Designed for Ethical Use? | Yes |
What is the AI feature, and what is its intended use and purpose?
The AI feature allows users to quickly translate the findings of a report into actionable insights by generating an automated Executive Summary, rephrasing content, or translating text into another language.
What data does the AI system require?
Selected content and insights displayed in the report are shared with the model vendor to enable summarization, translation, or rephrasing. Raw underlying data is not included. However, if users choose to display row-level data within the report (e.g., in a table), this data will also be shared with the model vendor. The shared data is only used during the operation of the feature and is not retained for future fine-tuning.
Can users disable the AI features?
Yes, the feature is optional and can be disabled by the Organization Admin via a toggle in the Admin Portal.
How is the data processed and stored as it flows through the AI system?
When a user selects a portion or the entire report for summarization, rephrasing, or translation, the system generates a prompt based on the user’s input and the desired action. This prompt is sent to Azure’s OpenAI service, where the model processes it. The response is returned to Auto Insights and displayed to the user, who can choose to replace the original text or insert the AI-generated text into the report. Prompts and generated content are retained for 30 days for monitoring purposes.
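The request/response loop above can be sketched roughly as follows. The function names and prompt wording are assumptions for illustration; the actual Auto Insights prompts and internals are not published:

```python
# Hedged sketch of the flow described above: the user's selection and chosen
# action become a prompt, and the user decides what to do with the response
# (human in the loop). Names and prompt text are illustrative, not Alteryx's.

ACTIONS = {
    "summarize": "Write an executive summary of the following report content.",
    "rephrase": "Rephrase the following report content.",
    "translate": "Translate the following report content into {language}.",
}

def build_report_prompt(selected_content, action, language=None):
    """Turn the user's selection and chosen action into an LLM prompt."""
    instruction = ACTIONS[action]
    if action == "translate":
        instruction = instruction.format(language=language)
    return f"{instruction}\n\n{selected_content}"

def apply_response(report_text, selection, generated, mode):
    """Let the user replace the selection or insert the generated text after it."""
    if mode == "replace":
        return report_text.replace(selection, generated, 1)
    return report_text.replace(selection, selection + "\n" + generated, 1)
```

In the real feature the prompt is sent to Azure's OpenAI service and the response is shown to the user before any change is applied; the sketch only models the prompt construction and the user's accept step.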
What encryption methods are used to protect data at rest and in transit?
What testing and validation are performed throughout the AI model’s lifecycle?
Logging, auditing, and bias mitigation processes are in place. Our team conducts rigorous manual testing across various scenarios and datasets to ensure quality and consistency. When a new model is available, extensive testing is performed, and upgrades are implemented only if the model meets our quality standards.
For more AI FAQs, please visit the Alteryx Artificial Intelligence FAQ page.
General Background | |
Description | Supports customers in identifying high-value analytics use cases tailored to their specific business, role, or problem, and creates a synthetic dataset to match the use case for building a proof-of-concept Mission. |
Is PII used in the training or operation of this model? | PII data is not used in the operation of this model. |
Base Model | Azure OpenAI – GPT-4 and GPT-3.5. |
Model Type | LLM (Large Language Model) |
Model Customization | No |
Third-Party LLM Responsibility | |
To the extent that this product or feature utilizes a third-party LLM, please refer to the respective provider’s documentation for information on their data handling practices. This document describes how Alteryx’s product interacts with and uses the LLM, but the model’s management of data is governed by the third-party provider. | |
Transparency and Explainability | |
Model Outputs Explained | Yes |
Human Agency and Oversight | |
Is the Feature Optional? | Yes |
Human in the Loop? | Yes |
Trust and Accountability | |
Base model trained with customer data? | No |
Training data anonymized? | Yes. Anonymization is managed by Azure OpenAI for the underlying GPT-4 and GPT-3.5 models. |
Customer data shared with model vendor? | Yes, only metadata such as column names and representative sample values are shared with the model vendor. Raw underlying data is not shared. |
Data deletion? | Playbooks stores the content generated by the LLM. Users can choose to delete this generated content. Additionally, it is deleted when the user account is deleted. |
Data retention? | Prompts and generated content are retained by the model vendor for 30 days to detect and mitigate abuse. |
Data processing location? | Data is processed in the region where the Analytics Cloud environment is deployed (e.g., US East, EU Central). |
Data storage details? | Data is stored within the same regional infrastructure as processing. For more details, refer to the Data Storage and Residency Documentation. |
Data encrypted in transit and at rest? | Yes |
Reliability and Safety | |
Logging and Auditing Mechanisms Available? | Yes |
Guardrails? | Yes (e.g., data curation, content filtering, bias mitigation, ethical guidelines, user feedback loops, continuous monitoring, collaborative research etc.). |
Impact Assessment Conducted? | Yes |
Compliant with Applicable Regulations? | Yes |
Input/Output Consistency? | Yes |
Fairness and Inclusivity | |
Data Sources? | The feature compiles prompts for the LLM using user-provided inputs and metadata from the uploaded data, such as column names and sample values. See ‘Customer data shared with model vendor’ for more details. |
Bias detection and mitigation in place? | Yes |
Empower Social Good | |
Designed for Ethical Use? | Yes |
What is the AI feature, and what is its intended use and purpose?
The feature supports customers in identifying high-value analytics use cases and generating synthetic data, Missions, and reports tailored to their needs in minutes, allowing users to easily see how Auto Insights can help them on their data journey.
What data does the AI system require?
PII is not used in the operation of these models; only metadata (e.g., column names and sample values) or the prompt that the customer inputs is shared with the vendor. Raw underlying data is not included. The shared data is only used during the operation of the feature and is not retained for future fine-tuning. Customers do not need to provide any additional information.
Can users disable the AI features?
Yes, the feature is optional and can be disabled by the Organization Admin via a toggle in the Admin Portal. Playbooks with your own data and Playbooks with synthetic data can be enabled or disabled individually, so customers can enable both or just one.
How is the data processed and stored as it flows through the AI system?
What encryption methods are used to protect data at rest and in transit?
What testing and validation are performed throughout the AI model’s lifecycle?
Logging, auditing, and bias mitigation processes are in place. Our team conducted months of manual testing across various scenarios and datasets, evaluating the results until the quality and consistency met our standards. When a new model becomes available, extensive testing is performed again, and we upgrade to the new model only if we are satisfied with its quality.
For more AI FAQs, please visit the Alteryx Artificial Intelligence FAQ page.
General Background | |
Description | Alteryx Copilot is an AI-powered workflow assistant that enables users to interact in a conversational way to streamline workflow creation and receive tailored recommendations using Generative AI. |
Is PII used in the training or operation of this model? | No |
Base Model | Google Gemini |
Model Type | LLM |
Model Customization | The model is provided with information about Alteryx Designer as context, including how to configure Alteryx Designer Tools. |
Third-Party LLM Responsibility | |
To the extent that this product or feature utilizes a third-party LLM, please refer to the respective provider’s documentation for information on their data handling practices. This document describes how Alteryx’s product interacts with and uses the LLM, but the model’s management of data is governed by the third-party provider. | |
Transparency and Explainability | |
Model Outputs Explained | The model will explain its thinking process and why it made certain recommendations in the chat response sent to the user. |
Human Agency and Oversight | |
Is the Feature Optional? | Yes |
Human in the Loop? | Yes, Copilot will often ask the user for confirmation or feedback before taking any action on the canvas. |
Trust and Accountability | |
Base model trained with customer data? | No |
Training data anonymized? | N/A |
Customer data shared with model vendor? | Yes, workflow metadata as well as raw chat messages sent by the user are sent to the model vendor for use in preparing the response. |
Data deletion? | Copilot stores conversation message history, including sanitized workflow information. Conversation history is retained for two years and then deleted, but users can request deletion on demand. Data is deleted when a user account is deleted. |
Data retention? | Conversation history is retained for two years and then deleted unless the user requests deletion sooner. Prompts are retained by the model vendor for 30 days to detect and mitigate abuse. |
Data processing location? | Data is processed in the region where the Analytics Cloud environment is deployed. Copilot Trials operate in the US1 AAC environment. |
Data storage details? | Metadata that helps Copilot operate is stored in the AAC Control Plane in the environment being used. Customer-provided chat messages and Copilot responses (conversation history) are stored in the AAC Data Plane associated with the workspace selected by the customer, or in the US1 AAC control plane for Copilot trials. |
Data encrypted in transit and at rest? | Yes |
Reliability and Safety | |
Logging and Auditing Mechanisms Available? | Copilot is designed to provide reasoning and recommendations in its responses to help users understand the rationale behind its actions. However, responses may vary based on the complexity or context of the user’s input. New models or updates are tested to ensure they meet quality standards before being made available to customers. |
Guardrails? | Yes |
Impact Assessment Conducted? | Yes |
Compliant with Applicable Regulations? | Yes |
Input/Output Consistency? | Yes, although because Generative AI models are nondeterministic, users should expect some variability in response to a given prompt. |
Fairness and Inclusivity | |
Data Sources? | Copilot creates prompts to send to the underlying model from information such as: the chat message entered by the user, workflow metadata sent by Designer Desktop during a chat session, conversation history, information about how to configure tools in Designer, Alteryx Help Documentation, Alteryx Community posts, and Alteryx Knowledge Base. |
Bias detection and mitigation in place? | Yes |
Empower Social Good | |
Designed for Ethical Use? | Yes |
What is the AI feature, and what is its intended use and purpose?
Alteryx Copilot is an AI-powered workflow assistant designed to help you build workflows more efficiently. You can ask Copilot questions about Designer or get assistance with adding tools to your workflow in a natural, conversational way. Copilot uses Generative AI to analyze your current workflow and provide tailored recommendations. It can even add preconfigured tools directly to the canvas. With Alteryx Copilot, you can spend less time building workflows and get to actionable insights faster.
What data does the AI system require?
To produce responses, Alteryx Copilot uses:
This data is used only during the operation of the feature and is not retained for future fine-tuning.
Can users disable the AI features?
Yes, during the Public Preview, Copilot is accessed via the Alteryx Marketplace, where users can individually download and install it. For General Availability (GA), Copilot will be included as part of a paid add-on bundle, and it will only be enabled for paying Alteryx Copilot customers.
How is the data processed and stored as it flows through the AI system?
When a customer initiates a conversation with Copilot by opening the Copilot Extension in Designer and starting a trial or connecting it to their AAC Workspace, user and conversation metadata are stored in the AAC Control Plane.
When a chat message is sent:
Messages are then stored in the appropriate AAC Data Plane. For trials, messages are stored in the AAC Control Plane. Conversation message history is deleted under the following conditions:
What encryption methods are used to protect data at rest and in transit?
What testing and validation are performed throughout the AI model’s lifecycle?
Logging, auditing, and bias mitigation processes are in place. Alteryx performs extensive manual and automated testing to compare Copilot chat responses against expected outcomes to ensure quality and consistency. When developing a new Copilot agent, the same testing process is followed, and the new agent is only released if it meets Alteryx’s quality standards.
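One minimal form of the automated comparison described above might look like the following sketch. The keyword-coverage scoring rule and the 0.8 threshold are assumptions for illustration, not Alteryx's actual test harness:

```python
# Hedged sketch: one simple way to compare a chat response against an
# expected outcome. The metric (fraction of expected keywords present)
# and the 0.8 pass threshold are illustrative assumptions.

def passes_eval(response, expected_keywords, threshold=0.8):
    """Pass if the response mentions at least `threshold` of the expected keywords."""
    hits = sum(1 for kw in expected_keywords if kw.lower() in response.lower())
    return hits / len(expected_keywords) >= threshold

# Example: a response recommending the expected Designer tools passes.
ok = passes_eval(
    "Add a Filter tool, then a Select tool to drop the extra columns.",
    ["filter", "select"],
)
```

Real evaluation of generative responses typically combines checks like this with human review, since a keyword match alone cannot judge the quality of nondeterministic output.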
For more AI FAQs, please visit the Alteryx Artificial Intelligence FAQ page.