When AI Goes Wrong: Compliance Lessons from Air Canada’s Chatbot Turbulence


As artificial intelligence (AI) reshapes our interactions with the world, Air Canada’s chatbot saga is a stark reminder that we need to ensure the application of AI is safe, secure, and free of hallucinations.

The Air Canada AI chatbot incident began innocuously enough. 

Jake Moffatt, a British Columbia resident, contacted Air Canada via its online chatbot to inquire about a bereavement fare. The chatbot’s responses led Moffatt to believe he could apply for a bereavement refund after completing his travel.

So far, so normal. But the information the chatbot gave was a hallucination: it didn’t align with Air Canada’s actual policies.

Faced with a claim before British Columbia’s Civil Resolution Tribunal, Air Canada presented a defense that seemed to border on the surreal: the chatbot, it argued, was a “separate legal entity” responsible for its own actions.

This submission not only challenged the conventional understanding of AI but also raised critical questions about the extent of a company’s responsibility for its automated systems.

The Air Canada situation suggests that the veil of AI cannot be used as a shield against corporate responsibility. 

Tribunal member Christopher Rivers, who decided the case in Moffatt’s favor, wrote: “While a chatbot has an interactive component, it is still just a part of Air Canada’s website. It should be obvious to Air Canada that it is responsible for all the information on its website.”

Given this, it should come as little surprise that compliance professionals are wary when assessing AI tools – even under pressure from senior leadership to embrace them.

But while the Air Canada incident serves as a cautionary tale about generic AI systems, compliance professionals shouldn’t read this story and conclude that AI is inherently risky.

Where Air Canada’s AI Went Wrong

Anyone who has played around with ChatGPT or Google’s Bard knows that hallucinations are common. 

Ask it for statistics or quotes and it may make them up. Ask it to create a policy document aligned with new legislation and the result can be woefully inaccurate.

In compliance, this is exactly what we need to guard against. When considering AI to speed up processes and give employees guidance on company policies, we need to make sure it overcomes these challenges.

Ira Parghi, a lawyer with expertise in information and AI law, sums it up succinctly, telling Canada’s CTV News: “If an organization or a company decides to go down that road [using AI], it has to get it right.”

Compliance AI is Here – What to Look Out For

So how do we get it right? 

The answer to avoiding AI missteps like those seen in the Air Canada incident lies in finding AI solutions designed specifically with compliance in mind, like BRYTER’s Policy AI.

Compliance teams have unique and specific needs; a simple “ChatGPT for Compliance” isn’t enough. Any solution they adopt needs to ensure the following:

  1. Applicability: The AI needs to understand the context of your organization: what are the hierarchies and applicability rules of your policies? What types of compliance documents do you use? How are different company entities and operating geographies connected?
  2. Context: The AI needs to understand your organization-specific conventions like abbreviations.
  3. Consistency: For compliance, it is important that the same questions are answered in the same way – every time. You need an AI solution that can guarantee this with certain pre-defined responses.
  4. Risk: A chatbot that operates “in the wild” may create more areas of risk than it mitigates. Compliance teams need to be alerted to specific high-risk or harmful topics immediately.
  5. No hallucinations: As Air Canada has shown us, you need to ensure that your AI is trained on your company policies and regulations and will not give answers unless the information is found in these documents (a minimal sketch of this grounding-with-refusal idea follows this list).

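To make a few of these requirements concrete, here is a minimal sketch of how grounded answering with refusal might work. It is illustrative only: the names (PolicyStore-style retrieval, generate_answer, the pre-defined answers and keyword list) are hypothetical assumptions, not a description of how Policy AI or any particular product is built.

```python
# Illustrative sketch only: a grounded Q&A flow that returns pre-defined answers
# for known questions (consistency), flags high-risk topics (risk), and refuses
# to answer when no supporting policy text is found (no hallucinations).
# All names here (policy_store.retrieve, llm.generate_answer) are hypothetical.

from dataclasses import dataclass

# Questions that must always receive the same approved wording (consistency).
PREDEFINED_ANSWERS = {
    "can i accept gifts from suppliers?":
        "Gifts above 50 EUR require prior approval. See the Gifts & Hospitality Policy.",
}

# Topics that should alert the compliance team immediately (risk).
HIGH_RISK_KEYWORDS = {"bribery", "sanctions", "harassment", "insider trading"}


@dataclass
class Answer:
    text: str
    sources: list          # policy passages the answer is based on
    escalated: bool = False


def answer_policy_question(question: str, policy_store, llm) -> Answer:
    """Answer only from retrieved policy text; otherwise refuse."""
    normalized = question.strip().lower()

    # 1. Consistency: return the approved wording for known questions.
    if normalized in PREDEFINED_ANSWERS:
        return Answer(PREDEFINED_ANSWERS[normalized], sources=["predefined"])

    # 2. Risk: flag high-risk topics for immediate human review.
    escalated = any(kw in normalized for kw in HIGH_RISK_KEYWORDS)

    # 3. No hallucinations: only answer if relevant policy passages exist.
    passages = policy_store.retrieve(question, top_k=3)  # hypothetical retrieval API
    if not passages:
        return Answer(
            "I can't find this in our policies. Please contact the compliance team.",
            sources=[],
            escalated=escalated,
        )

    # 4. Generate an answer constrained to the retrieved passages, keeping the
    #    sources so the reader can verify the answer against them.
    text = llm.generate_answer(question, context=passages)  # hypothetical LLM call
    return Answer(text, sources=passages, escalated=escalated)
```

The essential design choice is the refusal path: if no supporting passage is found, the system says so rather than improvising – which is precisely the failure mode in the Air Canada case.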
So, to avoid falling into the same trap as Air Canada with a generic chatbot, make sure any AI solution adopted by your compliance team meets these needs.

BRYTER’s Policy AI, for example, takes the power of generative AI and makes it safe for compliance teams by including Safeguard Agents that manage each of those issues: applicability, context, consistency, risk, and hallucinations. It also automatically links answers to source materials so they can be verified.

Equally, the success of an AI system in compliance hinges on its ability to learn and improve. When assessing tools, ensure they incorporate a feedback loop, allowing you to tell the system when answers to compliance questions don’t align with your expectations. 
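As a rough illustration of what such a feedback loop could look like, the sketch below records a reviewer’s verdict against a logged answer and surfaces the answers flagged as wrong. The record structure and in-memory storage are assumptions for the sake of the example, not a description of any specific tool.

```python
# Illustrative sketch of a feedback loop: reviewers mark answers as correct or
# not, and flagged answers become a work queue for fixing policies or prompts.
# The storage layer (an in-memory list) is a placeholder assumption.

from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class Feedback:
    question: str
    answer: str
    correct: bool
    reviewer: str
    comment: str = ""
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


feedback_log: list[Feedback] = []


def record_feedback(question: str, answer: str, correct: bool,
                    reviewer: str, comment: str = "") -> None:
    """Store a reviewer's verdict so recurring problem areas can be spotted."""
    feedback_log.append(Feedback(question, answer, correct, reviewer, comment))


def open_issues() -> list[Feedback]:
    """Answers flagged as incorrect, i.e. candidates for policy or prompt fixes."""
    return [f for f in feedback_log if not f.correct]
```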

Data privacy and security are also critical. The policies and data used to train and operate the system need to remain within your company’s control. That way you can address concerns about confidentiality and misuse of sensitive information.
 
With those safeguards in mind, why embrace AI at all? How would it positively impact your organization?

The Benefits of Compliance AI

As organizations like Air Canada grapple with the complexities of AI, compliance-specific AIs demonstrate a path forward for forward-thinking professionals. 

To illustrate the potential of AI in Compliance, let’s explore five benefits of a tool like Policy AI.

1. Save time answering policy questions: Tools like Policy AI revolutionize how compliance questions are handled. By integrating company policies into a secure Policy Domain, they provide instant, precise answers for both employees and compliance teams – saving time and unblocking the business.

2. Detect and improve compliance gaps: A crucial aspect of compliance management is recognizing areas of weakness. Policy AI helps compliance professionals identify these gaps and assists in refining and improving compliance processes.

3. Show regulators that your processes work: Policy AI ensures meticulous record-keeping of every inquiry and response, offering a complete and detailed audit trail (see the sketch after this list).

4. Customized AI for Compliance Needs: Unlike generic AI solutions, BRYTER’s Policy AI is specifically designed to tackle the unique challenges of compliance. It only ever refers to your organization’s policies, avoiding risks similar to those experienced by Air Canada.

5. Uncompromised Confidentiality and Security: Trust and security are paramount in compliance, and Policy AI embodies this principle. Hosted on Azure cloud infrastructure in the EU, with comprehensive SOC 2 Type II and ISO 27001 certifications, it ensures the highest levels of data protection and confidentiality.
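To illustrate what an audit trail of every inquiry and response could contain, here is a minimal sketch of a single log entry. The fields shown are assumptions about what a regulator-ready record might capture, not Policy AI’s actual schema.

```python
# Illustrative sketch of an audit-trail record for one Q&A exchange.
# The fields are an assumption about what a regulator-ready log might capture.

import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class AuditRecord:
    question: str
    answer: str
    sources: list        # policy passages the answer was grounded in
    asked_by: str
    asked_at: str


def log_interaction(question: str, answer: str, sources: list, asked_by: str) -> str:
    """Serialize one inquiry/response pair as an append-only JSON line."""
    record = AuditRecord(
        question=question,
        answer=answer,
        sources=sources,
        asked_by=asked_by,
        asked_at=datetime.now(timezone.utc).isoformat(),
    )
    return json.dumps(asdict(record))
```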

Final Thoughts

The Air Canada chatbot incident is an illustration of the potential pitfalls in applying generic AI to complex fields like compliance. 

It underscores the necessity for organizations to approach AI with caution, emphasizing accuracy, reliability, and legal responsibility. 

The case serves as a learning opportunity, highlighting the importance of AI systems that are specifically designed for the unique demands of compliance, trained on the company’s policies and regulations, and equipped with compliance-specific guardrails like BRYTER’s Safeguard Agents.

As we reflect on the lessons from Air Canada’s experience, it’s clear that the future of AI in compliance is not in avoiding AI altogether but in embracing it prudently, with compliance-specific solutions.

Tools like BRYTER’s Policy AI, which are tailored to the specific needs of compliance and equipped with safeguards against inaccuracies, enable organizations to leverage the benefits of AI without increasing risk. Compliance AI tools can be a powerful asset, transforming how organizations manage compliance responsibilities. But as ever with compliance, choose wisely.

Book a personalized demo