AI Hallucinations: Corporate Risks - Lessons from Deloitte's $440K Refund Case

Published: 2025-10-26

Deloitte Australia refunded approximately AUD 440,000 due to errors in an AI-generated report. A thorough examination of the AI hallucination risks and countermeasures that Japanese companies cannot ignore.

The Collapse of Trust Caused by AI

In 2025, Deloitte’s Australian division found itself having to refund approximately AUD 440,000 to the Australian government because of errors in an AI-generated report.

The report contained citations attributed to non-existent individuals as well as fabricated references, exposing significant flaws in the deliverable of a seven-month project submitted to Australia’s Department of Employment and Workplace Relations.

This incident is far from irrelevant to Japanese companies advancing their AI adoption.

Hallucination - AI’s Dangerous “Lies”

The phenomenon where generative AI creates plausible-sounding information that differs from facts is called “hallucination.”

In Deloitte’s case, a report drafted with GPT-4o on Azure OpenAI contained more than a dozen non-existent references and footnotes, along with multiple typographical errors.

Of the 141 references, 14 were found to be erroneous, and citations in the main text had been fabricated.

The problem came to light through the observations of Christopher Rudge of the University of Sydney, whose expert scrutiny revealed that the report’s claims were not backed by appropriate evidence.

AI Litigation Cases Occurring Worldwide

US Lawyer Submits Fictitious Case Precedents

In January 2025, a US federal court recommended sanctions of approximately $15,000 against a lawyer who had included fictitious case precedents in legal filings drafted with a generative AI chatbot.

A similar incident occurred in 2023, when a New York lawyer submitted a filing that cited non-existent case law and was fined $5,000.

These cases highlight a structural problem: even lawyers at top law firms accept AI’s fluent responses without questioning them.

Air Canada’s Court Loss

At Air Canada, an AI chatbot gave a customer incorrect information about bereavement fares, and the company was ultimately ordered to pay damages.

Air Canada argued that the AI chatbot was a separate entity responsible for its own statements, but the court ruled that the company is responsible for all information on its website.

As a result, Air Canada was ordered to pay CAD 650.88 in damages, and the chatbot was taken down after the incident.

Risks Facing Japanese Companies

Courts have taken the position that if a company delegates part of its business to generative AI, the company bears responsibility for the output.

This ruling serves as an important guideline for Japanese companies deploying AI services in the market.

Anticipated Damages

Continuing to use AI without understanding hallucinations risks damaging corporate and individual trust.

Specific risks include:

Loss of Trust: Failure cases abound, such as citing AI-generated survey results in internal meeting materials only to discover that the statistics do not exist, or documents listing product names that were never real.

Economic Loss: Risks include strategic plans built on false information, reduced operational efficiency, and loss of trust from customers and business partners, leading to declining sales and a damaged brand image.

Security Breaches: When AI generates fraudulent links such as phishing URLs, or inaccurate advice about information security software, the risk of personal information leaks and malicious attacks increases.

Causes of Hallucination Occurrence

1. Training Data Issues

When the data that a generative AI model learns from contains errors, those errors surface in its output. A great deal of inaccurate information exists on the internet, and training on it leads to hallucinations.

2. Prompt Ambiguity

When a prompt is ambiguous, for example when it is unclear who a pronoun such as “he” refers to, the AI simply generates the words with the highest probability in its training data, which can produce information that departs from the facts.

3. Context-Focused Mechanisms

Because the model prioritizes a natural continuation of the context over factual accuracy, the content of a response may shift in the process of making it read smoothly.
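To see why fluency can beat accuracy, consider a toy sketch of how an autoregressive model picks its next word: it selects a probable continuation of the context, and nothing in that selection checks whether the resulting claim is true. The probability table and words below are invented purely for illustration.

```python
# Toy illustration: an autoregressive model chooses the most probable next
# word given the context; nothing in this choice verifies factual accuracy.

NEXT_WORD_PROBS = {
    ("the", "report", "was", "written", "by"): {
        "smith": 0.41,  # fluent and plausible, but possibly a fabricated author
        "an": 0.33,
        "the": 0.26,
    },
}

def greedy_next_word(context: tuple[str, ...]) -> str:
    """Return the highest-probability continuation; truth plays no role."""
    candidates = NEXT_WORD_PROBS[context]
    return max(candidates, key=candidates.get)

if __name__ == "__main__":
    print(greedy_next_word(("the", "report", "was", "written", "by")))  # -> smith
```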

Measures Japanese Companies Should Take

1. Establishing Clear Guidelines

The “AI Guidelines for Business (Version 1.1)” published by the Ministry of Internal Affairs and Communications and the Ministry of Economy, Trade and Industry in March 2025 call for the formulation, implementation, and disclosure of risk management policies.

Companies should pay attention to the following points:

  • Clarifying the purpose and scope of AI use
  • Ensuring data accuracy
  • Conducting regular risk assessments

2. Human Verification Systems

The simplest and most immediately effective method is for humans to check the correctness of AI responses and make appropriate corrections when hallucinations are found.

In addition to checks by individual employees using generative AI, double-checking by legal departments can reduce the risk of damages occurring from hallucinations.
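Human review can also be supported by a lightweight automated pre-check that flags citations not found on an approved reference list, so reviewers know where to look first. The sketch below is only illustrative; the reference entries and function name are hypothetical, and it assumes someone maintains such a list.

```python
# Minimal sketch: flag citations in an AI-drafted document that do not appear
# on a human-maintained list of approved references. This supports, but never
# replaces, review by the drafting employee and the legal department.

APPROVED_REFERENCES = {
    # Hypothetical entries; a real list would be curated by the review team.
    "Ministry of Internal Affairs and Communications, AI Guidelines for Business, Ver. 1.1 (2025)",
    "Department of Employment and Workplace Relations, Annual Report (2024)",
}

def flag_unverified_citations(cited_references: list[str]) -> list[str]:
    """Return citations that are not on the approved list and need manual checking."""
    return [ref for ref in cited_references if ref not in APPROVED_REFERENCES]

if __name__ == "__main__":
    draft_citations = [
        "Department of Employment and Workplace Relations, Annual Report (2024)",
        "Smith, J., 'Welfare Compliance Systems' (2023)",  # possibly fabricated by the model
    ]
    for ref in flag_unverified_citations(draft_citations):
        print("Needs human verification:", ref)
```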

3. Specific and Clear Prompt Design

Including conditions such as “based on facts” or “prioritize primary information” in prompts is effective for reducing misinformation. Another method is limiting the target data itself to trustworthy sources.
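As an illustration, the snippet below shows one way to bake such conditions into a prompt template before sending it to a model. It is a minimal sketch; the exact wording of the constraints and the helper function are assumptions, not a prescribed format.

```python
# Minimal sketch: build a prompt that tells the model to rely only on supplied
# sources, prioritize primary information, and admit when the answer is absent.

def build_grounded_prompt(question: str, trusted_sources: list[str]) -> str:
    source_block = "\n".join(f"- {s}" for s in trusted_sources)
    return (
        "Answer the question using ONLY the sources listed below.\n"
        "Prioritize primary information and do not add facts that are not in the sources.\n"
        "If the sources do not contain the answer, reply exactly: "
        "'Not found in the provided sources.'\n\n"
        f"Sources:\n{source_block}\n\n"
        f"Question: {question}\n"
    )

if __name__ == "__main__":
    print(build_grounded_prompt(
        "What does the company policy say about verifying AI-generated citations?",
        ["Internal AI usage policy, section 3 (hypothetical document)"],
    ))
```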

4. Utilizing RAG Technology

Using RAG (Retrieval-Augmented Generation) to ground the model’s answers in accurate information retrieved from official documents can reduce the risk of hallucinations.
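At its core, RAG retrieves relevant passages from a trusted corpus and places them in the prompt, so the model answers from supplied text rather than from memory. The sketch below uses a naive keyword-overlap retriever purely for illustration; production systems typically use embedding-based vector search, and the documents here are hypothetical.

```python
# Minimal RAG sketch: retrieve the most relevant passages from official
# documents, then assemble a prompt that asks the model to answer only from
# those passages. The retriever is a naive keyword-overlap scorer.

OFFICIAL_DOCUMENTS = {
    "travel_policy.txt": "Bereavement fares may be requested within 90 days of travel via the refunds desk.",
    "ai_usage_policy.txt": "All AI-generated reports must have citations verified by a human before release.",
}

def retrieve(query: str, top_k: int = 1) -> list[str]:
    """Score each document by how many query words it shares, highest first."""
    query_words = set(query.lower().split())
    scored = [
        (len(query_words & set(text.lower().split())), text)
        for text in OFFICIAL_DOCUMENTS.values()
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [text for score, text in scored[:top_k] if score > 0]

def build_rag_prompt(query: str) -> str:
    passages = retrieve(query)
    context = "\n".join(f"- {p}" for p in passages) or "- (no relevant passage found)"
    return (
        "Answer using only the passages below. If they are insufficient, say so.\n\n"
        f"Passages:\n{context}\n\nQuestion: {query}\n"
    )

if __name__ == "__main__":
    print(build_rag_prompt("How long after travel can a bereavement fare be requested?"))
```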

5. Thorough Employee Education

When companies use generative AI for business, they need to make sure employees are aware that hallucinations occur.

Implementing AI literacy training and ensuring employees understand AI’s limitations and appropriate usage methods is important.

Hallucinations Cannot Be Completely Prevented

Among AI researchers, it is considered extremely difficult with current technology to reduce hallucinations in large language models to zero, because of ambiguity in the training data and the models’ reliance on predicting their own next tokens.

At present, no method exists that completely eliminates hallucinations. What matters is understanding the technology’s nature, controlling the risk while maximizing the return, and approaching the problem from the standpoint of how to deal with hallucinations rather than how to remove them.

Summary: Overcoming the AI Utilization Dilemma

Deloitte’s AUD 440,000 refund case, the US lawyer sanctions, and Air Canada’s court loss - these cases demonstrate the weight of corporate responsibility in AI utilization.

Generative AI performance can be improved by increasing the amount of training data and the number of parameters, or by changing algorithms, but companies should work from the premise that hallucinations will still occur with a certain probability.

For Japanese companies to succeed in AI utilization:

  1. Correctly Recognize Risks: Acknowledge that hallucinations will occur
  2. Appropriate Governance Structure: Build clear guidelines and responsibility systems
  3. Human Oversight: Introduce mechanisms to always check AI outputs
  4. Continuous Improvement: Respond to the latest technology trends and guidelines

AI security cannot be an afterthought. From the initial stage of AI implementation, it is important to have a perspective that identifies risks and incorporates necessary guardrails into the design.

AI holds the potential to dramatically enhance corporate competitiveness, but handled incorrectly it becomes a double-edged sword that amplifies risk instead. Now is the time to determine the optimal approach to AI utilization for your company and put it into action.