Who Is Liable In A Workplace Deepfake Fraud Incident?

In January 2024, a finance employee at Arup, the renowned British engineering firm, received an email purportedly from the company's chief financial officer based in the United Kingdom. The message requested a confidential transaction requiring urgent execution. The employee initially suspected the communication was a phishing attempt. But then came the video conference call.

The call participants included the CFO and several senior colleagues. They looked authentic. They sounded authentic. Their facial movements synchronised with speech; body language appeared natural. The employee recognised these faces and heard familiar voices discussing the transaction in measured, professional terms. Everything satisfied the verification instinct that security experts recommend when something seems suspicious: see and hear the person making the request.

The employee authorised 15 transfers totalling HK$200 million to five local bank accounts. It was only upon checking back with the head office that the fraud came to light: every single participant on that video call was a deepfake. The CFO did not exist on that screen. Nor did the colleagues. Fraudsters had downloaded videos of genuine individuals and used artificial intelligence to synthesise realistic voices and facial movements, creating deepfakes convincing enough to fool an employee actively verifying the request's legitimacy.

Hong Kong police classified the incident as "obtaining property by deception." But beyond criminal charges against unidentified perpetrators overseas, the case raised complex questions about civil liability for deepfake fraud and whether negligence law can attribute fault to employees, employers, or both.

Was the employee liable in negligence?

The starting point for employee liability is surprisingly forgiving. Courts recognise that fraud comes in degrees, and negligence assessment depends on the circumstances of the particular deception rather than the employee's actions alone.

Slight negligence typically does not trigger employee liability. After all, everyone makes mistakes. An employee who checks an email address against the CEO's real address before forwarding a document has performed some diligence, even if the address was spoofed. Moreover, without specific anti-deepfake training, employees might reasonably not take further verification steps.

Moderate negligence occurs when an employee fails to act with reasonable care, thereby breaching their duty and causing harm or loss to another. For instance, an employee who ignores the company's existing multi-level authorisation process when receiving an urgent voice call, despite knowing such procedures exist, engages in moderate negligence. Partial liability may attach depending on the circumstances.

Gross negligence involves significant disregard for required care, such as when an employee trained explicitly on deepfake risks and provided clear verification protocols deliberately disregards these procedures to process a large transfer. This triggers full liability exposure.

The crucial variable is the sophistication of the deepfake. A poorly made deepfake with obvious flaws arguably places reasonable employees on notice. But a highly sophisticated deepfake that is nearly indistinguishable from reality, one that synthesises voice characteristics, speech patterns, facial expressions, and body language with sufficient fidelity to fool careful verification attempts, makes it much harder to prove an employee breached their duty of care.

Could the employer be liable in a fraud case involving deepfakes?

Employers bear primary responsibility for preventing deepfake fraud through multi-layered measures. This includes employee training to raise awareness of the existence and risks of deepfakes, clear guidelines encouraging verification through known communication channels, and technological controls to detect synthetic media.

Employers who fail to warn employees about the risks of business email compromise, or who neglect necessary safety precautions, may face liability, including claims for breach of directors' duties.

What about the liability of the fraudsters?

Civil fraud requires proving, on the balance of probabilities:

  • intentional misrepresentation
  • reliance on that misrepresentation, and
  • resulting loss.

The deepfake fraudsters clearly intended deception: they used AI technology to impersonate the CFO and colleagues. The employee relied on the deepfake's authenticity. And a loss resulted, in the form of HK$200 million transferred.

Almost two years after the fraud was committed, the perpetrators remain unidentified. No arrests have been announced in the Arup investigation. However, Hong Kong police arrested eight individuals in April 2025 for related deepfake-enabled fraud schemes involving the use of lost identity cards to open bank accounts. Fraudsters typically operate overseas, making enforcement of civil judgments difficult even if perpetrators are identified. Tracing such highly organised criminals, who often operate in gangs, requires exceptional cooperation among international agencies, robust forensic practices to trace financial trails, and dedicated reporting channels.

Practical Implications

The Arup case shows that deepfake fraud has moved from a theoretical risk to a concrete business threat. The sophistication of the attack far exceeds that of earlier generations of fraud. It also suggests that traditional verification procedures, whilst valuable, offer incomplete protection.

Companies must recognise that the duty to prevent fraud extends beyond email authentication and password protocols. Multi-layered defences, including deepfake detection technology, callback procedures using independently verified contact information, and training on the specific risks and warning signs of deepfake-enabled fraud, offer more realistic protection.

When it comes to liability, the courts currently weigh the facts of each case carefully: the sophistication of the deepfake, the employee's training, the employer's preventive measures, and the degree of due diligence exercised. But as deepfake technology advances and becomes more commonplace, courts will likely hold employers to higher standards of prevention and employees to correspondingly higher standards of verification.

Frequently Asked Questions

What is a deepfake and how was it used in the Arup fraud?

A deepfake is synthetic media generated with artificial intelligence that manipulates visual and audio content to create convincing false representations of real people. In the Arup case, fraudsters downloaded publicly available videos of the company's CFO and colleagues, then used AI technology to synthesise their faces, voices, and body movements in a video conference call. The deepfakes were sophisticated enough to convince an employee that the call participants were genuine, leading her to authorise HK$200 million in transfers.

Can an employee be held liable for falling victim to a deepfake scam?

Employee liability depends on the degree of negligence. Slight negligence or minor oversights in verification typically do not trigger liability. However, an employee who ignores clear company protocols or who has received specific anti-deepfake training and deliberately disregards verification procedures may face partial or full liability. Critically, courts recognise that highly sophisticated deepfakes can mislead even careful employees attempting to verify legitimacy, which weighs heavily against finding liability when the deepfake is of exceptional quality.

What legal theories can be used to pursue deepfake fraudsters?

Criminal "obtaining property by deception" charges (as Hong Kong classified the Arup case) and civil fraud claims are available. Civil fraud requires proving intentional misrepresentation, reliance, and resulting loss, all elements easily established in deepfake scenarios. However, perpetrators are often unidentified and located overseas, making enforcement difficult despite clear legal theories for recovery.

What responsibilities do employers have to prevent deepfake fraud?

Employers must implement multi-layered preventive measures, including employee training on deepfake risks, clear verification protocols that encourage callbacks using independently verified contact information, and technological controls to detect synthetic media. Courts increasingly find employers negligent when they fail to warn employees about business email compromise risks or to provide necessary safety precautions. This creates dual liability exposure: potential criminal prosecution of identified perpetrators and civil negligence claims from injured parties.

How has the Arup case influenced corporate security practices?

Arup's disclosure of the incident, made to raise awareness of deepfake sophistication, signalled that traditional verification procedures, such as requesting video calls when suspicious, offer only incomplete protection. Companies now recognise the need for deepfake-specific detection technology, multi-person verification requirements for large transfers, and employee training focused specifically on the limitations of visual and audio verification in an age of synthetic media.
