Due Diligence is Not Due Care: The AI Compliance Gap

By Parminder Singh

Last year, researchers disclosed EchoLeak (CVE-2025-32711), a zero-click Indirect Prompt Injection in Microsoft 365 Copilot. A poisoned email forced the AI assistant to silently exfiltrate sensitive business data to an external URL. The user never saw it, never clicked a link, and never authorized the transfer, but the data left anyway.

While Microsoft provides robust platform-level guardrails, they cannot account for the specific implementation context of every enterprise. In the eyes of a regulator, the platform's security is a baseline; the ultimate responsibility lies with the deployer for what is built on top of it.

Most leaders I talk to think they are "covered" because their LLM provider is SOC 2 compliant or has signed a DPA. Under the law, however, liability remains with the deployer.

Photo by EJ Strat on Unsplash

Due diligence is not due care

There are two distinct obligations that most organizations are collapsing into one.

Due diligence is evaluating your vendor. Did they sign the DPA? Do they have a BAA? What is their data retention policy? Do they use prompts for training? This is necessary administrative work, but it is not technical compliance.

Due care is what you do after you've selected the vendor. It consists of the controls you implement, the policies you enforce, and the records you keep. Regulators don't audit your vendor selection process; they audit your active controls.

When something goes wrong with an AI system, the legal exposure lands on whoever deployed it, not whoever built the underlying model.

Framework requirements

  • GDPR Article 5: This places accountability squarely on the Data Controller. Your AI vendor is a Processor. Their compliance posture covers their obligations to you, but it does not satisfy your obligations to the data subject. You must demonstrate that you implemented appropriate technical and organizational measures independent of the processor's promises.
  • The EU AI Act: This distinguishes deliberately between Providers and Deployers. Deployers carry their own obligations: transparency, human oversight, and continuous risk management. These exist regardless of the safety features built into the model. A well-governed model from a reputable provider does not discharge your duty as a deployer.
  • HIPAA: A Business Associate Agreement (BAA) governs your vendor's liability. It does not substitute for the Technical Safeguards that apply to you directly, specifically access controls, audit controls, and transmission security. Under the Security Rule, you must demonstrate "reasonable and appropriate" safeguards. A vendor contract provides a legal shield, but it is not a technical control.

The bottom line: vendor agreements govern what your vendor must do. They say nothing about the technical controls you have in place.

Investigation questions

In 2023, Samsung engineers pasted proprietary source code into ChatGPT for debugging help. The model behaved exactly as designed and OpenAI did nothing wrong, yet Samsung suffered an intellectual property exposure event.

The internal post-mortem didn't ask "Did ChatGPT's guardrails fail?" It asked "Why did we have no controls preventing sensitive IP from reaching a public model in the first place?"

That is the question regulators ask. Not whether your vendor's product worked as advertised, but whether you had independent controls operating on your side of the API, and whether you can prove it.

EchoLeak presents the same question at scale. When an AI deployment exfiltrates data via Indirect Prompt Injection, the forensic audit will ask:

  • Did the enterprise have per-user policy enforcement on AI requests?
  • Did they have a record of what data classifications were present in that session?
  • Did they have an independent audit trail beyond what the vendor's infrastructure produced?

If the answer to any of these is "no", the exposure is yours regardless of the vendor agreement.

Controls at the AI Layer

One of my previous posts covered why model guardrails are not a security control. This is the compliance corollary: even if guardrails were 100% reliable, they still wouldn't satisfy your regulatory obligations because they are not your controls.

To satisfy Due Care, your AI layer requires a dedicated control plane that provides:

  • Identity-Aware Enforcement: Per-request policy enforcement tied to user identity and role. This is not a static service credential shared by the entire application.
  • Semantic Inspection: PII and sensitive data detection that triggers before the prompt reaches the model.
  • Model Agility: The ability to swap underlying LLMs (e.g., moving from OpenAI to a private Llama instance) without losing your security posture or rewriting your audit logic.
  • Evidence-Grade Audit Records: A tamper-proof, per-decision record of who made the request, under which policy, what data was present, and the outcome.
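The first two controls can be sketched in a few lines. This is a toy illustration, not DeepInspect's implementation: the names (`DETECTORS`, `ROLE_POLICY`, `enforce`) are hypothetical, and the regexes stand in for a real semantic classifier.

```python
import re
from dataclasses import dataclass

# Hypothetical detectors: regexes standing in for a real semantic
# PII/sensitive-data classifier. Keys are classification labels.
DETECTORS = {
    "pii:email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "pii:ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

# Hypothetical per-role policy: which classifications a role may
# send to the model at all.
ROLE_POLICY = {
    "support_agent": {"pii:email"},  # cleared for customer emails
    "contractor": set(),             # no sensitive data permitted
}

@dataclass
class Decision:
    user: str
    allowed: bool
    findings: list

def inspect(prompt: str) -> set:
    """Semantic inspection: classification labels found in the prompt."""
    return {label for label, rx in DETECTORS.items() if rx.search(prompt)}

def enforce(user: str, role: str, prompt: str) -> Decision:
    """Identity-aware check run BEFORE the prompt reaches the model.

    The request is allowed only if every detected classification is
    permitted for this specific user's role, not for a shared
    service credential.
    """
    findings = inspect(prompt)
    permitted = ROLE_POLICY.get(role, set())
    return Decision(user=user, allowed=findings <= permitted,
                    findings=sorted(findings))
```

The key design point is that the decision is computed per request and per identity, so the same prompt can be allowed for one user and blocked for another.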

A per-decision audit record proves authorization and policy compliance in a form that doesn't require forensic reconstruction after an incident. And it's the only thing that survives an investigation.
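One common way to make such records tamper-evident is to hash-chain them: each record embeds the hash of its predecessor, so editing any historical entry invalidates every later hash. A minimal sketch, assuming hypothetical `append_record` and `verify_chain` helpers:

```python
import hashlib
import json
import time

def append_record(log, *, user, policy, classifications, outcome):
    """Append a per-decision audit record chained to its predecessor."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {
        "ts": time.time(),
        "user": user,                        # who made the request
        "policy": policy,                    # under which policy
        "classifications": classifications,  # what data was present
        "outcome": outcome,                  # allowed / blocked
        "prev": prev_hash,
    }
    # Hash over a canonical serialization so verification is deterministic.
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    log.append(body)
    return body

def verify_chain(log):
    """Recompute every hash; any edit to a past record is detected."""
    prev = "0" * 64
    for rec in log:
        body = {k: v for k, v in rec.items() if k != "hash"}
        if body["prev"] != prev:
            return False
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if digest != rec["hash"]:
            return False
        prev = rec["hash"]
    return True
```

In production this chain would be anchored outside the application (for example, in write-once storage), but even the sketch shows the property an auditor cares about: the record of who, under which policy, with what data, and with what outcome cannot be quietly rewritten after the fact.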

DeepInspect is a model-agnostic AI control plane for regulated environments. We provide the "Due Care" layer that vendor agreements ignore. By decoupling your security policy from the underlying LLM, DeepInspect ensures your compliance remains consistent even as your AI stack evolves. We enforce per-request usage policies and generate the audit records you need to prove you are in control on your side of the line.