Fannie Mae LL-2026-04: What the First Sector-Specific AI Governance Mandate Requires from Your Platform

By Parminder Singh

On April 8, Fannie Mae issued Lender Letter LL-2026-04, a governance framework for AI and ML in mortgage origination and servicing. It takes effect August 8, 120 days after publication. Freddie Mac has enforced similar requirements under Section 1302.8 since March 3.

Both GSEs now require approved seller/servicers to operate an auditable AI governance program. If you sell to Freddie Mac, you are already subject to these requirements; the March 3 deadline has passed.

The scope extends well beyond underwriting: document processing, fraud detection, quality control, customer communications, vendor tools. If it uses AI and touches a loan, it's in scope.

Attorney James Brody put it directly: "AI governance is not a future compliance project. It is a present-tense operational requirement."

Photo by Thomas Be on Unsplash

Mandate

Fannie Mae takes a principles-based approach. Freddie Mac is prescriptive. Attorneys advising lenders recommend building to Freddie Mac's stricter standard because it satisfies both sets of requirements. That makes Freddie Mac's Section 1302.8 the de facto compliance baseline.

The combined requirements break down into four pillars.

Pillar 1: AI inventory

Every AI and ML tool must be documented. Each entry requires:

  • Business purpose
  • System owner
  • Connection to origination or servicing activities
  • Provider (internal or vendor)

This includes vendor-provided AI tools. If your document processing vendor uses ML for data extraction, it's in your inventory.

The inventory must be producible on demand when the GSE inquires.
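As a concrete illustration, here is a minimal sketch of what one inventory entry might capture. The field names and structure are my own illustration, not anything prescribed by either GSE:

```python
from dataclasses import dataclass
from enum import Enum

class Provider(Enum):
    INTERNAL = "internal"
    VENDOR = "vendor"

@dataclass
class AIInventoryEntry:
    """One documented AI/ML tool. All field names are illustrative."""
    tool_name: str                  # e.g. a document extraction pipeline
    business_purpose: str           # why the tool exists
    system_owner: str               # accountable individual or team
    loan_activity: str              # connection to origination or servicing
    provider: Provider              # internal build or vendor product
    vendor_name: str | None = None  # required when provider is VENDOR

# Even vendor tools that use ML under the hood belong in the inventory:
entry = AIInventoryEntry(
    tool_name="Vendor document classifier",
    business_purpose="Classify incoming borrower documents",
    system_owner="Ops Engineering",
    loan_activity="origination",
    provider=Provider.VENDOR,
    vendor_name="ExampleDocAI",  # hypothetical vendor
)
```

The point of a structured record, rather than a free-form spreadsheet row, is that "producible on demand" becomes an export, not a scramble.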

Pillar 2: Risk management

Lenders must map, measure, and manage AI risks across three categories:

  • Bias and fairness: Fair lending implications of AI-driven decisions
  • Security vulnerabilities: Prompt injection, data leakage, model manipulation
  • Performance degradation: Model drift, accuracy decay, edge case failures

Risk controls must be calibrated to the company's tolerance level. Freddie Mac specifically requires segregation of duties and documented escalation paths.
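A sketch of how those three categories might be tracked in a risk register, with risks flagged against a stated tolerance level. The structure is an assumption on my part; Section 1302.8 requires the outcome, not this shape:

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskCategory(Enum):
    BIAS_FAIRNESS = "bias_and_fairness"
    SECURITY = "security_vulnerabilities"
    PERFORMANCE = "performance_degradation"

class Tolerance(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class RiskEntry:
    tool_name: str
    category: RiskCategory
    description: str
    inherent_severity: Tolerance            # severity before controls
    controls: list[str] = field(default_factory=list)
    escalation_path: str = ""               # Freddie Mac: documented escalation

def exceeds_tolerance(entry: RiskEntry, company_tolerance: Tolerance) -> bool:
    """Flag any risk whose severity exceeds the company's stated tolerance."""
    return entry.inherent_severity.value > company_tolerance.value

drift_risk = RiskEntry(
    tool_name="Fraud scoring model",
    category=RiskCategory.PERFORMANCE,
    description="Score drift as the borrower population shifts",
    inherent_severity=Tolerance.HIGH,
    controls=["monthly backtest", "champion/challenger comparison"],
    escalation_path="Model owner -> Model Risk Committee",
)
assert exceeds_tolerance(drift_risk, Tolerance.MEDIUM)
```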

Pillar 3: Governance structure

  • Designate an executive owner for AI risk
  • Review AI policies at least annually
  • Document roles, responsibilities, and escalation paths
  • Ensure transparency for personnel with AI responsibilities
  • Comply with 36-hour incident notification requirements for AI-related incidents

Pillar 4: Audit-ready documentation

This is where the mandate gets teeth. Lenders must:

  • Demonstrate compliance and operational controls on demand
  • Maintain audit trails for AI-assisted decisions
  • Disclose types of tools in use, their providers, and safeguards upon GSE inquiry
  • Prove that vendor AI usage is supervised and compliant

One critical detail: lenders are liable for AI mistakes by subcontractors and vendors. Your obligation to supervise vendor AI tools persists regardless of the vendor's SOC 2 status. I wrote about why vendor due diligence alone fails to satisfy due care - the same principle applies here.

Compliance gap

Most lenders cannot satisfy any of these four pillars today.

The inventory problem. AI usage is scattered across vendor SaaS, internal copilots, analyst workflows, and customer-facing chatbots. The typical "inventory" is a spreadsheet maintained by someone in compliance who asks teams to self-report quarterly. That inventory is unauditable, incomplete, and blind to every vendor tool that uses AI under the hood. I covered this visibility gap in detail in Shadow AI to $670,000 Blind Spot.

The telemetry problem. AI API calls produce zero security-relevant events in most SIEM deployments. When an underwriter uses an AI assistant to summarize a loan file, the interaction is invisible to your security and compliance infrastructure. Who made the request, what data was in the prompt, what the model returned - all of it goes unrecorded.
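Closing that gap means emitting one security-relevant event per AI API call. A minimal sketch of what such an event could look like, assuming the event schema and the `contains_npi` classifier stub are both hypothetical:

```python
import json
import logging
import time
import uuid

siem = logging.getLogger("ai_telemetry")  # wired to your SIEM pipeline in practice

def contains_npi(text: str) -> bool:
    # Placeholder: a real deployment needs a proper NPI/PII classifier.
    return "ssn" in text.lower()

def log_ai_interaction(user_id: str, loan_id: str, model: str,
                       prompt: str, response: str) -> None:
    """Emit one structured event per AI API call.

    Most SIEM deployments see none of this today. The schema below is
    illustrative, not a standard."""
    siem.info(json.dumps({
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "user_id": user_id,               # who made the request
        "loan_id": loan_id,               # which loan file it touched
        "model": model,                   # which model served it
        "prompt_contains_npi": contains_npi(prompt),
        "prompt_chars": len(prompt),      # record size, not raw borrower data
        "response_chars": len(response),
    }))
```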

The vendor problem. Lenders use AI-powered tools for document classification, income verification, fraud scoring, and borrower communications. These tools call models on the lender's behalf, often using the lender's data. The lender is blind to what the model sees, what it returns, and whether the interaction complies with fair lending requirements.

The audit trail problem. When the GSE asks "show me every AI-assisted decision on this loan file," most organizations are stuck. The record linking a specific AI interaction to a specific loan, a specific user, and a specific policy outcome simply does not exist.
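What that missing record could look like, sketched as a single append-only row. Every field name here is hypothetical:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class AIDecisionRecord:
    """Links one AI interaction to a loan, a user, and a policy outcome."""
    interaction_id: str   # unique id of the AI request/response pair
    loan_id: str          # the loan file the interaction touched
    user_id: str          # authenticated identity of the requester
    tool_name: str        # which inventoried tool was used
    policy_id: str        # which policy was evaluated
    policy_outcome: str   # "allowed", "redacted", or "blocked"
    recorded_at: datetime

record = AIDecisionRecord(
    interaction_id="a1b2c3",
    loan_id="LN-2026-001947",
    user_id="underwriter.jdoe",
    tool_name="Loan file summarizer",
    policy_id="npi-redaction-v3",
    policy_outcome="redacted",
    recorded_at=datetime.now(timezone.utc),
)
```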

Mandate vs. Compliance

LL-2026-04 is principles-based and technology-agnostic. It requires policies, training, designated oversight, vendor accountability, and disclosure on demand. You could read it and conclude that a governance PDF, an annual review, and a spreadsheet inventory will get you through.

That reading is technically correct until the GSE shows up.

Disclosure test

Both GSEs expect lenders to "quickly disclose the types of tools in use, their providers, and the safeguards put in place to mitigate risks" upon inquiry. Freddie Mac's Section 1302.8 goes further: produce evidence demonstrating all governance elements during a review.

That is a live audit. The GSE will ask questions like:

  • Which AI tools touched this loan file?
  • Who used them, and when?
  • What data was in the prompt? Was borrower NPI involved?
  • What safeguards prevented misuse?
  • Can you prove those safeguards were active at the time of the interaction?

A policy document and a quarterly self-reported spreadsheet will fail this test. You have a policy that says "employees must use AI responsibly." You have zero evidence that they did.
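Passing the test means answering those questions from data. Reusing the hypothetical `AIDecisionRecord` sketched earlier, the first two audit questions reduce to a few lines:

```python
from datetime import datetime

def tools_that_touched(records, loan_id: str) -> set[str]:
    """Which AI tools touched this loan file?"""
    return {r.tool_name for r in records if r.loan_id == loan_id}

def who_and_when(records, loan_id: str) -> list[tuple[str, datetime]]:
    """Who used them, and when?"""
    return sorted((r.user_id, r.recorded_at)
                  for r in records if r.loan_id == loan_id)
```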

Vendor liability

The mandate holds lenders responsible for AI mistakes made by subcontractors and vendors. That means when your document processing vendor uses ML to extract borrower data, or your QC vendor uses AI to flag loan defects, you own the compliance outcome.

Ask yourself: can you tell the GSE exactly which AI models your vendors ran against your borrower data last quarter? Can you show what data went in and what came out? Most lenders are unable to answer either question. Few vendors provide that level of transparency today. The liability is yours regardless.
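If vendor interactions were logged the same way your own were, the quarterly question would be answerable mechanically. A sketch, assuming a hypothetical log of vendor model calls:

```python
from collections import Counter
from datetime import datetime

def vendor_models_last_quarter(vendor_logs: list[dict],
                               start: datetime, end: datetime) -> Counter:
    """Count (vendor, model) pairs run against borrower data in a window.

    Each log entry is a hypothetical dict, e.g.:
    {"vendor": "ExampleDocAI", "model": "doc-extract-v2",
     "timestamp": datetime(2026, 2, 14)}
    """
    return Counter((log["vendor"], log["model"])
                   for log in vendor_logs
                   if start <= log["timestamp"] < end)
```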

Implementation gap

The mandate requires policies, oversight, and disclosure on demand. The how is left entirely to you. That gap is where organizations get exposed.

Satisfying the letter of the mandate is straightforward: write a policy, designate an overseer, train your staff. Satisfying the spirit of it, and surviving an actual GSE review, requires the ability to:

  • Know every AI tool in your environment, including the ones your vendors run on your behalf, continuously and automatically
  • Tie every AI interaction to a specific user identity, so you can answer "who did what, when" for any loan file
  • Classify the data flowing through AI systems in real time, so you know when borrower NPI or PII is involved
  • Enforce your policies at the point of use, so compliance is a guardrail, not an honor system (sketched below)
  • Produce audit-ready evidence on demand, without a two-week forensic investigation

None of this is explicitly mandated. All of it is implicitly required the moment someone asks you to prove compliance. The mandate gives you the "what." The infrastructure gives you the "how." I walked through what that infrastructure looks like architecturally in Securing the Inference Lifecycle.
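A minimal sketch of what enforcement at the point of use could look like: a check that runs before the prompt ever reaches the model, rather than a policy PDF that hopes for the best. All names and patterns here are illustrative:

```python
import re

NPI_PATTERNS = [re.compile(r"\b\d{3}-\d{2}-\d{4}\b")]  # SSN-shaped strings

class PolicyViolation(Exception):
    """Raised when a request violates AI usage policy."""

def enforce_before_send(tool_name: str, prompt: str,
                        approved_tools: set[str]) -> str:
    """Guardrail at the point of use: block uninventoried tools, redact NPI.

    Returns the prompt that may actually be sent to the model."""
    if tool_name not in approved_tools:
        raise PolicyViolation(f"{tool_name} is not in the AI inventory")
    for pattern in NPI_PATTERNS:   # real systems use a proper classifier
        prompt = pattern.sub("[REDACTED-NPI]", prompt)
    return prompt
```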

DeepInspect

Building this capability in-house is a platform engineering program. AI traffic visibility, identity integration, data classification, policy enforcement, audit logging, vendor oversight. Each one is a project. Together, they are a multi-quarter initiative requiring dedicated headcount and infrastructure investment.

For an August 8 deadline, that timeline is already gone.

This is the problem DeepInspect was built to solve: a model-agnostic AI control plane for regulated environments that enforces usage policies in real time and produces audit-ready evidence for every AI decision - exactly what LL-2026-04 and every regulation after it will ask for.

If you are facing the August deadline, get in touch.

Beyond mortgage lending

The Fannie Mae and Freddie Mac mandates are the first sector-specific AI governance requirements with hard compliance deadlines. They will not be the last.

  • EU AI Act: High-risk system requirements take effect August 2, 2026. Credit scoring is classified as high-risk. The traceability and documentation requirements mirror what the GSEs are asking for.
  • Texas TRAIGA: The Responsible AI Governance Act took effect January 1, 2026. It regulates AI use with civil penalties and AG enforcement.
  • California AI Transparency Act: Effective January 1, 2026. Mandates disclosure for AI systems with 1M+ monthly users.
  • U.S. Treasury: Released an AI Risk Management Framework for financial services, with additional guidance on governance, data integrity, fraud, and operational resilience expected to follow.

The infrastructure LL-2026-04 demands is the same infrastructure every one of these frameworks will require. DeepInspect covers all of them today.