HaystackID® in the EDRM Illumination Zone: Jeffrey Fleming and Aleida Gonzalez

Image: HaystackID.

[EDRM Editor’s Note: This article was first published here on February 11, 2026, and EDRM is grateful to Trusted Partner HaystackID for permission to republish.]


HaystackID Editor’s Note: Organizations deploying AI systems now face scrutiny from multiple directions: courts applying decades-old civil rights and consumer protection laws to algorithmic decisions, European regulators enforcing the AI Act, and US states advancing independent legislative frameworks with overlapping obligations. The consequences of inadequate governance are materializing in litigation, regulatory investigations, and failed customer due diligence. This article examines what governance failures look like when exposed and what operational infrastructure prevents them. Drawing on insights that HaystackID® Managing Directors Jeffrey Fleming and Aleida Gonzalez shared during the EDRM Illumination Zone podcast, the discussion explores why policy documents fail without testing protocols, how vendor liability is shifting to deployers, and what security and fairness validation must demonstrate to survive legal scrutiny. Organizations still have time to build these capabilities proactively, but that window is closing as regulatory deadlines are set and litigation expands.


Building AI Governance That Enables Confident Deployment

A recurring line of questioning has begun to surface in recent employment and civil-rights litigation: When AI is involved, can the company explain why its algorithm made the decisions it did?

In recent years, regulators and plaintiffs have challenged the use of automated hiring and screening tools that allegedly disadvantage protected classes, raising questions about how those systems were trained, tested, and monitored. In several matters, organizations have struggled to produce documentation explaining how AI-driven decisions were made or what steps were taken to evaluate bias before deployment. Those gaps have become central to many regulatory inquiries and civil claims.

AI systems that process thousands of candidate applications often operate as black boxes: tools purchased for efficiency without the governance infrastructure to explain, test, or validate their decision-making. When litigation demands documentation, the gaps can create liability exposure.

“Courts are starting to hold [companies] liable,” said Jeffrey Fleming, Managing Director, HaystackID, during a recent conversation on the EDRM Illumination Zone podcast. “You have organizations buying products that may or may not know that AI is involved. And there’s just a lot of different parts and pieces. No one knows whose fault it is, but the courts aren’t letting anyone off the hook.”

Fleming and his colleague Aleida Gonzalez, also a Managing Director at HaystackID, have watched this governance crisis unfold from the front lines. Their work involves helping organizations implement AI governance after systems are already deployed, often after liability questions have already surfaced. The patterns they’re seeing reveal why reactive compliance fails and what organizations need to do differently.

When Laws Written in 1964 Meet Algorithms Built in 2024

The litigation exposure catching organizations off guard isn’t coming from new AI-specific regulations. It’s coming from employment discrimination laws, fair lending requirements, and consumer protection statutes that have been on the books for decades.

“If you look at some of the lawsuits filed just in the United States last year and judgments being entered, it’s not even AI laws that are generally there,” Fleming said. “It’s pointed back to decades-old civil rights things that were done because of AI, hiring discrimination based on protected classes and things.”

This legal landscape creates a problem for organizations that thought they were managing AI risk through traditional contract terms and vendor due diligence. Those approaches assume you can identify and allocate risk through negotiation. But when you can’t explain what the algorithm does or why it makes specific decisions, you can’t meaningfully assess the risk you’re accepting.

“As a company, when you bring that model in, you have to own that risk and account for it,” Fleming explained. “The question becomes: how do you implement it securely enough that bad actors can’t exploit your system?”

This litigation-driven accountability in the United States contrasts sharply with the regulatory approach taking shape across the Atlantic. While US organizations face lawsuits applying decades-old statutes to new technology, European regulators have built comprehensive AI-specific legislation from the ground up, and the compliance timelines are already running.

The EU Mandate vs. The US Patchwork

While litigation drives accountability in the United States, Europe has taken a legislative approach. Upon entering into force in 2024, the EU AI Act began applying in phases, with the first obligations—such as bans on certain prohibited practices—taking effect on February 2, 2025. The Act establishes mandatory requirements with penalties that get attention: up to €35 million or 7% of global annual turnover, whichever is higher.

Gonzalez, whose background includes policy work at the Department of Defense, explained the fundamental difference between the regulatory approaches: “The key thing to remember between those two is the EU AI Act is a mandate. It’s a legal requirement, whereas the NIST AI Risk Management Framework serves as guidance.”

For organizations doing business in Europe or impacting EU citizens, the AI Act’s requirements aren’t suggestions; they’re obligations with teeth. The Act takes a risk-based approach, categorizing AI systems from prohibited to minimal risk, with the heaviest obligations falling on high-risk systems that affect employment, credit, law enforcement, and critical infrastructure.

In the United States, the regulatory landscape is more fragmented. Absent comprehensive federal legislation, states are pursuing independent approaches. Colorado’s SB 24-205 takes effect June 30, 2026, while other states are advancing their own frameworks. Despite jurisdictional variations, Gonzalez noted a pattern: “If you really look at all of these laws in general, they’re all fairly similar, and some are nearly verbatim compared to other states.”

The challenge for multi-state organizations is threading between jurisdictions without maintaining entirely separate governance programs for each state.

“In the United States, we have more flexibility because we have no standardized requirements,” Gonzalez said. “Any changes or any differences are fairly granular.”

But that flexibility comes with complexity. Organizations must track which state laws apply to which operations, monitor new legislation as it emerges, and build governance frameworks that can adapt as requirements evolve.

Both the EU’s comprehensive mandates and the United States’ fragmented state-by-state approach share a common prerequisite: organizations must know what AI systems they’re actually operating before they can assess regulatory exposure or implement controls. That seemingly simple requirement is where most governance efforts immediately hit obstacles.

What’s Actually in Your AI Portfolio?

Before organizations can manage AI risk, they must understand their complete AI footprint. Establishing that visibility proves more complex than most companies anticipate.

AI systems typically enter organizations through three distinct pathways: systems the organization develops and trains internally; AI capabilities embedded within enterprise platforms; and third-party tools procured by individual business units.

Internally developed AI provides the greatest control, enabling organizations to document training data, conduct bias testing, validate security, and explain decision-making. However, internal development accounts for only a fraction of enterprise AI deployments.

Embedded platform AI—Microsoft 365® Copilot, Salesforce Einstein, Adobe Sensei—creates visibility gaps. These capabilities activate through standard subscriptions, often without organizations realizing they’ve deployed AI systems requiring risk assessment.

Business-unit purchases pose the highest risk. Marketing, HR, and operations independently acquire AI-powered tools through standard procurement channels. Without AI-specific due diligence, these systems bypass governance entirely.

“Companies have reported IP leaks because of that risk,” Fleming said, referring to employees using consumer AI tools like ChatGPT or Claude to complete work tasks, inputting company data into systems with no enterprise controls.

Fleming’s approach to helping organizations tackle this inventory challenge starts with structured discovery: “We come in and help them get a map of what they have, where it’s located, and share potential regulatory environments you may be operating in or need to be prepared to operate in.”

That inventory becomes the foundation for risk classification, prioritization, and governance planning.
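
To make that inventory concrete, below is a minimal sketch of what a single inventory record might capture, written as a Python dataclass. The field names, the example systems, and the toy risk-tier rule are assumptions made for illustration; they are not HaystackID’s methodology or any statute’s classification scheme.

```python
# Minimal sketch of an AI inventory record; fields and the risk-tier rule
# are illustrative assumptions, not a prescribed classification scheme.
from dataclasses import dataclass, field


@dataclass
class AISystemRecord:
    name: str
    business_owner: str
    pathway: str               # "internal", "embedded-platform", or "third-party"
    vendor: str
    decision_domain: str       # e.g., "hiring", "credit", "marketing"
    jurisdictions: list[str] = field(default_factory=list)
    bias_tested: bool = False

    def risk_tier(self) -> str:
        # Toy rule: systems that make decisions about people in regulated
        # domains get prioritized for testing and documentation.
        high_impact = {"hiring", "credit", "insurance", "healthcare"}
        return "high" if self.decision_domain in high_impact else "standard"


inventory = [
    AISystemRecord("resume-screener", "HR", "third-party", "ExampleVendor",
                   "hiring", jurisdictions=["EU", "CO"]),
    AISystemRecord("meeting-summarizer", "IT", "embedded-platform", "Microsoft",
                   "productivity"),
]

# Review the highest-risk systems first.
for record in sorted(inventory, key=lambda r: r.risk_tier() != "high"):
    print(record.name, record.risk_tier(), record.jurisdictions)
```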

With visibility into their AI landscape, organizations face the next question: how do we govern these systems? This is where most fall into a familiar pattern that doesn’t work.

Policies ≠ Governance

Organizations instinctively reference policy documents when facing compliance requirements. It’s a familiar pattern: identify the requirement, draft a policy addressing it, route through legal review, publish it to the intranet, and check the compliance box.

This approach often fails for AI governance.

“Compliance is great on policy,” Fleming said. “But if you’re not testing it, auditing it, and most importantly, able to explain why your model or tool made the decision it made, it’s not true governance.”


The gap between policy and practice manifests in three critical ways:

  1. Policies describe intent; governance requires evidence of execution. When regulators or opposing counsel request documentation demonstrating that bias testing was conducted, fairness metrics were evaluated, and results were reviewed before deployment, organizations need more than policies stating testing “shall be performed.” They need test results, validation reports, and approval records.
  2. Policies treat AI as static, while real systems evolve continuously. Models get updated, training data shifts, performance drifts, and vendors release new versions. Governance must account for ongoing monitoring, change control, and re-validation triggers—none of which flows naturally from policy documents.
  3. Policies assume clarity about requirements. In AI governance, most organizations are still determining what compliance actually demands.

Gonzalez emphasized the distinction between documentation and operational compliance: “It’s one thing to check the box and say, well, okay, I have this document. This will cover me with this particular requirement. But it’s another thing when you actually use that document in practice.”

The organizations that succeed build governance into delivery workflows rather than creating parallel compliance processes that people route around when deadlines tighten.

The Vendor Due Diligence Illusion

Ten years ago, organizations started building third-party cybersecurity risk management programs. The logic was clear: if a vendor gets breached and attackers access your data through their systems, you own the consequences. As a result, organizations created security questionnaires, required attestations, and built vendor risk assessment processes.

The same logic applies to AI, but the questions are different and the stakes potentially higher.

“The same pattern applies to AI,” Fleming said. “Unless your organization developed the model, you can’t control the base algorithm, training data sources, or fine-tuning processes. You’re implementing a system whose fundamental behaviors were determined by the vendor.”

That lack of control means vendor due diligence must go deeper than traditional technology assessments.

Organizations need to ask key questions like:

  • What data was used to train this model, and where did it come from?
  • Has bias testing been conducted? What are the results? What protected characteristics were evaluated?
  • Who bears liability for the outcomes the model produces?

That last question matters enormously, especially as the liability landscape shifts.

“The laws are shifting from developer liability to deployer liability,” Gonzalez said. “In the near future, I expect liability to become more evenly distributed between vendors and organizations deploying AI systems.”

This evolving framework makes vendor contracts even more critical. Fleming noted that courts aren’t accepting “the vendor’s fault” as a defense. Organizations deploying AI systems must ensure contractual terms establish mutual accountability rather than one-sided risk acceptance.

“Without teams of data science and AI experts, it’s challenging to build effective governance structures,” Fleming said. “You need that expertise, either internally or through partners.”

That expertise becomes particularly critical when organizations move beyond procurement and contracts to validation: the technical work of proving AI systems operate as intended and produce fair outcomes.

Test AI Systems Before Liability Finds You

As governance frameworks specify testing requirements, the challenge lies in executing testing that produces defensible evidence.

AI testing comprises two categories: security and fairness. Each addresses distinct risks.

Security testing for AI goes beyond traditional application security, covering areas such as the following (a minimal robustness sketch appears after the list):

  • Adversarial robustness testing.
  • Model extraction attempts.
  • Data leakage assessment.
  • API security validation.
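
As an illustration of the first item, the sketch below runs a crude perturbation-sensitivity check against a toy scikit-learn model, measuring how often small random changes to the inputs flip its decisions. This is a simplified stand-in under stated assumptions, not a true adversarial attack; a production program would use optimized perturbations and dedicated tooling, and would record the configuration and results as governance evidence.

```python
# Minimal sketch: perturbation-sensitivity check as a crude stand-in for
# adversarial robustness testing. The model, data, and noise scales are
# illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a deployed scoring model and its evaluation data.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

baseline = model.predict(X_test)
rng = np.random.default_rng(0)

for scale in (0.01, 0.05, 0.10):
    # Small random perturbations; a real attacker would optimize these.
    perturbed = X_test + rng.normal(0.0, scale, size=X_test.shape)
    flipped = (model.predict(perturbed) != baseline).mean()
    print(f"noise scale {scale:.2f}: {flipped:.1%} of decisions changed")
```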

Fairness testing evaluates whether AI systems produce discriminatory outcomes across protected characteristics and other relevant groups. That work entails running the model on test datasets representing different demographic groups, measuring performance differences between cohorts, calculating disparate impact ratios, and analyzing whether certain groups face systematically different error rates.
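
One of those metrics, the disparate impact ratio, is straightforward to compute once model decisions and group membership are available. The sketch below uses synthetic decisions, two hypothetical cohorts, and the widely cited four-fifths-rule benchmark of 0.8; the data and threshold are illustrative, not legal guidance.

```python
# Minimal sketch: disparate impact ratio across two synthetic cohorts.
# Group labels, selection rates, and the 0.8 benchmark are illustrative.
import numpy as np

rng = np.random.default_rng(42)
group = rng.choice(["A", "B"], size=5000)            # protected characteristic
selected = np.where(group == "A",
                    rng.random(5000) < 0.30,         # cohort A selection rate
                    rng.random(5000) < 0.22)         # cohort B selection rate

rate_a = selected[group == "A"].mean()
rate_b = selected[group == "B"].mean()
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"selection rate A: {rate_a:.1%}, B: {rate_b:.1%}")
flag = "  (below the 0.8 benchmark -> flag for review)" if ratio < 0.8 else ""
print(f"disparate impact ratio: {ratio:.2f}{flag}")
```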

Organizations deploying AI in employment, credit, insurance, healthcare, and public-sector decision-making face increasing expectations for defensible fairness testing. Courts and regulators want evidence, not assurances.

Testing must happen before deployment. But that’s not the end. Models change. Data drifts. Performance degrades. Ongoing monitoring and periodic re-testing become part of operational governance.
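
A minimal sketch of one such monitoring check appears below, assuming a per-feature two-sample Kolmogorov–Smirnov test from SciPy that compares the data a model was validated against with recent production inputs. The synthetic shift and the p-value cutoff are illustrative assumptions; real programs tune these to the system and document what triggers re-validation.

```python
# Minimal sketch: data-drift check that could trigger re-testing.
# The synthetic "production" shift and the 0.05 cutoff are illustrative.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(7)
reference = rng.normal(0.0, 1.0, size=(10_000, 3))   # data used at validation time
production = reference + rng.normal(0.3, 0.1, size=(10_000, 3))  # recent, shifted inputs

drifted = []
for i in range(reference.shape[1]):
    stat, p_value = ks_2samp(reference[:, i], production[:, i])
    if p_value < 0.05:
        drifted.append(i)
        print(f"feature {i}: KS statistic {stat:.3f}, p = {p_value:.2g} -> drift suspected")

if drifted:
    print(f"{len(drifted)} feature(s) drifted; schedule re-testing and document the decision.")
```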

Fleming’s earlier point about explainability connects directly to testing: if you can’t explain how a model makes decisions, you can’t design meaningful tests to validate that those decisions are fair, accurate, and secure.

The Cost Question That Misses the Point

Organizations evaluating AI governance programs inevitably focus on expenses: budget requirements, resource allocation, and implementation timelines. But that framing obscures what governance actually delivers. In some instances, organizations respond by limiting or delaying AI deployment altogether, but that approach comes with its own trade-offs in terms of competitiveness and operations.

Gonzalez flipped the calculus: “In the long run, the cost of conservative compliance is generally far less than minimal or insufficient compliance.”


Fleming approached it from a competitive angle: “Companies that succeed are going to be good at it, and they need to know how to use it. Having this proper governance type set will help them wield it as an organizational enabler and tool instead of just having it in the background, introducing risk and extra liability they don’t need.”

The return on governance investment can show up in several ways: faster deployment cycles, because governance is embedded in delivery workflows rather than bolted on at the end; reduced friction in vendor due diligence, because reusable assurance packages answer customer questions upfront; improved sales enablement when enterprise customers demand evidence of responsible AI practices; and lower legal exposure, because testing and documentation create defensible records of reasonable care.

Organizations that view governance as overhead miss these benefits. Those that recognize it as capability building position themselves to deploy AI with confidence while competitors hesitate or stumble.

What Separates Working Governance Programs from Those That Fail

The governance programs that deliver returns share common characteristics. They’re not the ones with the longest policy documents or the most elaborate frameworks. They’re the ones that embed governance into how work actually gets done.

Effective programs start with visibility and move quickly to action. HaystackID’s approach begins with structured inventory and risk classification: mapping AI systems across the organization, identifying regulatory exposure, and producing prioritized roadmaps that show which systems demand immediate attention.

From there, governance is embedded into procurement workflows, so questions about training data, bias testing, security validation, and contractual liability are addressed before vendors are selected. And critically, organizations move from policies to evidence: security testing that evaluates adversarial robustness and data leakage risks, fairness testing that measures disparate impact across protected classes, and validation records designed to answer the questions regulators and opposing counsel will eventually ask.

Fleming’s advice for where to begin is direct: “If companies aren’t educating their workforce, putting the compliance and policy framework in place, doing the testing, able to answer why your AI answers the prompt this way—if you can’t explain how it did that—then you’re going to have a problem.”


For organizations uncertain where governance gaps exist, scoping assessments provide that clarity in weeks. For those ready to build operational programs, advisory implementations translate requirements into repeatable, scalable workflows. And for high-risk systems requiring validation, testing services produce the evidence that supports defensible decisions.

The Window That’s Closing

Organizations still have time to build governance programs before facing material consequences, but that window is narrowing. Regulatory timelines are set. Litigation is expanding. Customer expectations are rising.

For many regulated industries, AI governance is increasingly viewed as table stakes. The only choice left is whether organizations build these capabilities proactively or reactively, after a regulatory investigation opens, litigation is filed, or a customer contract is lost because governance evidence doesn’t exist.

“Organizations that thrive in regulatory environments are those that have a compliance architecture at the core front and the establishment of their businesses,” Gonzalez said.

Organizations with operational governance can explain exactly how their AI systems make decisions, what testing validated that those decisions are fair and accurate, and what evidence demonstrates responsible deployment.

The ones that can’t have simply postponed finding out what happens when those questions get asked in court.


More About Jeffrey Fleming

Jeffrey Fleming is an experienced cybersecurity professional with a proven track record of delivering excellence in client relations and operational success. With over a decade of experience in contract oversight and strategic leadership, he specializes in transforming challenges into solutions that yield exceptional results. Fleming’s ability to navigate complex contractual landscapes and build strong client relationships has been instrumental in driving organizational growth and success. As an adjunct professor, he is passionate about sharing his knowledge and expertise in cybersecurity with the next generation of professionals. By providing hands-on guidance and mentorship, Fleming empowers students to excel in navigating the ever-evolving cybersecurity landscape and contribute meaningfully to the industry. In addition to his expertise in contract management and cybersecurity, Fleming brings a wealth of knowledge in cloud solutions architecture. Leveraging his background in cloud technologies, he integrates innovative solutions to optimize operations, enhance efficiency, and drive strategic initiatives forward. By staying abreast of the latest advancements in cloud computing, Fleming ensures that organizations are equipped with the tools and resources needed to thrive in today’s digital age.

More About Aleida Gonzalez

An accomplished attorney and military intelligence officer, Aleida Gonzalez brings a rare combination of legal, national security, and strategic advisory expertise that uniquely positions her to lead in the field of AI governance. With nearly a decade of service as a prosecutor and extensive military experience, Aleida has operated at the intersection of law, policy, and security—advising senior leaders, managing complex investigations, and shaping outcomes in high-stakes environments. Building on her AI Governance certification and that prior service, her professional development reflects decades of deliberate preparation for managing high-risk, high-consequence systems where accountability, policy, and operational disciplines intersect.


The podcast is available on your favorite listening app, including Spotify, Apple Podcasts, and Google Play. The podcast is also available on the EDRM website and is provided below for convenience.

Join HaystackID’s experts as they share actionable insights on today’s most material topics—from how GenAI is reshaping legal data strategies to the latest approaches in digital forensics. Explore our full library of EDRM Illumination Zone podcast episodes.

Read the original article here.


Source: HaystackID
Assisted by GAI and LLM Technologies per EDRM’s GAI and LLM Policy.

Author

  • HaystackID

    HaystackID® solves complex data challenges related to legal, compliance, regulatory, and cyber requirements. Core offerings include Global Advisory, Cybersecurity, Core Intelligence AI™, and ReviewRight® Global Managed Review, supported by its unified CoreFlex™ service interface and eDiscovery AI™ technology. Recognized globally by industry leaders, including Chambers, Gartner, IDC, and Legaltech News, HaystackID helps corporations and legal practices manage data gravity, where information demands action, and workflow gravity, where critical requirements demand coordinated expertise, delivering innovative solutions with a continual focus on security, privacy, and integrity. Learn more at HaystackID.com.
