AI, AML 8 min read

Managing AI model risk in AML: A step-by-step guide for banks and fintechs


By Dustin Eaton, Principal of Fraud & AML at Taktile

This is part two of our blog series for financial institutions navigating new AI model risk mandates in AML use cases.


Part 1 of this series established a new regulatory mandate: SR 11-7’s model risk management framework now applies to AML AI systems, requiring the same rigor applied to credit models. 

We examined the framework’s three core pillars: documentation and use, independent validation, and governance and controls. We also highlighted the unique challenges AML models face, from ambiguous ground truth to rapidly evolving money laundering techniques.

Part 2 focuses on implementation, offering a practical roadmap for building compliant model risk management capabilities. We’ll walk through each essential step, from conducting AI inventories and building validation functions to implementing ongoing monitoring and managing third-party relationships. We’ll also explore what to expect from regulators and why strong model risk management can become a competitive advantage.

Whether you’re starting from scratch or strengthening existing practices, this guide offers actionable steps for meeting regulatory expectations while enabling more effective AML operations.


From regulation to implementation: How to put SR 11-7’s mandates into practice

Translating regulatory expectations into operational practice requires systematic implementation. The following approach provides a practical guide for building MRM capabilities for AML AI.

Begin by conducting an AI inventory across AML functions

Before building validation capabilities, start by assessing what you’re validating. The AI inventory initiative should:

  • Catalog all quantitative systems influencing AML decisions, including both formally designated “models” and embedded AI/ML components within broader platforms.
  • Document, for each system, the business purpose, mathematical approach, data inputs, deployment date, model owner, and current validation status.
  • Risk-rank each system using a standardized framework (e.g., complexity, materiality, data quality, regulatory sensitivity).
  • Identify gaps, such as models in production without validation, models with overdue validations, or models lacking documentation.

This inventory can help organizations uncover potential gaps in existing model risk management protocols, for example: AI components deployed without validation, vendor models whose inner workings are opaque, or “temporary” models that have operated for years without formal review. Once the inventory is complete, you’re ready to begin building MRM capabilities.
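To make the risk-ranking step concrete, here is a minimal Python sketch of an inventory record and a weighted scoring rule. The field names, weights, and tier cutoffs are illustrative assumptions, not a prescribed framework; your institution’s standardized framework should define its own factors and scales.

```python
from dataclasses import dataclass

@dataclass
class ModelRecord:
    """One row of the AI inventory (hypothetical fields for illustration)."""
    name: str
    purpose: str
    complexity: int              # 1 (simple rules) to 5 (deep learning)
    materiality: int             # 1 (advisory) to 5 (drives SAR decisions)
    data_quality_risk: int      # 1 (clean, governed feeds) to 5 (unvalidated)
    regulatory_sensitivity: int  # 1 to 5
    validated: bool

def risk_rank(m: ModelRecord) -> str:
    """Map a weighted score to a risk tier; unvalidated models are
    always treated as high-risk until reviewed."""
    score = (2 * m.materiality + 2 * m.regulatory_sensitivity
             + m.complexity + m.data_quality_risk)
    if score >= 20 or not m.validated:
        return "high"
    return "medium" if score >= 12 else "low"

inventory = [
    ModelRecord("txn-monitoring-ml", "Transaction monitoring", 4, 5, 3, 5, False),
    ModelRecord("name-screening-fuzzy", "Sanctions screening", 2, 3, 2, 4, True),
]

# Gap identification: models in production without validation.
gaps = [m.name for m in inventory if not m.validated]
```

Even a simple scheme like this forces the conversation the inventory is meant to trigger: which systems drive decisions, and which have never been validated.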

Build an independent validation function to ensure AML model efficacy

Model validation for AI in AML can be more effective when teams have a blend of technical, analytical, and domain expertise. For example:

  • Technical skills: Familiarity with statistical modeling, machine learning algorithms, programming (Python/R), and data analysis.
  • AML domain knowledge: Understanding of money laundering typologies, regulatory requirements, and investigation workflows.
  • Validation methodology: Training in model validation frameworks, testing approaches, and documentation standards.

Organizationally, the validation function should be independent of model development and business line management. The OCC emphasizes that “staff conducting validation work should have explicit authority to challenge developers and users and to elevate their findings, including issues and deficiencies.” There are several ways organizations can put this into practice:

  • Independent risk management: Reporting to the Chief Risk Officer and operating separately from business lines.
  • Centralized model risk group: Dedicated MRM function validating all models enterprise-wide.
  • Third-party validation: Engaging external validators with specialized expertise (often necessary for smaller institutions).

Create standardized templates for AML model documentation

Standardized templates help ensure that model documentation is consistent and complete. Most organizations would benefit from creating the core templates below:

  • Model development document: Captures model purpose, theoretical foundation, data sources, methodology, limitations, and implementation controls.
  • Validation report template: Surfaces validation findings addressing conceptual soundness, outcomes analysis, ongoing monitoring, identified issues, and recommendations.
  • Ongoing monitoring dashboard: Tracks model performance over time with standardized metrics.
  • Model change request: Requires formal documentation and approval for model modifications.

Enable ongoing monitoring with specific metrics, thresholds, and escalation procedures

Ongoing monitoring transforms MRM from point-in-time validation into continuous oversight. Key monitoring metrics typically include:

  • Alert volumes: Total alerts generated, trends over time, and seasonality patterns.
  • False positive rates: Percentage of alerts closed without SAR filing.
  • SAR conversion rates: Percentage of alerts resulting in SAR filing.
  • Model drift indicators: Statistical measures assessing whether data distributions or model behavior are changing.
  • Coverage metrics: Assessment of whether all customer segments and transaction types are being monitored.
  • Alert concentration: Assessment of whether alerts are disproportionately concentrated in specific typologies or customer segments.
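As one concrete drift indicator, the Population Stability Index (PSI) compares the model’s score distribution at validation time against the current period. The bin proportions and interpretation bands below are a common industry rule of thumb, not a regulatory standard, and the sketch assumes all bins are non-empty.

```python
import math

def psi(expected, actual):
    """Population Stability Index over pre-binned score distributions.
    Inputs are lists of bin proportions summing to 1 (all non-zero).
    Common rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift,
    > 0.25 significant drift warranting investigation."""
    return sum((a - e) * math.log(a / e) for e, a in zip(expected, actual))

baseline = [0.40, 0.35, 0.25]  # score-band mix at validation time
current  = [0.30, 0.35, 0.35]  # same bands observed this month
drift = psi(baseline, current)  # ~0.06: within the "stable" band
```

A monitoring dashboard would compute this per model, per month, and feed the result into the escalation thresholds described below.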

When designing your model monitoring framework, consider building thresholds that trigger escalation. For example, if false positive rates increase by more than 20% month-over-month, or if SAR conversion rates fall below institutional targets, either event can trigger investigation and potential model recalibration to more effectively mitigate risk.
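A threshold check of this kind takes only a few lines. The function name and threshold values below are illustrative assumptions; actual targets should come from your institution’s risk appetite, not from this sketch.

```python
def needs_escalation(fpr_prev, fpr_curr, sar_rate,
                     sar_target=0.05, fpr_jump=0.20):
    """Return the list of escalation reasons (empty if none).
    Flags the model for review if the false positive rate rose more
    than 20% month-over-month, or if SAR conversion fell below the
    institutional target. Thresholds are illustrative."""
    fpr_increase = (fpr_curr - fpr_prev) / fpr_prev
    reasons = []
    if fpr_increase > fpr_jump:
        reasons.append(f"FPR up {fpr_increase:.0%} MoM")
    if sar_rate < sar_target:
        reasons.append(f"SAR conversion {sar_rate:.1%} below target")
    return reasons

# FPR rose only ~6%, but SAR conversion is under target -> one escalation.
alerts = needs_escalation(fpr_prev=0.90, fpr_curr=0.95, sar_rate=0.03)
```

The value of encoding thresholds this way is that escalation becomes automatic and auditable, rather than depending on an analyst noticing a trend.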

Remain compliant with MRM standards by understanding common implementation challenges

Due to the complexity of AML AI systems, several common challenges arise when putting SR 11-7’s mandates into practice. Understanding and anticipating these challenges can be helpful for strengthening your overall MRM operations. They include: 

  1. Viewing AI as “just software”: Treating AML AI as an IT system rather than a model subject to validation can create governance gaps. For example, software quality assurance testing does not necessarily substitute for AML model validation.
  2. Insufficient validation resources: Model validation requires specialized skills. Assigning validation responsibilities to adequately trained personnel gives you confidence in model efficacy.
  3. Incomplete inventories: When AI/ML components go unidentified and undocumented, unknown models can operate without oversight. Keep the inventory comprehensive and update it regularly as systems are added, changed, or retired.

Forecasted changes in the regulatory landscape—and how institutions can stay ahead

Regulatory examination trends

Regulatory agencies are actively enhancing examiner training on AI and model risk, signaling increasing pressure on financial institutions to remain compliant with new MRM standards.

Teams can expect model risk questions to feature prominently in AML examinations going forward. For example, examiners are likely to look for the following:

  • Model inventory completeness.
  • Validation independence and technical adequacy.
  • Board-level understanding of model risk.
  • Remediation of identified model issues.
  • Vendor model due diligence and ongoing monitoring.

International developments

The U.S. regulatory approach to AI governance forms part of a global trend. The European Union's AI Act establishes comprehensive requirements for “high-risk” AI systems, including those used in compliance functions. The Act mandates risk management systems, data governance, technical documentation, human oversight, and conformity assessments—requirements that substantially overlap with SR 11-7's framework.

As other jurisdictions adopt similar frameworks, multinational institutions will face the complexity of complying with multiple overlapping AI governance regimes. Building robust MRM capabilities that exceed any single jurisdiction's requirements may prove more efficient than jurisdiction-specific approaches.

Model risk management as an enabler, not an obstacle

While this paper has emphasized regulatory mandates, robust MRM creates strategic advantages:

  • Faster deployment: Institutions with mature MRM capabilities can rapidly develop, validate, and deploy new AI capabilities, shortening time-to-value.
  • Regulatory confidence: Demonstrating proactive, sophisticated model risk management builds examiner confidence, potentially reducing examination burden and enabling more ambitious AI initiatives.
  • Better outcomes: Rigorous validation and monitoring improve model performance, reducing false positives while enhancing detection—simultaneously lowering costs and reducing risk.

Organizations that view MRM as enabling AI adoption, rather than constraining it, will differentiate themselves. Research on AI governance demonstrates that institutions with strong governance frameworks can pursue ambitious innovation while maintaining regulatory confidence—a competitive advantage in an industry where technology deployment speed increasingly determines market position.

The path forward begins with establishing well-functioning MRM processes that can be refined over time. To summarize, these are three recommended steps to begin that journey:

  1. Conduct the inventory: Catalog all AI/ML deployed in AML functions, risk-rank each system, and identify validation gaps.
  2. Build or acquire validation capacity: Whether through hiring, training, or third-party engagement, establish credible independent validation capabilities.
  3. Start validating: Begin with highest-risk models, document findings, track issues to resolution, and build organizational muscle memory for ongoing MRM.

Forward-thinking AML leaders are getting ahead of this mandate, building mature MRM capabilities and enabling more effective AI operations in AML. The convergence of AI and model risk management in AML represents a true opportunity for innovation.

Enhance your AML strategy with AI.

Discover Taktile