FINRA Regulatory Notice 24-09 Compliance Checklist: Deploying Generative AI and LLMs in Brokerage Operations

September 7, 2025

FINRA's Regulatory Notice 24-09 dropped in June 2024, and it's been keeping compliance teams busy ever since. The notice doesn't ban generative AI outright, but it makes one thing crystal clear: if you're using AI tools in your brokerage operations, you'd better have your compliance house in order. (FINRA Compliance Solutions)

We've seen firms scramble to understand what this means for their day-to-day operations. Some are hitting pause on AI initiatives entirely, while others are pushing forward without proper guardrails. Neither approach is the right one. The key is understanding exactly what FINRA expects and building systems that let you use AI responsibly while staying compliant.

The notice covers everything from governance frameworks to ongoing monitoring requirements. But here's what makes it tricky: FINRA doesn't give you a simple yes-or-no checklist. Instead, they outline principles and expectations, leaving firms to figure out how to implement them in practice. That's where this guide comes in.

Understanding FINRA's AI Expectations

FINRA's approach to AI regulation builds on existing rules rather than creating entirely new ones. The notice emphasizes that current regulations like Rules 2210 (communications), 3110 (supervision), and 4511 (recordkeeping) still apply when you're using AI tools. (Luthor - FINRA Advertising Rules)

What's different is the level of scrutiny FINRA expects. Traditional compliance processes might not catch AI-specific risks like model drift, biased outputs, or hallucinations. The notice makes it clear that firms need enhanced oversight when AI is involved.

The regulatory focus isn't just on customer-facing AI applications either. Internal tools like research assistants, compliance monitoring systems, and trading algorithms all fall under the scope. If an AI system could impact customer outcomes or market integrity, FINRA wants to see proper controls. (Future-proofing Your RIA with Generative AI)

One area that's causing confusion is the "grey zone" applications. Think about an LLM that helps analysts write research reports or a chatbot that answers basic account questions. These tools don't directly execute trades or provide investment advice, but they still influence customer interactions and business decisions.

The 10-Step FINRA AI Compliance Checklist

Step 1: Establish AI Governance Framework

Your first move should be creating a formal AI governance structure. This isn't just about having policies on paper; it's about designating specific people who are responsible for AI oversight. FINRA expects clear accountability chains.

Start by identifying who in your organization has the authority to approve new AI tools. This person needs to understand both the technology and the regulatory implications. Many firms are creating AI steering committees that include compliance, technology, and business representatives.

Document your decision-making process for AI adoption. What criteria do you use to evaluate new tools? How do you assess risk? FINRA examiners will want to see that you have a systematic approach, not just ad-hoc decisions. (What is a Compliance Checklist)

Step 2: Conduct Pre-Deployment Risk Assessments

Before any AI tool goes live, you need a thorough risk assessment. This goes beyond typical vendor due diligence. You're looking at model-specific risks like bias, accuracy, and explainability.

Create a standardized assessment template that covers data sources, training methodologies, and potential failure modes. For each AI application, document what could go wrong and how you'll detect problems. This documentation becomes critical during FINRA examinations.
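To make that concrete, here's a minimal sketch of what a standardized assessment record might look like in Python. The field names, risk categories, and example values are our own assumptions rather than FINRA-prescribed fields; adapt them to your firm's risk taxonomy.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIRiskAssessment:
    """Pre-deployment risk assessment record for one AI application.

    Fields are illustrative, not FINRA-mandated.
    """
    tool_name: str
    business_purpose: str
    data_sources: list[str]        # e.g., vendor training data, internal docs
    customer_facing: bool
    failure_modes: list[str]       # e.g., "fabricated citations"
    detection_controls: list[str]  # how each failure mode will be caught
    assessed_by: str
    assessment_date: date = field(default_factory=date.today)
    approved: bool = False

assessment = AIRiskAssessment(
    tool_name="research-drafting-assistant",
    business_purpose="Help analysts draft internal research summaries",
    data_sources=["vendor LLM (general web corpus)", "internal research notes"],
    customer_facing=False,
    failure_modes=["fabricated citations", "stale market data"],
    detection_controls=["analyst review before circulation", "citation spot checks"],
    assessed_by="AI Steering Committee",
)
```

Keeping assessments in a structured form like this makes it easy to show an examiner every question you asked before a tool went live.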

Pay special attention to customer-facing applications. A research assistant that occasionally provides inaccurate information might be manageable, but a customer service chatbot that gives wrong account balances could trigger regulatory action. (Banking Regulatory Agencies)

Step 3: Map AI Applications to FINRA Rules

Every AI tool in your organization should be mapped to relevant FINRA rules. This isn't always straightforward, especially for newer applications that don't fit traditional categories.

Rule 2210 applies to any AI that generates customer communications. This includes chatbots, automated emails, and even AI-assisted social media posts. Rule 3110 covers supervision requirements, which means you need human oversight of AI decisions. Rule 4511 requires you to keep records of AI-generated content and decisions.
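One lightweight way to keep that mapping auditable is to maintain it as structured data rather than as scattered memos. The sketch below is purely illustrative; the tools, rule citations, and controls are hypothetical examples, not a complete mapping.

```python
# Hypothetical rule-mapping table: each AI tool is tied to the FINRA rules
# it implicates, plus the control that addresses each rule.
RULE_MAP = {
    "customer-service-chatbot": {
        "FINRA 2210": "Pre-approved response templates; communications review",
        "FINRA 3110": "Supervisor samples transcripts weekly",
        "FINRA 4511": "All transcripts archived to compliant storage",
    },
    "research-drafting-assistant": {
        "FINRA 3110": "Analyst reviews and signs off on every draft",
        "FINRA 4511": "Prompts and drafts retained with the final report",
    },
}

def rules_for(tool: str) -> dict[str, str]:
    """Return the rule-to-control mapping for a tool, or fail loudly if unmapped."""
    if tool not in RULE_MAP:
        raise KeyError(f"{tool} has no documented rule mapping -- assess before use")
    return RULE_MAP[tool]
```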

For grey-area applications, err on the side of caution. If an AI tool could reasonably be seen as influencing customer decisions or market activities, treat it as if it's fully regulated. The cost of over-compliance is usually much lower than the cost of enforcement action. (Luthor - RIA Compliance)

Step 4: Implement Human-in-the-Loop Controls

FINRA expects meaningful human oversight of AI systems. This doesn't mean someone needs to review every AI output, but there should be clear points where humans intervene in the process.

Define what level of human review is appropriate for different types of AI outputs. High-risk decisions might require pre-approval, while lower-risk activities might use post-hoc sampling. The key is having a documented rationale for your approach.
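Here's a rough sketch of how a tiered review policy might be encoded. The tiers, pre-approval flags, and sampling rates are placeholder assumptions that a firm would calibrate and document itself; nothing in Notice 24-09 prescribes these numbers.

```python
import random

# Illustrative review tiers; thresholds and rates are firm-specific choices.
REVIEW_POLICY = {
    "high":   {"pre_approval": True,  "sample_rate": 1.0},   # review everything
    "medium": {"pre_approval": False, "sample_rate": 0.25},  # post-hoc sampling
    "low":    {"pre_approval": False, "sample_rate": 0.05},
}

def needs_human_review(risk_tier: str, before_release: bool) -> bool:
    """Decide whether an AI output goes to a human reviewer."""
    policy = REVIEW_POLICY[risk_tier]
    if before_release:
        return policy["pre_approval"]
    return random.random() < policy["sample_rate"]
```

Encoding the policy this way gives you the documented rationale FINRA looks for: the rules are explicit, versionable, and easy to explain in an exam.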

Train your staff on what to look for when reviewing AI outputs. Generic "check for accuracy" instructions aren't enough. People need specific guidance on identifying AI-generated content that could be problematic from a regulatory perspective.

Step 5: Document Model Prompts and Configurations

FINRA examiners will want to see how your AI systems are configured. This includes the prompts you use, the parameters you set, and any fine-tuning you've done. Treat these as you would any other compliance documentation.

Create a central repository for AI system documentation. Include version control so you can track changes over time. When you update a prompt or change a model parameter, document why you made the change and what impact you expect.

For LLMs, pay particular attention to system prompts that define the AI's role and constraints. These prompts often contain your compliance guardrails, so they need to be carefully crafted and regularly reviewed. (SEC Rule 17a-4)
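As an illustration, each prompt change could be logged to an append-only registry like the sketch below. The JSONL format and field names are assumptions on our part; a real firm might use git or a database instead, but the metadata being captured is the point.

```python
import hashlib
import json
from datetime import datetime, timezone

def record_prompt_version(registry_path: str, tool: str, system_prompt: str,
                          changed_by: str, rationale: str) -> str:
    """Append a versioned, hash-stamped prompt record to a JSONL registry."""
    entry = {
        "tool": tool,
        "prompt_sha256": hashlib.sha256(system_prompt.encode()).hexdigest(),
        "system_prompt": system_prompt,
        "changed_by": changed_by,
        "rationale": rationale,  # why the change was made, per the step above
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    with open(registry_path, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry["prompt_sha256"]
```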

Step 6: Establish Ongoing Monitoring Procedures

AI systems can drift over time, so one-time testing isn't sufficient. You need ongoing monitoring to catch problems before they impact customers or trigger regulatory issues.

Set up automated alerts for unusual AI behavior. This might include accuracy metrics falling below thresholds, unusual patterns in outputs, or system errors. The goal is early detection of problems.

Regularly sample AI outputs for manual review. Even if you can't check everything, periodic sampling can help you identify emerging issues. Document your sampling methodology and findings. (Luthor - AI-powered compliance platform)
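Here's a minimal sketch combining both ideas, assuming you already compute an accuracy metric for your system. The threshold and sampling rate are placeholders, not regulatory numbers; calibrate them against your own baseline and document the choice.

```python
import logging
import random

logger = logging.getLogger("ai_monitoring")

ACCURACY_FLOOR = 0.95  # placeholder -- calibrate to your own baseline
SAMPLE_RATE = 0.02     # fraction of outputs pulled for manual review

def check_accuracy(daily_accuracy: float) -> None:
    """Raise an alert when measured accuracy drops below the floor."""
    if daily_accuracy < ACCURACY_FLOOR:
        logger.warning("Accuracy %.3f below floor %.3f -- escalate to compliance",
                       daily_accuracy, ACCURACY_FLOOR)

def sample_for_review(outputs: list[str]) -> list[str]:
    """Randomly sample outputs for manual compliance review."""
    return [o for o in outputs if random.random() < SAMPLE_RATE]
```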

Step 7: Create AI-Specific Training Programs

Your staff needs to understand both the capabilities and limitations of the AI tools they're using. Generic AI awareness training isn't enough; you need role-specific education.

Develop training modules that cover regulatory requirements for AI use. Include real examples of compliant and non-compliant AI applications. Make sure people understand when they need to escalate AI-related issues.

Update your training regularly as you add new AI tools or as regulations evolve. FINRA's expectations around AI are still developing, so your training needs to stay current.

Step 8: Implement Recordkeeping for AI Decisions

FINRA Rule 4511 requires firms to maintain records of business-related communications and decisions. When AI is involved, this gets more complex because you need to capture not just the output but also the process.

For AI-generated communications, keep records of the original prompts, the AI's response, and any human modifications. For AI-assisted decisions, document what information the AI considered and how the final decision was made.
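Here's one way such a record might be structured. The schema is illustrative rather than FINRA-specified; the point is capturing the process, not just the final text.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class AICommunicationRecord:
    """One retention record for an AI-assisted communication.

    Captures process as well as output: the prompt, the raw model response,
    and the human-edited version actually sent. Fields are illustrative.
    """
    tool: str
    prompt: str
    model_response: str
    human_edited_response: str  # final text after review and modification
    reviewer: str
    sent_at: datetime
    retention_years: int = 3    # some firms keep AI records longer
```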

Consider the retention period for AI-related records. Some firms are keeping AI documentation longer than the standard three years, especially for systems that could impact customer outcomes. (SEC Rule 17a-4)

Step 9: Plan for AI System Failures

AI systems can fail in ways that traditional software doesn't. You need contingency plans for when AI tools produce incorrect outputs, become unavailable, or behave unexpectedly.

Create incident response procedures specifically for AI failures. Include steps for isolating the problem, notifying affected customers, and reporting to regulators if necessary. Practice these procedures before you need them.

Maintain backup processes that don't rely on AI. If your AI-powered compliance monitoring system goes down, you should have manual procedures to maintain oversight. This redundancy is especially important for critical compliance functions.
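A simple pattern is to wrap every AI call in a fallback that routes to the manual process on any failure. In this sketch, `ai_client` and `manual_queue` are hypothetical stand-ins for whatever systems your firm actually runs; the point is that the manual path always exists.

```python
def answer_with_fallback(question: str, ai_client, manual_queue) -> str:
    """Try the AI system first; fall back to a manual process on any failure."""
    try:
        response = ai_client.ask(question)
        if response is None or not response.strip():
            raise ValueError("empty AI response")
        return response
    except Exception as exc:
        # Log the failure for the incident record, then route to a human.
        manual_queue.enqueue(question, reason=str(exc))
        return "Your question has been routed to a representative."
```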

Step 10: Prepare for FINRA Examinations

FINRA examiners are increasingly asking about AI use during routine examinations. You need to be ready to demonstrate your AI governance and controls.

Create an AI inventory that lists all AI tools in use, their purposes, and their regulatory implications. Keep this updated as you add or remove AI systems. Examiners will want to see that you have visibility into your AI landscape.
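Kept as structured data rather than a static document, the inventory can be queried on demand during an exam ("show me every customer-facing tool and the rules it implicates"). The entries below are hypothetical examples.

```python
# Hypothetical AI inventory kept as structured, queryable data.
AI_INVENTORY = [
    {
        "tool": "customer-service-chatbot",
        "purpose": "Answer basic account questions",
        "customer_facing": True,
        "rules": ["FINRA 2210", "FINRA 3110", "FINRA 4511"],
        "owner": "Head of Digital Channels",
        "last_reviewed": "2025-08-15",
    },
    {
        "tool": "research-drafting-assistant",
        "purpose": "Draft internal research summaries",
        "customer_facing": False,
        "rules": ["FINRA 3110", "FINRA 4511"],
        "owner": "Research Operations",
        "last_reviewed": "2025-07-01",
    },
]

customer_facing = [t["tool"] for t in AI_INVENTORY if t["customer_facing"]]
```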

Prepare documentation packages for each AI system that include risk assessments, approval records, monitoring reports, and incident logs. The goal is to show examiners that you have comprehensive oversight of your AI use. (Luthor - Compliance Solutions)

Navigating Grey-Area AI Applications

Some AI applications don't fit neatly into traditional regulatory categories. These grey-area tools require especially careful analysis to determine the appropriate compliance requirements.

LLM-powered research assistants are a common example. These tools help analysts gather information and draft reports, but they don't directly provide investment advice to customers. The question is whether their outputs could influence customer decisions.

FINRA's approach seems to be: if it could reasonably impact customers or markets, treat it as regulated. So even if your research assistant is just helping with internal analysis, you probably need some level of oversight and documentation.

Chatbots present another grey area. A bot that only provides account balances might seem low-risk, but what if it occasionally provides incorrect information? What if customers rely on its responses for investment decisions? The safest approach is to treat any customer-facing AI as subject to communications rules. (FINRA Advertising Rules)

Common Implementation Challenges

Firms are running into several common challenges when implementing AI compliance programs. Understanding these pitfalls can help you avoid them.

One frequent issue is treating AI compliance as purely a technology problem. While technical controls are important, compliance is fundamentally about processes and oversight. You need business procedures that work regardless of the underlying technology.

Another challenge is trying to apply traditional compliance approaches to AI without modification. AI systems behave differently than traditional software, so your compliance procedures need to account for things like model drift, probabilistic outputs, and emergent behaviors.

Many firms also struggle with the documentation requirements. AI systems can be complex, and it's not always obvious what documentation FINRA will want to see. The key is being comprehensive and systematic in your approach. (RIA Compliance Software)

Building Sustainable AI Compliance Programs

The goal isn't just to check boxes for FINRA; it's to build sustainable programs that let you use AI effectively while managing risk. This requires thinking beyond immediate compliance requirements.

Consider how your AI compliance program will scale as you add more tools and use cases. Manual review processes that work for one AI application might not work for ten. Build systems that can grow with your AI adoption.

Integrate AI compliance into your broader risk management framework. AI risks shouldn't be managed in isolation; they should be part of your overall approach to operational and regulatory risk.

Stay engaged with industry developments and regulatory guidance. FINRA's expectations around AI are still evolving, and you need to adapt your program as new guidance emerges. (Wealth Transfer RIA Compliance)

The Role of Technology in AI Compliance

While compliance is fundamentally about processes, technology can play a crucial role in making AI compliance more efficient and effective. The right tools can automate routine monitoring tasks and provide better visibility into AI system behavior.

Real-time monitoring platforms can track AI outputs for compliance issues, flagging potential problems before they impact customers. These systems can also maintain audit trails that make FINRA examinations easier.

Automated documentation tools can help maintain the detailed records that AI compliance requires. Instead of manually tracking every AI interaction, you can use systems that automatically capture relevant information. (GiGCXOs AI-Driven Compliance)

Measuring AI Compliance Effectiveness

You need metrics to assess whether your AI compliance program is working. Traditional compliance metrics might not capture AI-specific risks, so you may need new approaches.

Track AI system performance metrics like accuracy rates, error frequencies, and response times. Declining performance could indicate compliance risks. Also monitor human override rates: if people are frequently overriding AI recommendations, that may signal problems with the system.
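Computing an override rate is straightforward; the alerting threshold in this sketch is a placeholder, not a regulatory figure.

```python
def override_rate(total_recommendations: int, human_overrides: int) -> float:
    """Fraction of AI recommendations that reviewers overrode.

    A rising rate can signal model drift or miscalibrated guardrails.
    """
    if total_recommendations == 0:
        return 0.0
    return human_overrides / total_recommendations

rate = override_rate(total_recommendations=1200, human_overrides=180)
if rate > 0.10:  # placeholder threshold -- set your own and document it
    print(f"Override rate {rate:.1%} exceeds threshold -- review the model")
```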

Measure the effectiveness of your monitoring procedures. Are you catching AI-related issues before they impact customers? How quickly can you respond to AI system failures? These operational metrics can help you improve your compliance program over time.

Looking Ahead: Future AI Regulation

FINRA Notice 24-09 is probably just the beginning of AI regulation in financial services. Other regulators are developing their own AI guidance, and requirements will likely become more specific over time.

The SEC has indicated they're working on AI-related rules for investment advisers. State regulators are also paying more attention to AI use. Building a robust compliance program now will help you adapt to future requirements. (Automated Compliance Documentation)

International developments could also influence U.S. regulation. The EU's AI Act and other international frameworks are creating global standards that could shape FINRA's future approach.

Final Thoughts: Making AI Compliance Practical

FINRA Notice 24-09 doesn't have to be a roadblock to AI adoption. With the right approach, you can use AI tools effectively while maintaining regulatory compliance. The key is being systematic and proactive rather than reactive.

Start with a clear governance framework and build from there. Don't try to implement everything at once; focus on the highest-risk applications first. And remember that compliance is an ongoing process, not a one-time project.

The firms that succeed with AI compliance will be those that integrate it into their broader business strategy. AI compliance shouldn't be an afterthought; it should be part of how you think about AI adoption from the beginning.

If you're looking for help implementing these compliance requirements, Luthor's AI-powered platform can automate much of the monitoring and documentation work that FINRA expects. Our real-time compliance monitoring helps you catch issues before they become problems, and our automated policy drafting ensures your AI governance stays current with regulatory expectations. (Luthor - AI-powered compliance platform) Request demo access to see how we can help you turn FINRA's AI compliance requirements into a competitive advantage rather than a burden.

Frequently Asked Questions

What is FINRA Regulatory Notice 24-09 and why is it important for broker-dealers?

FINRA Regulatory Notice 24-09, issued in June 2024, provides guidance on the use of generative AI and large language models in brokerage operations. It doesn't ban AI outright but establishes clear compliance requirements for firms using AI tools. The notice emphasizes that broker-dealers must maintain proper oversight, risk management, and documentation when deploying AI technologies in their operations.

What are the key compliance requirements for using generative AI in brokerage operations?

Key requirements include establishing comprehensive AI governance frameworks, implementing robust risk assessment procedures, maintaining detailed documentation of AI systems, ensuring proper supervision and monitoring, and conducting regular compliance reviews. Firms must also ensure AI outputs meet FINRA's communication standards and maintain audit trails for all AI-generated content used in client interactions.

How can broker-dealers ensure their AI-generated communications comply with FINRA advertising rules?

Broker-dealers must implement review processes for all AI-generated communications, ensure content accuracy and compliance with FINRA advertising standards, and maintain proper documentation. According to compliance experts, firms should establish clear approval workflows and use specialized compliance software to monitor and review AI-generated marketing materials before distribution to clients.

What documentation is required for FINRA compliance when using generative AI?

Firms must maintain comprehensive records including AI system specifications, training data sources, risk assessments, approval processes, and audit trails of all AI-generated content. Documentation should cover the AI's decision-making processes, any human oversight applied, and regular compliance reviews. This ensures firms are audit-ready for regulatory inquiries.

How often should broker-dealers review their AI compliance procedures?

FINRA expects firms to conduct regular reviews of their AI systems and compliance procedures, typically on a quarterly or semi-annual basis depending on the complexity and risk level of the AI applications. Reviews should assess system performance, compliance effectiveness, and any necessary updates to policies and procedures based on regulatory changes or operational experience.

What role does a Chief Compliance Officer play in AI implementation under FINRA Notice 24-09?

The CCO plays a critical role in overseeing AI implementation, establishing governance frameworks, and ensuring ongoing compliance monitoring. They must work with dedicated compliance consultants to build custom compliance calendars, implement real-time monitoring systems, and maintain automated alert mechanisms for AI-related compliance issues, similar to comprehensive compliance solutions used by leading firms.
