Chief Compliance Officer’s Playbook: 7 Action Items from the SEC AI Roundtable

September 21, 2025

The SEC's March 27, 2025 Artificial Intelligence Roundtable sent ripples through the compliance community. Acting Chair Mark Uyeda and Commissioners Peirce and Crenshaw laid out their vision for AI oversight in financial services, and the message was clear: the regulatory landscape is shifting fast. For Chief Compliance Officers at RIAs and broker-dealers, this isn't just another regulatory update to file away. It's a roadmap for the next 90 days.

We've distilled the roundtable's most pressing themes into seven concrete action items that CCOs can implement immediately. From AI-related fraud risks to governance expectations and disclosure trends, these takeaways translate regulatory guidance into practical compliance strategies. (Luthor)

The Regulatory Reality Check

The numbers tell the story. Businesses spend about 25% of their revenue on compliance, and nearly 1 in 5 firms estimate over half of their revenue goes to compliance-related costs. (Luthor) Meanwhile, 68% of financial services firms name AI in risk management and compliance as a top priority. (Luthor)

This creates a perfect storm. Compliance costs are already crushing margins, but AI adoption is accelerating whether firms are ready or not. The SEC's roundtable made it clear that regulators are watching closely, and they expect firms to have their AI house in order.

Action Item 1: Establish AI Governance Framework

Acting Chair Uyeda emphasized that AI governance can't be an afterthought. Firms need formal oversight structures that address AI deployment across all business functions. This means creating an AI governance committee with clear reporting lines to senior management and the board.

Your 90-day checklist:

• Document all current AI tools and applications in use

• Create an AI inventory with risk assessments for each tool

• Establish approval processes for new AI implementations

• Define roles and responsibilities for AI oversight
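The inventory item above is the foundation for everything else. As a rough illustration, here is a minimal sketch of what a code-backed AI inventory with per-tool risk tiers might look like. The field names, example tools, and the scoring heuristic are all hypothetical, not a regulatory standard; a real inventory would follow the firm's own risk policy.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative risk tiers; actual tiers should come from the firm's risk policy.
RISK_TIERS = ("low", "medium", "high")

@dataclass
class AIToolRecord:
    """One entry in the firm's AI inventory (fields are illustrative)."""
    name: str
    business_function: str     # e.g. "marketing", "trading", "client service"
    vendor: str
    handles_client_data: bool
    client_facing: bool
    approved: bool = False
    review_date: date = field(default_factory=date.today)

    def risk_tier(self) -> str:
        # Simple heuristic: client data plus client-facing use drives the tier up.
        score = int(self.handles_client_data) + int(self.client_facing)
        return RISK_TIERS[score]

# Hypothetical tools, for illustration only.
inventory = [
    AIToolRecord("ChatLLM-Drafts", "marketing", "VendorA",
                 handles_client_data=False, client_facing=True),
    AIToolRecord("TradeSignal-ML", "trading", "VendorB",
                 handles_client_data=True, client_facing=True),
]

# Surface the highest-risk, not-yet-approved tools for the governance committee.
pending = [t.name for t in inventory
           if t.risk_tier() == "high" and not t.approved]
print(pending)  # ['TradeSignal-ML']
```

Even a spreadsheet works for this; the point is that every tool gets a named owner, a risk tier, and an approval status that the governance committee can query.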

The challenge is that 43.12% of GRC professionals are still actively evaluating AI solutions to understand their fit and value. (MetricStream) You can't govern what you don't understand, so education becomes critical.

Luthor's AI-powered compliance platform can help automate the documentation process, creating a centralized inventory of AI tools and their associated risks. (Luthor) This gives CCOs the visibility they need to make informed governance decisions.

Action Item 2: Implement AI-Specific Risk Assessment Protocols

Commissioner Peirce highlighted the need for firms to understand AI-specific risks that traditional risk frameworks might miss. These include algorithmic bias, data quality issues, model drift, and explainability challenges.

The data is sobering. Risk managers are increasingly using generative AI for risk forecasting (30%), risk assessment (29%), and scenario planning and simulations (27%). (Riskonnect) But 80% of organizations have not taken steps to address the threats AI poses in the hands of others, including AI-driven fraud attacks. (Riskonnect)

Your implementation strategy:

• Develop AI-specific risk assessment questionnaires

• Create testing protocols for AI model performance

• Establish monitoring procedures for algorithmic bias

• Document data lineage and quality controls
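To make the bias-monitoring item above concrete, here is a minimal sketch of a disparate-impact check, assuming binary model outcomes grouped by a protected attribute. The 0.8 threshold echoes the common "four-fifths" rule of thumb; it is illustrative, not a legal standard, and the decision data is hypothetical.

```python
# Minimal bias check: compare positive-outcome rates across groups.

def selection_rates(outcomes: dict[str, list[int]]) -> dict[str, float]:
    """Share of positive outcomes (1s) per group."""
    return {g: sum(v) / len(v) for g, v in outcomes.items()}

def disparate_impact_ratio(outcomes: dict[str, list[int]]) -> float:
    """Ratio of the lowest group selection rate to the highest."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Hypothetical approval decisions (1 = approved) for two groups.
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6/8 = 0.75
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 3/8 = 0.375
}

ratio = disparate_impact_ratio(decisions)
# Rule of thumb: a ratio below ~0.8 warrants a deeper review.
print(round(ratio, 2), "flag for review" if ratio < 0.8 else "ok")  # 0.5 flag for review
```

Running a check like this on a schedule, and documenting the results, is the kind of evidence examiners look for when they ask how a firm monitors algorithmic bias.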

AI can automatically flag policy violations in marketing content or scan data use for privacy compliance issues. (Luthor) This capability becomes essential when you're trying to monitor AI systems at scale.

Action Item 3: Strengthen Marketing and Communications Surveillance

Commissioner Crenshaw stressed that AI-generated content poses new challenges for marketing compliance. Traditional review processes weren't designed to handle the volume and variability of AI-generated materials.

The compliance burden is real. Marketing teams need to create, review, and publish content while maintaining full compliance, and AI tools promise to make this process 6x faster. (Luthor) But speed without proper oversight creates new risks.

Key implementation steps:

• Update marketing compliance policies to address AI-generated content

• Implement automated content scanning for regulatory violations

• Create approval workflows for AI-assisted marketing materials

• Train marketing teams on AI-specific compliance requirements
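The automated-scanning step above can start very simply. As a hedged sketch: the pattern list below shows phrases that commonly trigger advertising-rule review (performance guarantees, risk-free claims, unsubstantiated superlatives), but the actual rule set and thresholds would come from the firm's compliance policy, not this example.

```python
import re

# Illustrative phrases that often trigger review under advertising rules;
# a real rule set would come from the firm's compliance policy.
FLAGGED_PATTERNS = {
    "performance guarantee": re.compile(r"\bguarantee[ds]?\b.*\breturns?\b", re.I),
    "risk-free claim": re.compile(r"\brisk[- ]free\b", re.I),
    "superlative": re.compile(r"\b(best|top|#1)\b.*\b(advisor|fund|returns?)\b", re.I),
}

def scan_marketing_copy(text: str) -> list[str]:
    """Return the names of rules whose patterns match the draft copy."""
    return [name for name, pat in FLAGGED_PATTERNS.items() if pat.search(text)]

draft = "Our strategy has guaranteed returns and a risk-free profile."
print(scan_marketing_copy(draft))  # ['performance guarantee', 'risk-free claim']
```

Keyword rules like these catch the obvious violations; AI-generated content at volume is where the commercial surveillance platforms discussed below earn their keep.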

Smarsh's AI-powered communications surveillance reduces false positives by up to 95% and surfaces up to 5x more issues that legacy technologies may have missed. (Smarsh) This shows what's possible when AI is properly deployed for compliance monitoring.

Luthor's platform enables marketing teams to maintain compliance while leveraging AI tools effectively. (Luthor) The key is having the right oversight mechanisms in place.

Action Item 4: Enhance Data Privacy and Protection Measures

The roundtable discussion touched on data privacy concerns with AI systems. Given that the EU's General Data Protection Regulation (GDPR) can levy fines up to €20 million or 4% of annual global turnover for violations, this isn't just a compliance issue—it's an existential risk. (Luthor)
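The GDPR cap is "whichever is higher" of the two figures, which is worth seeing as arithmetic. The worked example below is a sketch of the Article 83(5) upper bound only; actual fines depend on the facts of each case.

```python
def gdpr_max_fine(annual_global_turnover_eur: float) -> float:
    """Upper bound under GDPR Art. 83(5): EUR 20M or 4% of global turnover, whichever is higher."""
    return max(20_000_000, 0.04 * annual_global_turnover_eur)

# Below EUR 500M turnover, the EUR 20M floor dominates (4% of 300M is only 12M).
print(gdpr_max_fine(300_000_000))    # 20000000
# Above the crossover, the 4% figure takes over (4% of 2B is 80M).
print(gdpr_max_fine(2_000_000_000))  # 80000000.0
```

For any firm with more than roughly €500 million in global turnover, the percentage-based cap is the binding one, which is why large institutions treat this as an existential exposure.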

The regulatory landscape is getting more complex. As of early 2025, regulators have issued 2,245 fines totaling approximately €5.65 billion under GDPR. (Luthor) In the US, 20 states have now passed comprehensive privacy laws as of 2024. (Luthor)

Your data protection action plan:

• Audit AI systems for personal data processing

• Implement data minimization principles in AI applications

• Create consent management processes for AI-driven analytics

• Establish data retention and deletion policies for AI systems

Recent enforcement actions show regulators are serious. In 2025, the California Privacy Protection Agency hit Honda with a $632,500 fine for failing to properly honor consumer opt-outs on its website. (Luthor) In 2023, Ireland's DPA fined Meta €390 million for relying on forced consent to serve personalized ads, and later fined TikTok €345 million for mishandling children's personal data. (Luthor)

Action Item 5: Develop AI Incident Response Procedures

The SEC emphasized that firms need specific procedures for handling AI-related incidents. Traditional incident response plans don't address the unique challenges of AI system failures, bias incidents, or data breaches involving machine learning models.

AI is transforming risk management by detecting money laundering patterns, predicting and managing risks before they escalate, making complex problems more manageable, reducing the strain on compliance teams, and enabling businesses to stay ahead of emerging threats. (Comply) But this also means new types of incidents to prepare for.

Incident response framework:

• Create AI-specific incident classification systems

• Establish escalation procedures for algorithmic failures

• Develop communication templates for AI-related incidents

• Train incident response teams on AI system recovery procedures
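An incident classification system from the checklist above can be as simple as an enumerated taxonomy with an escalation map. The incident classes and notification lists below are hypothetical placeholders; a real map would mirror the firm's own org chart and escalation policy.

```python
from enum import Enum

class AIIncidentType(Enum):
    MODEL_DRIFT = "model_drift"
    BIAS_DETECTED = "bias_detected"
    DATA_BREACH = "data_breach"
    HALLUCINATED_OUTPUT = "hallucinated_output"

# Illustrative escalation map: who gets notified for each incident class.
ESCALATION = {
    AIIncidentType.DATA_BREACH: ("CCO", "CISO", "Legal"),
    AIIncidentType.BIAS_DETECTED: ("CCO", "Model Risk"),
    AIIncidentType.MODEL_DRIFT: ("Model Risk",),
    AIIncidentType.HALLUCINATED_OUTPUT: ("Compliance Analyst",),
}

def escalate(incident: AIIncidentType) -> tuple[str, ...]:
    """Return the notification list for an incident class."""
    return ESCALATION[incident]

print(escalate(AIIncidentType.DATA_BREACH))  # ('CCO', 'CISO', 'Legal')
```

The value of encoding this is that escalation stops depending on whoever happens to notice the incident; every class has a predetermined, documented routing.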

AI can reduce human oversight workload and detect risks before they escalate. (Luthor) But when things go wrong, you need clear procedures to respond quickly and effectively.

Action Item 6: Update Disclosure and Reporting Practices

The roundtable discussion highlighted evolving expectations around AI disclosure. Firms need to be transparent about their AI use while avoiding overly technical explanations that confuse rather than inform.

90% of risk/compliance teams who have embraced AI say it's already positively impacting their work. (Luthor) This suggests that AI adoption is accelerating, making disclosure practices even more critical.

Disclosure best practices:

• Create plain-English explanations of AI system functions

• Develop standardized AI disclosure templates

• Establish review processes for AI-related disclosures

• Train client-facing staff on AI disclosure requirements

Luthor helps RIAs and broker-dealers meet compliance obligations with expert support and AI-powered workflows. (Luthor) This includes helping firms develop appropriate disclosure practices that meet regulatory expectations.

Action Item 7: Implement Continuous Monitoring and Testing

Acting Chair Uyeda stressed that AI systems require ongoing monitoring, not just initial approval. Model performance can degrade over time, and regulatory requirements continue to evolve.

48% of compliance teams believe AI could improve internal efficiency and 35% say it would help them keep up with fast-changing regulations. (Luthor) But realizing these benefits requires systematic monitoring and testing procedures.

Monitoring framework:

• Establish performance benchmarks for AI systems

• Create regular testing schedules for model accuracy

• Implement drift detection for machine learning models

• Develop remediation procedures for underperforming systems
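Drift detection from the framework above has a well-known statistical workhorse: the Population Stability Index (PSI), which compares a model's score distribution today against its distribution at deployment. The sketch below assumes pre-binned proportions and uses the common rule-of-thumb thresholds (below 0.1 stable, 0.1 to 0.25 worth investigating); the example distributions are hypothetical.

```python
import math

def population_stability_index(expected: list[float], actual: list[float]) -> float:
    """PSI between two binned distributions (lists of bin proportions)."""
    eps = 1e-6  # avoid log(0) on empty bins
    return sum((a - e) * math.log((a + eps) / (e + eps))
               for e, a in zip(expected, actual))

# Hypothetical score distributions at deployment vs. today, in five bins.
baseline = [0.20, 0.25, 0.25, 0.20, 0.10]
current  = [0.10, 0.20, 0.25, 0.25, 0.20]

psi = population_stability_index(baseline, current)
# Rule of thumb: PSI above ~0.1 is drift worth investigating.
print(round(psi, 3), "investigate" if psi > 0.1 else "stable")  # 0.161 investigate
```

Scheduling a check like this monthly, with documented thresholds and a remediation owner, turns "continuous monitoring" from a policy statement into an auditable control.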

Saifr provides AI agents to help safeguard against compliance risks, including marketing compliance review, adverse media screening alerts, and electronic communications surveillance. (Saifr) These tools can scale operations with humans in the loop, enhance efficiency, and improve risk management automation. (Saifr)

Luthor's platform offers real-time risk detection, automated policy drafting, and continuous monitoring to keep clients audit-ready. (Luthor) This continuous monitoring capability is essential for maintaining compliance in an AI-driven environment.

The Implementation Reality

35% of risk executives say compliance and regulatory risk is the greatest risk to their company's ability to drive growth. (Luthor) This creates pressure to get AI compliance right the first time. Firms can't afford to wait for perfect clarity from regulators—they need to act on the guidance available today.

The good news is that AI can be part of the solution. Giskard AI Compliance Platform provides a robust way to track the compliance level of all AI projects through its streamlined evidence retrieval system and risk mitigation capabilities. (AWS Marketplace) This shows that compliance technology is evolving to meet the challenge.

34.86% of GRC professionals are planning for AI's future potential, building roadmaps even before piloting specific use cases. (MetricStream) This forward-thinking approach is exactly what the SEC's roundtable encouraged.

Your 90-Day Action Plan

The SEC's March 27, 2025 AI Roundtable wasn't just a discussion—it was a call to action. CCOs who implement these seven action items in the next 90 days will be better positioned to handle whatever regulatory guidance comes next.

Start with governance and risk assessment. You can't manage what you don't measure, and you can't govern what you don't understand. Build your AI inventory, establish your oversight framework, and create your risk assessment protocols.

Then focus on operational implementation. Update your marketing surveillance, strengthen your data protection measures, and develop your incident response procedures. These are the blocking and tackling activities that prevent small issues from becoming major problems.

Finally, think about transparency and monitoring. Update your disclosure practices and implement continuous monitoring systems. The regulatory environment will continue to evolve, but firms with strong monitoring capabilities can adapt more quickly.

Luthor is trusted by leading firms with a combined $5.7B+ in AUM. (Luthor) This track record demonstrates that AI-powered compliance solutions can work at scale, but only when implemented thoughtfully and systematically.

Final Thoughts

The SEC's AI roundtable made one thing clear: the time for AI compliance planning is now. Firms that wait for perfect regulatory clarity will find themselves playing catch-up while their competitors gain operational advantages.

But compliance doesn't have to be a competitive disadvantage. When done right, AI-powered compliance can reduce costs, improve efficiency, and free up resources for growth initiatives. The key is taking action on the guidance we have today while building systems that can adapt to tomorrow's requirements.

Ready to turn these action items into operational reality? Luthor's AI-powered compliance platform can help you implement these seven priorities systematically and efficiently. We automatically review marketing assets for compliance, reducing the risk, effort, and time needed to tackle marketing compliance at scale. (Luthor)

Request demo access to see how we can help your firm stay ahead of the regulatory curve while reducing compliance costs and complexity. (Luthor)

Frequently Asked Questions

What were the key takeaways from the SEC's March 27, 2025 AI Roundtable for compliance officers?

The SEC's March 2025 AI Roundtable emphasized the need for proactive AI governance frameworks in financial services. Acting Chair Mark Uyeda and Commissioners Peirce and Crenshaw outlined expectations for AI oversight, risk management, and documentation requirements. The roundtable highlighted that regulatory scrutiny of AI applications in finance is intensifying, requiring CCOs to implement comprehensive AI compliance strategies immediately.

How can RIAs and broker-dealers implement AI compliance monitoring effectively?

RIAs and broker-dealers should establish AI-specific risk assessment frameworks, implement continuous monitoring systems, and maintain detailed documentation of AI decision-making processes. Modern compliance platforms like Luthor's AI-powered tools can help firms automate oversight while keeping expert humans in the loop, enabling marketing teams to create compliant content 6x faster while maintaining full regulatory adherence.

What are the most critical AI compliance risks facing financial firms in 2025?

The primary AI compliance risks include algorithmic bias in investment recommendations, inadequate model governance, insufficient data privacy protections, and lack of explainability in AI-driven decisions. According to recent surveys, 43% of GRC professionals are actively evaluating AI solutions, while 80% of organizations haven't addressed AI-driven threats like fraud attacks, creating significant compliance gaps.

How should CCOs prioritize their 90-day AI compliance implementation plan?

CCOs should first conduct comprehensive AI risk assessments across all business functions, then establish governance frameworks and documentation standards. Priority should be given to client-facing AI applications, investment advisory processes, and communications surveillance. The implementation should include staff training, vendor due diligence for AI tools, and establishing clear escalation procedures for AI-related compliance issues.

What role does AI play in enhancing compliance monitoring and surveillance?

AI significantly enhances compliance monitoring through automated anomaly detection, pattern recognition in communications surveillance, and real-time risk assessment. Companies like Smarsh report reducing false positives by up to 95% while surfacing 5x more issues than legacy technologies. AI-powered platforms can process vast amounts of data for AML monitoring, conduct screening, and regulatory change management more efficiently than traditional methods.

How can smaller RIAs compete with larger firms in implementing AI compliance solutions?

Smaller RIAs can leverage cloud-based AI compliance platforms that provide enterprise-level capabilities without massive infrastructure investments. Solutions like Luthor's AI-native compliance platform are specifically designed for modern RIAs, offering automated oversight and streamlined workflows. These platforms enable smaller firms to access sophisticated compliance tools previously available only to large institutions, leveling the competitive playing field.

Want to see how Luthor increases your team's marketing output while staying fully compliant?
Request a Demo