Avoiding “AI-Washing”: Lessons from the 2024 SEC Fines and 2025 Examination Priorities

July 22, 2025

Avoiding "AI-Washing": Lessons from the 2024 SEC Fines and 2025 Examination Priorities

The SEC has made it clear that it's not messing around when it comes to AI claims in 2025. After hitting firms like Delphia and Global Predictions with fines for exaggerating their AI capabilities, regulators are now flagging AI-washing as a top examination priority (The RACE to the BOTTOM). For investment advisors and broker-dealers, this means your marketing claims about AI need to be bulletproof, documented, and honestly, written a lot more carefully than they probably are right now.

We've seen this pattern before with other compliance sweeps. The FTC warned 670 companies about potential penalties for deceptive product claims last year, and Bank of America paid $30 million in fines for misleading marketing. But AI-washing feels different because the technology is so new that even well-intentioned firms are struggling to describe what their systems actually do versus what they hope they'll do someday.

What Actually Happened with Delphia and Global Predictions

Let's break down what got these firms in trouble, because the details matter for your own compliance strategy. The SEC's enforcement effort is focused on preventing a deceptive marketing tactic called "AI-washing" (The RACE to the BOTTOM). Both companies made claims about their AI capabilities that they simply couldn't back up with actual technology or processes.

Delphia claimed their AI was making investment decisions when it was really just basic data analysis. Global Predictions went even further, suggesting their AI could predict market movements with a level of accuracy that their actual algorithms never achieved. The SEC has pursued enforcement actions against public companies and investment advisers that exaggerate their AI capabilities or falsely claim to integrate AI into decision-making processes (The RACE to the BOTTOM).

What's particularly interesting is that these weren't cases where firms had no AI at all. They had some technology, some data processing capabilities, maybe even some machine learning models. But they oversold what those systems could actually do, and that's where they crossed the line from marketing enthusiasm into regulatory violation territory.

The 2025 Examination Priorities: What Regulators Are Looking For

The SEC has increased its scrutiny of artificial intelligence fraud, particularly misleading claims about AI in the investment space (The RACE to the BOTTOM). This year, examiners are specifically trained to spot the gap between AI marketing claims and actual capabilities.

They're looking for several red flags:

Vague AI Descriptions: Terms like "AI-powered" or "machine learning enhanced" without specific explanations of what the technology actually does. If you can't explain exactly which processes use AI and how, that's a problem.

Performance Claims Without Data: Saying your AI improves returns, reduces risk, or enhances decision-making without documented evidence of these improvements. The SEC wants to see the testing data, the backtesting results, and the ongoing performance metrics.

Overstated Automation: Claiming that AI makes investment decisions when humans are still heavily involved in the process. There's nothing wrong with human oversight, but you need to be honest about where humans step in and where AI actually operates independently.

Missing Documentation: This is where a lot of firms get tripped up. You need records showing how your AI systems work, what data they use, how they're trained, and how their performance is monitored. Machine learning technologies and algorithmic models are widely used by banks and trading firms for various business activities including investment recommendations, employee hiring, HR decisions, valuation modeling, and algorithmic trading (17a-4 LLC).

Marketing Compliance Do's and Don'ts for AI Claims

The Do's

Be Specific About AI Functions: Instead of saying "our AI analyzes markets," explain that "our machine learning models process earnings data to identify potential value stocks based on historical patterns." The more specific you are, the easier it is to substantiate your claims.

Document Everything: Keep detailed records of your AI development process, training data, performance testing, and ongoing monitoring. These AI-based activities present specific compliance challenges around data governance, record retention, and supervisory control systems (17a-4 LLC). At Luthor, we understand how tough these challenges can be, which is why our AI-driven compliance platform is fully 17a-4 compliant (Luthor AI).

Use Qualifying Language: Words like "designed to," "intended to," or "may help" provide important legal protection while still allowing you to describe your AI's capabilities. This isn't about being wishy-washy; it's about being legally accurate.

Test Your Claims: Before you publish any marketing material claiming AI benefits, make sure you can prove those benefits with data. Run backtests, measure performance improvements, and document the results (see the sketch after this list).

Regular Reviews: AI systems change as they learn and adapt. Your marketing claims need to be reviewed regularly to make sure they still accurately reflect what your systems actually do.
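As a concrete illustration of the "test and document" point, here's a minimal sketch of one way to keep a timestamped, auditable link between a marketing claim and its supporting evidence. The field names, the example file paths, and the hashing step are illustrative assumptions on our part, not a regulatory requirement:

```python
# A minimal sketch of a claim-substantiation record, assuming a firm wants
# a timestamped, auditable link between each marketing claim and its
# evidence. The schema is an illustrative assumption, not a standard.
import json
import hashlib
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class ClaimEvidence:
    claim_text: str        # the exact marketing language being used
    evidence_files: list   # paths to backtests, metrics, test reports
    methodology: str       # how the supporting analysis was run
    reviewed_by: str       # who signed off on the substantiation
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def fingerprint(self) -> str:
        """Hash the record so later edits to the claim are detectable."""
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

# Hypothetical example: note the qualifying language in the claim itself.
record = ClaimEvidence(
    claim_text="Our models are designed to flag potential value stocks "
               "based on historical earnings patterns.",
    evidence_files=["backtests/value_screen_2020_2024.csv"],
    methodology="Walk-forward backtest, 2020-2024, monthly rebalance.",
    reviewed_by="compliance@example.com",
)
print(record.fingerprint())
```

The point of the fingerprint is simple: if the marketing language changes later, the hash no longer matches, which forces a fresh substantiation review.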

The Don'ts

Don't Use Buzzwords Without Substance: "Revolutionary AI," "cutting-edge algorithms," and "advanced machine learning" are red flags unless you can explain exactly what makes them revolutionary, cutting-edge, or advanced.

Don't Promise Specific Outcomes: Avoid claims like "our AI will increase your returns" or "reduces risk by 30%." These are performance promises that you probably can't guarantee and definitely can't substantiate for all market conditions.

Don't Ignore Human Involvement: If humans are involved in your investment process, don't pretend they're not. The SEC is specifically looking for firms that overstate the level of AI automation in their decision-making.

Don't Copy Competitor Claims: Just because another firm makes certain AI claims doesn't mean those claims are compliant or that you can make similar ones without your own substantiation.

A Rapid Self-Audit Template for AI Marketing Claims

Here's a practical checklist you can use to evaluate your current AI marketing materials (a machine-readable version follows the list):

Technical Accuracy Review

• [ ] Can you explain exactly what your AI system does in plain English?

• [ ] Do you have documentation showing how your AI models are trained?

• [ ] Can you demonstrate the specific data inputs your AI uses?

• [ ] Do you have records of your AI system's performance over time?

• [ ] Can you explain what happens when your AI system encounters new or unusual market conditions?

Marketing Claims Audit

• [ ] Are all AI-related claims specific rather than general?

• [ ] Do you use appropriate qualifying language ("designed to," "may help," etc.)?

• [ ] Can you substantiate every performance claim with actual data?

• [ ] Have you avoided promising specific investment outcomes?

• [ ] Do your claims accurately reflect the current state of your AI, not future plans?

Documentation Requirements

• [ ] Do you have written policies governing AI development and deployment?

• [ ] Are your AI systems' decision-making processes documented?

• [ ] Do you maintain records of AI system changes and updates?

• [ ] Can you produce evidence of ongoing AI performance monitoring?

• [ ] Are your AI-related records organized for regulatory examination?
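If it helps to operationalize the template, here's a minimal sketch of the same checklist as data, so unanswered items surface as explicit gaps. The category names mirror the lists above; the audit function and answer format are illustrative assumptions:

```python
# A minimal sketch of the self-audit checklist as data, assuming you want
# to track answers and surface gaps programmatically.
CHECKLIST = {
    "Technical Accuracy Review": [
        "Can you explain exactly what your AI system does in plain English?",
        "Do you have documentation showing how your AI models are trained?",
        "Can you demonstrate the specific data inputs your AI uses?",
        "Do you have records of your AI system's performance over time?",
        "Can you explain behavior in new or unusual market conditions?",
    ],
    "Marketing Claims Audit": [
        "Are all AI-related claims specific rather than general?",
        "Do you use appropriate qualifying language?",
        "Can you substantiate every performance claim with actual data?",
        "Have you avoided promising specific investment outcomes?",
        "Do claims reflect the current state of your AI, not future plans?",
    ],
    "Documentation Requirements": [
        "Do you have written policies governing AI development and deployment?",
        "Are your AI systems' decision-making processes documented?",
        "Do you maintain records of AI system changes and updates?",
        "Can you produce evidence of ongoing AI performance monitoring?",
        "Are your AI-related records organized for regulatory examination?",
    ],
}

def audit(answers: dict) -> list:
    """Return every unanswered or failing item so gaps are explicit."""
    gaps = []
    for category, items in CHECKLIST.items():
        for item in items:
            if not answers.get(item, False):
                gaps.append(f"{category}: {item}")
    return gaps

# Example: with no answers recorded, every item is reported as a gap.
for gap in audit({}):
    print(gap)
```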

Firms are keen to use rapidly evolving AI technologies but are cautious about violating SEC recordkeeping provisions (Skadden). The challenge is that AI systems generate a lot of data, and not all of it needs to be preserved, but you need to know which parts do.

Record Retention Requirements for AI Systems

This is where things get complicated, and honestly, where a lot of firms are probably not fully compliant right now. AI systems generate massive amounts of data, and the SEC's recordkeeping rules weren't written with machine learning in mind. But they still apply.

Under SEC Rule 17a-4, broker-dealers have to preserve and maintain business records and communications for regulators (Luthor AI). For AI systems, this means you need to think about what constitutes a "business record" when your AI is making or influencing investment decisions.

Model Training Records: Documentation of how your AI models were trained, including the data sets used, the training parameters, and the validation results. These records help prove that your AI actually works the way you claim it does.

Decision Logs: Records of the decisions your AI systems make or recommend, along with the data inputs that led to those decisions. If your AI recommends buying a particular stock, you need records showing why it made that recommendation.

Performance Monitoring: Ongoing records of how your AI systems perform, including any errors, anomalies, or unexpected results. This is particularly important because AI systems can drift over time as market conditions change.

System Changes: Documentation of any changes to your AI systems, including software updates, parameter adjustments, or retraining with new data. Each change could potentially affect the accuracy of your marketing claims.

Broker-dealers have significant recordkeeping workloads, and according to Section 17(a) of the Exchange Act, as well as the SEC's books-and-records rules, firms must "make, keep, and furnish" certain records (Luthor AI). Rule 17a-4 also sets minimum retention periods for these documents, from three years all the way up to the lifetime of your business, plus rules about keeping them accessible and producing them quickly if regulators ask (Luthor AI).
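To make the decision-log idea concrete, here's a minimal sketch assuming append-only JSON-lines storage. The schema, the field names, and the retention tag are illustrative assumptions; map actual retention periods to Rule 17a-4 with counsel:

```python
# A minimal sketch of an AI decision log, assuming append-only JSON-lines
# storage. Schema and retention values are illustrative assumptions.
import json
from datetime import datetime, timezone

def log_decision(path, model_id, model_version, inputs, recommendation,
                 human_override=None, retention="6y"):
    """Append one AI recommendation, its inputs, and any human override."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,   # ties the decision to a model state
        "inputs": inputs,                 # the data that drove the output
        "recommendation": recommendation,
        "human_override": human_override, # honest record of human involvement
        "retention": retention,           # confirm against Rule 17a-4
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

# Hypothetical example: the model recommends, a human overrides, and both
# facts land in the record -- which is exactly what examiners want to see.
log_decision(
    "decisions.jsonl",
    model_id="value_screen",
    model_version="2025.07.01",
    inputs={"ticker": "XYZ", "pe_ratio": 9.8, "earnings_surprise": 0.04},
    recommendation="add_to_watchlist",
    human_override="PM declined; liquidity concerns",
)
```

Logging the model version alongside each decision also covers the "System Changes" point above: when you retrain or update a model, the log shows exactly which decisions came from which version.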

How Luthor's Claim-Substantiation Module Works

This is where we can actually help solve some of these problems. Our AI-driven compliance platform includes a claim-substantiation module that's specifically designed to help firms document and verify their AI capabilities (Luthor AI).

The module works by analyzing your marketing materials and flagging any AI-related claims that might need additional documentation. It then helps you organize the evidence you need to support those claims, whether that's performance data, technical specifications, or process documentation.

Automated Claim Detection: The system scans your marketing materials for AI-related claims and categorizes them by risk level. High-risk claims (like specific performance promises) get flagged for immediate review, while lower-risk claims (like general descriptions of AI use) are noted for periodic verification.

Evidence Mapping: For each claim, the system helps you identify what type of evidence you need to substantiate it. If you claim your AI improves portfolio performance, it will prompt you to provide backtesting data, live performance metrics, and statistical significance testing.

Documentation Workflows: The platform guides you through the process of collecting and organizing the documentation you need. It creates checklists, sets reminders for periodic reviews, and maintains an audit trail of all your substantiation efforts.

Regulatory Alignment: The system is designed around SEC and FINRA requirements, so the documentation it helps you create is organized in a way that regulators expect to see. This makes examinations smoother and reduces the risk of compliance issues.
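For illustration only, here's a toy sketch of the claim-detection idea, not Luthor's actual implementation. The regex patterns and risk tiers are assumptions about what such a scanner might flag:

```python
# A toy sketch of claim detection: scan marketing copy for AI-related
# claims and tag each match with a review tier. Patterns are illustrative
# assumptions, not Luthor's production rules.
import re

HIGH_RISK = [
    r"\bwill (increase|improve|boost)\b",   # promised outcomes
    r"\breduces? risk by \d+%",             # quantified guarantees
    r"\bguarantee[sd]?\b",
]
LOWER_RISK = [
    r"\bAI[- ]powered\b",                   # vague but common phrasing
    r"\bmachine learning\b",
    r"\balgorithm(s|ic)?\b",
]

def scan(text: str) -> list:
    """Return (tier, matched phrase) pairs for every flagged claim."""
    findings = []
    for pattern in HIGH_RISK:
        for m in re.finditer(pattern, text, re.IGNORECASE):
            findings.append(("HIGH: review immediately", m.group(0)))
    for pattern in LOWER_RISK:
        for m in re.finditer(pattern, text, re.IGNORECASE):
            findings.append(("LOW: verify periodically", m.group(0)))
    return findings

sample = "Our AI-powered platform will increase returns and reduces risk by 30%."
for tier, match in scan(sample):
    print(tier, "->", match)
```

A real system would go well beyond keyword matching, but even this toy version shows the workflow: every flagged phrase becomes an item that needs evidence behind it before the copy ships.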

Generative AI can identify inefficiencies and streamline workflows within financial firms by analyzing vast datasets of past operations (COMPLY). But the key is making sure you can document and verify what your AI actually does, not just what you hope it will do.

Practical Steps for 2025 Compliance

Immediate Actions (Next 30 Days)

Audit Your Current Marketing: Go through all your marketing materials, website content, and client communications. Flag every AI-related claim and ask yourself: "Can I prove this with data?"

Inventory Your AI Systems: Create a comprehensive list of all the AI and machine learning systems you use, what they do, and how they impact client services or investment decisions.

Review Your Documentation: Check whether you have adequate records for each AI system. If you're missing documentation, start collecting it now before regulators come knocking.

Medium-Term Planning (Next 90 Days)

Develop AI Governance Policies: Create written policies that govern how AI systems are developed, deployed, monitored, and marketed. These policies should address both the technical and compliance aspects of AI use.

Train Your Team: Make sure everyone who creates marketing content understands the new AI compliance requirements. This includes marketing staff, portfolio managers, and anyone else who communicates with clients about your AI capabilities.

Implement Monitoring Systems: Set up processes to regularly review and update your AI-related marketing claims as your systems evolve. AI systems change over time, and your marketing needs to keep up.

Long-Term Strategy (Next 12 Months)

Build Compliance into Development: Make sure compliance considerations are built into your AI development process from the beginning. It's much easier to document AI capabilities as you build them than to reverse-engineer documentation later.

Regular Compliance Reviews: Schedule quarterly reviews of all AI-related marketing claims and supporting documentation. This helps catch problems before they become regulatory issues.

Stay Current with Regulations: The regulatory environment around AI is evolving rapidly. Make sure you have processes in place to stay current with new guidance and requirements.

AI Meeting Assistants like Zoom AI Companion, Microsoft Copilot, Jump, and Otter.ai are increasingly being used by investment advisers to transcribe client and internal meetings (National Law Review). But there are currently no specific regulations pertaining to the use of AI transcripts or the recordkeeping obligations that would follow (National Law Review). This uncertainty is exactly why you need to be extra careful about how you describe and document your AI use.

Common Pitfalls to Avoid

The "Future Tense" Problem: Don't market AI capabilities you're planning to develop. Only describe what your systems can do right now, today. If you're beta-testing new AI features, be clear about their experimental status.

The "Black Box" Issue: Avoid describing your AI as mysterious or incomprehensible. If you can't explain how your AI works, regulators will assume you don't understand it well enough to make accurate claims about it.

The "One-Size-Fits-All" Mistake: Don't assume that AI claims that work for one type of client or investment strategy will work for all clients. Your substantiation needs to match the scope of your claims.

The "Set It and Forget It" Error: AI systems require ongoing monitoring and documentation. Don't assume that documentation you created when you first deployed an AI system will remain accurate indefinitely.

Building a Sustainable AI Compliance Program

The goal isn't just to avoid fines in 2025. You want to build a compliance program that can adapt as AI technology evolves and as regulatory requirements become more specific. Here's how to think about that:

Start with Strong Foundations: Good AI compliance starts with good general compliance practices. Make sure your basic recordkeeping, supervision, and marketing compliance programs are solid before you add AI-specific requirements on top.

Invest in the Right Tools: Manual compliance processes don't scale well with AI systems that generate massive amounts of data. You need technology solutions that can help you manage AI compliance efficiently. One-Compliance uses state-of-the-art AI to power financial services compliance programs and provides real-time risk notifications that identify compliance issues that usually go undetected (One-Compliance).

Build Cross-Functional Teams: AI compliance isn't just a legal or compliance issue. You need input from your technology teams, your marketing teams, and your investment teams to create a program that actually works.

Plan for Regulatory Evolution: The SEC's approach to AI regulation is still developing. Build flexibility into your compliance program so you can adapt to new requirements without starting from scratch.

Falling short of compliance requirements can mean fines, sanctions, or even losing your SEC registration (Luthor AI). But with the right approach, AI compliance doesn't have to be a barrier to innovation. It can actually help you build better, more reliable AI systems by forcing you to think carefully about what your technology actually does and how it benefits your clients.

The Broader Context: Why This Matters Now

The SEC's focus on AI-washing isn't happening in a vacuum. Regulators are seeing a pattern of technology hype that outpaces actual capabilities, and they're concerned about investor protection. ChatGPT and generative AI have been the focus of Wall Street, leading some firms to prohibit employee use (17a-4 LLC). But prohibition isn't a sustainable strategy when AI tools can provide real business value.

The key is finding the balance between innovation and compliance. You want to use AI to improve your services and operations, but you need to be honest about what your AI can and can't do. This isn't about being conservative or avoiding new technology. It's about being accurate and substantiating your claims.

Firms that get this right will have a competitive advantage. They'll be able to market their AI capabilities confidently because they know they can back up their claims. They'll avoid regulatory problems that can be expensive and damaging to their reputation. And they'll build trust with clients who are increasingly sophisticated about AI and skeptical of overblown claims.

Final Thoughts: Making AI Compliance Work for Your Firm

The 2024 SEC fines were a warning shot, and the 2025 examination priorities are the follow-through. If you're using AI in your investment process or marketing AI capabilities to clients, you need to take this seriously. But you also don't need to panic.

Start with an honest assessment of your current AI use and marketing claims. Identify the gaps between what you're saying and what you can prove. Then build the documentation and processes you need to close those gaps. It's not glamorous work, but it's necessary work.

The firms that will thrive in the AI era are the ones that can combine genuine innovation with rigorous compliance. They'll use AI to provide real value to clients while maintaining the trust and regulatory standing that their business depends on.

At Luthor, we've built our platform specifically to help firms manage these challenges. Our AI-driven compliance tools can help you automatically review marketing assets for compliance, reducing the risk, effort, and time needed to tackle marketing compliance at scale (Luthor AI). We understand that compliance isn't just about avoiding problems, it's about enabling your business to grow sustainably.

If you're ready to get serious about AI compliance, we'd love to show you how our platform can help. Request demo access to see how we can help you document your AI capabilities, substantiate your marketing claims, and build a compliance program that scales with your technology.

Frequently Asked Questions

What is AI-washing and why is the SEC cracking down on it?

AI-washing is a deceptive marketing tactic where firms exaggerate their AI capabilities or falsely claim to integrate AI into decision-making processes. The SEC has increased scrutiny of this practice, pursuing enforcement actions against companies like Delphia and Global Predictions for misleading AI claims. The regulator views AI-washing as a form of securities fraud that can mislead investors about a firm's technological capabilities and competitive advantages.

What were the key violations in the SEC's 2024 AI-washing enforcement cases?

The SEC fined firms like Delphia and Global Predictions for making false or misleading statements about their AI capabilities. These violations typically involved overstating the role of AI in investment processes, claiming AI integration that didn't exist, or misrepresenting the sophistication of their AI systems. The enforcement actions demonstrate that the SEC will hold firms accountable for any material misstatements about their use of artificial intelligence technology.

How can financial firms ensure compliance when using AI tools for recordkeeping?

Firms must ensure AI-generated content complies with SEC recordkeeping rules under regulations like Rule 17a-4. This includes maintaining proper records of AI-assisted communications, meeting transcripts, and investment recommendations. Companies should implement robust data governance frameworks, establish clear supervisory control systems, and ensure all AI-generated business records are properly retained and accessible for regulatory examination.

What documentation should firms maintain to avoid AI-washing violations?

Firms should maintain comprehensive documentation of their actual AI capabilities, including technical specifications, implementation timelines, and performance metrics. This includes keeping records of AI system limitations, human oversight processes, and any marketing materials that reference AI. Proper documentation serves as evidence that marketing claims align with actual technological capabilities and can protect firms during SEC examinations.

How do SEC advertising compliance requirements apply to AI-related marketing claims?

SEC advertising rules require that all marketing communications, including AI-related claims, be truthful and not misleading. Similar to other regulated industries like banking and real estate, financial firms must ensure their AI marketing claims are substantiated and comply with disclosure requirements. Firms should review their advertising compliance programs to address AI-specific risks and ensure all promotional materials accurately represent their technological capabilities without exaggeration.

What are the SEC's 2025 examination priorities regarding AI compliance?

The SEC has flagged AI-washing as a top examination priority for 2025, focusing on firms' AI-related disclosures and marketing claims. Examiners will likely scrutinize whether firms' actual AI capabilities match their public statements and marketing materials. This increased focus means firms should proactively review their AI compliance programs, conduct self-audits, and ensure their supervisory procedures adequately address AI-related risks before regulatory examinations occur.
