AI firms and their executives poured $83 million into federal elections in 2024. For 2026, they've already committed over $185 million, with projections north of $300 million by November. The money is flowing from both sides of the AI regulation debate: OpenAI co-founder Greg Brockman and Andreessen Horowitz are backing Leading the Future with $100 million-plus, while Trump ally David Sacks just launched Innovation Council Action with another $100 million. Anthropic broke ranks and gave $20 million to Public First Action, a group backing candidates who favor regulation. There is no federal law governing AI in political ads—the FEC voted not to create one.

1. This Money Is Working (AI Industry, Campaign Strategists, Fox News)

Of 20 AI-backed candidates in the Texas and North Carolina primaries, 19 won. The industry isn't guessing—it's buying proven results.

The win rate speaks for itself. In the first major test of AI industry political spending, 19 of 20 candidates backed by AI-connected money won their primary races in Texas and North Carolina. Leading the Future ended 2025 with $39 million banked and expected at least $50 million more from Brockman and Andreessen Horowitz in Q1 2026. Marc Andreessen and Ben Horowitz each personally gave $12.5 million. This isn't passive investment—it's the crypto playbook scaled up, after crypto's $130 million spend in 2024 proved the model works.

The spending strategy is deliberate misdirection. Pro-AI super PACs are flooding the election with ads about everything except artificial intelligence. They're running ads on immigration, crime, and culture-war issues to elect candidates who happen to be friendly to AI deregulation. NBC News found that the industry's messaging avoids AI entirely, focusing instead on hot-button issues that move voters. The goal isn't to win the AI argument—it's to make sure nobody has to have it.

AI is also transforming the campaigns themselves. Down-ballot races are using AI tools to produce TV-quality ad creative at a fraction of the traditional cost. Political data vendors now deliver 50-70% match rates for voter files, and geotargeted connected TV ads enable precise voter targeting through streaming platforms. Texas primaries saw AI used to mock opponents, dramatize attacks, create parody content, and enhance speeches. The technology isn't just funding campaigns—it's running them.

2. This Is Dangerous (NBC News, Built In, Pro-Regulation Advocates)

The industry isn't spending to win arguments about AI. It's spending to make sure no one in Congress is brave enough to regulate it.

The quiet part is the whole strategy. NBC News analysis found that AI industry super PACs are deliberately avoiding AI as a campaign issue, instead flooding races with ads on unrelated hot-button topics. The result is that candidates get elected owing favors to an industry whose product they never had to defend publicly. Built In reported that pro-AI spending could push some lawmakers to soften their AI stances for fear of inviting a flood of campaign spending against them.

Anthropic broke ranks for a reason. Anthropic donated $20 million to Public First Action specifically because it opposes the deregulatory consensus among its competitors. Public First Action was created by former Representatives Chris Stewart (R-UT) and Brad Carson (D-OK) and plans to back 30-50 candidates from both parties who favor giving the public more visibility into AI companies, oppose preemption of state-level AI regulation, and support export controls on AI chips and regulation of high-risk uses like AI-enabled biological weapons. The fact that an AI company is funding the opposition to AI deregulation tells you something about what the rest of the industry is buying.

The targets confirm the strategy. State Assembly member Alex Bores, a former Palantir engineer who pushed one of the first state-level AI safety laws (the RAISE Act), earned opposition from Leading the Future for his pro-regulation stance. The industry isn't just supporting friendly candidates—it's actively targeting the few lawmakers trying to write rules. When the people who build AI are spending to defeat the people who regulate AI, the spending isn't about elections. It's about impunity.

3. This Isn't What Voters Want (NBC News Polling, Election Integrity Groups, Purdue University)

57% of voters say AI risks outweigh the benefits. 89% want disclosure labels. The industry is spending hundreds of millions to elect candidates who'll ignore both numbers.

The public opinion gap is staggering. A majority of registered voters—57%—say AI risks outweigh the benefits, versus 34% who say the opposite. Eight in ten voters are extremely or very concerned that AI is eroding trust in news and social media. And 89% of voters believe people should be clearly told when content is created with AI. Over 70% of likely voters want state and federal regulators to have a hand in AI policy. The industry is spending against the expressed preferences of the electorate it's trying to influence.

The deepfake problem is already here. The NRSC released an AI-generated ad showing Texas Democrat James Talarico appearing to stand in front of a Texas flag reciting his old social media posts, with an "AI GENERATED" label displayed in small type for roughly three seconds before shrinking to near-invisible text for the rest of the ad. Research published in the Journal of Creative Communications found that people struggle to identify deepfake videos and that misinformation shifts their opinions; disclaimers don't prevent persuasion. Purdue University's Daniel Schiff warned that AI could "very much risk being supercharged" in its ability to misinform voters.

And the regulatory gap is exactly what the AI industry wants. More than two dozen states have passed or are considering AI disclosure laws for political ads, but most just require labels. Minnesota is the exception—it bans unauthorized deepfakes of candidates outright. Texas, where the Talarico deepfake ran, tried to pass a disclosure bill but it stalled in the state Senate. At the federal level, the FEC voted not to create new rules and instead issued an interpretive rule saying existing fraud laws already apply. The FCC proposed disclosure requirements for broadcast ads but has no jurisdiction over social media or streaming—where most political ads now run. The result: $185 million in spending, no federal rules, and disclaimers that don't work even where they're required.

Where This Lands

The AI industry learned from crypto's 2024 playbook: spend big, spend early, spend on everything except your actual product. Nineteen of twenty backed candidates won their primaries, and the industry hasn't even started spending for November. On the other side, Anthropic's $20 million bet on regulation and Public First Action's bipartisan candidate slate represent a genuine counter-strategy—but they're outgunned roughly five-to-one. Whether this money buys the regulatory blank check the industry wants, or whether voter demand for AI rules (57% say risks outweigh benefits, 89% want disclosure) eventually forces Congress to act, depends on whether $300 million in campaign spending can outrun the deepfakes that keep proving the public's point.

Sources