Meet Meta's AI lobbying army
With 30 lobbyists and seven agencies, the company is primed to push its agenda on Washington
Meta is investing heavily in shaping US AI policy, with 30 well-connected lobbyists working across seven firms, according to an analysis of the company's AI-related lobbying disclosures for Q1 2024. The lobbyists include President Trump’s former deputy chief of staff and people with close ties to the lawmakers leading congressional AI efforts.
In total, Meta spent $7,640,000 on lobbying in the first quarter of 2024, according to OpenSecrets and my own analysis of lobbying disclosures. Of that, $315,000 went to firms lobbying on AI on its behalf, and a substantial fraction of its $6,693,750 internal lobbying spending was likely focused on AI, too.
With plans to regulate AI continuing to progress in Congress and the White House, Meta’s thirty-strong army is likely pushing the company's views — which cut against many AI regulation efforts — to lawmakers.
Meta’s substantial lobbying force is split between its 15 in-house lobbyists and 15 external lobbyists working across seven different firms (among them Avoq, Blue Mountain Strategies, Elevate Government Affairs, Jeffries Strategies, Mindset Advocacy, and Stewart Strategies and Solutions). Meta spent a cumulative $315,000 with these agencies in Q1, led by an $80,000 contract with Avoq.
Transformer reached out to Meta and the various lobbying agencies for comment; none provided an on-the-record statement.
Along with “continued conversations on artificial intelligence”, Meta is specifically lobbying on bills relating to AI labelling, deceptive AI and Section 230 immunity for AI.
Notable lobbyists include Rick Dearborn, formerly the deputy chief of staff to President Trump and executive director of the 2017 Trump transition team and now a partner at Mindset. Others include Luke Albee, former chief of staff to Sen. Mark Warner; Courtney Temple, former legislative director for Sen. Thom Tillis; Daniel Kidera and Sonia Gill, both former aides to Sen. Chuck Schumer; and Chris Randle, former legislative director to Rep. Hakeem Jeffries.
Those ties make Meta particularly well placed to influence AI policy. Sen. Schumer is leading Senate efforts to regulate AI, while Sens. Warner and Tillis recently proposed the Secure AI Act. Democratic leader Jeffries, meanwhile, announced a House AI Task Force earlier this year. And former President Trump has indicated he would repeal the White House Executive Order on AI if re-elected this November.
Meta, which builds the Llama series of open-source large language models, has been one of the most vocal opponents of major AI regulation efforts. Nick Clegg, the company's president of government affairs, has warned governments that regulation could stifle innovation. Yann LeCun, Meta's chief AI scientist, opposes attempts to regulate foundation models, arguing that “regulating research and development in AI is incredibly counterproductive”. Last year, LeCun signed a letter to the White House criticising President Biden’s executive order on AI. The letter, organised by venture capital firm Andreessen Horowitz, claimed that the “black box” nature of AI models had been “resolved” — a claim strongly disputed by many leading AI experts.
LeCun has also argued that national security risks from AI can be addressed by “my good AI against your bad AI”, elaborating that “if you have badly-behaved AI, either by bad design or deliberately, you’ll have smarter, good AIs taking them down”.
Notably, Meta's release practices are widely seen as among the least safe in the industry. Because the company releases its models' weights, any safeguards built in to prevent misuse can be easily circumvented. Researchers have shown that for less than $200, one can reverse all of the safety training on Meta’s Llama 2 model. Cybersecurity company CrowdStrike has said that Llama 2 is likely being used by cybercriminals.
In a letter to Meta in June 2023, Sens. Richard Blumenthal and Josh Hawley criticised the company for its deployment policy, writing: “By purporting to release LLaMA for the purpose of researching the abuse of AI, Meta effectively appears to have put a powerful tool in the hands of bad actors to actually engage in such abuse without much discernable forethought, preparation, or safeguards.”
Meta’s efforts significantly outgun most other attempts at AI lobbying and advocacy. OpenAI, which has 11 registered lobbyists, spent $340,000 in Q1; Anthropic spent $100,000 across five lobbyists. Alphabet, which owns Google DeepMind and spent a total of $3,650,000 on lobbying in Q1, had 23 registered AI lobbyists — though eight of those worked purely on issues related to its self-driving car division, Waymo. Seemingly the only company with a bigger AI lobbying force is Microsoft, which has 68 lobbyists registered to work on AI. It spent $2,557,764 in Q1, of which $635,000 went to outside agencies working on AI.
Advocacy efforts pale in comparison. AI safety advocacy groups the Center for AI Safety and Center for AI Policy spent $110,000 and $77,501 respectively in Q1; combined, they have ten registered lobbyists. The Mozilla Foundation, another non-profit advocating on AI, spent $30,000; the Electronic Frontier Foundation spent $10,000.
This analysis excludes the $631,250 Meta spent in Q1 on 14 agencies (and 33 lobbyists) lobbying on issues other than artificial intelligence. Even so, it is an imperfect method for assessing Meta's AI lobbying efforts: companies are not required to provide detailed breakdowns of how spending is split across specific issues, and the disclosures included here mention lobbying on a wide range of issues, of which artificial intelligence is just one.
Nevertheless, performing the same analysis for Q1 2023 shows how Meta's priorities are shifting: a year ago, Meta didn’t have a single external lobbyist working on AI issues.