The “ethics vs safety” fight misses the real enemy: Big Tech
While advocates for regulation squabble, Big Tech is pushing for no rules at all
If you follow the AI policy space, you’ve likely heard that the AI safety ecosystem is ramping up its DC advocacy work. People like Suresh Venkatasubramanian and Deborah Raji accuse the safety community — of which I count myself as a member — of pushing an ideologically driven focus on speculative risks that distracts from present-day harms like algorithmic bias.
There’s a fight for the future of AI regulation, they say, and safety folks are trying to derail it.
It’s a compelling story. But this “ethics vs safety” framing misses the real story. While the two camps bicker, a much bigger threat looms: the tech industry’s all-out push to avoid regulation.
Yes, AI safety groups are spending more on policy advocacy than they were. But their spending pales in comparison to that of the tech giants. Last week, Politico reported that companies are pouring “tens of millions of dollars into an all-hands effort to block strict safety rules on advanced artificial intelligence and get lawmakers to worry about China instead — and so far, they seem to be winning”. That should worry ethics and safety advocates alike.
In the US, billionaire Marc Andreessen is spending millions to advance his bizarre techno-libertarian agenda (“trust and safety” and “tech ethics” are the “enemy”, he says) — which just so happens to benefit his firm’s gigantic AI portfolio. He’s retained a small army of lobbyists to peddle his false claims that interpretability has been “resolved”.
IBM’s “AI Alliance”, meanwhile, may sound responsible — but it appears to be little more than a lobbying operation, pouring money into Politico ads and sponsoring fancy events at Davos and SXSW. IBM lobbyist Chris Padilla has been popping up all over the media recently, fearmongering about “regulatory over-reach”; one wonders what he’s saying to senators behind closed doors. Padilla recently told Politico that the company has around ten full-time lobbyists working on AI. And my own analysis this week found that Meta, another anti-regulation advocate, has a small army of thirty lobbyists pushing its AI agenda.
And of course there’s OpenAI which, despite public posturing around wanting regulation, lobbied to water down the EU AI Act. The US, where it has hired a swathe of top-tier lobbyists, is likely next.
There’s also tech trade association NetChoice, Republican think tank R Street and the Koch-funded Americans for Prosperity Foundation and Abundance Institute, all of which are trying to undermine last year’s executive order on AI. Even at the state level, companies like Workday and trade associations like the BSA and the Chamber of Commerce are intimately involved in shaping legislation to their interests. Just last month, the Consumer Tech Association appeared to have successfully led a campaign to kill Connecticut’s AI bill.
The sheer scale of this lobbying was on full display in the fight over the EU AI Act. Investigations from the Corporate Europe Observatory revealed that two-thirds of meetings with MEPs about AI in 2023 were held with industry and trade associations. According to some MEPs, tech companies were using “shell organisations” in “covert misleading ways”. Google reportedly proposed “copy-pasting a phrase from Google’s ethics guidelines directly into EU recommendations”, while Meta bombarded policymakers with a 134-page document containing its suggestions for how to rewrite the Act. And let’s not forget Mistral, whose close ties to Emmanuel Macron almost torpedoed the Act altogether.
The real voice shaping AI policy is neither the safety nor the ethics “faction”. It’s the giant tech companies with a vested interest in lobbying against regulation that threatens their bottom line. Yet instead of joining forces against this common adversary, the two camps are busy fighting among themselves.
It’s a dynamic favoured by Big Tech. Meta’s Nick Clegg, for instance, says we shouldn’t focus on “speculative future risks” from AI, instead paying attention to “current problems – for example, on the transparency and detectability of AI-generated content”. Do we really think that Meta, of all companies, actually cares about the present-day harms from AI? Or could it perhaps be that stoking infighting among those advocating for AI regulation is a rather good way to stop any regulation from being passed at all?
Unfortunately, the tactic seems to be working. Sen. Todd Young told Politico that he’s “apprehensive about constraining innovation”, saying that “the more people learn about some of these [AI] models, the more comfortable they are that the steps our government has already taken are by-and-large appropriate steps”. That “learning”, of course, takes the form of closed-door meetings with lobbyists pushing a very particular angle. In the same piece, IBM’s Padilla gloated about getting one over on advocates.
Ethics and safety supporters don’t agree on everything. But both believe that some regulation is needed, and when policy proposals are on the table, there’s often unified support. Both the EU AI Act and the US executive order are good examples of how legislation can rein in the power of tech companies, and each received the support of both communities. Everyone — except the tech companies — agrees that we need mandatory evaluations and standards for AI systems, and more democratic oversight over how AI tech is developed and deployed.
Instead of letting Big Tech divide and conquer, people who want AI regulation should join forces to combat the industry’s giant lobbying machine. Anything else is playing right into Big Tech’s hands.