Supervised influence: when AI models our laws
Big tech remains hell-bent on celebrating an AI revolution that has yet to fully manifest [1][2][3]. Every day, consumers are pummeled with promotion for generative tools [4][5]. While AI may offer genuine potential for the future, the industry’s biggest bulls have churned out an oversaturated market complete with skyrocketing valuations and eye-watering sums spent on AI products. Whatever the future holds, right now we’re in a hype cycle. Bubbles carry their own repercussions down the road, but one very real disaster is already here and notably absent from the public discourse: weaponized lobbying.
Technology companies have been market makers for years, and the AI inflection has only amplified their influence to volatile levels. The S&P 500 reached record highs this year, with just three companies (Microsoft, Nvidia, and Apple) accounting for roughly a third of its gains. Alongside this sits a significant increase in lobbying spend. According to Issue One, tech giants have spent $51 million on lobbying this year, a 14% increase over 2023. That may not sound enormous, but for firms like Meta, which increased spend by 29% compared to 2023, it buys a whole new brigade of anti-regulation lawyers for the United States alone.
Don’t touch our stuff
This expense goes to fighting legislation like the Kids Online Safety and Privacy Act in the US Congress (which passed the Senate 91–3) or California’s Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (considered a modest mandate to avoid “critical harm”) [6].
It also goes to tussling with regulators in the EU over privacy enforcement under the General Data Protection Regulation (GDPR) and the development of the EU AI Act. In opposing measures to establish privacy, public safety, and developer guardrails around an undeniably nascent technology, ‘stifling innovation’ is the rallying argument of tech leaders. “Europe has become less competitive and less innovative compared to other regions and it now risks falling further behind in the AI era,” writes Mark Zuckerberg in an open letter, signed by several tech CEOs, asking for less ‘fragmented’ regulation.
The letter defines fragmented regulation as inconsistent and unpredictable, which the author of this article would argue is wordplay marketing genius on par with popularizing the term ‘hallucinations’ to describe the illogical goop produced by model confabulations. Much like the alignment problem the engineers behind this letter confront in developing their own AI systems, insisting on fully actualized, invariable parameters before any rule takes effect is a senseless approach. Protective regulation has to start somewhere.
A game of definitions
Big tech’s dismissive stance on regulation is a present danger. This attitude is often reflected by anti-regulation lobbyists skilled at spreading sweeping definitions of innovation, framing it as a force that democratizes fundamental rights, like free speech.
Daniel Leufer, who appeared on our podcast The Good Robot in the episode “The EU AI Act Part 1”, has seen this performance firsthand:
“I think [innovation is] used in two different ways that are not mutually compatible. It has a very neutral meaning, which is just ‘something new’. And then it has a more loaded meaning, which is ‘something new that’s good’. And often industry will say, ‘we need more innovation’. But do we need innovation in chemical weapons? Do we need innovations in ways to undermine people’s rights, etc.? Or do we need socially beneficial innovation? I find that if you listen to anti-regulation lobbying lines, et cetera, it switches between those two meanings depending on what’s convenient. So I often like to say to them you need to pin down what you mean here. Are you using the word innovation in a value-laden way?”
Animosity may be expected between regulators and technologists, but indifference to the details that enable mutual consideration reveals something darker. AI leaders are not merely determined to remove guardrails; they display an active refusal to take part in the collaborative system meant to protect users. This is also demonstrated in the enactment of policy. During the EU AI Act’s formation, industry lobbyists argued for a transparency loophole via an amendment to Article 6 in Section 1, the Act’s foundational risk-classification provision. That article codified that potentially high-risk systems must be subject to distinct transparency requirements and responsible development practices. The loophole stipulation, however, added that “A provider who considers that an AI system referred to in Annex III is not high-risk shall document its assessment before that system is placed on the market or put into service.” The amendment, which passed to the astonishment of Daniel Leufer and his colleague Caterina Rodelli, effectively permitted developers in the high-risk category to “decide whether you pose a risk” (Daniel Leufer).
In the United States, regulators have been brought to heel by an inexhaustible army of lobbyists skilled at injecting skepticism and using technical expertise to recast themselves as educators of policymakers. Craig Albright, a top lobbyist in Washington, D.C., said this brand of educating was “the primary thing that we do.” This state of collaboration, seen in both the EU and the US, reveals an entrenched willingness of policymakers to collude in an asymmetric dialogue, to the detriment of civil liberties throughout the Western world. A lack of technical understanding is no excuse for regulatory restraint. In many cases, it is unnecessary to understand the intricacies of AI systems to enforce essential protections. One example, provided by Daniel Leufer, is the use of facial recognition in publicly accessible spaces, where unique identifiers are used to flag individuals on a watch list. The practice blatantly undermines human rights protections that already exist in public spaces, yet it remains contested or outright ignored under pressure from large-scale developers.
Bridging policy and power
As AI development continues to surge in spend and market support, how can public advocates bring big tech back to the table? Innovation, under whatever definition, will remain a paramount value for tech leaders in any case brought against them.
Lawmakers in the United States, swayed by lobbying culture, face gridlock on any regulation of AI, including privacy protections. The challenge is to fight back in a landscape defined by deep pockets. While tech giants continue to exert influence over concerted reform, lawmakers can repurpose existing regulations and levy injunctions or fines when user protections are demonstrably abused. To counter undue collaboration with tech, compromised lawmakers can be held to account by investing in institutional developer advocates who teach unbiased AI fundamentals. A paper by Stuart Russell et al. provides one such framework (When code isn’t law: rethinking regulation for artificial intelligence). Where federal policy fails, states and cities can apply precise enforcement where AI systems are deployed, an increasing trend [7][8]. Finally, persistence. Politicians and activists continue to lead the charge, including a new congressional group determined to pass bipartisan reform.
Today, humanity is subject to a revolution it did not ask for, handled by a collective it does not trust. But AI did not originate with today’s developers, and its future is not beholden to their influence, nor to the surveillance business model that came to define our internet. As Sarah Myers West notes, “AI has meant lots of different things over the course of almost 70 years” (The Good Robot, “The EU AI Act Part 2”). To win AI back as a force for the public good, regulators need a war chest that cuts through hype, calls out mendacious negotiation, and, where necessary, forces developers’ hands.
Further reading & listening
- Redirecting Europe’s AI Industrial Policy | AI Now
- The Good Robot EU AI Act Episodes: Part 1 (Daniel Leufer and Caterina Rodelli) | Part 2 (Amba Kak and Sarah Myers West)
- The Tech Giants’ Anti-regulation Fantasy | The Atlantic
- Regulators Are Finally Catching Up With Big Tech | Wired
- A history-lover’s guide to the market panic over AI | The Economist