As expected, the Labour Party won a massive majority in the UK’s general election last night. Here’s a quick summary of what that means for AI:
Later today, Peter Kyle is expected to be appointed as the new Secretary of State for Science, Innovation and Technology, putting him in charge of DSIT, the government department looking after AI policy.
Labour’s manifesto commits the party to introducing “binding regulation” on the companies developing the most powerful AI models, banning the creation of deepfakes, and making it easier to build data centres.
Kyle has also said that Labour will put the AI Safety Institute on “a statutory footing” and “legislate to require the frontier AI labs to release their safety data”.
The regulation is likely to look like a formalised version of the voluntary commitments developers made at the Seoul AI Summit, requiring companies to produce and stick to responsible scaling policies and to conduct dangerous capability evaluations (presumably in partnership with AISI). But that’s the aspiration: it’s a pretty safe bet that companies will lobby like hell to water any regulation down.
The timing of all this is unclear. Politico previously reported that an AI bill is unlikely to make it into Labour’s first King’s Speech (which will set the legislative priorities for the first year of their government). But Politico has also reported that DSIT civil servants have already started drafting legislation for Labour.
Labour’s getting support on AI policy from a wide range of external figures, too. Labour Together’s Kirsty Innes is reportedly “effectively writing the party’s AI policy”, while Faculty AI had a staff member seconded into Kyle’s office to work on it.