At a busy markup session on Wednesday, the Senate Commerce Committee passed a heap of AI bills.
Most notable was the Future of AI Innovation Act, which would formally establish AISI, direct it to develop AI standards and voluntary guidelines, and create “new AI testbeds with national laboratories to evaluate AI models”. Earlier this week, a host of tech companies — including OpenAI, Meta, and Microsoft — endorsed the bill.
However, several of Sen. Ted Cruz’s amendments to the bill also passed, including one that targets a range of AI ethics provisions. The amendment directs the President to “issue a technology directive … that prohibits any [actions, rules or guidance] by a Federal agency” that says things like AI systems should be “designed in an equitable way” or that AI developers should “conduct disparate impact or equity assessments”.
In his opening remarks, Cruz issued some choice quotes:
“Some wealthy and well-connected AI entrepreneurs and their corporate allies hype up AI inventions as uniquely powerful and dangerous. Of course, these existential dangers are all theoretical; we’ve never actually seen such damage from AI. Nevertheless, these companies say they need to be saved from themselves by the government because their products are so unsafe…”
“China is just as happy to sit back and let the U.S. Congress do the work of handicapping the American AI industry for it … To avoid the U.S. losing this race with China before it has even hardly begun, Congress should ensure that AI legislation is incremental and targeted.”
Aside from the Future of AI Innovation Act, the committee also passed the Validation and Evaluation for Trustworthy AI Act, which directs NIST to “develop detailed specifications, guidelines, and recommendations for third-party evaluators to work with AI companies to provide robust independent external assurance and verification of how their AI systems are developed and tested”.
It also passed the AI Research, Innovation, and Accountability Act, which would require NIST to “develop recommendations to agencies for technical, risk-based guardrails on ‘high-impact’ AI systems”, and require “companies deploying critical-impact AI to perform detailed risk assessments”.
And if that wasn’t enough, it passed the NSF AI Education Act and the CREATE AI Act, too.