Analysis
The way we evaluate AI model safety might be about to break
As systems become more capable, researchers think we need a new type of safety evaluation
7 hrs ago • Lynette Bye
Does Elon still care about AI safety?
He’s doing a terrible job of showing it.
Jan 20 • Shakeel Hashim
New AI export controls have leaked. Here’s what you need to know.
The new rules are squarely focused on the frontier
Jan 10 • Shakeel Hashim
The media needs to start taking AGI seriously
An essay for Nieman Lab's 2025 Predictions series
Dec 30, 2024 • Shakeel Hashim
OpenAI's new model tried to avoid being shut down
o1 attempted to exfiltrate its weights to avoid being shut down
Dec 5, 2024 • Shakeel Hashim
The 2024 Transformer Gift Guide
The must-buy items for the AI policy people in your life
Dec 3, 2024 • Shakeel Hashim
Synthetic data is more useful than you think
Fears of ‘model collapse’ are overstated — at least for now
Nov 20, 2024 • Lynette Bye
New OpenAI emails reveal a long history of mistrust
Greg Brockman and Ilya Sutskever had questions about Sam Altman's intentions as early as 2017
Nov 15, 2024 • Shakeel Hashim
Yet another AI safety researcher has left OpenAI
Richard Ngo resigned today, saying it has become "harder for me to trust that my work here would benefit the world"
Nov 14, 2024 • Shakeel Hashim
Meta’s AI ‘safeguards’ are an elaborate fiction
Meta cannot prevent misuse, despite what it might pretend
Nov 13, 2024 • Shakeel Hashim
Why AI companies are eyeing the Middle East
And what the US government is doing about it
Nov 7, 2024 • Lynette Bye
What Trump means for AI safety
A repeal of last year's executive order, for one thing
Nov 6, 2024 • Shakeel Hashim