I want to spend more time working on issues that cut across the whole AI industry, to have more freedom to publish, and to be more independent;
I will be starting a nonprofit and/or joining an existing one and will focus on AI policy research and advocacy, since I think AI is unlikely to be as safe and beneficial as possible without a concerted effort to make it so;
Some areas of research interest for me include assessment/forecasting of AI progress, regulation of frontier AI safety and security, economic impacts of AI, acceleration of beneficial AI applications, compute governance, and overall “AI grand strategy”;
I think OpenAI remains an exciting place for many kinds of work, and I look forward to seeing the team continue to ramp up its investment in safety culture and processes;
I’m interested in talking to folks who might want to advise or collaborate on my next steps.
See the full post here: https://milesbrundage.substack.com/p/why-im-leaving-openai-and-what-im