Join Anthropic as a Fellow to conduct full-time AI safety research with mentorship, funding, and access to collaborative workspaces.
Your Role
Here's what you will be doing:
- Conduct 4 months of full-time empirical AI research using external infrastructure aligned with Anthropic's research priorities.
- Produce public research outputs such as paper submissions.
- Work under direct mentorship from Anthropic researchers.
- Collaborate within a shared workspace in Berkeley, California, or London, UK, or work remotely within the US, UK, or Canada.
- Engage with the broader AI safety and security research community.
- Participate in an interview process including application review, reference checks, technical assessments, and research discussions.
- Choose from multiple Fellows workstreams, including AI Safety, AI Security, ML Systems & Performance, Reinforcement Learning, and Economics & Societal Impacts.
About You
Anthropic is looking for candidates who:
- Are motivated to ensure AI is safe and beneficial for society.
- Have a strong technical background in computer science, mathematics, or physics.
- Are fluent in Python programming.
- Are available to work full-time for 4 months on the Fellows program.
- Thrive in fast-paced, collaborative environments and communicate clearly.
- Have experience or a strong interest in empirical AI research and related disciplines.
- Possess work authorization and reside in the US, UK, or Canada during the program.
- Hold a minimum of a Bachelor's degree or equivalent education, training, or experience relevant to the role.
Compensation & Benefits
- Weekly stipend of $3,850 USD for 40 hours per week over 4 months.
- Funding for compute resources (~$15,000/month) and other research expenses.
- Access to shared workspaces with options for remote work within eligible countries.
- Benefits vary by country.
Training & Development
- Direct mentorship from experienced Anthropic researchers.
- Opportunity to produce public research outputs and contribute to open-source projects.
- Connection to a broad AI safety and security research community.
Career Progression
- Strong performance may lead to consideration for full-time roles at Anthropic.
- Previous cohorts saw 25-50% of fellows receive full-time offers.
- Support for career advancement in AI safety and security fields, including opportunities at other organizations.
How to Apply
- Submit an initial application including references.
- Complete technical assessments and interviews.
- Participate in a research discussion as part of the interview process.
- Indicate workstream preferences in the application.
- Applications are reviewed on a rolling basis for cohorts starting in July 2026 and beyond.
This job may close before the stated closing date; you are encouraged to apply as soon as possible.