Research Engineer / Scientist, Safeguards (San Francisco)

Anthropic
San Francisco, CA

About Anthropic

Anthropic's mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

About the role

The Robustness Team is part of the Alignment Science team, and conducts critical safety research and engineering to ensure AI systems can be deployed safely. As part of Anthropic's broader safeguards organization, we work on both immediate safety challenges and longer-term research initiatives, with projects spanning jailbreak robustness, automated red-teaming, monitoring techniques, and applied threat modeling. We prioritize techniques that will enable the safe deployment of more advanced AI systems (ASL-3 and beyond), taking a pragmatic approach to fundamental AI safety challenges while maintaining strong research rigor.

You take a pragmatic approach to running machine learning experiments to help us understand and steer the behavior of powerful AI systems. You care about making AI helpful, honest, and harmless, and are interested in the ways that this could be challenging in the context of human-level capabilities. You could describe yourself as both a scientist and an engineer. You'll focus both on risks from powerful future systems (like those we would designate as ASL-3 or ASL-4 under our Responsible Scaling Policy) and on better understanding risks occurring today. You will work in collaboration with other teams including Interpretability, Fine-Tuning, and the Frontier Red Team.

Note: Currently, the team has a preference for candidates who are able to be based in the Bay Area. However, we remain open to any candidate who can travel 25% to the Bay Area.

Representative projects:

  • Test the robustness of our safety techniques by training language models to subvert them, and measure how effective these attacks are against our interventions.
  • Run multi-agent reinforcement learning experiments to test out techniques like AI Debate.
  • Build tooling to efficiently evaluate the effectiveness of novel LLM-generated jailbreaks.
  • Write scripts and prompts to efficiently produce evaluation questions to test models' reasoning abilities in safety-relevant contexts.
  • Contribute ideas, figures, and writing to research papers, blog posts, and talks.
  • Run experiments that feed into key AI safety efforts at Anthropic, like the design and implementation of our Responsible Scaling Policy.

You may be a good fit if you:

  • Have significant software, ML, or research engineering experience
  • Have some experience contributing to empirical AI research projects
  • Have some familiarity with technical AI safety research
  • Prefer fast-moving collaborative projects to extensive solo efforts
  • Pick up slack, even if it goes outside your job description
  • Care about the impacts of AI

Strong candidates may also:

  • Have experience authoring research papers in machine learning, NLP, or AI safety
  • Have experience with LLMs
  • Have experience with reinforcement learning
  • Have experience with Kubernetes clusters and complex shared codebases

The expected salary range for this position is:

$315,000 - $560,000 USD

Logistics

Education requirements: We require at least a Bachelor's degree in a related field or equivalent experience.

Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.

Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.

We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.

How we're different

We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. We value impact, advancing our long-term goals of steerable, trustworthy AI, rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.

The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.

Come work with us!

Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process.


Apply for this job

* indicates a required field

First Name *

Last Name *

Email *

Phone *

Resume/CV *

Enter manually

Accepted file types: pdf, doc, docx, txt, rtf

(Optional) Personal Preferences *

How do you pronounce your name?

Website

Publications (e.g. Google Scholar) URL

When is the earliest you would want to start working with us?

Do you have any deadlines or timeline considerations we should be aware of?

AI Policy for Application * Select...

We believe that AI will have a transformative impact on the world, and we're seeking exceptional candidates who collaborate thoughtfully with Claude to realize this vision. At the same time, we want to understand your unique skills, expertise, and perspective through our hiring process. We invite you to review our AI partnership guidelines for candidates and confirm your understanding by selecting "Yes."

In a paragraph or two, why do you want to work on AI safety at Anthropic? *

Given your understanding of our team's priorities, what are three projects you'd be excited about working on at Anthropic that are aligned with those priorities? (1-2 sentences each)

You can learn about the team's work here:

Share a link to the piece of work you've done that is most relevant to the Robustness team, along with a brief description of the work and its relevance.

What's your ideal breakdown of your time in a working week, in terms of hours or % per week spent on meetings, coding, reading papers, etc.?

In one paragraph, provide an example of something meaningful that you have done in line with your values. Examples could include past work, volunteering, civic engagement, community organizing, donations, family support, etc.

Will you now or will you in the future require employment visa sponsorship to work in the country in which the job you're applying for is located? * Select...

Additional Information *

Add a cover letter or anything else you want to share.

LinkedIn Profile

Please provide either your LinkedIn profile or your resume; we require at least one of the two.

Are you open to working in-person in one of our offices 25% of the time? * Select...

Are you open to relocation for this role? * Select...

What is the address from which you plan on working? If you would need to relocate, please type "relocating."

Team Matching *

  • Pre-training: The Pre-training team trains large language models that are used by our product, alignment, and interpretability teams. Some projects include figuring out the optimal dataset, architecture, and hyper-parameters, and scaling and managing large training runs on our cluster.

  • AI Alignment Research: The Alignment team works to train more aligned (helpful, honest, and harmless) models and does alignment science to understand how alignment techniques work and try to extrapolate to uncover and address new failure modes.

  • Reinforcement Learning: Reinforcement Learning is used by a variety of different teams, both for alignment and to teach models to be more capable at specific tasks.

  • Platform: The Platform team builds shared infrastructure used by Anthropic's research and product teams. Areas of ownership include: the inference service that generates predictions from language models; extensive continuous…

Posted 2025-08-10
