OpenAI red team contractor program in 2026.

OpenAI's red team contractor program is structured differently from Anthropic's: more partners, more disclosure-bounty programs, less direct contracting. Here's what we know about the structure, pay, and realistic entry paths in 2026.

Three program types

OpenAI engages security and red team contractors through three structures:

  1. Direct contractor relationships for senior researchers — small pool, hardest entry.
  2. Specialty consultancy partners (Apollo Research, METR, Trail of Bits, Lakera) — most common path for working contractors.
  3. Bug bounty / responsible disclosure programs — open to anyone, scaled rewards.

Direct contractor pay

  • Senior security researcher: $130–$220/hr
  • Domain specialist (chemistry, bio, cyber): $150–$300/hr
  • Frontier evaluation researcher: $180–$320/hr

The pool is small. Most direct contractors are former security researchers, ex-employees, or have published substantial alignment / evaluation work.

Specialty consultancy pay (more accessible)

Consultancies that hold OpenAI contracts typically pay their contractors:

  • Generalist red team: $90–$150/hr
  • Domain specialty: $130–$200/hr
  • Senior with security background: $150–$240/hr

This is the more realistic entry path. The consultancies have their own application processes; OpenAI is the end client but you're employed by the consultancy.

Specialty consultancy income: $140/hr × 14 hrs/wk ≈ $7,840/month (assuming four billable weeks), with project bursts on top.
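The quick math above can be sketched as a small helper. This is illustrative only, using the article's example figures ($140/hr, 14 hrs/wk) and the assumption of four billable weeks per month; actual months vary, and project bursts are not modeled:

```python
def monthly_income(hourly_rate: float, hours_per_week: float,
                   weeks_per_month: float = 4.0) -> float:
    """Rough gross monthly income for a part-time contract.

    Assumes a flat weekly schedule and four billable weeks per month;
    does not account for taxes, gaps between projects, or burst work.
    """
    return hourly_rate * hours_per_week * weeks_per_month

# Example figures from the article: $140/hr at 14 hrs/wk
print(monthly_income(140, 14))  # → 7840.0
```

Swapping in a 52/12 ≈ 4.33 weeks-per-month convention instead pushes the same rate to roughly $8,490/month, which is why rough monthly figures in this space vary by a few hundred dollars.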

Bug bounty program (open to all)

OpenAI's bug bounty program runs through Bugcrowd. Rewards in 2026:

  • Low-severity findings: $200–$2,000
  • Medium-severity findings: $2,000–$10,000
  • High-severity findings: $10,000–$50,000
  • Critical findings: $50,000–$200,000+

Open to any researcher. The published scope covers model behavior, infrastructure security, and account abuse. Realistic for security researchers; unrealistic as primary income for non-specialists.

How specialty consultancies hire

Apollo Research, METR, Trail of Bits, Lakera, and similar firms run their own contractor pipelines:

  1. Application via firm's careers page.
  2. Technical screen — typically a security challenge, evaluation exercise, or take-home assessment.
  3. Behavioral / fit interview with the consulting firm's lead.
  4. Project assignment — typically 4–12 week engagements with a frontier lab as end client.

Each firm has its own specialty: Apollo Research focuses on AI safety / alignment; METR focuses on evaluation; Trail of Bits focuses on security; Lakera focuses on LLM-specific security.

What credentials open these doors

  • Security background: CVE credits, bug bounty history, security publications.
  • AI safety thinking: Public engagement with alignment / evaluation research.
  • Senior software engineering: Particularly in distributed systems or compilers.
  • Domain credentials: Verified expertise in chemistry, biology, cybersecurity (for those specialty teams).

How OpenAI direct vs Anthropic direct compares

  • Anthropic: More structured around explicit "AI safety contractor" identity. Public-engagement path is well-trodden.
  • OpenAI: More fragmented across specialty partners. Most contractors come through consultancies rather than direct.
  • Pay: Comparable at senior tiers. OpenAI specialty contractors via consultancies sometimes earn slightly less than Anthropic direct contractors due to the consultancy markup.

The realistic 2026 entry path

For most contractors interested in OpenAI red team work:

  1. Build a security or AI safety track record over 6–12 months.
  2. Apply to a specialty consultancy (Apollo Research, METR, Trail of Bits, Lakera) that holds OpenAI contracts.
  3. Work through the consultancy for 12+ months to build reputation.
  4. Eventually apply direct if you've built specific expertise.

Bottom line

OpenAI red team work is accessible primarily through specialty consultancy partners, not direct contracting. Pay through consultancies is competitive ($90–$200/hr range). Direct contracting requires established security or AI safety reputation. Bug bounty is open to all but realistic only for security specialists. The combined OpenAI + consultancy ecosystem is one of the highest-paying paths in AI contracting for those with the right background.


Frequently asked questions

How much does OpenAI pay red team contractors?
Direct contractors earn $130–$320/hr depending on role: $130–$220/hr for senior security researchers, $150–$300/hr for domain specialists, and $180–$320/hr for frontier evaluation researchers. Through specialty consultancies (Apollo Research, METR, Trail of Bits, Lakera): $90–$240/hr depending on tier. Bug bounty rewards range from $200 to $200,000+ per finding.
How do I apply to OpenAI's red team program?
Three paths: direct application (highest bar), specialty consultancy partners like Apollo Research or METR (most common), or bug bounty program via Bugcrowd (open to all but unrealistic as primary income).
Do I need a security background for OpenAI red team work?
Strongly preferred. CVE credits, bug bounty history, security publications, or specialty domain expertise (chemistry, biology, cybersecurity) are typically required for direct contractor or specialty consultancy roles.
Which is better — OpenAI or Anthropic contractor work?
Both pay similar senior rates. Anthropic has clearer 'AI safety contractor' pathway via public engagement. OpenAI is more fragmented across specialty partners. Pick based on which culture and specialty matches your background.