OpenAI's red team contractor program is structured differently from Anthropic's: more partners, a bigger role for bug bounty and disclosure programs, and less direct contracting. Here's what we know about pay, structure, and entry paths in 2026.
Three program types
OpenAI engages security and red team contractors through three structures:
- Direct contractor relationships for senior researchers — small pool, hardest entry.
- Specialty consultancy partners (Apollo Research, METR, Trail of Bits, Lakera) — most common path for working contractors.
- Bug bounty / responsible disclosure programs — open to anyone, rewards scaled by severity.
Direct contractor pay
- Senior security researcher: $130–$220/hr
- Domain specialist (chemistry, bio, cyber): $150–$300/hr
- Frontier evaluation researcher: $180–$320/hr
The pool is small. Most direct contractors are established security researchers, former employees, or people with substantial published alignment / evaluation work.
Specialty consultancy pay (more accessible)
Consultancies that hold OpenAI contracts typically pay their contractors:
- Generalist red team: $90–$150/hr
- Domain specialty: $130–$200/hr
- Senior with security background: $150–$240/hr
This is the more realistic entry path. The consultancies have their own application processes; OpenAI is the end client but you're employed by the consultancy.
Bug bounty program (open to all)
OpenAI's bug bounty program runs through Bugcrowd. Rewards in 2026:
- Low-severity findings: $200–$2,000
- Medium-severity findings: $2,000–$10,000
- High-severity findings: $10,000–$50,000
- Critical findings: $50,000–$200,000+
Open to any researcher. OpenAI publishes a specific scope (model behavior, infrastructure security, account abuse). Realistic as an income stream for security researchers; unrealistic as primary income for non-specialists.
How specialty consultancies hire
Apollo Research, METR, Trail of Bits, Lakera, and similar firms run their own contractor pipelines:
- Application via firm's careers page.
- Technical screen — typically a security challenge, evaluation exercise, or take-home assessment.
- Behavioral / fit interview with the consulting firm's lead.
- Project assignment — typically 4–12 week engagements with a frontier lab as end client.
Each firm has its own specialty: Apollo Research in AI safety / alignment, METR in evaluations, Trail of Bits in security, Lakera in LLM-specific security.
What credentials open these doors
- Security background: CVE credits, bug bounty history, security publications.
- AI safety thinking: Public engagement with alignment / evaluation research.
- Senior software engineering: Particularly in distributed systems or compilers.
- Domain credentials: Verified expertise in chemistry, biology, cybersecurity (for those specialty teams).
How OpenAI direct compares with Anthropic direct
- Anthropic: More structured around explicit "AI safety contractor" identity. Public-engagement path is well-trodden.
- OpenAI: More fragmented across specialty partners. Most contractors come through consultancies rather than direct.
- Pay: Comparable at senior tiers. OpenAI specialty contractors working through consultancies sometimes earn slightly less than Anthropic direct contractors because the consultancy takes a margin (rough arithmetic below).
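To see why the consultancy margin matters, here's a minimal arithmetic sketch. The bill rate and margin below are assumptions chosen purely for illustration; neither OpenAI's rates to its partners nor the consultancies' margins are public.

```python
def contractor_rate(bill_rate: float, consultancy_margin: float) -> float:
    """Effective hourly pay once the consultancy takes its cut.

    bill_rate: what the end client pays per hour (assumed figure)
    consultancy_margin: fraction the consultancy keeps (assumed figure)
    """
    return bill_rate * (1 - consultancy_margin)

# Hypothetical numbers: a $250/hr bill rate with a 30% margin
# leaves the contractor at $175/hr -- inside the consultancy
# ranges above, but below the top of the direct-contract tiers.
print(contractor_rate(250, 0.30))  # 175.0
```

Under these assumed numbers, the margin alone is enough to account for the gap between the consultancy ranges and the direct-contract tiers quoted earlier.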
The realistic 2026 entry path
For most contractors interested in OpenAI red team work:
- Build a security or AI safety track record over 6–12 months.
- Apply to a specialty consultancy (Apollo Research, METR, Trail of Bits, Lakera) that holds OpenAI contracts.
- Work through the consultancy for 12+ months to build reputation.
- Eventually apply direct if you've built specific expertise.
Bottom line
OpenAI red team work is accessible primarily through specialty consultancy partners, not direct contracting. Pay through consultancies is competitive ($90–$240/hr range). Direct contracting requires an established security or AI safety reputation. The bug bounty program is open to all but realistic only for security specialists. The combined OpenAI + consultancy ecosystem is one of the highest-paying paths in AI contracting for those with the right background.