
Company Profile

Review and manage the profile that powers all your tools.


Identity

Anthropic - AI Safety & Research

AI safety company building reliable, interpretable, and steerable AI systems.

Founded 2021 | San Francisco, CA | https://www.anthropic.com

Communications Context

Key Narratives
  • AI safety leadership - building AI that is safe and beneficial
  • Responsible scaling - scaling capabilities alongside safety research
  • Constitutional AI - training AI systems with explicit principles
  • Interpretability research - understanding how AI models think
  • Industry collaboration on AI governance and policy
Key Messages
  • Anthropic builds AI systems that are reliable, interpretable, and steerable
  • Safety and capability are complementary, not competing priorities
  • Claude is designed to be helpful, harmless, and honest
  • We believe in responsible disclosure and proactive safety research
  • AI development requires collaboration between industry, government, and civil society
Competitors
OpenAI - Primary competitor in frontier AI models
Google DeepMind - Competitor in AI research and model deployment
Meta AI - Competitor in open-source AI models
Mistral AI - European competitor in efficient AI models
Spokespeople / Leadership
Dario Amodei - CEO & Co-Founder
Company vision, AI safety policy, Frontier AI development
Daniela Amodei - President & Co-Founder
Business strategy, AI governance, Company culture
Chris Olah - Co-Founder
Interpretability research, Neural network visualization
Chief Communications Officer
Media relations, Public communications strategy, Crisis communications
Media Landscape

High-interest beat for tech reporters at NYT, WSJ, Bloomberg, The Information, Wired, and The Verge. AI policy reporters at Politico and Axios also cover regularly. Key dynamics: intense competition narrative with OpenAI, growing regulatory scrutiny, and strong interest in safety differentiation.

Industry Terms
frontier model, RLHF, constitutional AI, responsible scaling policy, interpretability, red teaming, alignment, AI safety levels (ASL), model card, system prompt
Sensitivities
  • Comparisons to OpenAI - avoid direct attacks, focus on Anthropic's approach
  • AI risk narratives - balance safety messaging without fearmongering
  • Regulatory discussions - support thoughtful regulation without specific policy endorsements
  • Employee matters - standard HR protocols apply
  • Unreleased model capabilities - embargo and NDA protocols
Company Boilerplate

Anthropic is an AI safety company that builds reliable, interpretable, and steerable AI systems. Founded in 2021, the company is headquartered in San Francisco and develops Claude, an AI assistant designed to be helpful, harmless, and honest. Anthropic conducts frontier research in AI safety, including work on constitutional AI, interpretability, and responsible scaling practices.

Brand

Primary: #CC785C
Secondary: #1a1a2e
Accent: #CC785C
Accent Light: #FDF0EB

Tone: professional

Output Preferences

Preferred Formats
Executive summary, Bullet points, Narrative brief

Naming: YYYY-MM-DD_type_topic | Length: standard | Boilerplate: Yes