Apr 16 / Latest News

Anthropic Releases Claude Opus 4.7 With Major Upgrades for Agentic AI Workflows

Anthropic has released Claude Opus 4.7, a significant update aimed at teams building long-running, autonomous AI workflows. The new model delivers stronger performance in advanced software engineering and multimodal reasoning, along with stricter instruction adherence. These capabilities matter when AI agents must operate reliably across multi-step tasks without supervision. Opus 4.7 is now available across all Claude products, the API, Amazon Bedrock, Google Cloud's Vertex AI, and Microsoft Foundry, with pricing unchanged from Opus 4.6.

The upgrade introduces major improvements in handling complex coding tasks, including better self‑verification and more consistent execution during extended runs. Vision capabilities have also expanded, with support for images up to 2,576 pixels on the long edge—more than triple the resolution of earlier Claude models—enabling use cases such as reading dense screenshots, extracting structured data, and performing pixel‑accurate analysis. Anthropic notes that teams migrating from Opus 4.6 should expect stricter instruction‑following behavior and may need to adjust prompts accordingly.
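As a concrete illustration of the higher-resolution vision path, here is a minimal sketch using Anthropic's Python SDK and the Messages API to pull structured data out of a dense screenshot. The model identifier and file name are placeholder assumptions rather than confirmed values; the image content-block format shown is the one the Messages API uses for current Claude models.

```python
import base64
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Load a dense dashboard screenshot; per the release notes, long edges up to
# 2,576 pixels are accepted, so aggressive downscaling may no longer be needed.
with open("dashboard.png", "rb") as f:  # hypothetical file
    screenshot_b64 = base64.standard_b64encode(f.read()).decode("utf-8")

message = client.messages.create(
    model="claude-opus-4-7",  # placeholder id; check Anthropic's model list for the real one
    max_tokens=2048,
    messages=[{
        "role": "user",
        "content": [
            {"type": "image",
             "source": {"type": "base64", "media_type": "image/png", "data": screenshot_b64}},
            # Opus 4.7 reportedly follows instructions more literally than 4.6,
            # so spell out the exact output format you expect.
            {"type": "text",
             "text": "Extract every metric name and value visible in this screenshot as CSV "
                     "with columns metric,value. Output only the CSV."},
        ],
    }],
)
print(message.content[0].text)
```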

Opus 4.7 also enhances file‑system‑based memory, allowing the model to retain and use notes across multi‑session workflows. The release carries policy significance as well: it is the first model on which Anthropic is testing new cybersecurity safeguards before expanding them to its more capable Mythos‑class systems. These safeguards automatically block high‑risk or prohibited cyber requests, while legitimate security researchers can apply to Anthropic’s new Cyber Verification Program for approved use.
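The article does not spell out the memory interface itself, so the following is only a conceptual sketch of the pattern it describes: a harness persists notes to a local directory between sessions and injects them back into the prompt when the next session starts. The directory name, file layout, and helper functions are illustrative assumptions, not Anthropic's actual memory API.

```python
from pathlib import Path

# Illustrative locations only; not part of any Anthropic API.
MEMORY_DIR = Path("agent_memory")
NOTES_FILE = MEMORY_DIR / "notes.md"

def load_notes() -> str:
    """Return notes written by earlier sessions, or an empty string."""
    return NOTES_FILE.read_text(encoding="utf-8") if NOTES_FILE.exists() else ""

def save_notes(new_notes: str) -> None:
    """Append notes the model asked to carry forward into future sessions."""
    MEMORY_DIR.mkdir(exist_ok=True)
    with NOTES_FILE.open("a", encoding="utf-8") as f:
        f.write(new_notes.rstrip() + "\n")

# At session start, prior notes are prepended to the system prompt so the
# model can pick up a multi-session workflow where it left off.
system_prompt = (
    "You are a long-running coding agent.\n\n"
    "Notes from earlier sessions:\n" + load_notes()
)
```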

Safety evaluations show a profile similar to Opus 4.6, with improvements in honesty and resistance to malicious prompt injection, though the model remains imperfect in certain harm‑reduction scenarios. Migration considerations include a new tokenizer that increases token counts by 1.0–1.35× depending on content, and higher output token generation at elevated effort levels. Anthropic says internal testing shows overall efficiency gains despite the increased tokenization.
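For teams sizing the migration, the stated 1.0–1.35× range can be checked against a real workload. The sketch below uses the Messages API token-counting endpoint, assuming it behaves for the new model as it does for current ones; the old and new model identifiers are placeholders for the actual names.

```python
import anthropic

client = anthropic.Anthropic()

# Placeholder identifiers; substitute the real names from Anthropic's model list.
OLD_MODEL = "claude-opus-4-6"
NEW_MODEL = "claude-opus-4-7"

messages = [{
    "role": "user",
    "content": "Summarize the attached incident report and list follow-up actions.",
}]

# Count the same prompt under both tokenizers to see where your content
# falls in the reported 1.0-1.35x range.
old_count = client.messages.count_tokens(model=OLD_MODEL, messages=messages).input_tokens
new_count = client.messages.count_tokens(model=NEW_MODEL, messages=messages).input_tokens
print(f"old={old_count} tokens, new={new_count} tokens, ratio={new_count / old_count:.2f}x")
```

Running this over a sample of representative prompts gives a workload-specific inflation factor, which is more useful than the headline range when budgeting context windows and per-request costs.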