Claude Code Gets a Major Upgrade: Opus 4.7, /ultrareview, and Routines Set a New Bar for AI Coding

The workflow that starts with "review this code for me" may soon feel antiquated.

In May 2026, Anthropic delivered two major changes to Claude Code. First, Opus 4.7 became the default model for Max and Team Premium plans. Second, two new features landed: /ultrareview, which runs automated parallel multi-agent code review, and Routines, which runs AI agents automatically on a schedule or in response to GitHub events. Together, they raise a question: is an AI coding tool now a teammate?


Opus 4.7: Enter the xhigh Effort Level

[Image: Claude Opus 4.7]

Claude Opus 4.7, released on April 16, 2026, was built with a sharp focus on software engineering. It's already generally available through GitHub Copilot as well.

The standout addition is the xhigh effort level. Previously, there was nothing between high and max. The new xhigh sits in between and is described as the recommended setting for most coding and agentic tasks. It becomes the default the first time you switch to Opus 4.7, and you can dial it in with the /effort slider.

In practice, this means the model pauses to reason through more steps before responding, rather than firing back immediately. For coding tasks where accuracy matters more than speed, the quality difference is tangible.
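Anthropic's public API doesn't expose named effort levels; extended thinking is steered with a token budget instead. Here's a minimal Python sketch of one way to approximate the effort ladder yourself, assuming a hypothetical claude-opus-4-7 model ID and budget numbers picked purely for illustration:

    # Approximate named effort levels with extended-thinking token budgets.
    # The budget values and the "claude-opus-4-7" model ID are assumptions
    # for illustration, not documented settings.
    import anthropic

    EFFORT_BUDGETS = {
        "high": 8_000,
        "xhigh": 16_000,  # the new middle ground
        "max": 32_000,
    }

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    def ask(prompt: str, effort: str = "xhigh") -> str:
        response = client.messages.create(
            model="claude-opus-4-7",  # hypothetical model ID
            max_tokens=EFFORT_BUDGETS[effort] + 4_000,  # must exceed the thinking budget
            thinking={"type": "enabled", "budget_tokens": EFFORT_BUDGETS[effort]},
            messages=[{"role": "user", "content": prompt}],
        )
        # With thinking enabled, the reply mixes thinking and text blocks; keep the text.
        return "".join(block.text for block in response.content if block.type == "text")

The tradeoff is exactly the one described above: a larger thinking budget buys a slower but more deliberate answer.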


/ultrareview: Solo Code Review Is a Thing of the Past

Code review is one of the most common bottlenecks in software development. It requires human attention, takes time, and gets worse as reviewer fatigue sets in.

/ultrareview attacks this with a parallel multi-agent approach. Instead of one reviewer making one pass, multiple agents read the code changes simultaneously, each surfacing problems from a different angle: bugs, design flaws, and security vulnerabilities that a single pass often misses.

"Runs a dedicated review session that reads through your changes and flags what a careful reviewer would catch." β€” Anthropic Docs

Type /ultrareview in the terminal. It's a natural fit as the final safety net before merging a PR. A key detail: it runs in the cloud, not on your local machine.
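Anthropic hasn't published /ultrareview's internals, but the parallel multi-agent pattern itself is straightforward to sketch. The following Python outline, using only the standard anthropic SDK (the lens prompts and model ID are my own placeholders), runs several reviewers concurrently over the same diff:

    # Sketch of the parallel multi-agent review pattern, not Anthropic's
    # implementation: several reviewers, each with a different focus, read one diff.
    import asyncio
    import anthropic

    LENSES = {
        "bugs": "Find logic errors, off-by-one mistakes, and unhandled edge cases.",
        "design": "Flag design flaws: leaky abstractions, poor naming, tight coupling.",
        "security": "Look for injection risks, auth gaps, and unsafe input handling.",
    }

    client = anthropic.AsyncAnthropic()

    async def review(lens: str, instructions: str, diff: str) -> tuple[str, str]:
        response = await client.messages.create(
            model="claude-opus-4-7",  # hypothetical model ID
            max_tokens=2_000,
            system=f"You are a careful code reviewer. {instructions}",
            messages=[{"role": "user", "content": f"Review this diff:\n\n{diff}"}],
        )
        return lens, response.content[0].text

    async def ultrareview(diff: str) -> dict[str, str]:
        # Fan out to all reviewers at once, then merge findings by lens.
        results = await asyncio.gather(
            *(review(lens, inst, diff) for lens, inst in LENSES.items())
        )
        return dict(results)

    # findings = asyncio.run(ultrareview(open("changes.diff").read()))

The payoff of the fan-out is that each agent spends its full attention on one class of problem instead of skimming for everything at once.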


Routines: AI Agents That Work While You're Away

[Image: Claude Code Routines Workflow]

Routines is the most forward-looking feature in this update. The one-sentence summary:

Even with your computer off, AI agents execute automatically when a scheduled time arrives or a GitHub event fires.

You define a routine in Claude Code on the web: set a prompt, specify which repos it can access, and configure the connectors it needs. From that point, the agent runs when a PR is opened, a release is published, or a webhook you defined fires; no machine of yours needs to be running.
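Anthropic hasn't published a schema for routine definitions, so treat the following as a guess at the shape rather than real config: every field name here is invented to illustrate the prompt-plus-trigger-plus-scope idea. Both examples map onto the scenarios listed next.

    # Hypothetical routine definitions; the field names are invented for
    # illustration and are not Anthropic's actual schema.
    pr_review_routine = {
        "name": "baseline-pr-review",
        "trigger": {"type": "github_event", "event": "pull_request.opened"},
        "repos": ["myorg/backend"],
        "connectors": ["github"],
        "prompt": "Run a code style check and a baseline review on the new PR.",
    }

    daily_digest_routine = {
        "name": "morning-merge-digest",
        "trigger": {"type": "schedule", "cron": "0 9 * * *"},  # every day at 9 AM
        "repos": ["myorg/backend", "myorg/frontend"],
        "connectors": ["github", "slack"],
        "prompt": "Summarize yesterday's merged changes and post them to #eng.",
    }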

Some real-world scenarios:

  • PR opened → automatic code style check + baseline review
  • Every day at 9 AM → Slack summary of changes merged yesterday
  • Release tag created → auto-draft the changelog

This is not plain automation. It's automation where the agent exercises judgment using context.


Other Noteworthy Updates This Month

A few features that entered preview in April are worth noting too:

  • Monitor tool: Streams background process events into the conversation in real time, so you can tail logs and react live (a generic sketch of the mechanic follows this list).
  • Computer Use (research preview): Claude can now open native apps, click through UI, and verify changes from the CLI.
  • Native binaries: The bundled JavaScript runtime has been replaced with native binaries, improving startup speed and stability.
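Claude Code's Monitor interface isn't something you script from the outside, but the underlying mechanic, streaming a background process's output as a feed of events, is easy to picture. A generic Python sketch (illustrative plumbing, not Claude Code's API):

    # Generic sketch of streaming a background process's output as events,
    # the kind of live feed Monitor pipes into the conversation.
    # Illustrative plumbing only; this is not Claude Code's actual API.
    import subprocess

    def stream_events(cmd: list[str]):
        proc = subprocess.Popen(
            cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True
        )
        # Yield each output line as it arrives so a consumer can react live.
        for line in proc.stdout:
            yield line.rstrip("\n")
        proc.wait()

    # React to a test run in real time, e.g.:
    # for event in stream_events(["pytest", "-q"]):
    #     if "FAILED" in event:
    #         print("failure spotted:", event)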

An EdTech CEO's Perspective

Looking at these updates, I think we've entered what I'd call the "second generation" of AI coding tools.

The first generation helped you write code. The second generation works alongside you, or in your place, even when you're not there. Routines is the clearest symbol of that shift.

But this raises a harder question for education. What happens when students learning to code have AI doing too much of the work? Learning to use a tool is different from understanding the underlying principles. As the vibe-coding generation starts using Routines and /ultrareview, it's worth asking seriously: how do we redefine what "learning to code" means?


Tips for Getting Started

  1. After switching to Opus 4.7: Use the /effort slider to try xhigh yourself. You need to feel the quality-speed tradeoff firsthand to find the right setting.
  2. Using /ultrareview: Register it as a step before merging any PR. This way, skipped reviews become a system-level failure, not a human oversight.
  3. Starting with Routines: Begin simple. Define a routine with a clear, verifiable outcome, like "summarize file structure when a PR is opened", so you can iterate quickly on feedback.
  4. Monitor in action: Pull build or test logs into the conversation in real time. Debugging "why did this build fail?" becomes a live conversation with Claude.
