Claude Code has shipped Routines in research preview, a meaningful architectural shift in how the tool positions itself. A routine is a saved Claude Code session (a prompt, a set of repositories, and MCP connectors) packaged once and triggered automatically without a human in the loop. There are three triggers: a cron-style schedule, an HTTP POST to a per-routine endpoint, or a GitHub event such as a pull request opening or a workflow completing. You can combine all three on the same routine.
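Concretely, a routine combining all three trigger types might look something like the sketch below. Every field name here is invented for illustration; the research preview configures routines through the web UI, and no schema like this has been documented.

```python
# Hypothetical sketch of a routine definition. Field names and structure are
# invented stand-ins, not a documented Routines schema.
routine = {
    "name": "sdk-port-watch",
    "prompt": "When a PR merges in the Python SDK, open a matching PR "
              "in the TypeScript SDK with equivalent changes.",
    "repositories": ["org/python-sdk", "org/typescript-sdk"],
    "connectors": ["slack", "linear"],  # MCP connectors granted to the run
    "triggers": [
        {"type": "schedule", "cron": "0 6 * * 1"},          # Mondays, 06:00
        {"type": "webhook"},                                 # per-routine HTTP endpoint
        {"type": "github", "event": "pull_request.closed"},  # fires on PR close/merge
    ],
}
```

The point of the sketch is the shape of the moving parts: one static prompt, an explicit repository and connector scope, and multiple independent triggers attached to the same definition.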

The case that stands out is alert triage. Your monitoring tool calls the routine’s API endpoint when an error threshold is crossed, passing the alert body in the request. Claude pulls the stack trace, correlates it with recent commits in the repository, and opens a draft PR with a proposed fix. On-call reviews the PR rather than starting from a blank terminal. That’s a meaningfully different flow from pasting a stack trace into a chat window, and it fits naturally into tooling that already supports webhooks. The scheduled triggers cover a different class of problems, such as nightly backlog grooming and weekly docs-drift scans, while the GitHub triggers suit cross-SDK porting, where a merged PR in one SDK kicks off a matching change in another.
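On the monitoring side, that alert-triage flow is just an outbound webhook. A minimal sketch follows, assuming a hypothetical per-routine endpoint URL and bearer-token auth; neither is documented, and both are stand-ins for whatever the preview actually exposes.

```python
import json
import urllib.request

def build_alert_post(endpoint: str, token: str, alert: dict) -> urllib.request.Request:
    """Build the POST a monitoring tool would send when a threshold trips.

    The endpoint URL and auth scheme are hypothetical stand-ins. The routine
    receives the request body as context for its unattended run.
    """
    body = json.dumps({
        "alert": alert["name"],
        "severity": alert["severity"],
        "stack_trace": alert.get("stack_trace", ""),
    }).encode()
    return urllib.request.Request(
        endpoint,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",  # auth scheme is an assumption
        },
        method="POST",
    )

# urllib.request.urlopen(req) would actually send the request; omitted here.
```

Most monitoring tools (PagerDuty, Grafana, Datadog, and similar) can emit a POST like this natively, which is why the webhook trigger slots in with so little glue.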

A few things practitioners should understand before wiring this into anything consequential. Routines run with no permission prompts during execution, and they act as you: commits, pull requests, and connector actions (Slack messages, Linear tickets) appear under your GitHub identity and linked accounts. The routine’s scope is determined by which repositories you add and which connectors you include, so scoping those tightly is the actual security control, not any runtime approval gate. The research-preview status also means rate limits, API surface, and trigger semantics are all subject to change; the /fire endpoint ships under a dated beta header (experimental-cc-routine-2026-04-01), and Anthropic has stated that two previous header versions will remain usable during migrations.
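For direct callers, pinning that dated beta version is the one piece of version hygiene available today. The version string below comes from the announcement; the header name and the idea of pinning it per caller are assumptions about how the preview is wired, sketched here rather than documented.

```python
# The version string is from the announcement; the header name and auth
# scheme are assumptions about the preview's API, not documented behavior.
ROUTINE_BETA = "experimental-cc-routine-2026-04-01"

def routine_request_headers(token: str) -> dict[str, str]:
    """Headers a direct caller might pin so trigger semantics don't shift underneath it."""
    return {
        "Authorization": f"Bearer {token}",  # auth scheme is a stand-in
        "anthropic-beta": ROUTINE_BETA,      # pin the dated beta semantics
        "Content-Type": "application/json",
    }
```

Pinning the version explicitly means a future header revision breaks loudly at the call site rather than silently changing what a trigger does.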

The practical limit is prompt quality. Routines run autonomously on a static prompt you write at creation time, so the prompt needs to be self-contained and explicit about success criteria. A vague prompt that works well interactively will fail silently in an unattended run. Teams that have already developed reliable, well-scoped Claude Code prompts for specific tasks will find this a natural extension. Teams that haven’t will discover that the hard part was never the scheduling infrastructure.
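The difference is easiest to see side by side. Both prompts below are invented examples; the point is only that an unattended prompt must carry its own context, scope, and success criteria, since nobody is there to steer.

```python
# Works interactively, where you can course-correct; fails silently unattended.
vague_prompt = "Fix the flaky tests."

# Self-contained: names the target, bounds the change, and defines "done".
# The repo name and thresholds are invented for illustration.
explicit_prompt = """\
In repo org/payments-api, find tests marked flaky in the last 7 days of CI.
For each, either fix the underlying race or add a deterministic wait.
Do not delete or skip tests. Do not touch production code paths.
Success: each affected test passes 3 consecutive local runs.
Deliverable: one draft PR per root cause, linking the CI failure in the body.
"""
```

The explicit version reads like a work order rather than a chat opener, which is exactly the shift a static, unattended prompt demands.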

Claude Code Routines are available on Pro, Max, Team, and Enterprise plans with Claude Code on the web enabled. Create them at claude.ai/code/routines or from the CLI with /schedule. If you already have a well-tested prompt for a task you do repeatedly, the scheduling layer is now a form field away.