Anthropic introduced computer use as a beta API in October 2024 and recently brought it to the consumer product. If enabled, Claude can see your screen, move your mouse, and type on your keyboard. With Dispatch, you assign a task from your phone and come back to finished work on your desktop.
Our operating systems were designed for humans. Accessibility APIs help people interact with their screens. Screen sharing lets people collaborate. Computer use repurposes these primitives to give an agent full desktop control.
That works, but it’s a loose foundation. These APIs assume the actor on the other end understands context and knows what’s sensitive. Agents fail unpredictably at both. We’ve already seen what happens when capable agents run without containment. I’ve written before that agents need the inverse of human trust: not open-ended permissions, but explicit boundaries enforced structurally. Computer use gives open-ended permissions.
This post walks through what you’re concretely giving up when you hand over control.

How It Works
Claude reaches for precision tools first: connectors to services like Slack or Google Calendar that use direct API calls. These are scoped to specific services. When no connector exists, Claude falls back to controlling your browser, mouse, keyboard, and screen. That’s computer use.
The launch post explains what computer use does, not how. For that, you need the API documentation, which describes the mechanism: Claude requests a screenshot, analyzes it, sends back a tool use request (click, type, scroll), your system executes it and returns a new screenshot, and Claude decides whether to continue. Anthropic calls this the “agent loop.” For the consumer product, Anthropic’s app runs this loop on your behalf: screenshot, analyze, act, repeat, until the task is done.
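The loop described above can be sketched in a few lines. Everything here is an illustrative stand-in, not Anthropic's SDK: `ask_model` fakes the round trip to the server (finishing after two actions), and `take_screenshot` and `execute` stand in for the OS-level capture and input primitives. The structure, though, is the one the API docs describe.

```python
# Minimal sketch of the "agent loop": screenshot, analyze, act, repeat.
# All names are hypothetical stand-ins for illustration only.
from dataclasses import dataclass


@dataclass
class Action:
    kind: str      # "click", "type", "scroll", or "done"
    payload: dict


def take_screenshot() -> bytes:
    """Stand-in for capturing the display; every cycle produces a new capture."""
    return b"<png bytes>"


def ask_model(screenshot: bytes, task: str, history: list) -> Action:
    """Stand-in for the round trip to Anthropic's servers.
    Note the screenshot itself is part of every request."""
    if len(history) >= 2:          # fake model: done after two actions
        return Action("done", {})
    return Action("click", {"x": 100, "y": 200})


def execute(action: Action) -> None:
    """Stand-in for driving mouse/keyboard via OS accessibility APIs."""
    pass


def agent_loop(task: str, max_cycles: int = 50) -> list:
    history = []
    for _ in range(max_cycles):
        shot = take_screenshot()             # screen state leaves the machine here
        action = ask_model(shot, task, history)
        if action.kind == "done":
            break
        execute(action)                      # executed locally, as the user
        history.append(action)
    return history


actions = agent_loop("archive old emails")
print(len(actions))  # 2
```

Two properties of the loop matter for everything that follows: a screenshot crosses the network on every cycle, and each action executes under the user's own identity.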
What you give up depends on which path Claude takes.
A note on terminology: Cowork is Anthropic’s broader feature for Claude working alongside you on your desktop. Computer use is a specific capability within Cowork where Claude controls your screen, mouse, and keyboard. Most of the trade-offs below are specific to computer use, but the audit trail gap applies to Cowork as a whole.
You Give Up: Control Over What Leaves Your Machine
When Claude uses your computer, it captures screenshots and sends them to Anthropic’s servers for analysis. That’s how the model sees your screen. Anthropic’s privacy documentation:
Computer use will process and collect screenshots from the computer’s display that Claude uses to interpret and interact with the interface, along with the user’s Inputs and Outputs. […] By default, Anthropic will automatically delete all screenshots from our backend within 30 days. — Anthropic, Computer Use Privacy
Whatever is visible on your display at the moment of capture goes to Anthropic. Email content, Slack messages, open documents, browser tabs, internal dashboards. Claude takes screenshots continuously during a task: every action cycle produces a new one.
The feature is in beta and is excluded from Zero Data Retention. This is worth pausing on. ZDR is one of the primary controls enterprises negotiate with Anthropic: the guarantee that inputs and outputs are not stored after processing. Computer use screenshots bypass that guarantee entirely. An organization can have ZDR in its contract, yet if an employee enables computer use, screenshots of their screen are processed by the model and retained for 30 days regardless. The data doesn’t just pass through: it’s analyzed, used for interpretation, and stored on Anthropic’s infrastructure.
If your screen contains personal data, client information, or regulated content, the screenshots create a data processing pathway to a third party that may not be covered by your existing agreements.
You Give Up: App-Level Boundaries
Anthropic’s blog states that Claude “will always request permission before accessing new applications” and that “some apps are off-limits by default.” That’s a reasonable starting point. But it’s behavioral scoping, not structural.
Computer use works through macOS accessibility APIs, originally designed for assistive technologies: screen readers, voice control. These APIs provide the ability to see and interact with the entire display. You can grant or deny access per app, but once granted, Claude can see and interact with everything those permissions cover.

There’s no technical way to restrict Claude to “only Slack and Google Docs but not the internal portal open in another tab.” If Claude misinterprets a task and navigates to the wrong application, the permission model doesn’t prevent it. With Dispatch, this can happen while you’re not at your computer.
Connectors are scoped by design.
Computer use is scoped by behavior.
The permission dialog itself says it plainly: “Some actions can’t be undone” and “This is a research preview. Start with tasks where mistakes are easy to fix.” That’s honest, but it’s guidance, not a guardrail. There’s no structural mechanism that prevents Claude from taking irreversible actions. No confirmation step before deleting a file, sending an email, or submitting a form. The safety net is the user’s judgment about which tasks to delegate, and with Dispatch, that judgment call happens on your phone before you walk away.
You Give Up: Knowing Who Did What
This one isn’t specific to computer use. It applies to Cowork as a whole, with or without screen control enabled. Anthropic’s documentation for Team and Enterprise plans:
Cowork activity is not captured in Audit Logs, Compliance API, or Data Exports. — Anthropic, Cowork on Team and Enterprise Plans
Cowork stores conversation history locally on users’ computers. This data is not subject to Anthropic’s standard data retention policies and cannot be centrally managed or exported by admins. — Anthropic, Cowork on Team and Enterprise Plans
When Claude acts on your behalf through Cowork, your organization’s compliance systems don’t see it. If Claude deletes a file or edits a document, the system records it as a human action. “shane@company.com deleted file” looks the same regardless of whether Shane did it or Claude did.
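The point can be made concrete with a hypothetical audit record. The schema below is invented for illustration, but it mirrors what most systems actually log: nothing in it has a field for who, or what, moved the mouse.

```python
# Hypothetical audit record for a file deletion. Whether a human or an
# agent performed the action, the record comes out byte-identical.
import json


def audit_record(actor_email: str, action: str, target: str) -> str:
    return json.dumps({
        "actor": actor_email,
        "action": action,
        "target": target,
        "timestamp": "2025-01-15T10:30:00Z",  # fixed for illustration
    }, sort_keys=True)


human_did_it = audit_record("shane@company.com", "file.delete", "q4-report.xlsx")
agent_did_it = audit_record("shane@company.com", "file.delete", "q4-report.xlsx")
assert human_did_it == agent_did_it  # the agent's involvement is unrecorded
```

Adding an `actor_type` field would fix this in a system you control, but Cowork drives existing applications through the user's session, so every downstream system records the human.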
For multi-step workflows the picture gets murkier. Claude reads an email, opens a spreadsheet, enters figures, submits a form. Each step looks like a person doing their job. If something goes wrong weeks later, the audit trail points to the human. The agent’s role is nowhere in the record.
Anthropic is upfront about this:
If your organization requires audit trails for compliance purposes, do not enable Cowork for regulated workloads. — Anthropic, Cowork on Team and Enterprise Plans
The Cowork toggle is organization-wide: all members get access or none do. No per-team or per-workload scoping.
What This Means for Regulated Work
Computer use is currently limited to Pro and Max plans. Enterprise plans don’t have access yet. But the audit trail gap applies to Cowork on Enterprise today, even without computer use. And when computer use eventually leaves beta and reaches Enterprise plans, organizations should have already assessed the trade-offs rather than discovering them at rollout.
There’s also the shadow IT angle. An employee installs Claude on their work laptop with a personal Pro subscription, enables computer use, and starts delegating tasks. IT has no visibility into this. No toggle to disable, no log to review, no policy that catches it. The screenshots of internal tools and sensitive data are already on Anthropic’s servers before anyone knows it’s happening.
For regulated industries, the trade-offs map to specific compliance concerns:
Audit: Agent actions that never reach audit logs leave incomplete evidence of who did what. Any framework that requires demonstrable controls over access and actions has a gap here.
Data protection: For organizations under GDPR or similar data protection regimes, the current state of computer use is effectively a non-starter. Screenshots containing personal data are processed by the model and retained for 30 days on Anthropic’s infrastructure, with no ZDR protection. That’s personal data leaving your environment, being processed by a third party, and stored without the retention controls you’d normally require. Until the ZDR exclusion is lifted or screenshot handling gets compliant data processing guarantees, this isn’t a risk to manage: it’s a gap to close before enabling.
Regulated industries: Any sector that requires human approval trails for consequential actions has a problem. An agent acting autonomously on a desktop, triggered by a phone prompt, doesn’t produce the kind of audit trail regulators typically expect.
The Trade-Off
Enabling computer use is a decision about what control you’re willing to trade for convenience. The three things you give up: visibility into what data leaves your machine, structural boundaries around what the agent can access, and the ability to distinguish human actions from agent actions in your audit trail.
And the blast radius is large. This isn’t an agent with access to one API or one repository. It’s an agent with access to everything on your screen: your email client, your internal tools, your browser sessions, your file system. A coding agent that goes wrong can break a build. A desktop agent that goes wrong can send an email, delete files, or interact with production systems. More autonomy, more reach, more damage when it goes wrong.
An agent with access to your entire desktop needs more than a permission prompt and a recommendation to start with safe tasks.
Computer use is early: beta, macOS only, Pro and Max plans only. Most enterprises can’t enable it yet, and these trade-offs will need to be addressed before it ships on Enterprise plans.
But you don’t need to wait for that rollout to act. The audit trail gap applies to Cowork on Enterprise today, with or without computer use. And an employee with a personal Pro subscription can enable computer use on a work laptop without IT ever knowing.
Sources