Anthropic AI Computer Use

AI can now use your computer. Anthropic has released an upgraded version of its Claude 3.5 Sonnet model that can not only understand but also interact with any desktop app.

The new ‘computer use’ API means Claude can use computers the same way humans do: looking at the screen, clicking buttons, moving a cursor and even typing.

These new capabilities will allow Claude to perform more complex tasks than ever before. It will be able to understand more nuanced queries and locate specific data points from large datasets.

It will also be able to take over and automate routine tasks including scheduling appointments, managing emails and drafting documents.

In a fictionalised video demonstration, Anthropic showed a ‘customer’ sending a vendor request form. The data needed to fill out the form is ‘scattered in various places’. The user asks Claude to complete the form using data from the open spreadsheet, or to fall back on the CRM if the data is not available there. The video then shows Claude switching tabs to operate and type in the CRM, before autonomously entering the information into the form and submitting it without human intervention.
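Under the hood, a demo like this is driven by a loop that a developer builds around the model: Claude proposes one desktop action at a time (take a screenshot, click, type), the client software performs it and reports back, and the cycle repeats until the task is done. The sketch below is a minimal, hypothetical illustration of that loop; the function names and the scripted action list stand in for the model's real responses, and no actual desktop control happens.

```python
# Hypothetical sketch of the agent loop behind the form-filling demo.
# A real client would capture screenshots and drive the mouse/keyboard;
# here each action just returns a text observation.

def execute_action(action: dict) -> str:
    """Pretend to perform one desktop action and return an observation."""
    kind = action["action"]
    if kind == "screenshot":
        return "screenshot: vendor form visible, 'Company Name' field empty"
    if kind == "left_click":
        return f"clicked at {action['coordinate']}"
    if kind == "type":
        return f"typed {action['text']!r}"
    return "no-op"

def run_agent(scripted_actions: list[dict]) -> list[str]:
    """Drive the loop over a scripted action sequence, which stands in
    for the model's step-by-step tool-use responses."""
    return [execute_action(action) for action in scripted_actions]

# The demo reduced to three steps: look, click a field, type a value.
log = run_agent([
    {"action": "screenshot"},
    {"action": "left_click", "coordinate": [412, 230]},
    {"action": "type", "text": "Acme Corp"},
])
```

In the real system the next action is chosen by the model after seeing each new screenshot, which is what lets Claude notice, for example, that a field is missing and switch tabs to the CRM.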

Anthropic admits that the update is still experimental and may produce errors. However, as the technology continues to evolve, the potential applications are vast.

How to Download Upgraded Claude 3.5 with Computer Use

The upgraded version of Claude 3.5 Sonnet is now available to anyone through any app that already uses Claude.

The beta Computer Use capability must be built in by a developer via the Anthropic API, Amazon Bedrock, or Google Cloud’s Vertex AI.
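For developers, enabling the beta amounts to sending a request that declares a computer-use tool alongside the screen dimensions the model should target. The sketch below constructs such a request as a plain dictionary; the identifiers shown ("computer_20241022" and the "computer-use-2024-10-22" beta flag) match the public beta at launch, but treat the exact values as assumptions to be checked against Anthropic's current documentation. No network call is made here.

```python
# Sketch of a computer-use request payload for the Anthropic API beta.
# Only the dictionary is built; an SDK or HTTP client would send it.

request = {
    "model": "claude-3-5-sonnet-20241022",
    "max_tokens": 1024,
    # The beta flag travels as the "anthropic-beta" header in a real call.
    "betas": ["computer-use-2024-10-22"],
    "tools": [{
        "type": "computer_20241022",
        "name": "computer",
        # The model needs the screen size to emit pixel coordinates.
        "display_width_px": 1280,
        "display_height_px": 800,
    }],
    "messages": [{
        "role": "user",
        "content": "Fill out the vendor form using the open spreadsheet.",
    }],
}
```

The model's reply then contains tool-use blocks (clicks, keystrokes, screenshot requests) that the developer's own code must execute, which is why the capability cannot simply be switched on in a consumer app.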

Is Claude 3.5’s Computer Use Safe?

Allowing an AI to manually use your computer may seem to many like something straight out of science fiction.

The idea that an AI could perform tasks like a human is exciting: mundane administrative chores could be automated, and more complicated projects streamlined.

However, with this new technology that can view and interact with anything on your desktop comes understandable concerns about security, privacy, and the potential for unintended consequences.

Anthropic acknowledges the risks involved but believes that the benefits outweigh these concerns. Their statement reads ‘We judge that it’s likely better to introduce computer use now, while models still only need AI Safety Level 2 safeguards. This means we can begin grappling with any safety issues before the stakes are too high, rather than adding computer use capabilities for the first time into a model with much more serious risks.’

The company confirms that its trust and safety teams have analyzed the new model for vulnerabilities. They advise approaching the public beta with caution because of concerns about ‘prompt injection’ attacks: a type of cyberattack that manipulates an AI model by feeding it harmful instructions so that it overrides its prior directions.

Anthropic confirms it has not trained the new 3.5 model on users’ screenshots and prompts, and that the model was prevented from accessing the web during training.

The Anthropic team has also confirmed that measures are in place to monitor when Claude is asked to engage in any election-related activity. They have also built systems for ‘nudging Claude away from activities like generating and posting content on social media, registering web domains, or interacting with government websites.’