The Executive's Internet
Sat, Mar 28th
 TECHNOLOGY NEWS
Search results for 'Anthropic Claude'.

CNET NewsMar 28, 2026
Anthropic's Claude Can Now Take Over Your Computer to Do Tasks for You
Anthropic brings Claude into the agentic, OpenClaw-like fold.

EngadgetMar 27, 2026
Court temporarily blocks US government from labeling Anthropic as a 'supply chain risk'
The court has granted Anthropic's request for a preliminary injunction, preventing the government from banning its products for federal use and from formally labeling it as a "supply chain risk," at least for now. If you'll recall, things turned sour between the company and the Trump administration when Anthropic refused to change its contract terms to allow the government to use its technology for mass surveillance and the development of autonomous weapons.

In response to Anthropic's refusal, the president ordered federal agencies to stop using Claude and the company's other services. The Defense Department also officially labeled it a supply chain risk, a designation typically reserved for entities based in US adversary nations like China that threaten national security. In addition, department secretary Pete Hegseth warned companies that if they want to work with the government, they must sever ties with Anthropic. The AI company challenged the designation in court, calling it unlawful and a violation of free speech and its due process rights. It also asked the court to pause the ban while the lawsuit is ongoing.

In a court filing, the Defense Department said giving Anthropic continued access to its warfighting infrastructure would "


EngadgetMar 25, 2026
Anthropic releases safer Claude Code 'auto mode' to avoid mass file deletions and other AI snafus
Anthropic has begun previewing "auto mode" inside Claude Code. The company describes the new feature as a middle path between the app's default behavior, which sees Claude request approval for every file write and bash command, and the "dangerously-skip-permissions" flag some coders use to make the chatbot function more autonomously. 

With auto mode enabled, a classifier system guides Claude, giving it permission to carry out actions it deems safe, while redirecting the chatbot to take a different approach when it determines Claude might do something risky. In designing the system, Anthropic's goal was to reduce the likelihood of Claude carrying out mass file deletions, extracting sensitive data or executing malicious code. 
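Based on that description, the gate can be pictured as a small decision function sitting between Claude and the shell. The sketch below is purely illustrative: the pattern list, function names, and three-way allow/redirect/ask outcome are assumptions for explanation, not Anthropic's actual classifier, which per the article also weighs user intent and environment context.

```python
# Illustrative sketch of an auto-mode-style permission gate.
# All names and rules here are hypothetical, not Anthropic's implementation.

# Substrings treated as risky for this toy example (mass deletion,
# data exfiltration, permission changes).
RISKY_PATTERNS = ("rm -rf", "curl ", "chmod 777")

def classify_action(command: str, intent_clear: bool = True) -> str:
    """Return 'allow', 'redirect', or 'ask' for a proposed shell command."""
    if any(pattern in command for pattern in RISKY_PATTERNS):
        # Risky action: steer the agent toward a safer approach
        # instead of executing it.
        return "redirect"
    if not intent_clear:
        # Ambiguous user intent: fall back to asking for approval,
        # as in the default per-command mode.
        return "ask"
    # Deemed safe: proceed without interrupting the user.
    return "allow"

print(classify_action("ls -la src/"))      # safe command
print(classify_action("rm -rf build/"))    # matches a risky pattern
```

The middle-path idea is visible in the three outcomes: plain approval-per-command maps every action to "ask", the skip-permissions flag maps every action to "allow", and auto mode sits between them.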

Of course, no system is perfect, and Anthropic warns as such. "The classifier may still allow some risky actions: for example, if user intent is ambiguous, or if Claude doesn't have enough context about your environment to know an action might create additional risk," the company writes. 

Anthropic doesn't mention a specific incident as inspiration for auto mode, but the recent 13-hour AWS outage Amazon suffered, after one of the company's AI tools reportedly deleted a hosting environment, was probably front of mind. Amazon blamed that incident on human error, saying the staffer involved had "broader permissions than expected."

Team plan users can preview auto mode starting today, with the feature set to roll out to Enterprise and API users in the coming days.



This article originally appeared on Engadget at https://www.engadget.com/ai/anthropic-releases-safer-claude-code-auto-mode-to-avoid-mass

  • CEOExpress
  • c/o CommunityScape | 200 Anderson Avenue
    Rochester, NY 14607
  • Contact
  • As an Amazon Associate
    CEOExpress earns from
    qualifying purchases.

©1999-2026 CEOExpress Company LLC