← All clusters
Anthropic Supply-Chain-Risk Designation Halted by Judge
cooling
Event type: lawsuit
Topic: artificial intelligence
Organization: Anthropic
Country: United States
Articles: 7
Unique sources: 6
Importance / Momentum: 2.03 / 0
Period: 26.03.2026 23:33 — 02.04.2026 20:33
Created: 06.04.2026 06:31:35
Articles in cluster: 7
Title | Source | Publication date | Score
S Anthropic Supply-Chain-Risk Designation Halted by Judge wired 26.03.2026 23:33 1
Embedding sim.: 1
Entity overlap: 1
Title sim.: 1
Time proximity: 1
NLP type: lawsuit
NLP organization: Anthropic
NLP topic: generative ai
NLP country: United States

Open original

Paresh Dave, Business, Mar 26, 2026 7:33 PM

Anthropic Supply-Chain-Risk Designation Halted by Judge

A judge temporarily blocked the Trump administration’s designation, clearing the way for Anthropic to keep doing business without the label starting next week. Photo-Illustration: WIRED Staff; Getty Images

Anthropic won a preliminary injunction barring the US Department of Defense from labeling it a supply-chain risk, potentially clearing the way for customers to resume working with the company. The ruling on Thursday by Rita Lin, a federal district judge in San Francisco, is a symbolic setback for the Pentagon and a significant boost for the generative AI company as it tries to preserve its business and reputation.

“Defendants’ designation of Anthropic as a ‘supply chain risk’ is likely both contrary to law and arbitrary and capricious,” Lin wrote in justifying the temporary relief. “The Department of War provides no legitimate basis to infer from Anthropic’s forthright insistence on usage restrictions that it might become a saboteur.” Anthropic and the Pentagon did not immediately respond to requests for comment on the ruling.

The Department of Defense, which under Trump calls itself the Department of War, has relied on Anthropic’s Claude AI tools for writing sensitive documents and analyzing classified data over the past couple of years. But this month, it began pulling the plug on Claude after determining that Anthropic could not be trusted. Pentagon officials cited numerous instances in which Anthropic allegedly placed or sought to place usage restrictions on its technology that the Trump administration found unnecessary. The administration ultimately issued several directives, including designating the company a supply-chain risk, which have had the effect of slowly halting Claude usage across the federal government and hurting Anthropic’s sales and public reputation.
The company filed two lawsuits challenging the sanctions as unconstitutional. In a hearing on Tuesday, Lin said the government had appeared to illegally “cripple” and “punish” Anthropic.

Lin’s ruling on Thursday “restores the status quo” to February 27, before the directives were issued. “It does not bar any defendant from taking any lawful action that would have been available to it” on that date, she wrote. “For example, this order does not require the Department of War to use Anthropic’s products or services and does not prevent the Department of War from transitioning to other artificial intelligence providers, so long as those actions are consistent with applicable regulations, statutes, and constitutional provisions.” The ruling suggests the Pentagon and other federal agencies are still free to cancel deals with Anthropic and ask contractors that integrate Claude into their own tools to stop doing so, but without citing the supply-chain-risk designation as the basis.

The immediate impact is unclear because Lin’s order won’t take effect for a week. And a federal appeals court in Washington, DC, has yet to rule on the second lawsuit Anthropic filed, which focuses on a different law under which the company was also barred from providing software to the military. But Anthropic could use Lin’s ruling to demonstrate to customers concerned about working with an industry pariah that the law may be on its side in the long run. Lin has not set a schedule for a final ruling.
Anthropic wins injunction against Trump administration over Defense Department saga | TechCrunch techcrunch 27.03.2026 01:18 0.787
Embedding sim.: 0.9045
Entity overlap: 0.2727
Title sim.: 0.1259
Time proximity: 0.9955
NLP type: lawsuit
NLP organization: Anthropic
NLP topic: artificial intelligence
NLP country: United States

Open original

A federal judge has sided with Anthropic in its twisty legal battle with the Trump administration, awarding the tech company an injunction against the government’s recent order that labeled it a “supply-chain risk,” The Wall Street Journal reports. On Thursday, Judge Rita F. Lin of the Northern District of California ordered the Trump administration to rescind its recent designation of Anthropic as a security risk, as well as to back off its order that federal agencies cut ties with the company. “It looks like an attempt to cripple Anthropic,” Lin reportedly said during the court proceedings. Lin ultimately argued that the government’s orders had flouted free speech protections for the company.

The drama between the Pentagon and Anthropic erupted last month over a dispute concerning guidelines for the government’s use of the AI company’s software. Anthropic had reportedly sought to enforce certain limits on how the government could use its AI models, such as banning their use in autonomous weapons systems or mass surveillance. The government disagreed with those limitations, ultimately labeling the company a supply-chain risk, a designation typically reserved for foreign actors. President Trump further ordered federal agencies to cut ties with the company. Not long afterward, Anthropic sued the agency, along with Hegseth.

The White House has spent recent weeks attacking the company, characterizing it as “a radical-left, woke company” that is jeopardizing America’s “national security.” Anthropic CEO Dario Amodei, meanwhile, has called the Defense Department’s actions “retaliatory and punitive.” On the heels of Judge Lin’s ruling, Anthropic sent TechCrunch the following statement: “We’re grateful to the court for moving swiftly, and pleased they agree Anthropic is likely to succeed on the merits.
While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI.”

TechCrunch has separately reached out to the White House for comment.

Lucas Ropek, Senior Writer, TechCrunch
Judge sides with Anthropic to temporarily block the Pentagon’s ban the_verge_ai 27.03.2026 00:33 0.773
Embedding sim.: 0.8654
Entity overlap: 0.4
Title sim.: 0.2039
Time proximity: 0.994
NLP type: lawsuit
NLP organization: Anthropic
NLP topic: ai regulation
NLP country: United States

Open original

After Anthropic's weeks-long standoff with the Pentagon, the company won a milestone: a judge granted Anthropic a preliminary injunction in its lawsuit, which sought to reverse its government blacklisting while the judicial process plays out. "The Department of War's records show that it designated Anthropic as a supply chain risk because of its 'hostile manner through the press,'" Judge Rita F. Lin, a district judge in the Northern District of California, wrote in the order, which will go into effect in seven days. "Punishing Anthropic for bringing public scrutiny to the government's contracting position is classic illegal First Amendme … Read the full story at The Verge.
The Pentagon’s culture war tactic against Anthropic has backfired mit_tech_review 30.03.2026 15:42 0.743
Embedding sim.: 0.8846
Entity overlap: 0.1667
Title sim.: 0.2718
Time proximity: 0.4812
NLP type: lawsuit
NLP organization: Anthropic
NLP topic: ai governance
NLP country: United States

Open original

This story originally appeared in The Algorithm, our weekly newsletter on AI.

Last Thursday, a California judge temporarily blocked the Pentagon from labeling Anthropic a supply chain risk and ordering government agencies to stop using its AI. It’s the latest development in the month-long feud, and the matter still isn’t settled: the government was given seven days to appeal, and Anthropic has a second case against the designation that has yet to be decided. Until then, the company remains persona non grata with the government.

The stakes in the case (how much the government can punish a company for not playing ball) were apparent from the start. Anthropic drew many senior supporters, with unlikely bedfellows among them, including former authors of President Trump’s AI policy. But Judge Rita Lin’s 43-page opinion suggests that what is really a contract dispute never needed to reach such a frenzy. It did so because the government disregarded the existing process for how such disputes are governed and fueled the fire with social media posts from officials that would eventually contradict the positions it took in court. The Pentagon, in other words, wanted a culture war (on top of the actual war in Iran that began hours later).

The government used Anthropic’s Claude for much of 2025 without complaint, according to court documents, while the company walked a branding tightrope as a safety-focused AI company that also won defense contracts. Defense employees accessing it through Palantir were required to accept the terms of a government-specific usage policy that Anthropic cofounder Jared Kaplan said “prohibited mass surveillance of Americans and lethal autonomous warfare” (Kaplan’s declaration to the court didn’t include details of the policy). Only when the government aimed to contract with Anthropic directly did the disagreements begin.
What drew the ire of the judge is that when these disagreements became public, the government’s actions had more to do with punishment than with simply cutting ties with Anthropic. And they had a pattern: tweet first, lawyer later.

President Trump’s post on Truth Social on February 27 referenced “Leftwing nutjobs” at Anthropic and directed every federal agency to stop using the company’s AI. This was echoed soon after by Defense Secretary Pete Hegseth, who said he’d direct the Pentagon to label Anthropic a supply chain risk. Doing so requires the secretary to take a specific set of actions, which the judge found Hegseth did not complete. Letters sent to congressional committees, for example, said that less drastic steps were evaluated and deemed not possible, without providing any further details. The government also said the designation as a supply chain risk was necessary because Anthropic could implement a “kill switch,” but its lawyers later had to admit it had no evidence of that, the judge wrote.

Hegseth’s post also stated that “No contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.” But the government’s own lawyers admitted on Tuesday that the secretary doesn’t have the power to do that, and agreed with the judge that the statement had “absolutely no legal effect at all.”

The aggressive posts also led the judge to conclude that Anthropic was on solid ground in complaining that its First Amendment rights were violated. The government, the judge wrote while citing the posts, “set out to publicly punish Anthropic for its ‘ideology’ and ‘rhetoric,’ as well as its ‘arrogance’ for being unwilling to compromise those beliefs.” Labeling Anthropic a supply chain risk would essentially identify it as a “saboteur” of the government, for which the judge did not see sufficient evidence.
She issued an order last Thursday halting the designation, preventing the Pentagon from enforcing it and forbidding the government from fulfilling the promises made by Hegseth and Trump. Dean Ball, who worked on AI policy for the Trump administration but wrote a brief supporting Anthropic, described the judge’s order on Thursday as “a devastating ruling for the government, finding Anthropic likely to prevail on essentially all of its theories for why the government’s actions were unlawful and unconstitutional.”

The government is expected to appeal the decision. But Anthropic’s separate case, filed in DC, makes similar allegations; it just references a different segment of the law governing supply chain risks.

The court documents paint a pretty clear pattern. Public statements made by officials and the president did not at all align with what the law says should happen in a contract dispute like this, and the government’s lawyers have consistently had to create justifications for the social media lambasting of the company after the fact. Pentagon and White House leadership knew that pursuing the nuclear option would spark a court battle; Anthropic vowed on February 27 to fight the supply chain risk designation days before the government formally filed it on March 3. Pursuing it anyway meant senior leadership was, to say the least, distracted during the first five days of the Iran war, launching strikes while also compiling evidence that Anthropic was a saboteur, all while it could have cut ties with Anthropic by simpler means.

But even if Anthropic ultimately wins, the government has other means to shut the company out of government work. Defense contractors who want to stay on good terms with the Pentagon, for example, now have little reason to work with Anthropic even if it’s not flagged as a supply chain risk.
“I think it’s safe to say that there are mechanisms the government can use to apply some degree of pressure without breaking the law,” says Charlie Bullock, a senior research fellow at the Institute for Law and AI. “It kind of depends how invested the government is in punishing Anthropic.”

From the evidence thus far, the administration is committing top-level time and attention to winning an AI culture war. At the same time, Claude is apparently so important to its operations that even President Trump said the Pentagon needed six months to stop using it. The White House demands political loyalty and ideological alignment from top AI companies, but the case against Anthropic, at least for now, exposes the limits of its leverage.

If you have information about the military’s use of AI, you can share it securely via Signal (username jamesodonnell.22).
Hegseth, Trump had no authority to order Anthropic to be blacklisted, judge says arstechnica_ai 27.03.2026 19:49 0.712
Embedding sim.: 0.7937
Entity overlap: 0.4211
Title sim.: 0.2075
Time proximity: 0.8793
NLP type: lawsuit
NLP organization: Anthropic
NLP topic: ai safety
NLP country: United States

Open original

Punishing Anthropic

Hegseth, Trump had no authority to order Anthropic to be blacklisted, judge says

“I don’t know”: Department of War fails to justify blacklisting Anthropic.

Ashley Belanger – Mar 27, 2026 3:49 pm

Secretary of War Pete Hegseth called Anthropic "arrogant" for warning of AI safety concerns. Credit: Win McNamee / Staff | Getty Images

“Classic First Amendment retaliation.” That’s how US District Judge Rita Lin described the Department of War’s effort to blacklist Anthropic and designate it a supply-chain risk. By all appearances, “these measures appear designed to punish Anthropic,” Lin wrote in an order granting Anthropic’s request for a preliminary injunction. Officials seemingly had no authority to take such extreme actions without considering less restrictive alternatives or offering any evidence that Anthropic posed an urgent risk to national security, Lin said. Instead, “the Department of War’s records show that it designated Anthropic as a supply chain risk because of its ‘hostile manner through the press.’”

“Punishing Anthropic for bringing public scrutiny to the government’s contracting position is classic illegal First Amendment retaliation,” Lin said.

Anthropic’s spokesperson told Ars the firm is “grateful to the court for moving swiftly, and pleased they agree Anthropic is likely to succeed on the merits.” But Anthropic remains in a difficult position, still afraid that the fight will block it from competing for lucrative government contracts.
In a blog post earlier this month, Anthropic maintained that “Anthropic has much more in common with the Department of War than we have differences” and that the two should be working together to deploy AI safely across government. Anthropic is still walking the same line in the aftermath of Lin’s order. “While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI,” Anthropic’s spokesperson said.

DoW official calls order a “disgrace”

For Anthropic, this fight could be existential. After the DoW’s actions, three trade deals were promptly cancelled, while other potential partners delayed talks. The company showed it was already suffering irreparable harms that would only worsen the longer the blacklisting was upheld, including losing potentially billions in private and government contracts the company expected to sign over the next five years, Lin noted.

To prevent ongoing harms, Lin ordered a preliminary injunction blocking US agencies from complying with directives from Donald Trump and Secretary of War Pete Hegseth. However, she also granted the government’s request for an administrative stay, which delays the injunction from taking effect for seven days. That gives the government a brief window to seek an emergency stay from an appeals court.

Asked for comment, the DoW pointed to statements on X from Under Secretary of War Emil Michael, who emphasized that the supply-chain risk designation still applies over the next week. Showing that Trump officials don’t plan to back off the fight, Michael claimed that Lin’s order was “a disgrace” and contained “factual errors” due to the judge’s supposed rush to order the injunction. According to Michael, Lin did not fully consider how blocking Hegseth’s directive could “disrupt” how US military operations are conducted.
However, Lin cited a brief filed in support of Anthropic from military leaders, who warned that letting the directive stand “will materially detract from military readiness and operational safety.” Anthropic has argued that its technology is not ready to be used for mass surveillance of Americans or in fully autonomous lethal weapons, potentially posing civil rights risks if leveraged now.

Lin noted that the case touched on “an important public debate”: whether an AI company can dictate how the government uses its models. But it was not up to her to decide if AI firms or the government should be in charge of deciding what AI uses are safe for the public. Instead, she had to rule on whether government officials violated Anthropic’s First Amendment rights, denied Anthropic due process, or acted arbitrarily or capriciously. And at this stage of the case, Anthropic has shown enough to prove it’s likely to succeed on all claims, she said.

The DoW is not authorized to “designate a domestic vendor a supply chain risk simply because a vendor publicly criticized DoW’s views about the safe uses of its system,” Lin wrote. In fact, “that designation has never been applied to a domestic company and is directed principally at foreign intelligence agencies, terrorists, and other hostile actors,” she said.

“I don’t know”: Lawyer has no defense of Hegseth

The DoW began using Anthropic’s Claude in March 2025 and had been using it for the past year without ever raising any concerns that Anthropic’s terms limiting certain uses posed a national security risk, Lin said. Rather, the government thoroughly vetted Claude before implementing it, praised Anthropic publicly, and planned to expand the partnership.
The amicable nature of the partnership only changed, the judge said, after the DoW sought to deploy Claude on a military platform and Anthropic ultimately agreed to do so with “two critical exceptions: mass surveillance of Americans and lethal autonomous warfare.” Based on its testing, Anthropic could not guarantee that Americans’ civil rights would not be infringed if Claude was used for these purposes, the company said. If the government disliked the terms, Anthropic repeatedly said it would understand if another vendor was selected, simply bowing out to avoid compromising on AI safety principles that might “undercut Anthropic’s core identity,” Lin wrote.

Calling out Anthropic for “utopian idealism,” DoW officials blasted the company for supposedly trying to get the government to let a private company decide how military operations go down. “You can’t have an AI company sell AI to the Department of War and [then say] don’t let it do Department of War things,” Michael told the press. They accused Anthropic of trying to use the DoW dispute to spin up positive press, and Trump joined the chorus. On Truth Social, he labeled Anthropic a “radical left, woke company,” allegedly putting its “selfishness” above national security. Following Trump’s post, Hegseth took to X, writing that “Anthropic delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon.”

In both posts, officials claimed that orders to blacklist Anthropic were effective immediately, but neither cited what authority they had to do so. During oral arguments, a government lawyer later admitted that “he was not aware of any statute that gave Secretary Hegseth the authority to issue such a prohibition and agreed that the statement had ‘absolutely no legal effect at all,’” Lin wrote.
Further, “when asked why Hegseth made a public statement that had no legal effect and that did not reflect the immediate intent of DoW, counsel stated, ‘I don’t know.’”

Perhaps most glaringly, Hegseth seemed to contradict himself when arguing both that Anthropic “presented a grave threat to national security” requiring a supply-chain risk designation and that “Anthropic was essential to national security” and could be compelled to provide services under the Defense Production Act.

The only reason the government gave for labeling Anthropic a national security risk was that the company could supposedly update its products and compromise systems. Officials claimed that Anthropic would be motivated to sabotage the military as retaliation for the directives. But Lin didn’t find that likely, either, since any other IT provider could potentially introduce the same risks. More importantly, Anthropic showed unrebutted evidence that it would be impossible for it to force updates or otherwise control the government’s systems. To the judge, any national security risk could be foreclosed by simply ending the military’s contract with Anthropic, which Anthropic had already agreed would be understandable.

Lacking any statutory basis, it seemed clear from officials’ statements that Anthropic was being punished for publicly criticizing the military’s plans, the judge concluded. As Anthropic alleged, “defendants set out to publicly punish Anthropic for its ‘ideology’ and ‘rhetoric,’ as well as its ‘arrogance’ for being unwilling to compromise those beliefs,” the judge said.
“Secretary Hegseth expressly tied Anthropic’s punishment to its attitude and rhetoric in the press.”

Anthropic retaliation “deeply troubling,” judge says

On top of rushing to shut down government contracts with Anthropic and influence its commercial deals with any business that also hoped to work with the DoW, Hegseth also failed to give Anthropic an opportunity to defend itself from the claims before taking action that the record shows wasn’t urgent.

Civil rights and public safety advocates had urged the court to block the government’s actions or else risk a chilling effect preventing any AI firm from speaking up about unsafe government AI uses. Ultimately, Lin agreed that any time the government raised a red flag that a vendor was an “adversary,” it was “deeply troubling.” That could “chill open deliberation” and “professional debate” among those “best positioned to understand AI technology” and its potential for “catastrophic misuse,” the judge wrote.

As the government moves to block the preliminary injunction, it argues that Lin’s order could force it to pay for Anthropic products and never get that money back. It also claimed it was conducting an audit to see if any security risks currently exist that could justify the supply-chain risk designation. Lin doesn’t see it that way, though. She wrote that “the preliminary injunctive relief that the Court authorizes does not require the government to continue to use Claude on its national security systems.” She also noted that “the government ‘cannot suffer harm from an injunction that merely ends an unlawful practice.’”
[Translation] The Pentagon vs. Anthropic: Why This Conflict Concerns Everyone habr_ai 29.03.2026 08:56 0.677
Embedding sim.: 0.8078
Entity overlap: 0.1818
Title sim.: 0.0719
Time proximity: 0.6688
NLP type: other
NLP organization: United States Department of Defense
NLP topic: artificial intelligence
NLP country: United States

Open original

From time to time, a technical dispute exposes something much larger. The recent clash between the US Department of Defense and Anthropic is exactly such a case. Not because it concerns a $200 million contract, but because it makes visible a new type of corporate risk, one that most CEOs, CTOs, and CIOs still treat as a procurement formality. In a recent piece, “The Pentagon Wants to Rewrite the Rules of AI,” I focused on the political significance of a government trying to force an AI company to loosen its own restrictions. For business leaders, the key takeaway is far more practical: if your AI capabilities depend on the terms, policies, and control mechanisms of a single provider, your strategy is now hostage to someone else’s conflict. Read more
[Translation] AI War: Palantir’s Secret System Selects Targets for Strikes on Iran habr_ai 02.04.2026 20:33 0.625
Embedding sim.: 0.7729
Entity overlap: 0.1176
Title sim.: 0.0866
Time proximity: 0.3595
NLP type: other
NLP organization: Palantir
NLP topic: artificial intelligence
NLP country: United States

Open original

This material is based on the April 2, 2026 episode of Democracy Now! Context: day 32 of the US–Iran war.

Imagine a military conflict in which a thousand targets are struck within 24 hours. This is not science fiction, nor Pentagon reports from a distant future. According to the Trump administration, it is the reality of day 32 of the war with Iran. In just one month, the US military has struck 11,000 targets. But behind these numbers lies something more troubling than the sheer power of the American war machine: algorithms. The project code-named Maven is a “Google Earth for war,” a map dotted with white points, each containing coordinates, altitude, object type, and a “friend” or “foe” tag. It is this AI-driven system that now takes on work that used to require months. As Palantir CTO Shyam Sankar boasts: “What used to take 50–100 people six months, one person now does in two weeks.” They call it an “Iron Man suit” that makes soldiers 50 times more effective. But what happens when the “suit” malfunctions? Read more