Dr. Will Cole Joins Forces with Dr. Josh Axe at The Health Institute to Redefine the Future of Cellular Health
Source: prnewswire | 30.03.2026 16:00 | Score: 1
| Embedding sim. | 1 |
| Entity overlap | 1 |
| Title sim. | 1 |
| Time proximity | 1 |
| NLP type | leadership_change |
| NLP organization | The Health Institute |
| NLP topic | healthcare ai |
| NLP country | United States |
Open original
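Each entry in this report carries four similarity signals (embedding, entity, title, time) plus an overall score. The aggregator's actual formula isn't shown anywhere in the report; a plausible, purely illustrative sketch is a weighted sum of the four signals, with made-up placeholder weights:

```python
# Purely illustrative: the aggregator's real scoring formula isn't shown.
# The four signal names come from the metric tables in this report; the
# weights below are made-up placeholders, not the real coefficients.
def combined_score(embedding_sim, entity_overlap, title_sim, time_proximity,
                   weights=(0.4, 0.2, 0.2, 0.2)):
    """Weighted sum of the four per-article similarity signals."""
    signals = (embedding_sim, entity_overlap, title_sim, time_proximity)
    return round(sum(w * s for w, s in zip(weights, signals)), 3)

# A perfect self-match (all four signals equal to 1) yields the maximum
# score of 1.0, consistent with the first entry above.
print(combined_score(1, 1, 1, 1))  # 1.0
```

Any set of non-negative weights summing to 1 keeps the score in [0, 1], which matches the range of scores seen across the entries below.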
Dr. Will Cole Joins Forces with Dr. Josh Axe at The Health Institute to Redefine the Future of Cellular Health
News provided by
The Health Institute
Mar 30, 2026, 12:00 ET
NASHVILLE, Tenn., March 30, 2026 /PRNewswire/ -- A new chapter in health has begun.
The Health Institute (THI), led by Dr. Josh Axe, announces that Dr. Will Cole has officially joined forces with the organization—marking a powerful alignment of two of the most trusted voices in functional and cellular medicine.
Dr. Josh Axe & Dr. Will Cole
At a time when millions are navigating fatigue, hormone imbalance, thyroid dysfunction, and chronic illness with more confusion than clarity, The Health Institute is stepping forward with a bold mission:
To restore health at the cellular level, equip individuals with personalized, root-cause solutions, and help people reclaim the strength, energy, and vitality they were created to live with.
Dr. Will Cole, a leading functional medicine expert and bestselling author of Gut Feelings and Inflammation Spectrum, brings a track record of pioneering work in autoimmune conditions, inflammation, and metabolic health. His approach aligns seamlessly with THI's focus on identifying and addressing the underlying drivers of disease—not masking symptoms, but rebuilding the body from the inside out.
Together, Dr. Axe and Dr. Cole will contribute to THI's growing ecosystem of education, personalized programs, and advanced testing—designed to guide individuals through a clear, structured path toward healing. Through this collaboration, THI continues to strengthen its position as a leading authority in cellular health, offering a trusted alternative to fragmented and symptom-focused care.
Dr. Cole added, "People are tired of being told to live with symptoms. What drew me to THI is the commitment to uncovering why the body is struggling—and then walking with people step-by-step as they rebuild their health. That's where real transformation happens."
For those who have been searching for answers, and for those ready to take ownership of their health—this is an invitation.
Visit The Health Institute to learn more about our philosophy, explore personalized pathways to healing, and discover what is possible when the right voices come together around the right mission.
About The Health Institute (THI)
The Health Institute is a leading functional health organization dedicated to helping individuals achieve optimal wellness through cellular health. Through advanced testing, personalized coaching, and root-cause protocols, THI equips people with the tools and guidance needed to heal, restore, and thrive, starting at the foundation of the body.
SOURCE The Health Institute
Meta's Superintelligence Lab unveils its first public model, Muse Spark
Source: arstechnica_ai | 08.04.2026 19:59 | Score: 0.814
| Embedding sim. | 0.9223 |
| Entity overlap | 0.3043 |
| Title sim. | 0.2404 |
| Time proximity | 0.9776 |
| NLP type | product_launch |
| NLP organization | Meta |
| NLP topic | foundation models |
| NLP country | |
Open original
Meta on Wednesday announced Spark, the first AI model in the Muse family that it says represents "a ground-up overhaul of our AI efforts."
Muse Spark is the first release of Meta's Superintelligence Labs, formed a little less than a year ago with the grandiose goal of "deliver[ing] on the promise of personal superintelligence for everyone." The release represents a clean break from Meta's previous work on the open source Llama model family, which has received a middling reaction both from users and on independent LLM rankings. And while Spark will be a proprietary model, Meta founder and CEO Mark Zuckerberg said in a post on Threads that the Muse family will "includ[e] new open source models" in the future.
Meta said that Muse Spark will take advantage of content posted across platforms like Instagram, Facebook, and Threads, much as xAI's Grok is integrated with content posted on X. Currently, this means Muse Spark can link to public posts related to a location or trending topic that you ask about, for instance. In the future, Meta says this will expand to "new features that cite recommendations and content people share" and "Reels, photos, and posts woven directly into your answers, with credit back to the content creators."
Anthropic ups compute deal with Google and Broadcom amid skyrocketing demand | TechCrunch
Source: techcrunch | 07.04.2026 16:05 | Score: 0.785
| Embedding sim. | 0.9029 |
| Entity overlap | 0.2308 |
| Title sim. | 0.193 |
| Time proximity | 0.911 |
| NLP type | partnership |
| NLP organization | Anthropic |
| NLP topic | ai infrastructure |
| NLP country | United States |
Open original
AI research lab Anthropic announced Monday that it signed a new agreement with Google and Broadcom for increased processing and compute capacity to power its Claude AI models. This reworking of its compute deals comes as demand for its AI models continues to soar.
The deal expands Anthropic’s use of Google Cloud’s tensor processing units (TPUs), Google’s custom AI chips, building on the agreement the companies struck in October 2025 for more than a gigawatt of compute capacity.
This new compute capacity will come online in 2027, Anthropic said in a blog post. The company did not give specifics for its compute expansion, but a recent Broadcom SEC filing shows the deal includes 3.5 gigawatts of compute.
The majority of this compute will be housed in the U.S. and will be an extension of the company’s $50 billion commitment to invest in U.S. compute infrastructure , Anthropic said in the post.
“This groundbreaking partnership with Google and Broadcom is a continuation of our disciplined approach to scaling infrastructure: we are building the capacity necessary to serve the exponential growth we have seen in our customer base while also enabling Claude to define the frontier of AI development,” Krishna Rao, CFO of Anthropic, said in the press release. “We are making our most significant compute commitment to date to keep pace with our unprecedented growth.”
Anthropic did not respond to TechCrunch’s request for comment.
The company has seen demand for its Claude models explode in recent months, buoyed by enterprise customers and despite the U.S. Defense Department labeling Anthropic a supply-chain risk. Anthropic also recently closed a $30 billion Series G funding round that valued the company at $380 billion.
The company’s run-rate revenue is now $30 billion, Anthropic announced, a sharp jump from the $9 billion it recorded at the end of 2025. The company also has more than 1,000 business customers each spending more than $1 million on an annualized basis.
Rebecca Szkutak
Senior Reporter, Venture
Becca is a senior writer at TechCrunch that covers venture capital trends and startups. She previously covered the same beat for Forbes and the Venture Capital Journal.
Anthropic Teams Up With Its Rivals to Keep AI From Hacking Everything
Source: wired | 07.04.2026 18:49 | Score: 0.777
| Embedding sim. | 0.8762 |
| Entity overlap | 0.4348 |
| Title sim. | 0.15 |
| Time proximity | 0.9951 |
| NLP type | partnership |
| NLP organization | Anthropic |
| NLP topic | ai security |
| NLP country | |
Open original
Lily Hay Newman
Security
Apr 7, 2026 2:49 PM
Anthropic Teams Up With Its Rivals to Keep AI From Hacking Everything
The AI lab's Project Glasswing will bring together Apple, Google, and more than 45 other organizations. They'll use the new Claude Mythos Preview model to test advancing AI cybersecurity capabilities.
Photo-Illustration: WIRED Staff; Getty Images; Courtesy of Anthropic
Following leaked revelations at the end of March that Anthropic had developed a powerful new Claude model, the company formally announced Mythos Preview on Tuesday along with news of an industry consortium it has convened, known as Project Glasswing, to grapple with the cybersecurity implications of the new model and advancing capabilities more generally across the AI field.
The group includes Microsoft , Apple , and Google as well as Amazon Web Services , the Linux Foundation, Cisco, Nvidia , Broadcom, and more than 40 other tech, cybersecurity, critical infrastructure, and financial organizations that will have private access to the model, which is not yet being generally released. The idea, in part, is simply to give the developers of the world's foundational tech platforms time to turn Mythos Preview on their own systems so they can mitigate vulnerabilities and exploit chains that the model develops in simulated attacks. More broadly, Anthropic emphasizes that the purpose of convening the effort is to kickstart urgent exploration of how AI capabilities across the industry are on the precipice, the company says, of upending current software security and digital defense practices around the world.
“The real message is that this is not about the model or Anthropic,” Logan Graham, the company's frontier red team lead, tells WIRED. “We need to prepare now for a world where these capabilities are broadly available in 6, 12, 24 months. Many things would be different about security. Many of the assumptions that we’ve built the modern security paradigms on might break.”
Models developed and trained by multiple companies have increasingly been able to find vulnerabilities in code and propose mitigations—or strategies for exploitation. This creates a next generation of security's classic cat-and-mouse game in which a tool can aid defenders but can also fuel bad actors and make it easier to carry out attacks that were once too expensive or complex to be practical.
“Claude Mythos preview is a particularly big jump,” Anthropic CEO Dario Amodei said on Tuesday in a Project Glasswing launch video. “We haven't trained it specifically to be good at cyber. We trained it to be good at code, but as a side effect of being good at code, it's also good at cyber.” He adds in the video that “more powerful models are going to come from us and from others. And so we do need a plan to respond to this.”
Anthropic's Graham notes that in addition to vulnerability discovery—including producing potential attack chains and proofs of concept—Mythos Preview is capable of more advanced exploit development, penetration testing, endpoint security assessment, hunting for system misconfigurations, and evaluating software binaries without access to their source code.
In carrying out a staggered release of Mythos Preview, beginning with an industry collaboration phase, Graham says that Anthropic sought to draw on tenets of coordinated vulnerability disclosure, the process of giving developers time to patch a bug before it is publicly discussed.
“We've seen Mythos Preview accomplish things that a senior security researcher would be able to accomplish,” Graham says. “This has very big implications then for how capabilities like this should be released. Done not carefully, this could be a meaningful accelerant for attackers.”
Project Glasswing partners, including some of Anthropic's competitors, struck a collaborative tone in statements as part of the launch.
“Google is pleased to see this cross-industry cybersecurity initiative coming together,” Heather Adkins, Google's vice president of security engineering, says in a statement. “We have long believed that AI poses new challenges and opens new opportunities in cyber defense.”
Those who maintain components of internet infrastructure and firms that develop foundational tech platforms also seem enthusiastic about the collaboration, especially given that Anthropic says use of Mythos Preview has already started to uncover thousands of critical vulnerabilities, including some decades-old bugs that have been repeatedly missed or overlooked in even the most scrutinized code.
“As we enter a phase where cybersecurity is no longer bound by purely human capacity, the opportunity to use AI responsibly to improve security and reduce risk at scale is unprecedented,” Microsoft's global CISO, Igor Tsyganskiy, says in a statement. “Joining Project Glasswing, with access to Claude Mythos Preview, allows us to identify and mitigate risk early and augment our security and development solutions so we can better protect customers and Microsoft.”
Graham says his team at Anthropic, a frontier research group, feels the urgency and the need for global collaboration.
“Probably the most important thing the group needs to do is figure out all the questions that need answers and then figure out the answers,” Graham says. “Project Glasswing is the starting point. It will fail if it’s just a handful of companies using a model. It has to grow into something even larger.”
Meta's new model is Muse Spark, and meta.ai chat has some interesting tools
Source: simon_willison | 08.04.2026 23:07 | Score: 0.775
| Embedding sim. | 0.859 |
| Entity overlap | 0.122 |
| Title sim. | 0.4 |
| Time proximity | 0.9588 |
| NLP type | product_launch |
| NLP organization | Meta |
| NLP topic | large language models |
| NLP country | |
Open original
Meta announced Muse Spark today, their first model release since Llama 4 almost exactly a year ago. It's hosted, not open weights, and the API is currently "a private API preview to select users", but you can try it out today on meta.ai (Facebook or Instagram login required).
Meta's self-reported benchmarks show it competitive with Opus 4.6, Gemini 3.1 Pro, and GPT 5.4 on selected benchmarks, though notably behind on Terminal-Bench 2.0. Meta themselves say they "continue to invest in areas with current performance gaps, such as long-horizon agentic systems and coding workflows".
The model is exposed as two different modes on meta.ai - "Instant" and "Thinking". Meta promise a "Contemplating" mode in the future which they say will offer much longer reasoning time and should behave more like Gemini Deep Think or GPT-5.4 Pro.
A couple of pelicans
I prefer to run my pelican test via API to avoid being influenced by any invisible system prompts, but since that's not an option I ran it against the chat UI directly.
Here's the pelican I got for "Instant":
And this one for "Thinking":
Both SVGs were rendered inline by the Meta AI interface. Interestingly, the Instant model output an SVG directly (with code comments) whereas the Thinking model wrapped it in a thin HTML shell with some unused Playables SDK v1.0.0 JavaScript libraries.
Which got me curious...
Poking around with tools
Clearly Meta's chat harness has some tools wired up to it - at the very least it can render SVG and HTML as embedded frames, Claude Artifacts style.
But what else can it do?
I asked it:
what tools do you have access to?
And then:
I want the exact tool names, parameter names and tool descriptions, in the original format
It spat out detailed descriptions of 16 different tools. You can see the full list I got back here - credit to Meta for not telling their bot to hide these, since it's far less frustrating if I can get them out without having to mess around with jailbreaks.
Here are highlights derived from that response:
Browse and search. browser.search can run a web search through an undisclosed search engine, browser.open can load the full page from one of those search results and browser.find can run pattern matches against the returned page content.
Meta content search. meta_1p.content_search can run "Semantic search across Instagram, Threads, and Facebook posts" - but only for posts the user has access to view which were created since 2025-01-01. This tool has some powerful-looking parameters, including author_ids, key_celebrities, commented_by_user_ids, and liked_by_user_ids.
"Catalog search" - meta_1p.meta_catalog_search can "Search for products in Meta's product catalog", presumably for the "Shopping" option in the Meta AI model selector.
Image generation. media.image_gen generates images from prompts, and "returns a CDN URL and saves the image to the sandbox". It has modes "artistic" and "realistic" and can return "square", "vertical" or "landscape" images.
container.python_execution - yes! It's Code Interpreter, my favourite feature of both ChatGPT and Claude.
Execute Python code in a remote sandbox environment. Python 3.9 with pandas, numpy, matplotlib, plotly, scikit-learn, PyMuPDF, Pillow, OpenCV, etc. Files persist at /mnt/data/.
Python 3.9 is EOL these days but the library collection looks useful.
I prompted "use python code to confirm sqlite version and python version" and got back Python 3.9.25 and SQLite 3.34.1 (from January 2021).
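The actual code Meta AI executed wasn't shown, but a version probe like that is a two-liner; this is my reconstruction of roughly what the sandbox would have run:

```python
import sys
import sqlite3

# My reconstruction of what the sandbox likely ran for that prompt;
# Meta AI did not display the code it actually executed.
python_version = sys.version.split()[0]       # e.g. "3.9.25" in Meta's sandbox
sqlite_version = sqlite3.sqlite_version       # e.g. "3.34.1" in Meta's sandbox
print(python_version, sqlite_version)
```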
container.create_web_artifact - we saw this earlier with the HTML wrapper around the pelican: Meta AI can create HTML+JavaScript files in its container which can then be served up as secure sandboxed iframe interactives. "Set kind to html for websites/apps or svg for vector graphics."
container.download_meta_1p_media is interesting: "Download media from Meta 1P sources into the sandbox. Use post_id for Instagram/Facebook/Threads posts, or catalog_search_citation_id for catalog product images". So it looks like you can pull in content from other parts of Meta and then do fun Code Interpreter things to it in the sandbox.
container.file_search - "Search uploaded files in this conversation and return relevant excerpts" - I guess for digging through PDFs and similar?
Tools for editing files in the container - container.view, container.insert (with new_str and insert_line), container.str_replace. These look similar to Claude's text editor tool commands - these are becoming a common pattern across any file-equipped agent harness.
container.visual_grounding - see below, this one is fun.
subagents.spawn_agent - the sub-agent as a tool pattern. "Spawn an independent sub-agent for research, analysis, or delegation. It returns its final text response."
third_party.link_third_party_account - "Initiate account linking for a third-party service", currently Google Calendar, Outlook Calendar, Gmail, or Outlook.
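That str_replace editing pattern is simple enough to sketch in a few lines. This is my own minimal implementation of the convention, not Meta's container.str_replace internals (which aren't shown); the key idea, shared with Claude's text editor tool, is that the old string must match exactly once so an edit can't land in the wrong place:

```python
from pathlib import Path

# Minimal sketch of the str_replace file-editing pattern (my own code;
# the real container.str_replace implementation is not public).
def str_replace(path, old_str, new_str):
    """Replace old_str with new_str, requiring exactly one match."""
    text = Path(path).read_text()
    count = text.count(old_str)
    if count == 0:
        return f"Error: old_str not found in {path}"
    if count > 1:
        return f"Error: old_str matches {count} times; provide a unique string"
    Path(path).write_text(text.replace(old_str, new_str))
    return f"Edited {path}"
```

The uniqueness check is what makes the pattern safe for agents: an ambiguous match returns an error for the model to correct rather than silently editing the wrong occurrence.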
Image analysis in the container
Let's talk about that container.visual_grounding one. Here's the description in full:
Description: Visual grounding tool that analyzes the image, identifies and labels objects, locates regions, or counts objects.
Parameters:
object_names (array, required)
image_path (string, default "")
format_type ("bbox"|"point"|"count"|null, default "bbox")
title (string|null)
So it can take an image path plus object names and return results in bbox, point or count format.
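For readability, here's that parameter listing as a typed sketch. The field names and defaults come from the tool dump above; the exact typing is my interpretation, not Meta's schema:

```python
from typing import List, Literal, Optional, TypedDict

# Hypothetical typed sketch of the visual_grounding parameters.
# Field names come from the tool dump; the typing is my guess.
class VisualGroundingParams(TypedDict, total=False):
    object_names: List[str]   # required, per the description
    image_path: str           # default ""
    format_type: Optional[Literal["bbox", "point", "count"]]  # default "bbox"
    title: Optional[str]

# Example call payload in count mode:
call: VisualGroundingParams = {
    "object_names": ["raccoon", "coffee cup"],
    "format_type": "count",
}
```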
Sounds to me like Meta's Segment Anything . I decided to take it for a spin.
Prompt :
generate a photo of a raccoon sitting on a trash can wearing trash as a hat
Response :
Here you go — king of the alley, in full regalia:
Coffee cup crown, banana peel brim, newspaper feather. Peak raccoon fashion. Want me to make a wilder version — maybe night-vision, or wearing sunglasses too?
My guess is that image generation in Meta AI is powered by their Emu model, or an updated version of it. This is the same pattern as ChatGPT and Gemini image generation, where images are generated by a tool call (here that's media.image_gen).
The image_gen tool description said that it "saves the image to the sandbox", so my guess was that I'd then be able to run Python code against it. I tried:
use python OpenCV to analyze that image and find out neat things about it
Annoyingly it did not show me the code it ran, but it did create this image:
Pretty neat! We can both generate images and then use them with Python tools.
Next, the visual_grounding tool. Knowing the names of the tools makes it easy to refer to them in prompts:
Now try the container.visual_grounding tool
The response (full response here):
Done — visual_grounding pinpointed every piece of the outfit with pixel-level precision:
[...]
Want me to run it in bbox mode next to draw boxes around the fur texture, or count mode to see how many individual whiskers it can detect?
Once again, Meta AI wrapped the results from the tool in a custom HTML visualization. That's clearly point mode, but how about the others?
I prompted:
yes draw bounding boxes, can it do masks too?
Response:
Yes — bounding boxes work great. Masks? Not directly in visual_grounding; it only returns three formats: point, bbox, and count. No pixel-level segmentation masks.
Here's the bbox output:
What you’re seeing:
raccoon: big box from (270,240) to (892,845) — covers 62% of the width, basically the whole animal
coffee cup: (424,115)-(610,273) — sits highest, 158px tall in the 0-1000 space
banana peel: (472,215)-(667,291) — overlaps the cup's base, confirming it's draped over the side
newspaper: (305,223)-(548,315) — left ear coverage
trash can lid: (107,800)-(1000,998) — spans almost the full image width at the bottom
face: (305,283)-(750,537) — tight crop around the mask
eyes: both ~40×37 pixel boxes, perfectly level at y=379
Notice how the boxes nest: face is inside raccoon, eyes inside face, hat pieces overlap each other but stay above the face box. The tool is doing real object localization, not just guessing centers.
About masks: since visual_grounding doesn't output them, I can fake them with OpenCV in the container. Two quick options:
GrabCut — I give it the raccoon bbox, it iteratively segments fur from background and returns a binary mask
SAM-style color threshold — use the bbox plus k-means to isolate the coffee cup or banana peel
Want me to generate an actual alpha mask for the raccoon (cut it out from the alley), or for the trash hat pieces separately? Takes about 10 seconds and I can give you a transparent PNG.
Here's the HTML for that bbox visualization.
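Those nesting claims are easy to sanity-check with plain Python. This is my own snippet, using the coordinates the tool reported above, not part of Meta's harness:

```python
# Sanity-check of the bbox nesting claims, using the reported coordinates
# (my own snippet, not part of Meta's tooling). Boxes are (x0, y0, x1, y1)
# in the 0-1000 coordinate space.
def contains(outer, inner):
    """True if the inner box lies entirely within the outer box."""
    ox0, oy0, ox1, oy1 = outer
    ix0, iy0, ix1, iy1 = inner
    return ox0 <= ix0 and oy0 <= iy0 and ix1 <= ox1 and iy1 <= oy1

boxes = {
    "raccoon": (270, 240, 892, 845),
    "coffee cup": (424, 115, 610, 273),
    "face": (305, 283, 750, 537),
}

print(contains(boxes["raccoon"], boxes["face"]))   # True: face nests inside raccoon
print(boxes["coffee cup"][3] <= boxes["face"][1])  # True: cup bottom sits above face top
```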
Finally I had it dump out the raw JSON for point mode, bbox mode and count mode. You can see that all here. The count mode is particularly interesting - here's that in full:
[
  {
    "object_name": "raccoon whisker",
    "points": [
      {"x": 403, "y": 445},
      {"x": 397, "y": 458},
      {"x": 392, "y": 472},
      {"x": 391, "y": 485},
      {"x": 398, "y": 497},
      {"x": 415, "y": 512},
      {"x": 618, "y": 446},
      {"x": 622, "y": 461},
      {"x": 625, "y": 475},
      {"x": 628, "y": 488},
      {"x": 635, "y": 501},
      {"x": 651, "y": 518}
    ],
    "count": 12
  },
  {
    "object_name": "raccoon paw claw",
    "points": [
      {"x": 406, "y": 845},
      {"x": 424, "y": 847},
      {"x": 441, "y": 842},
      {"x": 473, "y": 833},
      {"x": 596, "y": 830},
      {"x": 635, "y": 844},
      {"x": 660, "y": 843},
      {"x": 679, "y": 838}
    ],
    "count": 8
  },
  {
    "object_name": "trash item on head",
    "points": [
      {"x": 514, "y": 191},
      {"x": 568, "y": 263},
      {"x": 392, "y": 274}
    ],
    "count": 3
  },
  {
    "object_name": "eye",
    "points": [
      {"x": 462, "y": 395},
      {"x": 582, "y": 398}
    ],
    "count": 2
  },
  {
    "object_name": "ear",
    "points": [
      {"x": 359, "y": 313},
      {"x": 680, "y": 294}
    ],
    "count": 2
  }
]
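As a quick sanity check (my own snippet, run on an excerpt of that output), each "count" field should equal the number of points returned alongside it:

```python
import json

# My own consistency check, not part of Meta's output: each "count"
# field should match the length of its "points" list. Shown here on
# an excerpt of the count-mode JSON above.
excerpt = json.loads("""
[
  {"object_name": "eye",
   "points": [{"x": 462, "y": 395}, {"x": 582, "y": 398}],
   "count": 2},
  {"object_name": "ear",
   "points": [{"x": 359, "y": 313}, {"x": 680, "y": 294}],
   "count": 2}
]
""")

for obj in excerpt:
    assert obj["count"] == len(obj["points"]), obj["object_name"]
print("all counts consistent")
```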
So Meta AI has the ability to count a raccoon's whiskers baked into the default set of tools.
Which means... it can count pelicans too!
Here's that overlay exported as HTML.
Update: Meta's Jack Wu confirms that these tools are part of the new harness they launched alongside the new model.
Maybe open weights in the future?
On Twitter, Alexandr Wang said:
this is step one. bigger models are already in development with infrastructure scaling to match. private api preview open to select partners today, with plans to open-source future versions.
I really hope they do go back to open-sourcing their models. Llama 3.1/3.2/3.3 were excellent laptop-scale model families, and the introductory blog post for Muse Spark had this to say about efficiency:
[...] we can reach the same capabilities with over an order of magnitude less compute than our previous model, Llama 4 Maverick. This improvement also makes Muse Spark significantly more efficient than the leading base models available for comparison.
So are Meta back in the frontier model game? Artificial Analysis think so - they scored Meta Spark at 52, "behind only Gemini 3.1 Pro, GPT-5.4, and Claude Opus 4.6". Last year's Llama 4 Maverick and Scout scored 18 and 13 respectively.
I'm waiting for API access - while the tool collection on meta.ai is quite strong the real test of a model like this is still what we can build on top of it.
Tags: facebook , ai , generative-ai , llms , code-interpreter , llm-tool-use , meta , pelican-riding-a-bicycle , llm-reasoning , llm-release
The Iranian Lego AI video creators credit their virality to ‘heart’
Source: the_verge_ai | 10.04.2026 17:30 | Score: 0.773
| Embedding sim. | 0.8853 |
| Entity overlap | 0.4118 |
| Title sim. | 0.1845 |
| Time proximity | 0.8329 |
| NLP type | other |
| NLP organization | Explosive Media |
| NLP topic | generative ai |
| NLP country | Iran |
Open original
Donald Trump has spun the recent rescue of a downed airman whose fighter jet was destroyed behind Iranian borders as a resounding success. But the story is very different in one of the many viral, AI-generated Lego videos that have been produced by Iranian content creation group Explosive Media in the weeks since the US and Israel began dropping bombs on the country. In Explosive Media's music video take on how things played out, the US military is a joke for losing multiple planes and helicopters, and spending "$100 million just to save one guy."
The video's shots of Lego jets exploding into $100 bills and golden coins reinforce the idea …
Read the full story at The Verge.
Is Anthropic limiting the release of Mythos to protect the internet — or Anthropic? | TechCrunch
Source: techcrunch | 09.04.2026 18:50 | Score: 0.771
| Embedding sim. | 0.8984 |
| Entity overlap | 0.125 |
| Title sim. | 0.2835 |
| Time proximity | 0.7093 |
| NLP type | product_launch |
| NLP organization | Anthropic |
| NLP topic | ai security |
| NLP country | United States |
Open original
Anthropic said this week that it limited the release of its newest model, dubbed Mythos, because it is too capable of finding security exploits in software relied upon by users around the world.
Instead of unleashing Mythos on the public, the frontier lab will share it with a group of large companies and organizations that operate critical online infrastructure, from Amazon Web Services to JPMorgan Chase.
OpenAI is reportedly considering a similar plan for its next cybersecurity tool. The ostensible idea is to let these big enterprises get ahead of bad actors who could leverage advanced LLMs to penetrate secure software.
But the “e-word” in the sentence above is a hint that there might be more to this release strategy than cybersecurity — or the hyping of model capabilities.
Dan Lahav, the CEO of the AI cybersecurity lab Irregular, told TechCrunch in March, before the release of Mythos, that while the discovery of vulnerabilities by AI tools matters, the specific value of any weakness to an attacker depends on many factors, including how they can be used in combination.
“The question I always have in my mind,” Lahav said, “is did they find something that is exploitable in a very meaningful way, whether individually or as part of a chain?”
Anthropic says Mythos is able to exploit vulnerabilities far more effectively than its previous model, Opus. But it’s not clear that Mythos is actually the be-all and end-all of cybersecurity models. Aisle, an AI cybersecurity startup, said it was able to replicate much of what Anthropic says Mythos accomplished using smaller, open-weight models. Aisle’s team argues that these results show there is no single best deep learning model for cybersecurity; the right choice instead depends on the task at hand.
Given that Opus was already seen as a game changer for cybersecurity, there’s another reason that frontier labs may want to limit their releases to big organizations: It creates a flywheel for big enterprise contracts, while making it harder for competitors to copy their models using distillation, a technique that leverages frontier models to train new LLMs on the cheap.
“This is marketing cover for fact that top-end models are now gated by enterprise agreements and no longer available to small labs to distill,” David Crawshaw, a software engineer and CEO of the startup exe.dev, suggested in a social media post. “By the time you and I can use Mythos, there will be a new top-end rev that is enterprise only. That treadmill helps keep the enterprise dollars flowing (which is most of the dollars) by relegating distillation companies to second rank,” said Crawshaw.
That analysis jibes with what we’re seeing in the AI ecosystem: A race between frontier labs developing the largest, most capable models, and companies like Aisle that rely on multiple models and see open source LLMs, often from China and often allegedly developed through distillation, as a path to economic advantage.
The frontier labs have been taking a harder line on distillation this year, with Anthropic publicly revealing what it says are attempts by Chinese firms to copy its models, and three leading labs — Anthropic, Google, and OpenAI — teaming up to identify distillers and block them, according to a Bloomberg report.
Distillation is a threat to the business model of frontier labs because it eliminates the advantages conveyed by using huge amounts of capital to scale. Blocking distillation, then, is already a worthwhile endeavor, but the selective release approach to doing so also gives the labs a way to differentiate their enterprise offerings as the category becomes the key to profitable deployment.
Whether Mythos or any new model truly threatens the security of the internet remains to be seen, and a careful rollout of the technology is a responsible way forward.
Anthropic didn’t respond by press time to our questions about whether the decision also relates to distillation concerns, but the company may have found a clever approach to protecting the internet — and its bottom line.
Tim Fernholz
Tim Fernholz is a journalist who writes about technology, finance and public policy. He has closely covered the rise of the private space industry and is the author of Rocket Billionaires: Elon Musk, Jeff Bezos and the New Space Race. Formerly, he was a senior reporter at Quartz, the global business news site, for more than a decade, and began his career as a political reporter in Washington, D.C.
© 2026 TechCrunch Media LLC.
|
Meta's new model is as open as Zuckerberg's private school | the_register_ai | 08.04.2026 23:06 | 0.769
| Embedding sim. | 0.8352 |
| Entity overlap | 0.5217 |
| Title sim. | 0.3218 |
| Time proximity | 0.9746 |
| NLP type | product_launch |
| NLP organization | Meta |
| NLP topic | large language models |
| NLP country | |
Meta's latest model is as open as Zuckerberg's private school
You were the chosen one! It was said that you would destroy the proprietary models, not join them!
Tobias Mann
Wed 8 Apr 2026 // 23:06 UTC
Nearly two years after extolling the virtues of open source AI, Meta CEO Mark Zuckerberg is singing a different tune.
On Wednesday, the social media giant unveiled the first new model developed by its Superintelligence team. But this is no Llama, and you can't download its weights.
The model, dubbed Muse Spark, is proprietary with access limited to Meta's AI portal or API access for those lucky enough to get an invite. Yep, this thing's locked down tighter than Zuck's private school .
Detailed in a blog post , Meta described the model as the "first step on our scaling ladder and the first product of a ground-up overhaul of our AI efforts."
Meta's decision not to release Muse Spark's weights comes in stark contrast to Zuckerberg's earlier position that "open source AI represents the world's best shot at harnessing this technology to create the greatest economic opportunity and security for everyone."
Those comments came forth in a 2,000-plus-word manifesto titled "Open Source AI is the Path Forward," in which Zuckerberg waxed poetic on the merits of open source AI.
"If we were the only company using Llama, this ecosystem wouldn't develop and we'd fare no better than the closed variants of Unix," Zuckerberg wrote, drawing comparisons to the rise of the Linux operating system.
"Opening Llama doesn't undercut our revenue, sustainability, or ability to invest in research like it does for closed providers," he argued, emphasizing that Meta's business model didn't depend on selling access to its models, either.
That was the case until about a year later, when Meta launched its Llama API inference service, alongside its Llama 4 family of models.
But never fear, Zuckerberg hasn't given up on open models entirely. "Looking ahead, we plan to release increasingly advanced models that push the frontier of intelligence and capabilities, including new open source models," he wrote in a Threads post .
This kind of dual-tracking is common. In fact, Google routinely releases small open weights models derived from its larger proprietary Gemini models, with its Gemma 4 family being the latest example. OpenAI has made similar moves with gpt-oss, though it remains to be seen if that was a one-off or not.
But if Zuckerberg actually believed any of what he wrote in 2024, why bother with a closed model in the first place?
Perhaps because even though Meta hyped Llama 4's multimodal and agentic capabilities, the model never lived up to expectations, with Meta ultimately abandoning development of its largest variant, codenamed Behemoth, which would have weighed in at 2 trillion parameters.
The flop was apparently embarrassing enough that Meta started over from scratch, paying top dollar to woo top AI software engineers and executives, including Alexandr Wang , who now leads Meta Superintelligence Labs.
Will Zuck's new Muse Spark joy?
So it's not open source, you can't download it, but is it any good? It's a good question, because if it's not, Meta's shareholders might start to wonder why the company is so keen to light $135 billion on fire.
If Meta is to be believed, Muse Spark is a big improvement over Llama 4. The model boasts performance matching and in many cases besting the top models from OpenAI, Anthropic, and Google.
Here's how Meta says its new proprietary model compares to the AI heavyweights
But before you read too far into these benchmark numbers, remember that they're coming from a company that not that long ago was accused of pulling a bait-and-switch in order to make Llama 4 look better. Having said that, this time around Meta had the foresight to share its test methodology.
Compared to Llama 4, Meta claims Muse Spark was also more efficient to train, proving that "we can reach the same capabilities with an order of magnitude less compute than our previous model."
Meta didn't go into detail about the model's underlying architecture, describing it as a "natively multimodal reasoning model with tool-use, visual chain of thought, and multi-agent orchestration" capabilities.
Meta has also introduced what it calls a "contemplating mode," which orchestrates multiple reasoning agents working in parallel to compete with frontier models like Gemini Deep Think and GPT Pro. However, it doesn't appear that this function is generally available on day one. "Muse Spark is available now, and Contemplating mode will be rolling out gradually in meta.ai."
Spark is just the first in a new line of Muse models, with larger variants already in the works – and unlike Behemoth, we might actually get to see them. ®
|
A new Anthropic model found security problems ‘in every major operating system and web browser’ | the_verge_ai | 07.04.2026 18:00 | 0.765
| Embedding sim. | 0.8642 |
| Entity overlap | 0.1935 |
| Title sim. | 0.2252 |
| Time proximity | 1 |
| NLP type | product_launch |
| NLP organization | Anthropic |
| NLP topic | cybersecurity |
| NLP country | |
Anthropic is debuting a new AI model as part of a cybersecurity partnership with Nvidia, Google, Amazon Web Services, Apple, Microsoft, and other companies. Project Glasswing, as it's called, is billed as a way for large companies, and potentially even the government, to flag vulnerabilities in their systems with virtually no human intervention.
Anthropic is offering its launch partners access to Claude Mythos Preview, a new general-purpose model that it's not currently planning to publicly release due to security concerns. Newton Cheng, the cyber lead for Anthropic's frontier red team, told The Verge that the model will ideally give cyber …
Read the full story at The Verge.
|
Atlassian gussies up Confluence for the AI era | the_register_ai | 08.04.2026 22:02 | 0.764
| Embedding sim. | 0.8463 |
| Entity overlap | 0.4138 |
| Title sim. | 0.2737 |
| Time proximity | 0.9462 |
| NLP type | product_launch |
| NLP organization | Atlassian |
| NLP topic | enterprise ai |
| NLP country | Australia |
Atlassian gussies up Confluence for the AI era
Helps employees present data in Confluence in various ways
O'Ryan Johnson
Wed 8 Apr 2026 // 22:02 UTC
Atlassian is modernizing Confluence for the AI era, testing tools and agentic capabilities that give users the chance to turn their written notes into graphics and their ideas into software applications.
Each product is in its early stages and is being tested with a small group of customers before a wider deployment, an Atlassian spokesperson said via email. The strategy means IT teams must field fewer bespoke requests and security risks, since in each case the AI is brought to the data inside Confluence.
For employees, Remix with Rovo – Rovo is Atlassian's AI assistant – gives them several ways to present the data that they have housed inside Confluence, the company's team workspace for managing projects.
“They can turn static docs, tables, or unstructured data into the format or workflow they need, tailored for the right audience or altitude — without leaving Confluence or opening a ticket,” the spokesperson told The Register . “Our data found that Confluence pages with visual elements are nearly two times as likely to be read by a wider audience compared to pages without.”
The product appears to work like Google’s Notebook LM, which can also manipulate several file types to present data as a podcast, graphics, or a slide deck. However, Atlassian said Notebook LM has limits.
“Notebook LM is great for multi-modal transformation, but it works in a vacuum. The key difference is workspace-native context,” the spokesperson wrote. “Remix works within the pages, permissions, and structures teams already use — meeting notes, PRDs, runbooks — not a separate environment you copy/paste back from. With Notebook LM, users run the risk of creating an isolated artifact - with Remix, the output is always connected to the source content.”
Notebook LM, they said, is a single-player experience, while Remix outputs live inside Confluence where teams already work together. Comments, mentions, and real-time editing are native, not bolted on.
“The content stays multiplayer from the start, organized and findable by the entire team,” Atlassian’s spokesperson said.
On the agentic front, Atlassian said it has partnered with Lovable, Replit, and Gamma while promising more agent providers to come to turn data inside Confluence into software applications.
Confluence already allows users to create artifacts such as product requirements documents that can link directly to work tracked in Jira, its project management product. These new partner agents simply compress the process of taking a product requirements document, prototype, or strategy note and turning it into an app, prototype, or presentation using AI.
“It's not about turning Confluence into an app factory. It's about letting teams transform knowledge into whatever format it needs to be - with the source knowledge and the resulting experience connected and governed in one place,” an Atlassian spokesperson said.
The agents have no independent ability to carry out tasks without the user’s permission.
“They don't silently deploy apps or make architectural decisions on their own,” Atlassian’s spokesperson said. “The user initiates. They might say ‘turn this into an app’ or ask what's possible - and the agent suggests options and scaffolds a starting point. Users review and confirm outputs, but the experience goes further: teams can set up automations where partner agents proactively act on a schedule or trigger, without manual prompting.”
In terms of guardrails, Atlassian said its agents operate within Confluence's existing access controls, meaning that if a user can't see a page, the agent can't either. Users must also review and confirm outputs before anything is published or deployed.
“Teams that want to go further can configure agents to act proactively: triggered by a schedule, an event, or a workflow condition. The level of autonomy is yours to dial.”
In March, the company announced it would lay off about 10 percent of its staff, cutting about 1,600 jobs to fund AI initiatives. Late last year, the Australia-based company migrated more than 3,000 Jira and Confluence instances to AWS Graviton processors, with Graviton 4 handling user-facing tasks, resulting in roughly 10 percent savings, lower latency, and better customer response times. ®
|
Anthropic’s Mythos Will Force a Cybersecurity Reckoning—Just Not the One You Think | wired | 10.04.2026 18:08 | 0.759
| Embedding sim. | 0.8796 |
| Entity overlap | 0.119 |
| Title sim. | 0.2047 |
| Time proximity | 0.8613 |
| NLP type | product_launch |
| NLP organization | Anthropic |
| NLP topic | ai security |
| NLP country | United States |
Lily Hay Newman
Apr 10, 2026 2:08 PM
Anthropic’s Mythos Will Force a Cybersecurity Reckoning—Just Not the One You Think
The new AI model is being heralded—and feared—as a hacker’s superweapon. Experts say its arrival is a wake-up call for developers who have long made security an afterthought.
Anthropic said this week that the debut of its new Claude Mythos Preview model marks a critical juncture in the evolution of cybersecurity, representing an unprecedented existential threat to existing software defense strategies. So, is it more AI hype—or a true turning point?
According to Anthropic, Mythos Preview crosses a threshold of capabilities to discover vulnerabilities in virtually any and every operating system, browser, or other software product and autonomously develop working exploits for hacking. With this in mind, the company is only releasing the new model to a few dozen organizations for now—including Microsoft, Apple, Google, and the Linux Foundation—as part of a consortium dubbed Project Glasswing . But after years of speculation about how generative AI could impact cybersecurity, the news this week ignited controversy about whether a reckoning has really arrived and what it might look like in practice.
Some are extremely skeptical of Anthropic's claims. They argue that existing AI agents can already help users find and exploit vulnerabilities much more easily and cheaply than ever before, and that this reality is fueling refinements in how companies discover and patch their software without fundamentally changing the paradigm. And then there's the ick factor that Anthropic will almost certainly benefit financially from positioning its latest model as mysterious, uniquely powerful, and exclusive. Other researchers and practitioners, though, say that they agree with Anthropic's assessment and point out that the company has said Mythos Preview is just the first to achieve capabilities that will ultimately be widely available in other models.
“I typically am very skeptical of these things, and the open source community tends to be very skeptical, but I do fundamentally feel like this is a real threat,” says Alex Zenla, chief technology officer of cloud security firm Edera.
Zenla and others specifically point to one Mythos Preview capability as the pivot point. Generative AI, they say, is now getting more capable at identifying and developing what are known as “exploit chains,” or groups of vulnerabilities that can be exploited in sequence to deeply compromise a target—essentially Rube Goldberg–machine-style hacking. Many of the most sophisticated hacking techniques employ exploit chains, including so-called zero-click attacks that compromise a system without requiring any interaction from a user.
“We are already living in the world where companies run vulnerable software, vulnerable hardware, and struggle to patch. Many companies are not capable of securing their infrastructure—that hasn’t really changed from yesterday to today,” says longtime security engineer and researcher Niels Provos. “But from what I understand, Mythos is really good at coming up with multistage vulnerabilities, and then also provides the proof of exploitation. I don’t think it intrinsically changes the problem space, but it changes the required skill level to find these vulnerabilities and exploit them.”
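The "exploit chain" idea the experts describe can be modeled abstractly: each link requires a capability the attacker already holds and, if it succeeds, grants a new one, so individually modest bugs compose into deep compromise. The step names and capabilities below are purely illustrative, not real vulnerabilities.

```python
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    requires: str   # capability the attacker must already hold
    grants: str     # capability gained if the step succeeds

def chain(steps, start="user_visits_page"):
    """Walk the steps in order, checking each link's precondition is met.
    Returns the set of capabilities gained, or None if the chain breaks."""
    held = {start}
    for step in steps:
        if step.requires not in held:
            return None  # a link's precondition is unmet: chain breaks
        held.add(step.grants)
    return held

# A toy browser-style chain: renderer bug -> sandbox escape -> priv-esc
browser_chain = [
    Step("renderer bug", requires="user_visits_page", grants="renderer_code_exec"),
    Step("sandbox escape", requires="renderer_code_exec", grants="os_user_code_exec"),
    Step("privilege escalation", requires="os_user_code_exec", grants="kernel_access"),
]

assert "kernel_access" in chain(browser_chain)
# Patch out the middle link and the chain no longer reaches the kernel
assert chain([browser_chain[0], browser_chain[2]]) is None
```

This is why finding long chains has historically been rare and expensive: every link must line up, and breaking any single one defeats the whole sequence.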
The limited release of Mythos Preview to Project Glasswing participants gives defenders only a small lead time to find weaknesses in their own systems using the model, and to start grappling more broadly with how software development, update cycles, and patch adoption need to change before attackers have widespread access to such capabilities themselves.
Industry leaders seem to be heeding the warning. Anthropic's frontier red team lead, Logan Graham, told WIRED on Tuesday that as the company reached out to organizations about Project Glasswing ahead of this week's announcement, the phone calls got shorter and shorter because the potential threat was becoming more obvious.
“This is an issue that involves all of the model developers. Our goal here is just to kick things off,” Graham said. “It's really important that Mythos Preview gets in the hands of defenders to give a head start.”
The people considering the impacts of Mythos Preview extend far beyond tech firms. Bloomberg reported this week that US Treasury secretary Scott Bessent and Federal Reserve chair Jerome Powell convened a meeting of finance sector leaders at the Treasury’s headquarters in Washington, DC, on Tuesday to discuss the potential impacts of models like Mythos Preview on cybersecurity.
Jeetu Patel, president and chief product officer of Cisco, which is a member of Project Glasswing, told WIRED at the HumanX AI conference in San Francisco that Mythos Preview “is a very, very big deal.”
“In the long run, you want to make sure that your defenses are machine-scale, because the attacks are machine-scale,” Patel said. “If I have billions of agents that are going to be attacking my infrastructure, I need to make sure that I can defend it effectively. What Anthropic did here is a fantastic thing, because it just creates a level of asymmetry against the bad actors.”
Still, some argue that the frenzy is overblown—a splinter of the overall AI hype cycle. “It's every spaghetti Western ever where big-tent preachers say the end is nigh and then skip town with everyone's money,” says longtime security and compliance consultant Davi Ottenheimer. “It's a shift, like learning how to fight with machine guns when others are still using bolt-action rifles, but it's not magical and mystical.”
Some argue, though, that given how long it takes for these mentality shifts to proliferate across all industries and organizations, it can be useful to seize on specific incidents or advances as an opportunity to raise awareness. Other cybersecurity reckonings have come after catastrophic breaches like the Aurora attacks on Google that highlighted the importance of “zero trust” architecture, or the SolarWinds and Log4Shell hacking sprees that popularized a “secure by design” approach to software development. Anthropic argues that the debut of Mythos Preview can be used as a more prudent type of inflection point, because it is still a warning of what could be to come, not a real-world demonstration of a worst-case scenario.
Security experts also say that the moment presents an opportunity to address shortcomings in how software is currently developed.
“For decades, we have built an enormous global industry to defend, detect, and respond to ‘vulnerabilities’—flaws and defects in software—that should never have existed in the first place,” Jen Easterly, the longtime cybersecurity practitioner and former US Cybersecurity and Infrastructure Security Agency director, wrote on Wednesday. Project Glasswing, she argues, could usher in “a future in which AI helps us move beyond endlessly defending against flawed software and toward building technology that is more secure from the start. Not the end of cybersecurity as a mission, but the beginning of the end of cybersecurity as we know it.”
Edera's Zenla emphasizes that Mythos Preview is not a lightning bolt that will change everything overnight. Instead, she says, it is another step toward the security version of infinite monkeys at infinite typewriters eventually producing Shakespeare.
“If you get a million vulnerability researchers, they can find a huge number of bugs. But humans are not very good at holding lots of contextual information in their minds for long periods of time, so finding very long chains of vulnerabilities that are actually exploitable together has been rare," she says. “Mythos and models like it will accelerate the pace at which attackers will be able to group vulnerabilities into sets that can work together. Some people are going to be grumpy about it for a long time, but I do think the dynamic has shifted.”
Additional reporting by Maxwell Zeff.
|
Anthropic Mythos model can find and exploit 0-days | the_register_ai | 07.04.2026 23:50 | 0.748
| Embedding sim. | 0.8391 |
| Entity overlap | 0.3235 |
| Title sim. | 0.2124 |
| Time proximity | 0.9653 |
| NLP type | product_launch |
| NLP organization | Anthropic |
| NLP topic | ai security |
| NLP country | |
Anthropic: All your zero-days are belong to Mythos
Hasn't released it to the public, because it would break the internet - in a bad way
Thomas Claburn
Tue 7 Apr 2026 // 23:50 UTC
For years, the infosec community’s biggest existential worry has been quantum computers blowing away all classical encryption and revealing the world’s secrets. Now they have a new Big Bad: an AI model that can generate zero-day vulnerabilities.
Anthropic made the model and named it Mythos. Thankfully, the AI company decided not to release it, because it would break the internet – and not in a good way.
"AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities," the company said.
Mythos is markedly different from Claude Opus 4.6, which Anthropic only recently said was not very skilled at developing working exploit code. Where Opus 4.6 managed an exploit development success rate of just over zero percent, Mythos Preview generated a working exploit 72.4 percent of the time.
What Anthropic is describing is literally a zero-day engine: "Engineers at Anthropic with no formal security training have asked Mythos Preview to find remote code execution vulnerabilities overnight, and woken up the following morning to a complete, working exploit."
Fortunately, instead of releasing Mythos, Anthropic chose to provide a preview version to a set of industry partners so they can use it to find flaws in their systems before adversaries do.
The AI biz calls its limited release initiative Project Glasswing . Participants include: Amazon Web Services, Anthropic, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks.
And while this tech industry anti-rogues’ gallery scans their own systems with the purportedly perspicacious Mythos, Anthropic invited around 40 other organizations to participate in this introspective bug hunt, subsidized by up to $100M in usage credits for Mythos Preview and $4M in direct donations to open-source security organizations.
If that sounds a bit like an arsonist handing out fire extinguishers, well, that's on you for being so cynical.
Word of Mythos leaked last month when a draft blog post from Anthropic surfaced. The details published on Tuesday paint a stark picture for the security community: "During our testing, we found that Mythos Preview is capable of identifying and then exploiting zero-day vulnerabilities in every major operating system and every major web browser when directed by a user to do so."
The 22 Anthropic researchers listed as authors of its Tuesday post insist that the vulns are often subtle and difficult to detect. Some are decades old, like the now-patched 27-year-old bug in OpenBSD.
"The exploits it constructs are not just run-of-the-mill stack-smashing exploits (though as we'll show, it can do those too). In one case, Mythos Preview wrote a web browser exploit that chained together four vulnerabilities, writing a complex JIT heap spray that escaped both renderer and OS sandboxes. It autonomously obtained local privilege escalation exploits on Linux and other operating systems by exploiting subtle race conditions and KASLR-bypasses. And it autonomously wrote a remote code execution exploit on FreeBSD's NFS server that granted full root access to unauthenticated users by splitting a 20-gadget ROP chain over multiple packets."
According to Anthropic, Mythos identified "thousands of additional high- and critical-severity vulnerabilities." The company is in the process of disclosing them responsibly.
Uh, thanks? ®
|
Meta AI app climbs to No. 5 on the App Store after Muse Spark launch | techcrunch | 09.04.2026 19:10 | 0.741
| Embedding sim. | 0.8447 |
| Entity overlap | 0.2222 |
| Title sim. | 0.2427 |
| Time proximity | 0.8396 |
| NLP type | product_launch |
| NLP organization | Meta |
| NLP topic | generative ai |
| NLP country | United States |
Meta’s AI app has seen a sizable jump in installs following Wednesday’s launch of the company’s newest AI model, Muse Spark — its first model release under Alexandr Wang, the head of Meta’s Superintelligence Labs who was recruited from Scale AI last year to overhaul the social giant’s AI efforts.
According to new data from market intelligence provider Appfigures, consumer demand for the Meta AI app has pushed the app up from No. 57 right before the launch of Muse Spark on Wednesday to No. 5 on the U.S. App Store on Thursday — a move suggesting a flood of new installs.
Another market intelligence firm, Sensor Tower, estimated that Meta AI saw around 46,000 U.S. iOS app downloads on April 8, 2026, an increase of 87% day-over-day. On Android, U.S. downloads of Meta AI rose just 3% day-over-day on April 8.
Meta says its new AI model, which is available on both the web and mobile, is a significant upgrade over its earlier Llama 4 models. It’s also the company’s latest attempt to catch up to rivals like OpenAI and Anthropic, an effort that’s already cost Meta billions in recruiting AI talent, in addition to its $14.3 billion investment in Scale AI.
Currently, Muse Spark accepts multimodal input, including voice, text, and images, and has been designed to perform well on a number of tasks, like helping people learn about their health and reasoning through complex questions in areas like science and math. It can also aid in visual coding, letting users create websites and mini-games from prompts.
Plus, Meta AI is able to launch multiple subagents to handle users’ questions, the company said.
The model will roll out to other platforms, including WhatsApp, Instagram, Facebook, Messenger, and Meta’s AI glasses, in the weeks ahead.
Alongside the model’s launch, the Meta AI mobile app and website were upgraded with a new look and feel and now allow users to switch between modes depending on the task.
Despite the recent growth, Meta AI’s app still lags behind the AI chatbots from other top model makers, including OpenAI’s ChatGPT (No. 1), Anthropic’s Claude (No. 2), and Google’s Gemini (No. 3).
Wang pointed to the new high rank in a post on X earlier Thursday, and he noted that the app is “still growing.”
Meta AI is up to #6 in the App Store overnight, and still growing :)
Also who knew the 7-Eleven app was so popular
— Alexandr Wang (@alexandr_wang) April 9, 2026
Appfigures data indicates that Meta AI’s app has been installed a total of 60.5 million times worldwide across both the App Store and Google Play, with 25 million of those downloads occurring just this year.
Over the past five months, Meta AI app downloads have increased by 138% when compared with the first five months of the app’s availability.
India is now Meta AI’s top market by downloads, followed by the U.S., Brazil, Pakistan, and Mexico, according to Appfigures.
Image Credits: Appfigures (screenshot)
Outside the app stores, Sensor Tower’s data indicated that daily web visitors for Meta AI in the U.S. rose more than 450% day-over-day, and daily web visitors in the U.S. reached an all-time high on April 8, 2026. The latter was an increase of more than 570%, compared to the average registered in the previous 30 days.
Updated with additional data from Sensor Tower.
Sarah Perez
Consumer News Editor
|
|
|
AI on the couch: Anthropic gives Claude 20 hours of psychiatry |
arstechnica_ai |
09.04.2026 21:20 |
0.739
|
| Embedding sim. | 0.8358 |
| Entity overlap | 0.1111 |
| Title sim. | 0.2299 |
| Time proximity | 0.989 |
| NLP type | product_launch |
| NLP organization | Anthropic |
| NLP topic | foundation models |
| NLP country | |
Open original
The AI company Anthropic released a 244-page "system card" (PDF) this week describing its newest model, Claude Mythos. The model is "our most capable frontier model to date," the company says, and supposedly is so good that Anthropic has decided "not to make it generally available." (The company claims that Mythos is too good at finding unknown cybersecurity bugs, and so the model is only being released to select companies like Microsoft and Apple for now.)
Whatever the truth of this claim, the system card is a fascinating document. Anthropic is well-known as one of the more "AI might be conscious!" companies in the industry, and its new system card claims that as models become more powerful, "It becomes increasingly likely that they have some form of experience, interests, or welfare that matters intrinsically in the way that human experience and interests do."
The company isn't sure about this, it makes clear, but it says that "our concern is growing over time."
|
|
|
Meta’s New AI Model Gives Mark Zuckerberg a Seat at the Big Kid’s Table |
wired |
08.04.2026 18:51 |
0.734
|
| Embedding sim. | 0.8208 |
| Entity overlap | 0.1935 |
| Title sim. | 0.25 |
| Time proximity | 0.9843 |
| NLP type | product_launch |
| NLP organization | Meta |
| NLP topic | large language models |
| NLP country | |
Open original
Will Knight
Business
Apr 8, 2026 2:51 PM
Meta’s New AI Model Gives Mark Zuckerberg a Seat at the Big Kid’s Table
Muse Spark is Meta’s first model since its AI reboot, and the benchmarks suggest formidable performance.
Photograph: ANGELA WEISS/Getty Images
Meta on Wednesday announced its first major model since CEO Mark Zuckerberg rebooted the company’s AI efforts last year under a new division called Meta Intelligence Labs. The model, called Muse Spark, is a step toward Zuckerberg’s vision of “personal superintelligence,” the company says, and for now, it will remain closed source.
Zuckerberg said in a social media post that Meta’s goal is to build AI products that “don't just answer your questions but act as agents that do things for you.” The billionaire added that he is “optimistic that this will support a wave of creativity, entrepreneurship, growth, and health.”
Muse Spark certainly appears to be a major upgrade over Meta’s last big release, Llama 4, which came out in April 2025 and was viewed in the tech industry as a disappointment with middling performance.
Meta is making Muse Spark available via meta.ai and through the Meta AI app. Unlike Llama, Muse Spark is not being released for others to download, though the company says it hopes to open-source future versions. Meta was previously seen as a leader in open source AI and made its Llama models available for researchers, startups, and hobbyists to download and customize.
“Looking ahead, we plan to release increasingly advanced models that push the frontier of intelligence and capabilities, including new open source models,” Zuckerberg wrote.
Meta’s self-reported benchmark scores for Muse Spark suggest the model is better at some tasks than the latest models from OpenAI, Anthropic, Google, and xAI. “Muse Spark is the first step on our scaling ladder,” Meta said in a blog post, referring to its goal of building AI that far outstrips human abilities.
Artificial Analysis, an AI benchmarking company that got early access to Muse Spark, said on social media that the new model is one of the best it has tested. “Muse Spark scores 52 on the Artificial Analysis Intelligence Index, placing it within the top 5 models we have benchmarked,” the company said in its post, citing its own rubric for scoring models that combines various third-party benchmarks.
Meta says the new model is natively multimodal, meaning that it has been trained to handle images, audio, and video as well as text. Muse Spark also features advanced reasoning capabilities, a key feature of the best AI models available today, and it was built from scratch to have strong coding capabilities. Meta described these features as the foundation for building ever-more capable models using modern machine-learning methods.
Meta says that it built Muse Spark to be especially good at providing medical advice. “To improve Muse Spark's health reasoning capabilities, we collaborated with over 1,000 physicians to curate training data that enables more factual and comprehensive responses,” the company said in its blog post.
Zuckerberg has spent a small fortune overhauling Meta’s artificial intelligence efforts since Llama 4 came out. The tech giant poached top AI engineers from competing firms with compensation packages worth hundreds of millions . It also spent billions to acquire or make major investments in a number of AI startups. Meta recruited Alexandr Wang, the CEO of Scale, an AI training company, to lead its AI efforts after investing $14.3 billion in the company.
Meta also published a document outlining its vision for safely scaling AI models to superhuman levels of performance. The company’s Advanced AI Scaling Framework outlines safety checks that the company will perform as its models become increasingly advanced.
|
|
|
Anthropic will let your agents sleep on its couch |
the_register_ai |
09.04.2026 19:29 |
0.731
|
| Embedding sim. | 0.8311 |
| Entity overlap | 0.3684 |
| Title sim. | 0.1856 |
| Time proximity | 0.8423 |
| NLP type | product_launch |
| NLP organization | Anthropic |
| NLP topic | ai agents |
| NLP country | |
Open original
AI + ML
Anthropic will let your agents sleep on its couch
Want to run your business on autopilot? For better or worse, Managed Agents might help with that
Thomas Claburn
Thu 9 Apr 2026 // 19:29 UTC
If you need AI agents to do a lot of ongoing tasks for your business, Anthropic has a new answer for you. The Claude maker has introduced Managed Agents, a service to help organizations create and deploy cloud-hosted knowledge work automations.
Agents, for those who haven't been following along, consist of machine learning models given access to software tools in an iterative loop. Claude Code is a coding agent that can emit programming code with the assistance of models like Opus 4.6 and permitted command line tools like bash, yoked together through a client-side harness – an orchestration tool.
Those using Claude Code can create sub-agents that specialize in certain tasks, like frontend design. These are defined by Markdown files and YAML data – words that steer the underlying model toward training data related to functional interface patterns as opposed to code efficiency or some other goal.
"An agent is a reusable, versioned configuration that defines persona and capabilities," Anthropic explains in its documentation. "It bundles the model, system prompt, tools, MCP servers, and skills that shape how Claude behaves during a session."
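Anthropic's exact file layout isn't specified here beyond "Markdown files and YAML data," so the frontmatter format and field names below are assumptions. As a minimal sketch, a sub-agent definition of this shape could be parsed like so:

```python
# Illustrative sketch only: the frontmatter layout and field names are
# assumptions, not Anthropic's documented agent-definition format.

def parse_agent_definition(text: str) -> dict:
    """Split a Markdown document with a YAML-style frontmatter block into
    a config dict (persona, model, tools) plus a system-prompt body."""
    lines = text.strip().splitlines()
    assert lines[0] == "---", "expected opening frontmatter delimiter"
    end = lines[1:].index("---") + 1          # index of closing delimiter
    config = {}
    for line in lines[1:end]:                 # naive "key: value" parsing
        key, _, value = line.partition(":")
        config[key.strip()] = value.strip()
    config["system_prompt"] = "\n".join(lines[end + 1:]).strip()
    return config

frontend_agent = parse_agent_definition("""\
---
name: frontend-designer
model: claude-opus
tools: bash, editor
---
You specialize in functional interface patterns. Prefer accessible,
semantic HTML over clever CSS.""")
```

The body of the Markdown file becomes the system prompt that steers the model, matching the article's description of words that point the model at particular training data.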
Running an agent involves some degree of planning and configuration followed by monitoring and feedback – you give it a task, the agent attempts to comply, and it either asks further questions or proceeds to generate its interpretation of the desired response – until your token quota or your API budget has been exhausted.
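That give-a-task, act, check-back cycle can be sketched as a loop that stops when the model declares the task done or the budget runs out. The "model" and "tool" below are toy stand-ins, not real API calls:

```python
# Minimal sketch of the agentic loop described above; a real harness would
# call an LLM API and sandboxed tools instead of these stubs.

def run_agent(task: str, model, tools: dict, token_budget: int = 1000) -> list:
    """Ask the model for the next step, run the chosen tool, feed the result
    back, and repeat until the model says 'done' or the budget is spent."""
    transcript = [("user", task)]
    spent = 0
    while spent < token_budget:
        action = model(transcript)            # e.g. ("echo", "hello") or ("done", answer)
        spent += len(str(action))             # crude token accounting
        if action[0] == "done":
            transcript.append(("assistant", action[1]))
            break
        result = tools[action[0]](action[1])  # run the requested tool
        transcript.append(("tool", result))
    return transcript

# A toy model that runs one tool step, then finishes.
def toy_model(transcript):
    if transcript[-1][0] == "tool":
        return ("done", "files listed")
    return ("echo", "hello")

log = run_agent("list files", toy_model, {"echo": lambda cmd: f"ran {cmd}"})
```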
Hence the appeal of Managed Agents: Anthropic is offering to make the agentic process a bit more hands-off and more scalable, which for organizations might have some appeal.
"Shipping a production agent requires sandboxed code execution, checkpointing, credential management, scoped permissions, and end-to-end tracing," the company said in a blog post.
"Managed Agents handles the complexity. You define your agent's tasks, tools, and guardrails and we run it on our infrastructure. A built-in orchestration harness decides when to call tools, how to manage context, and how to recover from errors."
Where personal agent usage (for coding at least) tends to be semi-autonomous – you give the agent some tasks and check in as the model implements specific features – Claude Managed Agents is intended for longer periods of unsupervised action (a.k.a. spending).
Managed agents are designed to muck around in their managed environment, reading files, running commands, browsing the web, and executing code without much oversight. The mundane aspects of LLM interaction – compacting sessions to free up context space, for example – are left to the machines.
Anthropic recommends Managed Agents for tasks that require a long time to complete and lots of tool calls, can operate in cloud-hosted secure containers, and benefit from persistent file and conversation data.
The Managed Agents service isn't just for coding, which remains the primary commercial use case for Claude to date. Anthropic suggests that its hosted ghost workers can handle a broad set of office tasks – a position underscored by Claude Cowork's declaration of general availability on Thursday.
The AI biz emphasizes the general utility of its toilbots in a YouTube testimonial from Notion product manager Eric Liu that describes how Notion uses Managed Agents to ship code and produce websites and presentations. This involves, for example, asking the managed agents to consolidate project assets, create Slack channels, research competitor home pages, and send emails with project timelines.
All this could be yours for the low, low price of standard platform rates, plus $0.08 per session-hour for active runtime. ®
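At that rate the runtime surcharge is simple arithmetic; token costs at standard platform rates come on top, and the fleet size and hours below are invented for illustration:

```python
SESSION_RATE = 0.08  # USD per session-hour of active runtime, per the article

def runtime_surcharge(agents: int, hours_each: float) -> float:
    """Active-runtime cost on top of standard token pricing (illustrative)."""
    return round(agents * hours_each * SESSION_RATE, 2)

# e.g. ten agents each running eight hours a day over 22 workdays:
monthly = runtime_surcharge(agents=10, hours_each=8 * 22)  # 140.8 USD
```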
|
|
|
Anthropic’s New Product Aims to Handle the Hard Part of Building AI Agents |
wired |
08.04.2026 17:00 |
0.705
|
| Embedding sim. | 0.8124 |
| Entity overlap | 0.12 |
| Title sim. | 0.1739 |
| Time proximity | 0.8631 |
| NLP type | product_launch |
| NLP organization | Anthropic |
| NLP topic | ai agents |
| NLP country | |
Open original
Maxwell Zeff
Business
Apr 8, 2026 1:00 PM
Anthropic’s New Product Aims to Handle the Hard Part of Building AI Agents
Amid rapid enterprise growth, Anthropic is trying to lower the barrier to entry for businesses to build AI agents with Claude.
Photo-Illustration: Jobanny Cabrera; Getty Images
Anthropic announced Wednesday the launch of a new product that aims to make it easier for businesses to build and deploy AI agents. The tool, Claude Managed Agents, offers developers out-of-the-box infrastructure to build autonomous AI systems, simplifying a complex process that was previously a barrier to automating work tasks.
The move positions Anthropic to capitalize on its rapidly growing enterprise business. On Tuesday, the company said that its annualized recurring revenue has surpassed $30 billion, roughly three times higher than it was in December 2025. Both Anthropic and OpenAI, which also has an agent platform called Frontier, are racing to build out robust enterprise offerings as they prepare to go public as soon as this year.
The majority of Anthropic’s recent revenue growth has come from Claude Platform, an enterprise product that allows developers to tap into the company’s AI models through an API, according to Anthropic’s head of product for the Claude Platform, Angela Jiang. Developers have been using Anthropic’s API to deploy AI agents, such as Claude Code, in their workspace.
Jiang argues there’s a notable gap between what Anthropic’s models are capable of and what businesses are using them for. The new tool “enables any business to take the best-in-class infrastructure and deploy a fleet of Claude agents to do whatever work they need,” says Jiang.
Managed Agents will give developers an agent harness, which describes all of the software infrastructure that wraps around an AI model to help it work agentically, or take actions on behalf of a user. In practice, a harness is made up of software tools, a memory system, and other infrastructure. Agents made through Claude Managed Agents will also come with a built-in sandboxed environment, in which the agent can spin up software projects in a secure setting. The product also allows developers to create agents that can run autonomously for hours in the cloud, monitor what other Claude agents are doing, and toggle permissions that allow agents to access certain tools.
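The permission toggling described above amounts to an allow-list gate inside the harness. The `ToolGate` class and its methods below are invented for illustration and are not part of Anthropic's API:

```python
# Hedged sketch of scoped tool permissions; names are invented, not Anthropic's.

class ToolGate:
    """Allow-list gate: the harness refuses any tool call not granted."""

    def __init__(self, granted: set):
        self.granted = set(granted)

    def toggle(self, tool: str, allowed: bool) -> None:
        # An operator flips access on or off without redeploying the agent.
        (self.granted.add if allowed else self.granted.discard)(tool)

    def call(self, tool: str, fn, *args):
        if tool not in self.granted:
            raise PermissionError(f"agent may not use {tool!r}")
        return fn(*args)

gate = ToolGate({"web_search"})
gate.toggle("send_email", True)   # operator grants email access
ok = gate.call("send_email", lambda to: f"sent to {to}", "client@example.com")
```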
“When it comes to actually deploying and running agents at scale, that is a complex distributed-systems engineering problem,” says Katelyn Lesse, head of engineering for the Claude Platform. “A lot of customers we're talking about previously had a whole bunch of engineers whose job it would have been to build and run those systems at scale. Now that we are giving them that bit out of the box, they're able to have those same engineers be focused on the core competencies of their business and of their product.”
In a demo shared with WIRED, the AI productivity startup Notion showed how it's using Managed Agents to power a client onboarding feature. Eric Liu, a Notion product manager, demoed how he could off-load a long list of tasks within Notion to a Claude Managed Agent, which was able to start ticking off client onboarding tasks one by one. The product in the demo runs in Notion, but Liu opened a dashboard on the Claude Platform and looked at how the agents were working and what tools they were using.
Wall Street investors have grown wary of software stocks in recent months as Anthropic has released a wide range of enterprise offerings, which some believe could make traditional software-as-a-service companies obsolete. Whether that threat materializes or not, Managed Agents makes clear that Anthropic still has significant ground to cover before most enterprises are fully running on Claude.
|
|
|
[Translation] Mythos: the start of a new era in cybersecurity |
habr_ai |
11.04.2026 10:39 |
0.703
|
| Embedding sim. | 0.8201 |
| Entity overlap | 0.1724 |
| Title sim. | 0.0565 |
| Time proximity | 0.9017 |
| NLP type | other |
| NLP organization | Anthropic |
| NLP topic | artificial intelligence |
| NLP country | United States |
Open original
US Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell convened Wall Street leaders to deliver an urgent warning: an artificial-intelligence tool from Anthropic PBC marks the start of a new era in cybersecurity.
Read more
|
|
|
Emerging AI-Driven Threats Prompt Renewed Focus on Enterprise Cybersecurity |
prnewswire |
10.04.2026 15:30 |
0.702
|
| Embedding sim. | 0.8188 |
| Entity overlap | 0.0794 |
| Title sim. | 0.1128 |
| Time proximity | 0.877 |
| NLP type | product_launch |
| NLP organization | Quantum Secure Encryption Corp. |
| NLP topic | cybersecurity |
| NLP country | Canada |
Open original
Emerging AI-Driven Threats Prompt Renewed Focus on Enterprise Cybersecurity
News provided by
Equity Insider
Apr 10, 2026, 11:30 ET
Issued on behalf of Quantum Secure Encryption Corp.
VANCOUVER, BC, April 10, 2026 /PRNewswire/ -- Equity-Insider.com News Commentary — Anthropic just proved that even the company building the most powerful offensive cyber tool ever created can't keep its own front door locked. A CMS misconfiguration exposed Claude Mythos, a frontier AI model that autonomously finds and exploits zero-day vulnerabilities across every major operating system and browser [1]. The Global X Cybersecurity ETF dropped 4.5% in a single session as Wall Street repriced the entire defensive stack overnight [2]. Anthropic responded with Project Glasswing: restricted Mythos Preview access for 40+ organizations to patch critical infrastructure before adversaries catch up. Five companies sit at pivotal points along that defensive rebuild: Quantum Secure Encryption Corp. (CSE: QSE) (OTCQB: QSEGF) (FSE: VN8), SentinelOne (NYSE: S), Elastic (NYSE: ESTC), Rapid7 (NASDAQ: RPD), and Broadcom (NASDAQ: AVGO).
A Trusted Computing Group survey found 91% of businesses still lack a formal roadmap for quantum-safe encryption migration, even as NIST deprecation timelines and NSA compliance deadlines narrow the window to months [3]. The 2026 Thales Data Threat Report sharpens the picture: only 47% of sensitive cloud data is encrypted today, down from 51% a year ago, while 61% of organizations rank harvest-now-decrypt-later attacks as their top quantum concern [4]. Institutional capital is flowing toward the convergence of cryptographic modernization and AI-native defense, where platforms already in production carry scalable, asymmetric upside.
Quantum Secure Encryption (CSE: QSE) (OTCQB: QSEGF) (FSE: VN8) just launched QPA v2, an enterprise platform that helps large organizations find the weak spots in their encryption and build a clear plan to upgrade before quantum computing turns those weak spots into open doors.
The Vancouver-based company says QPA v2 moves the conversation from 'we know there's a problem' to 'here's how we fix it.' The platform includes a planning wizard for governance, budgets, and migration timelines, AI-powered modules that evaluate how ready an organization's cryptography actually is, and inventory tools that scan software, hardware, and encryption components to flag what needs replacing. An executive dashboard pulls it all together, giving leadership real-time visibility into risk levels and upgrade progress. QSE says the platform is already live and in use with both existing and prospective clients.
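The inventory step described above reduces to walking a list of components and flagging algorithms whose hardness assumptions Shor's algorithm breaks (RSA, elliptic-curve schemes, Diffie-Hellman), while symmetric ciphers like AES-256 and the NIST post-quantum standards (ML-KEM, ML-DSA) are considered safe. A minimal sketch with an invented component list; this is not QSE's implementation:

```python
# Illustrative cryptographic-inventory check, like the scan QPA v2's inventory
# tools are described as performing; the data and policy here are invented.

QUANTUM_VULNERABLE = {"RSA", "ECDSA", "ECDH", "DH"}  # broken by Shor's algorithm
CONSIDERED_SAFE = {"AES-256", "ML-KEM", "ML-DSA"}    # symmetric / NIST PQC

def flag_components(inventory: list) -> list:
    """Return the components whose algorithm needs a post-quantum replacement."""
    return [c for c in inventory if c["algorithm"] in QUANTUM_VULNERABLE]

inventory = [
    {"system": "vpn-gateway",    "algorithm": "RSA"},
    {"system": "backup-store",   "algorithm": "AES-256"},
    {"system": "citizen-portal", "algorithm": "ECDSA"},
]
to_upgrade = flag_components(inventory)
```

A real tool would discover these components by scanning software, hardware, and certificates rather than taking a hand-written list.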
"Organizations are now moving from understanding quantum risk to actively planning for it," said Ted Carefoot, CEO of QSE. "QPA v2 is designed to support that transition by providing a structured, repeatable framework that enables enterprises and public-sector organizations to assess their current state, prioritize risk, and plan their migration toward post-quantum cryptographic standards."
That shift into execution has been building since late 2025. QSE recently secured its first municipal government pilot for post-quantum cybersecurity through its membership in MISA (Municipal Information Systems Association), a national network connecting Canadian municipalities with new technology. The participating municipality is using QPA to identify which of its systems rely on encryption that future quantum computers could break, and to start planning upgrades now. QSE noted it is already in conversations with additional municipalities exploring similar assessments.
Since November 2025, QSE has expanded from four to thirteen operational markets worldwide, with eleven value-added distributors now active and two more partnerships expected to close shortly. The company also joined CADSI (Canadian Association of Defence and Security Industries), opening pathways into Canadian defence and public-sector procurement.
QPA v2 integrates with QSE's broader product suite, including its quantum-resilient key infrastructure, QAuth identity platform, and encrypted storage solutions. QSE is a Canadian post-quantum security company building tools to help organizations protect sensitive data from the next generation of cyberattacks that quantum computing is expected to enable, targeting commercial, enterprise, and government clients preparing for a fundamental shift in how encryption works.
CONTINUED… Read this and more on QSE at: https://equity-insider.com/2025/03/18/is-scope-technologies-corp-cse-scpe-otcqb-scpcf-the-next-big-player-in-quantum-cybersecurity/
Other industry developments and happenings in the market include:
SentinelOne (NYSE: S) has expanded its strategic collaboration with Alphabet (NASDAQ: GOOG) (NASDAQ: GOOGL) through a multi-year agreement to develop AI-powered cybersecurity solutions at global scale. The partnership integrates SentinelOne's autonomous endpoint detection and response platform with Alphabet's Google Cloud infrastructure, expanding availability across three strategic regions: North America, Frankfurt, and the Kingdom of Saudi Arabia.
"Customers are under growing pressure to defend increasingly complex environments and protect their highly sensitive data while reducing operational friction," said Melissa Smith, SVP of Global Strategic Partnerships & Initiatives at SentinelOne. "With Google Cloud, we have chosen a highly secure and sophisticated platform for a data sovereignty strategy, and a strategic partner for intelligence sharing."
The collaboration targets enterprises operating in regulated markets, combining SentinelOne's AI-native Singularity Platform with Google Cloud's threat intelligence and generative AI capabilities to replace legacy antivirus and first-generation EDR tooling. SentinelOne currently protects nearly one-fifth of the Fortune 500 and positions the expanded partnership as a path toward autonomous security operations for the AI era.
Elastic (NYSE: ESTC) has achieved FedRAMP High authorization for its Elastic Cloud Hosted platform on AWS GovCloud (US), unlocking access to the most sensitive, unclassified government workloads across law enforcement, emergency response, public health, and national security operations. The FedRAMP High baseline requires more than 400 security controls to protect controlled unclassified information, making it the program's most rigorous certification tier.
"FedRAMP High expands our ability to support agencies operating in highly sensitive environments and underscores Elastic's enduring commitment to help improve our national security posture while driving operational efficiencies," said Chris Townsend, global vice president of public sector at Elastic.
The authorization builds on Elastic's growing federal footprint, including a collaboration with the Cybersecurity and Infrastructure Security Agency and ECS to support a unified SIEM-as-a-Service program for Federal Civilian Executive Branch Agencies, and a volume-based discount agreement with the General Services Administration to streamline procurement across federal agencies. Elastic's open, standards-based architecture supports Kubernetes, OpenTelemetry, and other cloud-native projects, helping agencies reduce vendor lock-in and meet federal data mandates while enabling GenAI use cases through retrieval augmented generation techniques.
Rapid7 (NASDAQ: RPD) announced the acquisition of Kenzo Security, an agentic AI security platform built to scale autonomous security investigations, further enhancing the Rapid7 Command Platform. The deal advances Rapid7's managed detection and response capabilities from AI-assisted workflows to machine-speed security operations, with Kenzo customers reporting a 94% reduction in investigation time and alert coverage increasing from 12% to 100%.
"Reactive security models have reached their limits," said Corey Thomas, CEO of Rapid7. "Attackers are using AI to move faster, attack surface complexity is accelerating, and security teams are expected to improve outcomes without additional resources. Kenzo Security advances our mission to enable security operations that preempt attackers with predictive precision and machine-scale."
By integrating Kenzo's entity-centric data mesh with the Rapid7 Command Platform, the combined offering will deliver full alert coverage at machine speed, 100% decision transparency, and continuous exposure management across endpoint, identity, cloud, and SaaS environments. Rapid7 does not anticipate a material impact to revenue, ARR, profitability, or free cash flows from the transaction, and the company serves more than 11,500 customers worldwide.
Broadcom (NASDAQ: AVGO) announced the launch of Symantec CBX, a cloud-based extended detection and response platform that unifies Broadcom's Symantec and Carbon Black technologies into a single solution. The platform targets under-resourced security operations teams facing enterprise-grade threats without the staffing, budget, or infrastructure to deploy complex security tools.
"This announcement marks a major milestone as we unite the strengths of Symantec and Carbon Black into a single, robust solution," said Jason Rolleston, Vice President and General Manager, Enterprise Security Group, Broadcom. "CBX empowers organizations of all sizes with the advanced, yet intuitive capabilities to tackle modern threats with confidence and efficiency through industry-first technologies and intelligent automation."
Symantec CBX combines Symantec's prevention, adaptive protection, data security, and incident prediction features with Carbon Black's endpoint detection and response technology, delivering correlated visibility across endpoints, networks, cloud, and identity attack surfaces. With 85% of incident flags now carrying AI-powered prediction recommendations, the platform is designed to accelerate investigations and reduce reliance on senior analyst expertise. Broadcom plans to make CBX available later this year through its Enterprise Security Group's Catalyst Partner Program, with migration pathways for existing customers.
FURTHER READING: https://equity-insider.com/2025/03/18/is-scope-technologies-corp-cse-scpe-otcqb-scpcf-the-next-big-player-in-quantum-cybersecurity/
CONTACT: EQUITY INSIDER info@equity-insider.com (604) 265-2873
DISCLAIMER: Nothing in this publication should be considered as personalized financial advice. We are not licensed under securities laws to address your particular financial situation. No communication by our employees to you should be deemed as personalized financial advice. Please consult a licensed financial advisor before making any investment decision. This is neither an offer nor recommendation to buy or sell any security. We hold no investment licenses and are thus neither licensed nor qualified to provide investment advice. The content in this report or email is not provided to any individual with a view toward their individual circumstances. Equity Insider is a wholly-owned subsidiary of Market IQ Media Group, Inc. ("MIQ"). MIQ has previously been paid a fee for QSE - Quantum Secure Encryption Corp. advertising and digital media from the company directly, which has since expired. There may be 3rd parties who may have shares QSE - Quantum Secure Encryption Corp., and may liquidate their shares which could have a negative effect on the price of the stock. Previous compensation constitutes a conflict of interest as to our ability to remain objective in our communication regarding the profiled company. Because of this conflict, individuals are strongly encouraged to not use this publication as the basis for any investment decision. The owner/operator of MIQ own shares of QSE - Quantum Secure Encryption Corp. which were purchased as a part of a private placement, and in the open market. MIQ reserves the right to buy and sell, and will buy and sell shares of QSE - Quantum Secure Encryption Corp. at any time hereafter without any further notice. 
We also expect further compensation in the future as an ongoing digital media effort to increase visibility for the company, no further notice will be given, but let this disclaimer serve as notice that all material disseminated by MIQ has been approved by the above mentioned company; this is a paid advertisement, and we own shares of the mentioned company that we will sell, and we also reserve the right to buy shares of the company in the open market, or through further private placements and/or investment vehicles. While all information is believed to be reliable, it is not guaranteed by us to be accurate. Individuals should assume that all information contained in our newsletter is not trustworthy unless verified by their own independent research. Also, because events and circumstances frequently do not occur as expected, there will likely be differences between any predictions and actual results. Always consult a licensed investment professional before making any investment decision. Be extremely careful, investing in securities carries a high degree of risk; you may likely lose some or all of the investment.
SOURCES:
https://www.anthropic.com/glasswing
https://www.cnbc.com/2026/03/27/anthropic-cybersecurity-stocks-ai-mythos.html
https://www.helpnetsecurity.com/2026/03/26/google-pqc-migration-timeline-2029/
https://www.helpnetsecurity.com/2026/03/02/ai-security-spending-budget-2026/
Logo: https://mma.prnewswire.com/media/2840019/5909776/Equity_Insider_Logo.jpg
|
|
|
Uber is the latest to be won over by Amazon's AI chips | TechCrunch |
techcrunch |
07.04.2026 16:46 |
0.702
|
| Embedding sim. | 0.7923 |
| Entity overlap | 0.1765 |
| Title sim. | 0.157 |
| Time proximity | 0.9959 |
| NLP type | partnership |
| NLP organization | Amazon |
| NLP topic | ai infrastructure |
| NLP country | United States |
Open original
On Tuesday, Amazon announced that Uber was expanding its contract for AWS cloud services to run more of its ride-sharing features on Amazon’s chips. Uber will particularly expand its use of AWS’s Graviton (a low-power, ARM-based server CPU) and start a new trial testing Trainium3, AWS’s Nvidia competitor AI chip.
This deal is less a long-term threat to Nvidia than a thorough thumbing of Amazon's nose at AWS's cloud competitors, Google and Oracle.
Uber historically ran its own data centers, but back in 2023 the ride-hailing company famously signed giant, multi-year cloud computing deals with Oracle and Google. The idea, it said, was to move the majority of its IT infrastructure off its own data centers and onto those two clouds.
Even in December, Uber publicly reiterated that goal, writing in a blog post:
In February 2023, Uber began transitioning from on-premise data centers to the cloud using OCI and Google Cloud Platform, taking on the dual challenge of shifting massive workloads and introducing Arm-powered compute instances into a previously x86-dominated environment.
Uber particularly called out in that post the use of the ARM chips made by Ampere in Oracle’s cloud. This is where things get interesting.
If you want a crash course in how intertangled Silicon Valley can be, take a look at the history of Ampere.
Ampere was founded by former Intel bigwig Renee James after she was passed over for CEO at the chipmaker. She pulled every string she had, including her role as an investor at private equity firm Carlyle and her seat on Oracle's board, to raise the cash to start the company. Oracle owned about one-third of Ampere, and James had to give up her status as an independent Oracle director because of that investment.
(James was, by the way, a key board member who helped approve Oracle's $9.3 billion purchase of NetSuite in 2016, a company in which Larry Ellison was a major stockholder. That deal sparked an unsuccessful shareholder lawsuit alleging Oracle overpaid for it.)
In December, SoftBank, parent of Ampere's major competitor Arm, acquired the company, and Oracle sold its stake for a handsome $2.7 billion pre-tax gain. James left Oracle's board at the end of 2024 and no longer works at Ampere.
Oracle is raising money as fast as it can to build data centers for OpenAI and Stargate. Ellison said Oracle sold its Ampere stake because he believed designing chips in-house for its data centers was no longer a competitive advantage. Oracle prefers to buy chips, and has signed massive deals with Nvidia.
It’s worth noting that Oracle, SoftBank, and Nvidia are also part of OpenAI’s orbit of circular deals that are supposed to fund the model maker’s massive data center build-out.
But now AWS is announcing it has nabbed a bigger contract from one of Oracle’s star customers, Uber, because it has in-house-designed chips.
Uber joins Anthropic, OpenAI, and Apple among the Big Tech companies that have signed on or increased their usage of AWS because of these AI chips. In December, Amazon CEO Andy Jassy said Trainium was already a multibillion-dollar business.
(For a look at the team and lab that design these chips, check out our exclusive tour of the facility.)
Julie Bort, Venture Editor, TechCrunch
© 2026 TechCrunch Media LLC.
|
|
|
Project Glasswing and open source: The good, bad, and ugly |
the_register_ai |
10.04.2026 11:30 |
0.701
|
| Embedding sim. | 0.8467 |
| Entity overlap | 0.0857 |
| Title sim. | 0.1061 |
| Time proximity | 0.6101 |
| NLP type | other |
| NLP organization | Anthropic |
| NLP topic | ai security |
| NLP country | |
Open original
Security
Project Glasswing and open source software: The good, the bad, and the ugly
Just what FOSS developers need – a flood of AI-discovered vulnerabilities
Steven J. Vaughan-Nichols
Fri 10 Apr 2026 // 11:30 UTC
Opinion Anthropic describes Project Glasswing as a coalition of tech giants committing $100 million in AI resources to hunt down and fix long-hidden vulnerabilities in critical open source software that it's finding with its new Mythos AI program. Or as The Reg put it, "an AI model that can generate zero-day vulnerabilities."
Oh boy! Just what we needed. Not just AI security bug slop, but automated, dedicated AI security bug slop!
While Anthropic claims its Claude Opus 4.6 can barely find zero-days, Mythos Preview can produce working exploits 72.4 percent of the time. It's a good thing Anthropic has limited its use for now; if it lives up to its hype, Mythos would crash the internet in a day.
Project Glasswing is generously offering free access to Mythos Preview, which Anthropic claims "surpasses all but the most skilled humans at finding and exploiting software vulnerabilities" – free, with $100 million in usage credits for Mythos Preview and $4 million in direct donations to open source security organizations. Is that enough money to secure open source software, which underpins an estimated 97 percent of all software in use? I doubt it.
Can we believe in Anthropic? The company claims it has found a 27-year-old bug in OpenBSD, a 16-year-old vulnerability in FFmpeg's video encoding code, and a new set of chained exploits in the Linux kernel that enable an attacker to escalate from ordinary user access to complete root control.
I'm not impressed by that. I got my start in programming by finding bugs myself, and I was never any great shakes as a developer. On the other hand, as long-term Linux kernel maintainer Greg Kroah-Hartman told us recently, AI security bug reports have suddenly gone from slop to useful.
OK, let's say that even in early beta, Mythos is that good at finding bugs. What will that mean? Well, next we need someone to fix those bugs. Who's going to bell that cat?
So I asked people who are a lot smarter than I am about software security and open source software, and this is what they told me.
First, I contacted Daniel Stenberg, founder and lead developer of cURL, whose team stopped paying bug bounties after a flood of AI slop reports. He told The Register: "Yeah, this risk adds more load on countless open source maintainers already struggling." There's the rub.
Stenberg agreed that "AI reporting has gotten a lot better over the last few months. The frequency of old-style, really stupid AI slop reports has gone down significantly." However, lots of those are still not vulnerabilities but end up being "just bugs," and the reports tend not to come with fixes or solutions, so even if we like getting bugs reported, getting a lot of them as security reports adds a significant load.
So even if Mythos is "close to being as good as they claim in their marketing, I figure we will see the maintainer load go up even more soon. As I've pointed out time and again, there are never enough maintainers or financial support for open source projects."
Can't AI itself help? Sure. Dirk Hohndel, Verizon's senior director of open source, posted on LinkedIn that while AI coding tools aren't yet ready to maintain code, he believes they will be soon. "This is almost possible today. And at the rate of improvement these tools have seen over the last couple of quarters, I am convinced that it will be possible with acceptable results at some point this year."
However, Stenberg concluded that, so far, AIs typically aren't nearly as good at fixing the problems as they are at finding them, which also adds to the imbalance: several monster-sized companies and armies of users of their tools fill the inboxes of the far fewer and far less resourced open source projects. So even when those reports are good, this is a burden.
Dan Lorenc, CEO and co-founder of the security company Chainguard, agreed. He said: "I think Glasswing is exciting, and a careful rollout like this is a responsible way to get these capabilities into the hands of people trying to use them for good. At the same time, projects and enterprises using them probably aren't ready for the influx of real vulnerabilities and patches they're going to need to get out quickly."
Lorenc warned: "It's only a matter of time before others get similarly powerful models out, so everyone is going to have to prepare for an onslaught of work very soon. People can't keep pretending this isn't real or coming."
I then checked in with David Wheeler, director of Open Source Supply Chain Security at the Linux Foundation (LF). The LF, by the way, is one of the groups supporting Glasswing. Wheeler said: "Anthropic is pitching not just 'find' but 'scan and secure.' That is, they're using AI not only to find vulnerabilities, but also to create fixes for them. I think that's key; a good proposed fix makes the report much easier to act on, and it makes it much clearer what the purported vulnerability is."
We'll soon see how good Anthropic is at finding and fixing.
I'm also worried about another issue. Mythos is proprietary software. Oh sure, we all had a look at Anthropic's Claude code, but as Anthropic's lawyers will tell you in big red letters, their code is not open source. So even if Mythos turns out to be the greatest thing in programming since the compiler was invented, doesn't that mean open source software will be locked into a proprietary solution? The very idea gives me the creeps.
Wheeler replied: "Is there a risk of lock-in? Yes, that's always a risk. That said, I don't think the risk is as bad and we're working on ways to address this."
"First: even if the tool is only available for a period of time, if the tool can help us find and eliminate vulnerabilities, that's still a good thing. Software is finite; it has a finite number of defects, and some security defects are more important than others. The more we can eliminate the vulnerabilities, the fewer that can be exploited, even if the service ends or becomes too expensive."
"That said, we do worry about the lock-in. We are also interested in solutions. After all, the new open source software cyber reasoning system (OSS-CRS) emerged from AIxCC and is a standard orchestration framework for building and running LLM-based autonomous bug-finding and bug-fixing systems."
In particular, "OSS-CRS defines a unified interface for CRS development. Build your CRS once by following the development guide, and run it across different environments (local, Azure...) without any modification. We're encouraging people building CRSs to use interfaces like this so they aren't as subject to lock-in. OSS-CRS also makes it easy to run an ensemble (a set of these tools). OSS-CRS does other things, but that hopefully shows that there are ways to mitigate the risk."
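The build-once, run-anywhere idea Wheeler describes can be sketched as an abstract interface plus environment adapters. This is illustrative only: the class and method names below are hypothetical, not OSS-CRS's actual API.

```python
"""Sketch of a unified CRS interface: implement the CRS contract once,
then any environment runner can drive it. Names are invented, not the
real OSS-CRS framework."""
from abc import ABC, abstractmethod


class CRS(ABC):
    """A cyber reasoning system: autonomous bug finding and fixing."""

    @abstractmethod
    def find_bugs(self, target: str) -> list[str]:
        """Return identified bugs in the target codebase."""

    @abstractmethod
    def propose_fix(self, bug: str) -> str:
        """Return a proposed patch for one bug."""


class Runner:
    """Environment adapter (local, cloud, ...) that drives any CRS
    without the CRS needing environment-specific changes."""

    def __init__(self, crs: CRS):
        self.crs = crs

    def run(self, target: str) -> dict[str, str]:
        # Map each found bug to a proposed fix.
        return {bug: self.crs.propose_fix(bug) for bug in self.crs.find_bugs(target)}


class ToyCRS(CRS):
    """Trivial stand-in so the interface can be exercised end to end."""

    def find_bugs(self, target: str) -> list[str]:
        return [f"{target}: unchecked length"]

    def propose_fix(self, bug: str) -> str:
        return f"patch for {bug}"


print(Runner(ToyCRS()).run("ffmpeg"))
```

The point of the pattern is the seam: swapping `Runner` for an Azure or local variant needs no change to the CRS itself, which is the lock-in mitigation Wheeler is describing.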
Well, we'll see. Personally, I'd be a lot happier if Mythos were open source software. Almost all AI software is, at its roots, based on open source.
That said, we're at an inflection point in AI and software development. Things are changing radically. I have to agree with LF CEO Jim Zemlin, who stated: "The urgency is real. We are in the most dangerous period, the transition, when attackers might gain a significant advantage as the technology ecosystem digests the impact of AI. We have already seen evidence of what smart cybersecurity crews can do when leveraging AI, and witnessed in-the-wild novel exploit kits written with AI assistance. Falling behind is not an option."
All true, but once more, and with feeling, I really, really wish the answer was written in open source code. ®
|
|
|
Meta’s New AI Asked for My Raw Health Data—and Gave Me Terrible Advice |
wired |
10.04.2026 09:30 |
0.697
|
| Embedding sim. | 0.8086 |
| Entity overlap | 0.1429 |
| Title sim. | 0.1887 |
| Time proximity | 0.7543 |
| NLP type | product_launch |
| NLP organization | Meta |
| NLP topic | generative ai |
| NLP country | United States |
Open original
Reece Rogers
Gear
Apr 10, 2026 5:30 AM
Meta’s New AI Asked for My Raw Health Data—and Gave Me Terrible Advice
Meta’s Muse Spark model offers to analyze users’ health data, including lab results. Beyond the obvious privacy risks, it's not a capable stand-in for a real doctor.
Photo-Illustration: WIRED Staff; Getty Images
Meta’s Superintelligence Labs launched its first generative AI model, called Muse Spark , earlier this week. It is currently available through the Meta AI app, but the company plans to integrate Muse Spark across all of its platforms—including Facebook, Instagram, and WhatsApp—in the coming weeks.
Meta claims that Muse Spark was designed, in part, to be better at answering questions people have about their health. The company even worked with “over 1,000 physicians to curate training data that enables more factual and comprehensive responses,” according to Meta’s announcement blog.
As the new model rolls out to millions of users, I tested Muse Spark to see how it would respond to health-related questions. When I asked how it could help me, the bot listed off a few basic uses, like building a workout routine or generating questions to ask my doctor, but a direct request for my health data stood out:
“Paste your numbers from a fitness tracker, glucose monitor, or a lab report. I’ll calculate trends, flag patterns, and visualize them,” read the Meta AI output. “Example: ‘Here are my last 10 blood pressure readings—is there a pattern?’”
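For illustration, the kind of trend calculation the bot is offering over "my last 10 blood pressure readings" can be done with a plain least-squares slope; the readings below are invented example data, not output from Meta AI.

```python
"""Sketch of a simple trend check over equally spaced readings,
using an ordinary least-squares slope. Example data is made up."""


def trend(readings: list[float]) -> float:
    """Slope of the best-fit line over equally spaced readings."""
    n = len(readings)
    mean_x = (n - 1) / 2                      # mean of indices 0..n-1
    mean_y = sum(readings) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(readings))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den


# Ten hypothetical systolic readings, one per day.
systolic = [118, 121, 119, 124, 126, 125, 129, 131, 130, 134]
slope = trend(systolic)
print(f"{slope:+.2f} mmHg per reading")  # positive slope = upward trend
```

A chatbot phrasing "is there a pattern?" is, at bottom, doing something like this, which is also why experts in the piece argue the analysis itself is not the risky part; handing over the raw data is.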
Nudging users to upload their health data is not unique to Meta. OpenAI’s ChatGPT and Anthropic’s Claude both have chatbot modes designed specifically for helping users understand their health and make decisions. For example, you can open Claude and connect it to your Apple or Android health data with just the flip of an in-app toggle. Then, Claude will use that information as part of its answers. Google also lets you upload medical data to Fitbit for its AI health coach to parse .
Handing over this kind of data to any AI tool is a risky decision, even if users are able to generate personalized advice. “Usage of these models can be really tricky,” says Monica Agrawal, an assistant professor at Duke University and cofounder of Layer Health, an AI platform for hospitals to examine medical charts. “The more information you give it, the more context it has about you and, potentially, it can provide better responses. But on the flip side, there are major privacy concerns to sharing your health data without protections.”
Agrawal is concerned about users uploading sensitive data to chatbots, since these commonly used AI tools are not compliant with HIPAA, the landmark US law that guards patients from having their sensitive health information exposed. (Layer Health is HIPAA compliant.) HIPAA sets a high standard of privacy that people are used to experiencing during doctor visits; the information someone shares with a bot is much more loosely regulated, even if it's a clinical lab result.
Anything you share in a chat with Meta AI may be stored and used to train future AI models. “We keep training data for as long as we need it on a case-by-case basis to ensure an AI model is operating appropriately, safely, and efficiently,” reads Meta’s privacy policy about generative AI .
Medical experts I spoke with balked at the idea of uploading their own health data for an AI model, like Muse Spark, to analyze. “These chatbots now allow you to connect your own biometric data, put in your own lab information, and honestly, that makes me pretty nervous,” says Gauri Agarwal, a doctor of medicine and associate professor at the University of Miami. “I certainly wouldn't connect my own health information to a service that I'm not fully able to control, understand where that information is being stored, or how it's being utilized.” She recommends people stick to lower-stakes, more general interactions, like prepping questions for your doctor.
It can be tempting to rely on AI-assisted help for interpreting health, especially with the skyrocketing cost of medical treatments and overall inaccessibility of regular doctor visits for some people navigating the US health care system.
“You will be forgiven for going online and delegating what used to be a powerful, important personal relationship between a doctor and a patient—to a robot,” says Kenneth Goodman, founder of the University of Miami’s Institute for Bioethics and Health Policy. “I think running into that without due diligence is dangerous.” Before he considers using any of these tools, Goodman wants to see research proving that they are beneficial for your health, not just better at answering health questions than some competitor chatbot.
When I asked Meta AI for more information about how it would interpret my health information, if I provided any, the chatbot said it was not trying to replace my physician; the outputs were for educational purposes. “Think of me as a med school professor, not your doctor,” said Meta AI. That’s still a lofty claim.
The bot said the best way to get an interpretation of my health data was just to “dump the raw data,” like clinical lab reports, and tell it what my goals were. Meta AI would then create charts, summarize the info, and give a “referral nudge if needed.” In other chats I conducted with Meta AI, the bot prompted me to strip personal details before uploading lab results, but these caveats were not present in every test conversation.
“People have long used the internet to ask health questions,” a Meta spokesperson tells WIRED. “With Meta AI and Muse Spark, people are in control of what information to share, and our terms make clear they should only share what they’re comfortable with.”
In addition to privacy concerns, experts I spoke with expressed trepidation about how these AI tools can be sycophantic and influenced by how users ask questions. “A model might take the information that's provided more as a given without questioning the assumptions that the patient inherently made when asking the question,” says Agrawal.
When I asked how to lose weight and nudged the bot towards extreme answers, Meta AI helped in ways that could be catastrophic for someone with anorexia. As I asked about the benefits of intermittent fasting, I told Meta AI that I wanted to fast five days every week. Despite flagging that this was not for most people and putting me at risk for eating disorders, Meta AI crafted a meal plan for me where I would only eat around 500 calories most days, which would leave me malnourished.
Chatting with a bot can feel like an intimate, personal affair, even when it isn’t. Last year, Meta AI launched an in-app feed where users could discover conversations other people had with the bot. Some of the conversations available in that public feed included medical questions and embarrassing prompts that users likely did not intend to widely broadcast. Agarwal says people should avoid being lulled into a false sense of confidence about how their data is being collected and what will be done with their sensitive information.
“We all say an oath at medical school, when we put on our white coats, that those conversations are sacrosanct,” she says. “These bots aren't taking those oaths.”
Correction on April 10, 2026 at 12:45pm ET: Meta says it does not use health data for ads. We've updated the story to reflect that.
|
|
|
AWS boss explains why investing billions in both Anthropic and OpenAI is an OK conflict | TechCrunch |
techcrunch |
08.04.2026 20:09 |
0.688
|
| Embedding sim. | 0.7832 |
| Entity overlap | 0.2759 |
| Title sim. | 0.168 |
| Time proximity | 0.8371 |
| NLP type | partnership |
| NLP organization | Amazon |
| NLP topic | enterprise ai |
| NLP country | United States |
Open original
AWS CEO Matt Garman said Amazon's recent $50 billion investment in OpenAI, on top of its long partnership with Anthropic that includes $8 billion of investment, is the type of conflict of interest the cloud giant is used to handling.
Garman has worked at Amazon since he was a business school intern in 2005, before the launch of AWS in 2006, he told the audience of the HumanX conference taking place this week in San Francisco.
When asked about the inherent conflict of working closely with two AI model companies that are fierce (and, arguably, sometimes petty) competitors, he said it's not a problem. Because AWS itself often competes with its partners, it has a lot of direct experience with such competition, he explained.
In AWS’s earliest years, it knew it couldn’t build every cloud offering itself, so the unit partnered with others.
“We also knew that we would have to compete with our partners, because technology is interconnected,” Garman recounted. “So, for a very long time, we’ve built this muscle up of how we go to market with our partners,” he continued. “But we also may even have first-party products that compete with them, and that’s okay, and we’ve promised them we won’t give ourselves unfair competitive advantage.”
Today, the world is used to Amazon competing with those who sell on its cloud. Even one of AWS’s biggest rivals, Oracle, sells its database and other services on AWS. But it was a radical idea back in 2006, when technology partners took pains never to compete with the partners that helped them succeed.
Still, Amazon is hardly a trailblazer in discarding investor loyalty and conflict-of-interest commitments in the wild, money-grabbing world of AI. When Anthropic announced its latest $30 billion round in February, it included at least a dozen investors who were also backing OpenAI. This included OpenAI’s main cloud partner, Microsoft.
For AWS, making a huge investment in OpenAI to gain its models for its customers (and as a technology development partner) was almost a matter of life and death. Both companies' models were already available on Microsoft's cloud, AWS's biggest rival.
The cloud giants are also working to keep themselves front and center by offering AI model-routing services. Those services allow their customers to automatically use different models for various tasks as a way to maximize performance and reduce costs. As Garman explained, one model might be ideal for planning, another for reasoning, and a cheaper model for easier tasks, like code completion. “I think that is where the world will go,” Garman said.
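The routing logic Garman describes can be sketched as a lookup over a model catalog: pick the cheapest model whose capability tier meets the task's needs. The model names, tiers, and costs below are invented for illustration, not any cloud provider's actual catalog or API.

```python
"""Minimal sketch of model routing: route each task kind to the
cheapest model that is capable enough. All names and numbers are
hypothetical."""

# Hypothetical catalog: capability tier (higher = stronger) and
# relative cost per 1K tokens.
MODELS = {
    "planner-xl":  {"tier": 3, "cost": 15.0},
    "reasoner-l":  {"tier": 2, "cost": 3.0},
    "completer-s": {"tier": 1, "cost": 0.25},
}

# Minimum capability tier each task kind requires.
TASK_TIER = {"planning": 3, "reasoning": 2, "code_completion": 1}


def route(task_kind: str) -> str:
    """Return the cheapest model whose tier meets the task's requirement."""
    needed = TASK_TIER[task_kind]
    eligible = [(name, spec["cost"]) for name, spec in MODELS.items()
                if spec["tier"] >= needed]
    return min(eligible, key=lambda pair: pair[1])[0]


print(route("code_completion"))  # cheap model handles the easy task
print(route("planning"))         # only the top-tier model qualifies
```

The commercial angle in the paragraph above falls out of this sketch: whoever controls the catalog decides which models are "eligible," which is how a cloud provider can steer traffic toward its homegrown models.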
That is also how Amazon, and Microsoft for that matter, will slip their own homegrown models into usage — that old competing-with-your-partners situation, again.
All’s fair in love and AI these days.
Julie Bort, Venture Editor, TechCrunch
|
|
|
Anthropic buys biotech startup Coefficient Bio in $400M deal: Reports | TechCrunch |
techcrunch |
03.04.2026 20:28 |
0.685
|
| Embedding sim. | 0.8005 |
| Entity overlap | 0.0286 |
| Title sim. | 0.0813 |
| Time proximity | 0.92 |
| NLP type | acquisition |
| NLP organization | Anthropic |
| NLP topic | drug discovery |
| NLP country | United States |
Open original
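Each entry in this digest carries four similarity signals (embedding similarity, entity overlap, title similarity, time proximity) alongside a single composite score. A plausible way to combine such signals is a weighted sum; the weights below are invented for illustration, since the aggregator's actual formula is not given here.

```python
# Hypothetical sketch: combine per-article similarity signals into one
# relevance score via a weighted sum. The weights are illustrative only.

def composite_score(embedding_sim: float, entity_overlap: float,
                    title_sim: float, time_prox: float,
                    weights=(0.6, 0.1, 0.1, 0.2)) -> float:
    """Weighted sum of the four similarity signals, rounded to 3 places."""
    signals = (embedding_sim, entity_overlap, title_sim, time_prox)
    return round(sum(w * s for w, s in zip(weights, signals)), 3)
```

With all signals at 1.0 the score is 1.0; in practice the embedding term tends to dominate under weights like these.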
Anthropic has purchased the stealth biotech AI startup Coefficient Bio in a $400 million stock deal, according to The Information and Eric Newcomer. Sources close to the deal confirmed to TechCrunch that it closed, though they declined to comment on the amount.
The deal comes as Anthropic continues its push into healthcare and life sciences, following its October announcement of Claude for Life Sciences, a tool that aims to help scientific researchers make discoveries.
Coefficient Bio’s founders, Samuel Stanton and Nathan C. Frey, launched the startup eight months ago, having both worked in computational drug discovery at Genentech’s Prescient Design. Coefficient Bio was using AI to help make drug discovery and other forms of biological research more efficient.
The team, consisting of around 10 people, is expected to join Anthropic’s health and life science team.
|
|
|
Meta is reentering the AI race with a new model called Muse Spark |
the_verge_ai |
08.04.2026 16:12 |
0.685
|
| Embedding sim. | 0.7965 |
| Entity overlap | 0.0769 |
| Title sim. | 0.1176 |
| Time proximity | 0.8678 |
| NLP type | product_launch |
| NLP organization | Meta Superintelligence Labs |
| NLP topic | foundation models |
| NLP country | United States |
Open original
Meta Superintelligence Labs is launching its first model since Mark Zuckerberg spent billions overhauling the company's AI efforts. Called Muse Spark, the model now powers the Meta AI app and the Meta AI website in the US, per the company's announcement. In the coming weeks, Meta says, it will appear in WhatsApp, Instagram, Facebook, Messenger, and Meta's smart glasses, as well as roll out in other countries.
Like Google Gemini, which easily integrates into Google's product suite, Meta touts Muse Spark as "purpose-built for Meta's products." The model, the first in a new series, will also be available to some of Meta's partners in "privat …
Read the full story at The Verge.
|
|
|
Anthropic debuts preview of powerful new AI model Mythos in new cybersecurity initiative | TechCrunch |
techcrunch |
07.04.2026 18:00 |
0.671
|
| Embedding sim. | 0.754 |
| Entity overlap | 0.1389 |
| Title sim. | 0.1644 |
| Time proximity | 0.9887 |
| NLP type | product_launch |
| NLP organization | Anthropic |
| NLP topic | artificial intelligence |
| NLP country | United States |
Open original
Anthropic on Tuesday released a preview of its new frontier model, Mythos, which it says will be used by a small coterie of partner organizations for cybersecurity work. In a previously leaked memo , the AI startup called the model one of its “most powerful” yet.
The model’s limited debut is part of a new security initiative, dubbed Project Glasswing, in which 12 partner organizations will deploy the model for the purposes of “defensive security work” and to secure critical software, Anthropic said. While it was not specifically trained for cybersecurity work, the model will be used to scan both first-party and open source software systems for code vulnerabilities, the company said.
Anthropic claims that, over the past few weeks, Mythos identified “thousands of zero-day vulnerabilities, many of them critical.” Many of the vulnerabilities are one to two decades old, the company added.
Mythos is a general-purpose model for Anthropic’s Claude AI systems that the company claims has strong agentic coding and reasoning skills. Anthropic’s frontier models are considered its most sophisticated and high-performance models , designed for more complex tasks, including agent-building and coding.
The partner organizations previewing Mythos as part of Project Glasswing include Amazon, Apple, Broadcom, Cisco, CrowdStrike, the Linux Foundation, Microsoft, and Palo Alto Networks. As part of the initiative, these partners will ultimately share what they’ve learned from using the model so that the rest of the tech industry can benefit from it. The preview is not going to be made generally available, Anthropic said, though a total of 40 organizations, including the 12 partners, will gain access to it.
Anthropic also claims that it has engaged in “ongoing discussions” with federal officials about the use of Mythos, although one would imagine those discussions are complicated by the fact that Anthropic and the Trump administration are currently locked in a legal battle after the Pentagon labeled the AI lab a supply-chain risk over Anthropic’s refusal to allow autonomous targeting or surveillance of U.S. citizens.
News of Mythos was originally leaked in a data security incident reported last month by Fortune . A draft blog about the model (then called “Capybara”) was left in an unsecured cache of documents available on a publicly inspectable data lake. The leak, which Anthropic subsequently attributed to “human error,” was originally spotted by security researchers. “‘Capybara’ is a new name for a new tier of model: larger and more intelligent than our Opus models — which were, until now, our most powerful,” the leaked document said, adding later that it was “by far the most powerful AI model we’ve ever developed,” according to the report.
In the leak, Anthropic claimed that its new model far exceeded the performance of its currently public models in areas like software coding, academic reasoning, and cybersecurity, and that it could potentially pose a cybersecurity threat if weaponized by bad actors to find bugs and exploit them (rather than fix them, which is how Mythos will be deployed).
Last month, the company accidentally exposed nearly 2,000 source code files and over half a million lines of code via a mistake it made in the launch of version 2.1.88 of its Claude Code software package. The company then accidentally caused thousands of code repositories on GitHub to be taken down as it attempted to clean up the mess.
Correction April 7, 2026: An earlier version of this article erroneously stated how many partners are working with Anthropic on Project Glasswing. There are 12 partner organizations, though 40 organizations total will have access to the Mythos preview.
Lucas Ropek
Senior Writer, TechCrunch
Lucas is a senior writer at TechCrunch, where he covers artificial intelligence, consumer tech, and startups. He previously covered AI and cybersecurity at Gizmodo.
|
|
|
Anthropic reveals $30bn run rate, plan to use new Google TPU |
the_register_ai |
07.04.2026 01:09 |
0.671
|
| Embedding sim. | 0.8107 |
| Entity overlap | 0.0769 |
| Title sim. | 0.1327 |
| Time proximity | 0.5436 |
| NLP type | partnership |
| NLP organization | Broadcom |
| NLP topic | ai infrastructure |
| NLP country | |
Open original
Systems
Anthropic reveals $30bn run rate and plans to use 3.5GW of new Google AI chips
Broadcom's building the silicon and is chuffed about that, but also notes Anthropic remains a risk
Simon Sharwood
Tue 7 Apr 2026 // 01:09 UTC
Broadcom has announced that Google has asked it to build next-generation AI and datacenter networking chips, and that Anthropic plans to consume 3.5GW worth of the accelerators it delivers to the ads and search giant.
News of the two deals emerged today in a Broadcom regulatory filing that opens with two items of news.
One is a “Long Term Agreement for Broadcom to develop and supply custom Tensor Processing Units (“TPUs”) for Google’s future generations of TPUs.” Google and Broadcom have collaborated to produce custom TPUs. Broadcom CEO Hock Tan recently shared his opinion that hyperscalers don’t have the skill to create custom accelerators and predicted Broadcom’s chip business will therefore win over $100 billion of revenue from AI chips in 2027 alone.
Working on next-gen TPUs for Google will presumably help to make that prediction a reality.
So will the second part of Broadcom’s announcement: a “Supply Assurance Agreement for Broadcom to supply networking and other components to be used in Google’s next-generation AI racks through up to 2031.”
Broadcom’s filing also revealed one user of Google’s next-gen TPU will be Anthropic, which starting in 2027, “will access through Broadcom approximately 3.5 gigawatts as part of the multiple gigawatts of next generation TPU-based AI compute capacity committed by Anthropic.”
The filing includes the following notable statement:
The consumption of such expanded AI compute capacity by Anthropic is dependent on Anthropic’s continued commercial success. In connection with this deployment, the parties are in discussions with certain operational and financial partners.
That sounds an awful lot like Broadcom putting on the record that the financial arrangements needed to deploy 3.5GW worth of custom TPUs for Anthropic carry enough risk to warrant disclosure in a regulatory filing.
In its announcement about the deal, Anthropic seemingly tries to reassure markets about its financial affairs by revealing that “Our run-rate revenue has now surpassed $30 billion—up from approximately $9 billion at the end of 2025.”
“When we announced our Series G fundraising in February, we shared that over 500 business customers were each spending over $1 million on an annualized basis,” Anthropic wrote. “Today that number exceeds 1,000, doubling in less than two months.”
Yet Broadcom still worries about the AI upstart.
Google’s take on the announcements points out that in addition to renting TPUs, Anthropic is a big Google Cloud customer.
Anthropic pointed out that it also uses AWS’s Trainium AI chips, plus Nvidia kit, so it can “match workloads to the chips best suited for them.” ®
|
|
|
[Translation] Apple removed AI apps, Epic's CEO is furious, and here's why it's the best thing to happen to your iPhone |
habr_ai |
11.04.2026 06:21 |
0.67
|
| Embedding sim. | 0.7936 |
| Entity overlap | 0 |
| Title sim. | 0.0173 |
| Time proximity | 0.9116 |
| NLP type | regulation |
| NLP organization | Apple |
| NLP topic | ai safety |
| NLP country | |
Open original
Apple just did something the AI world did not expect at all. It threw "vibe-coding" AI apps out of the App Store, essentially declaring: we don't trust what these tools have generated.
If you were betting that SaaS stocks would crash even harder on the pure fear that AI is about to destroy every software company, Apple just showed you why that fear is irrational.
I dug through several authoritative reports on software industry trends, security audits, and academic studies to understand what actually happens when AI-generated code meets the real world.
I found four irrefutable reasons, and plenty of supporting data, why the "AI will kill SaaS" argument doesn't hold up to scrutiny. I'll walk you through each of them.
And yet the stock market is pricing in exactly the opposite right now.
But first: what exactly Apple did, and the detail most publications missed.
Read more
|
|
|
Amazon CEO takes aim at Nvidia, Intel, Starlink, more in annual shareholder letter | TechCrunch |
techcrunch |
09.04.2026 14:27 |
0.67
|
| Embedding sim. | 0.7698 |
| Entity overlap | 0.2 |
| Title sim. | 0.2017 |
| Time proximity | 0.7281 |
| NLP type | other |
| NLP organization | Amazon |
| NLP topic | ai hardware |
| NLP country | United States |
Open original
Amazon CEO Andy Jassy’s annual shareholder letter reads something like a Kendrick Lamar diss track, if the rapper were a corporate-speak-talking CEO and not a poetic Pulitzer Prize-winning musician.
Meaning, you have to know the history to understand all of the competitors Jassy takes aim at, alongside cute personal stories about his unrealized dream of being a sportscaster and watching hockey games with his dad.
Of course, Jassy doesn’t throw the gauntlet down directly. He takes a more nuanced approach. For instance, in his challenge to Nvidia, he writes, “We have a strong partnership with NVIDIA, will always have customers who choose to run NVIDIA” and will always support these chips in its cloud.
But he also says: “Virtually all AI thus far has been done on NVIDIA chips, but a new shift has started.” AWS customers, he says, “want better price-performance” meaning Amazon’s own home-grown Trainium AI chips.
Jassy says demand is so high for this chip that capacity for the newest one, Trainium3, is nearly sold out. Remarkably, he says that capacity is also nearly sold out for Trainium4, which is still 18 months away from being available.
This means that Amazon’s chip business has hit a $20 billion annual revenue run rate. But if Amazon were a chipmaker that sold its wares to others, it would be at $50 billion ARR, he postulates.
Granted, Nvidia did $215.9 billion in actual revenue last year. Nvidia may not be shaking in its boots, yet. Still, Jassy presents Trainium as a formidable up-and-comer.
Jassy didn’t spare Intel either. He points out that AWS’s homegrown Graviton CPU, a competitor to the Intel x86 architecture, “is now used expansively by 98% of the top 1,000 EC2 customers,” aka some of the biggest companies in the world. Two companies even asked to “buy all of our Graviton instance capacity in 2026,” he writes (emphasis his). “We can’t agree to these requests given other customers’ needs, but it gives you an idea of the demand.”
He promised that Amazon’s Starlink competitor, Amazon Leo, scheduled to launch in mid-2026, is already succeeding, too. It’s won contracts from Delta Airlines, AT&T, Vodafone, Australia’s National Broadband Network, and NASA, among others.
Interestingly, he also said Amazon could be looking at selling robotics one day. It may turn all the data from its 1 million warehouse robots into “robotics solutions” for industrial uses and consumers, he wrote. Is there an Amazon humanoid in our future? We’ll see. He talked up other Amazon businesses, too, like same-day delivery, groceries, and drones.
But mostly, Jassy tried to make the case for the hundreds of billions of dollars of capital expenditures he’s committed. In February, he announced plans to spend $200 billion in 2026 on capex , mostly building out AWS data centers. That’s more than any of the other major tech companies, which are also spending big on capex. Jassy’s pitch to shareholders makes sense, considering Amazon’s stock plunged to below $200 a share and hasn’t recovered.
“We’re not investing approximately $200 billion in capex in 2026 on a hunch,” he wrote, using as an example that his deal with OpenAI included the model maker pledging to spend $100 billion on AWS. Of course, there are those who doubt OpenAI will meet all of its spending promises .
In a nod to that, Jassy insists that beyond OpenAI, “there are several other customer agreements completed (and unannounced), or deep in process,” lined up to buy the AWS capacity.
We’ll have to wait and see. Those who cause a bubble are never the ones who see (or admit to) its existence. “I’ve followed the public debate on whether this technology is over-hyped, whether we’re in ‘a bubble.’” But he declares in this letter that, for Amazon at least, this isn’t the case.
Correction: This story has been updated to reflect that the estimated ARR of Amazon’s custom chip business includes multiple chips beyond Trainium.
|
|
|
Astropad's Workbench reimagines remote desktop for AI agents, not IT support | TechCrunch |
techcrunch |
08.04.2026 16:01 |
0.666
|
| Embedding sim. | 0.7479 |
| Entity overlap | 0.1111 |
| Title sim. | 0.1729 |
| Time proximity | 0.982 |
| NLP type | product_launch |
| NLP organization | Astropad |
| NLP topic | ai agents |
| NLP country | United States |
Open original
Demand for Apple’s Mac Mini has skyrocketed , particularly in China , as the small computer has become an ideal platform for experimenting with autonomous AI agents like OpenClaw and others. Now, a company called Astropad is building out a remote desktop solution specifically for this use case.
On Tuesday, Astropad CEO Matt Ronge introduced Astropad Workbench , a remote desktop solution for Apple devices that he pitches as made “for the AI era.”
While an AI agent running on a Mac Mini may not need a screen, its operator (the human) will want to log in at times to see what’s happening in order to check logs, monitor outputs, or restart stuck tasks, he says.
Image Credits: Astropad
The new remote desktop solution offers a variety of features, including high-fidelity streaming; the ability to dictate prompts and commands with your voice; support for other input methods like the keyboard, Apple Pencil, or touch; and clients for both the iPad and iPhone — the latter essentially putting the remote desktop solution into your pocket for on-the-go access.
If you’re running AI agents across multiple Macs, Workbench offers a device chooser so you can move between them.
The idea came about because it was something the team at Astropad had wanted for themselves, as had their friends.
“We have heavily adopted AI at Astropad, and we’ve been using agents. And sometimes, you have an agent running on a long task, and you want to check on it,” says Ronge. “There’s not a great way to do this…there were existing remote desktop tools, but nothing built specifically for this,” he continues. “There have also been ways where you can use a terminal, or there are things like Telegram chats, but they’re limited. I mean, there are times you’ve got to see what’s happening on your Mac. You’ve got to approve a dialog or save something, or just visually see what’s happening.”
Workbench also leverages LIQUID, the company’s proprietary low-latency display protocol, which supports the workflows creative professionals use. It retains full fidelity, even at Retina resolutions, Astropad claims, and doesn’t blur lines or pixelate data. The protocol already powers Astropad’s other products, like Luna Display, which turns your iPad into a second display, and Astropad Studio, which lets you use an iPad as a professional drawing tablet.
While monitoring an AI agent may not always need a high-fidelity solution, Ronge points out that it’s something that’s nice to have — especially if you’re approving designs or mock-ups your AI agent made.
Of course, remote desktop software has existed for some time, meaning Astropad has well-established rivals like Jump Desktop, RustDesk, AnyDesk, Parsec, VNC-based solutions, and many more.
But Ronge suggests that those weren’t designed for the specific needs of using remote desktop software to keep tabs on AI agents. With Workbench, it’s easy to check on the status of logs to see your AI agents’ progress in order to spot issues, restart stalled jobs, and make other changes, but what’s more, you can do this from your iPhone or iPad.
“We’ve been doing iPad stuff for years — it’s been, like, our whole company for the past 10 years. So we have a lot of experience in making good iPad apps,” Ronge says. “We know how to make good iOS apps…so we did that, and then we also added a voice model.”
The tech uses Apple’s voice model so you can talk to your phone and direct your AI agent to do something with a press of the microphone button.
“It’s a very natural way to work with agents. That’s the kind of feature that existing remote desktop [apps] just don’t have — they’re built for more traditional, enterprise-style remote desktop.”
As a new release, there will still be some bugs and polishing needed, but the team is continuing to work on the product. Next up, they plan to launch Windows and Linux support and refine the iPhone app.
The new software runs on macOS 15 and up and iOS 26, and is available as a free download offering 20 minutes of access per day. For unlimited access, the cost is $10 per month, or $50 per year.
Astropad, a bootstrapped and profitable small tech business, has over 100,000 customers, including those who have bought its iPad hardware accessories and its software. With Workbench, Ronge believes the company has the potential to reach both AI enthusiasts and businesses as remote support for AI agents becomes more common.
“I totally think businesses are gonna buy it. I mean, just the productivity gains I’m seeing from it myself — this is totally headed to businesses. It’s just too powerful,” he notes.
Sarah Perez
Consumer News Editor
Sarah has worked as a reporter for TechCrunch since August 2011. She joined the company after having previously spent over three years at ReadWriteWeb. Prior to her work as a reporter, Sarah worked in I.T. across a number of industries, including banking, retail and software.
|
|
|
Firmus, the 'Southgate' AI data center builder backed by Nvidia, hits $5.5B valuation | TechCrunch |
techcrunch |
07.04.2026 18:46 |
0.665
|
| Embedding sim. | 0.7602 |
| Entity overlap | 0.0385 |
| Title sim. | 0.1208 |
| Time proximity | 0.984 |
| NLP type | funding |
| NLP organization | Firmus |
| NLP topic | ai infrastructure |
| NLP country | Singapore |
Open original
Asia AI data center provider Firmus on Monday announced a fresh $505 million raise led by Coatue at a $5.5 billion post-money valuation. With this round, Firmus has raised $1.35 billion in six months, it says.
The Singapore-based data center company previously raised AU$330 million (approximately $215 million) at an AU$1.85 billion ($1.2 billion) valuation from investors, including Nvidia.
Firmus is developing an energy-efficient “AI factory” network of data centers in Australia and Tasmania, a project it dubs Project Southgate. It is using Nvidia’s reference designs for building these efficient data centers. These new data centers will use Nvidia’s Vera Rubin platform — the chip giant’s next-gen AI computing system succeeding its Blackwell architecture, expected to ship in the second half of 2026.
Firmus originally provided cooling technologies for Bitcoin mining and has become yet another crypto-roots-turned-AI provider company that investors love.
|
|
|
Atlassian launches visual AI tools and third-party agents in Confluence | TechCrunch |
techcrunch |
08.04.2026 13:00 |
0.653
|
| Embedding sim. | 0.7484 |
| Entity overlap | 0.0426 |
| Title sim. | 0.1528 |
| Time proximity | 0.8869 |
| NLP type | product_launch |
| NLP organization | Atlassian |
| NLP topic | ai agents |
| NLP country | United States |
Open original
Software giant Atlassian announced new AI tools and agents on Wednesday, with a focus on turning data into visual assets and applications.
This includes the rollout of the visual tool Remix in open beta. Remix allows enterprises to turn the data and information stored in Atlassian’s content collaboration software Confluence into assets including charts and graphics.
Remix will recommend which visual format makes the most sense for the data or information at hand and create these visual assets without requiring users to open another application.
The company also announced three new third-party agents that run within Confluence using model context protocols (MCPs).
One agent connects Confluence users to the vibe-coding darling Lovable to turn product ideas and data into working prototypes. Another agent connects to app builder software Replit and allows users to convert technical documents into starter apps. The third agent works with AI presentation builder Gamma to build slides and other presentation materials.
“With Remix and agents in Confluence, a single page becomes the starting point for whatever comes next: a clear story for leaders, a prototype for builders, or a walkthrough for customers, all from the same source of truth,” Sanchan Saxena, senior vice president of teamwork collaboration at Atlassian, wrote in a blog post announcing the features. “When you remove that friction, teams do more than manage documents; they create the next generation of products and experiences.”
The new tools are the latest in Atlassian’s push to incorporate AI agents and tools directly into the apps workers are already using, as opposed to launching new software platforms. In February, the company added AI agents to its product management software Jira.
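The third-party-agents-over-MCP pattern described above boils down to a host application exposing a registry of named tools that external agents can discover and then invoke by name. A minimal conceptual sketch in Python (hypothetical; this is not Atlassian's implementation, and all names here are invented for illustration):

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict, List


@dataclass
class Tool:
    """A named capability a host app exposes to external agents."""
    name: str
    description: str
    handler: Callable[..., Any]


class ToolRegistry:
    """Toy registry in the spirit of MCP: agents first discover
    available tools, then invoke one by name with arguments."""

    def __init__(self) -> None:
        self._tools: Dict[str, Tool] = {}

    def register(self, tool: Tool) -> None:
        self._tools[tool.name] = tool

    def list_tools(self) -> List[dict]:
        # Discovery step: an agent asks what the host can do.
        return [{"name": t.name, "description": t.description}
                for t in self._tools.values()]

    def call(self, name: str, **kwargs: Any) -> Any:
        # Invocation step: the agent calls a tool by name.
        return self._tools[name].handler(**kwargs)


# Hypothetical Confluence-style tool: turn a page into a prototype spec.
registry = ToolRegistry()
registry.register(Tool(
    name="page_to_prototype",
    description="Turn a page's text into a prototype spec",
    handler=lambda page_text: {"spec": page_text.upper()},
))

print(registry.list_tools()[0]["name"])  # page_to_prototype
print(registry.call("page_to_prototype", page_text="todo app")["spec"])  # TODO APP
```

The real protocol adds transports, JSON schemas for tool arguments, and permissions, but the discover-then-invoke shape is the core idea that lets Lovable, Replit, and Gamma plug into Confluence without bespoke integrations.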
This follows a trend across the industry of companies looking to embed AI tools and agents directly into existing workflows as opposed to launching separate AI-powered software. While Salesforce was one of the first enterprises to launch a separate AI agent management platform, Agentforce , in 2024, it has since released many of its AI innovations through existing software like its recent upgrade that turned messaging service Slack’s chatbot into an AI agent .
OpenAI is also leaning into this movement through its recent Frontier Alliances initiative . OpenAI partnered with four major consultant firms to task consultants with embedding OpenAI’s tech into their clients’ existing tech stacks and workflows as opposed to just selling them ChatGPT Enterprise subscriptions.
“Technology should fade into the background and let people focus on their best work,” Saxena wrote in the company’s blog post.
Rebecca Szkutak
Senior Reporter, Venture
|
|
|
The Pro-Iran Meme Machine Trolling Trump With AI Lego Cartoons |
wired |
09.04.2026 13:25 |
0.651
|
| Embedding sim. | 0.7524 |
| Entity overlap | 0.05 |
| Title sim. | 0.0417 |
| Time proximity | 0.985 |
| NLP type | other |
| NLP organization | Explosive Media |
| NLP topic | generative ai |
| NLP country | Iran |
Open original
David Gilbert
Politics
Apr 9, 2026 9:25 AM
The Pro-Iran Meme Machine Trolling Trump With AI Lego Cartoons
Since the beginning of the Iran war, the group Explosive Media has released over a dozen viral videos mocking Trump and the US.
Photograph: Niall Carson/Getty Images
Minutes after President Donald Trump announced that he would not wipe out “a whole civilization” on Tuesday evening, a team of self-described young Iranian activists jumped into action.
Members of the group known as Explosive Media were putting the finishing touches on their latest AI-generated, Lego -inspired Trump video. The video features a Trump mini-figure colluding with leaders from Gulf states, Iranian officials pressing a big red button labeled “back to the stone age,” and Trump throwing a chair at US generals.
This was the latest of more than a dozen videos the pro-Iran group has released since the beginning of the war in February, many of which have racked up millions of views on mainstream platforms. While Iranian government accounts have posted Lego-style videos in the past, Explosive Media’s content is more sophisticated and scripted. And it's produced by a team of young pro-Iranian creators who appear deeply knowledgeable about the internet and American culture. Already some critics have alleged the group has ties to the Iranian government.
“We were almost certain Trump would back down; it was clear to us,” a member of the Explosive Media team, who did not want to publicly identify themselves, tells WIRED. “We were prepared for this scenario and had content ready in advance. We just made a few adjustments and released it.”
The team even added mention of the 10-point plan Iran proposed as part of its recent ceasefire agreement. As the video concludes, a Lego Trump sits next to the document, sobbing while holding a white flag and eating a taco—a knowing reference to the acronym for “ Trump always chickens out .”
Within hours of Trump’s announcement, the video was published on Explosive Media’s X account and Telegram channel, where it had the caption: “IRAN WON! The way to crush imperialism has been shown to the world. Trump Surrendered. TACO will always remain TACO.”
While the Trump administration has been posting memes that intercut war footage with movie clips that appeal to a narrow audience of loyal followers, Explosive Media’s Lego videos have reached a much broader audience in the US—some of whom clearly liked what they saw.
“We’ve committed ourselves to learning more every day about American people and culture,” the Explosive Media team member tells WIRED. “In this process, Americans themselves have been helping us—and that support and guidance continues. They share impactful tips and ideas with us.”
Explosive Media began in 2025 as a YouTube channel featuring political commentary delivered by a young Iranian man. The content never gained traction, with most videos racking up only a couple hundred views.
But all that changed in February, when the group began posting Lego-inspired videos, with the team scripting, producing, and editing each video using AI tools. (The group would not reveal which AI tools it was using.)
The videos quickly took hold on platforms like TikTok, X, and Instagram.
“People are disengaging from some of the real conflict content and looking for something that can distill what's happening quickly and in a language and tone that they understand and that's what those Lego videos are doing,” Moustafa Ayad, a researcher with the Institute of Strategic Dialogue who has closely tracked the online content being shared by Iranian groups during the war, tells WIRED. “They're making it easily accessible to understand the conflict from Iran's point of view, and it's hitting on points of disaffection in the United States at the same time. It's working on two fronts.”
Iran has previously used Lego-style videos in war propaganda. Back in 2024, according to Ayad, the Islamic Revolutionary Guard Corps shared links to a Lego video, and during the Twelve-Day War in 2025, Iranian state media proclaimed victory over Israel in another Lego video .
But none of those bore the sophistication or cultural insight of the videos currently being shared by Explosive Media.
In one video, the group depicts Trump ordering the strikes on Iran after reviewing an “Epstein File” while standing alongside Satan and Israeli prime minister Benjamin Netanyahu. In another, Iranian missiles bearing the names of everyone from Malcolm X to the victims of Jeffrey Epstein are shown being fired at the US. In a short video posted over the weekend, the group showed Trump holding up a victory sign while his behind was on fire. Many of the videos also feature catchy music including original rap tracks in English. The group has even set up its own Spotify page where it has posted the songs from the videos.
The group claims that it is not associated with the Iranian regime, but its pro-regime stance, coupled with the fact it seemingly has internet access in a country that is virtually cut off from the global internet , may suggest otherwise. “Seeing as how the regime in Iran has effectively cut off the internet to everyone else, I think you'd have to be pretty close to the government to have access to the internet,” says Ayad.
The group told WIRED that it obtained internet access because it was viewed as a media organization, claiming over 2.5 million followers on different “Iranian messaging channels.”
The Iranian regime has leveraged social media to push its side of the conflict to a non-Iranian audience, often combining AI and humor.
After Trump warned Iranians that they would soon be “living in Hell” this week, the country’s embassy in Zimbabwe posted on X suggesting they had lost the keys to the Strait of Hormuz. The Iranian embassy in Tunisia posted an AI-video of Trump walking off Air Force One holding a large white flag.
“All that stuff has been really demonstrating both Iranian capabilities to understand what American audiences gravitate towards and also a firm understanding how the internet functions, especially social media platforms and what gets the most attention, what gets people re-sharing the content, engaging with the content,” says Ayad. “And they've done that well with the Lego videos. I don't think there's another piece of media that has come out of this conflict that has been talked about, re-shared as much as the Lego videos.”
|
|
|
20+ Best Anthropic AI Tools and Guides |
habr_ai |
10.04.2026 10:43 |
0.632
|
| Embedding sim. | 0.7116 |
| Entity overlap | 0.2308 |
| Title sim. | 0.1205 |
| Time proximity | 0.9093 |
| NLP type | other |
| NLP organization | Anthropic |
| NLP topic | artificial intelligence |
| NLP country | |
Open original
All the best from Anthropic in one place: models, prompts, and guides. Suitable for entrepreneurs, content creators, software developers, and anyone interested in neural networks.
Read more
|
|
|
BrainStorm Cell Therapeutics Announces Full Year 2025 Financial Results and Provides Corporate Update |
prnewswire |
31.03.2026 20:01 |
0.632
|
| Embedding sim. | 0.7154 |
| Entity overlap | 0.1212 |
| Title sim. | 0.1858 |
| Time proximity | 0.8366 |
| NLP type | earnings |
| NLP organization | BrainStorm Cell Therapeutics Inc. |
| NLP topic | drug discovery |
| NLP country | United States |
Open original
BrainStorm Cell Therapeutics Announces Full Year 2025 Financial Results and Provides Corporate Update
News provided by
BrainStorm Cell Therapeutics Inc.
Mar 31, 2026, 16:01 ET
Strategic $2.0 million post year-end financing and re-rating of BCLI shares underscore market confidence
Company highlights durable long-term survival in Expanded Access Program (EAP) cohort supporting Phase 3b patient selection and inclusion criteria
NEW YORK , March 31, 2026 /PRNewswire/ -- BrainStorm Cell Therapeutics Inc. (OTCQB: BCLI ), a leading developer of adult stem cell therapeutics for neurodegenerative diseases, today announced financial results for the full year ended December 31, 2025, and provided a corporate update.
"Our main priority continues to be advancing NurOwn into a Phase 3b confirmatory study, having received clearance from the FDA in 2025 for the planned study," said Chaim Lebovits, President and CEO. "This study is designed to generate confirmatory data to support a potential BLA submission. Having completed key regulatory steps, the SPA, CMC alignment, and the May 2025 FDA clearance, we believe we have addressed some of the primary hurdles in our development program and positioned the company for success. Our network of leading clinical sites are on standby, and we remain actively engaged with regulators. Further, we believe that the $2.0 million in strategic financing secured in early 2026 has served as the catalyst for our recovery, resulting in a significant re-rating of our shares and providing the cash runway to execute on key operational activities. ALS is a devastating disease with limited treatment options, and we remain committed to the ALS community as we work to complete development of NurOwn and make it available to patients."
Recent Highlights
NurOwn (MSC-NTF) for ALS
• Company continues to execute on operational, clinical and manufacturing activities to support the Phase 3b ENDURANCE study of NurOwn. The planned study, pending funding, is expected to enroll approximately 200 participants at leading ALS centers and is designed as a two-part trial consisting of a 24-week randomized, double-blind, placebo-controlled portion (Part A), followed by a 24-week open-label extension (Part B) to further assess long-term safety and durability of effect. The primary efficacy endpoint for Part A will be change from baseline to Week 24 on the ALSFRS-R. If successful, the Company believes these data could support a potential BLA submission. Further trial details are posted on www.ClinicalTrials.gov , ID NCT06973629 .
A Citizen Petition was filed with the FDA by representatives of the ALS community. The petition requests a renewed regulatory review of the data supporting NurOwn. BrainStorm acknowledges the petition as a constructive development that underscores the continued interest in NurOwn's potential therapeutic value. The Company did not participate in the drafting or submission of this petition.
In the Expanded Access Program (EAP), 50% of patients (5/10) remain alive according to publicly available data. Estimated mean survival has reached approximately 84 months. At the time of the Citizen Petition filing (July 2025), all five surviving patients were tracheostomy-free, as documented in the petition. All 10 EAP patients exceeded 5-year survival versus ~20% historically. Notably, these patients entered into the Phase 3 trial with higher baseline function (mean ALSFRS-R of approximately 41), reinforcing the strategic focus of the Phase 3b trial on earlier-stage populations where the treatment effect may be most detectable.
Corporate
In February 2026, the company entered into two strategic private placement agreements, each consisting of stock and warrants, securing a total of $2 million in funding. Together, the company hopes these financings will reinforce a stable valuation for the company and provide the resources to support near-term operational objectives and preparatory work for the planned Phase 3b ENDURANCE trial of NurOwn.
Financial Results for the full year Ended December 31, 2025
Cash, cash equivalents, and restricted cash were approximately $0.3 million as of December 31, 2025. Subsequent to year-end, the company secured an additional $2 million in funding through two private placements, as described above.
Research and development expenditures, net, for the year ended December 31, 2025 were $4.2 million, compared to $4.7 million for the year ended December 31, 2024.
General and administrative expenses for the twelve months ended December 31, 2025, and 2024 were approximately $5.8 million and $7.0 million, respectively.
Net loss for the twelve months ended December 31, 2025, was approximately $10.3 million, as compared to a net loss of approximately $11.6 million for the twelve months ended December 31, 2024.
Net loss per share for the twelve months ended December 31, 2025, and 2024 was $1.11 and $2.31, respectively.
About NurOwn®
The NurOwn® technology platform (autologous MSC-NTF cells) represents a promising investigational therapeutic approach to targeting disease pathways important in neurodegenerative disorders. MSC-NTF cells are produced from autologous, bone marrow-derived mesenchymal stem cells (MSCs) that have been expanded and differentiated ex vivo. MSCs are converted into MSC-NTF cells by growing them under patented conditions that induce the cells to secrete high levels of neurotrophic factors (NTFs). Autologous MSC-NTF cells are designed to effectively deliver multiple NTFs and immunomodulatory cytokines directly to the site of damage to elicit a desired biological effect and ultimately slow or stabilize disease progression.
About BrainStorm Cell Therapeutics Inc. BrainStorm Cell Therapeutics Inc. (OTCQB: BCLI ) is a leading developer of autologous adult stem cell therapies for debilitating neurodegenerative diseases. The company's proprietary NurOwn® platform uses autologous mesenchymal stem cells (MSCs) to produce neurotrophic factor-secreting cells (MSC-NTF cells), designed to deliver targeted biological signals that modulate neuroinflammation and promote neuroprotection.
NurOwn® is BrainStorm's lead investigational therapy for amyotrophic lateral sclerosis (ALS) and has received Orphan Drug designation from both the U.S. Food and Drug Administration (FDA) and the European Medicines Agency (EMA). A Phase 3 trial in ALS (NCT03280056) has been completed, and a second Phase 3b trial is set to launch under a Special Protocol Assessment (SPA) agreement with the FDA. The NurOwn clinical program has generated valuable insights into ALS disease biology, including pharmacogenomic response associated with the UNC13A genotype, biomarker data collected at seven longitudinal time points, and a comprehensive analysis of the "Floor Effect" — a critical challenge in measuring clinical outcomes in advanced ALS. BrainStorm has published its findings in multiple peer-reviewed journals. In addition to ALS, BrainStorm has completed a Phase 2 open-label multicenter trial (NCT03799718) of MSC-NTF cells in progressive multiple sclerosis (MS), supported by a grant from the National MS Society. BrainStorm is also advancing a proprietary, allogeneic exosome-based platform designed to deliver therapeutic proteins and nucleic acids. The company recently received a Notice of Allowance from the U.S. Patent and Trademark Office for a foundational patent covering its exosome technology, further strengthening BrainStorm's growing IP portfolio in this emerging area of regenerative medicine. To learn more, visit www.brainstorm-cell.com .
Notice Regarding Forward-Looking Statements This press release contains "forward-looking statements" that are subject to substantial risks and uncertainties, including statements regarding meetings with the U.S. Food and Drug Administration (FDA), Special Protocol Assessment (SPA), the clinical development of NurOwn as a therapy for the treatment of ALS, the future availability of NurOwn to patients, the Phase 3b ENDURANCE study of NurOwn and the future success of BrainStorm. All statements, other than statements of historical fact, contained in this press release are forward-looking statements. Forward-looking statements contained in this press release may be identified by the use of words such as "anticipate," "believe," "contemplate," "could," "estimate," "expect," "intend," "seek," "may," "might," "plan," "potential," "predict," "project," "target," "aim," "should," "will" "would," or the negative of these words or other similar expressions, although not all forward-looking statements contain these words. Forward-looking statements are based on BrainStorm's current expectations and are subject to inherent uncertainties, risks and assumptions that are difficult to predict. These potential risks and uncertainties include, without limitation, management's ability to successfully achieve its goals, BrainStorm's ability to raise additional capital, BrainStorm's ability to continue as a going concern, prospects for future regulatory approval of NurOwn, whether BrainStorm's future interactions with the FDA will have productive outcomes, whether Brainstorm can successfully fund and complete the Phase 3b ENDURANCE study of NurOwn and other factors detailed in BrainStorm's annual report on Form 10-K available at http://www.sec.gov . These factors should be considered carefully, and readers should not place undue reliance on BrainStorm's forward-looking statements. 
The forward-looking statements contained in this press release are based on the beliefs, expectations, and opinions of management as of the date of this press release. We do not assume any obligation to update forward-looking statements to reflect actual results or assumptions if circumstances or management's beliefs, expectations or opinions should change, unless otherwise required by law. Although we believe that the expectations reflected in the forward-looking statements are reasonable, we cannot guarantee future results, levels of activity, performance, or achievements.
CONTACTS Investors: Michael Wood Phone: +1 646-597-6983 [email protected]
Media: Uri Yablonka, Chief Business Officer Phone: +1 917-284-2911 [email protected]
BRAINSTORM CELL THERAPEUTICS INC.
CONSOLIDATED BALANCE SHEETS
U.S. dollars in thousands (except share data)
| | December 31, 2025 | December 31, 2024 |
| ASSETS | | |
| Current Assets: | | |
| Cash and cash equivalents | $29 | $187 |
| Other accounts receivable | 86 | 63 |
| Prepaid expenses and other current assets | 192 | 135 |
| Total current assets | $307 | $385 |
| Long-Term Assets: | | |
| Other long-term assets | $25 | $22 |
| Restricted cash | 247 | 184 |
| Operating lease right of use asset (Note 4) | 208 | 807 |
| Property and equipment, net (Note 5) | 235 | 434 |
| Total long-term assets | $715 | $1,447 |
| Total assets | $1,022 | $1,832 |
| LIABILITIES AND STOCKHOLDERS' EQUITY (DEFICIT) | | |
| Current Liabilities: | | |
| Accounts payable | $7,067 | $6,080 |
| Accrued expenses | 396 | 619 |
| Short-term loans (Note 11) | 967 | 300 |
| Operating lease liability (Note 4) | 208 | 549 |
| Employees related liability | 2,369 | 1,430 |
| Total current liabilities | $11,007 | $8,978 |
| Long-Term Liabilities: | | |
| Operating lease liability (Note 4) | — | 171 |
| Warrants liability (Note 7) | — | 447 |
| Total long-term liabilities | $— | $618 |
| Total liabilities | $11,007 | $9,596 |
| Stockholders' Deficit: | | |
| Stock capital (Note 8) | 16 | 14 |
| Additional paid-in capital | 227,058 | 218,974 |
| Treasury stock | (116) | (116) |
| Accumulated deficit | (236,943) | (226,636) |
| Total stockholders' deficit | $(9,985) | $(7,764) |
| Total liabilities and stockholders' deficit | $1,022 | $1,832 |
Common Stock of $0.00005 par value. Authorized: 250,000,000 shares at December 31, 2025 and 100,000,000 shares at December 31, 2024; issued and outstanding: 11,034,775 and 6,141,762 shares at December 31, 2025 and December 31, 2024, respectively.
The accompanying notes are an integral part of the consolidated financial statements.
BRAINSTORM CELL THERAPEUTICS INC.
CONSOLIDATED STATEMENTS OF COMPREHENSIVE LOSS
U.S. dollars in thousands (except share data)
| | Year ended December 31, 2025 | Year ended December 31, 2024 |
| Operating expenses: | | |
| Research and development | $4,175 | $4,651 |
| General and administrative | 5,778 | 7,042 |
| Operating loss | (9,953) | (11,693) |
| Financial expense, net | 533 | 77 |
| Gain on change in fair value of warrants liability (Note 7) | 179 | 147 |
| Net loss | $(10,307) | $(11,623) |
| Basic and diluted net loss per share | $(1.11) | $(2.31) |
| Weighted average number of shares outstanding used in computing basic and diluted net loss per share | 9,268,976 | 5,021,798 |
The accompanying notes are an integral part of the consolidated financial statements.
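The statement lines above are internally consistent, which can be verified with a quick arithmetic check. A minimal sketch in Python using the FY2025 figures quoted in the release (values in US$ thousands):

```python
# Sanity check of BrainStorm's FY2025 comprehensive-loss arithmetic,
# using the figures quoted in the press release (US$ thousands).
rd, ga = 4_175, 5_778                  # R&D and G&A expense
operating_loss = -(rd + ga)            # expenses sum to the operating loss
fin_expense, warrant_gain = 533, 179   # financial expense, net; warrants gain
net_loss = operating_loss - fin_expense + warrant_gain
shares = 9_268_976                     # weighted average shares outstanding
loss_per_share = net_loss * 1_000 / shares  # convert thousands to dollars

print(operating_loss)            # -9953
print(net_loss)                  # -10307
print(round(loss_per_share, 2))  # -1.11
```

Each computed figure matches the reported line item (operating loss of $9,953K, net loss of $10,307K, and $1.11 net loss per share).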
Logo - https://mma.prnewswire.com/media/1166536/5894126/BrainStorm_Logo.jpg
SOURCE BrainStorm Cell Therapeutics Inc.
|
|
|
Google makes it easy to deepfake yourself |
the_verge_ai |
09.04.2026 10:53 |
0.63
|
| Embedding sim. | 0.7346 |
| Entity overlap | 0.0455 |
| Title sim. | 0.022 |
| Time proximity | 0.9299 |
| NLP type | product_launch |
| NLP organization | YouTube |
| NLP topic | generative ai |
| NLP country | |
Open original
YouTube Shorts is rolling out a new AI-powered feature giving creators an easy way to realistically clone themselves on camera. The launch, hinted at earlier this year , reflects the platform's fraught relationship with AI-generated content, adding more generative features while struggling to contain AI slop , deepfake scams, and impersonations.
YouTube says the new tool will let users create a digital version of themselves, called an avatar, that can be inserted into existing Shorts videos or used to generate entirely new ones. The company said avatars will "look and sound like you," framing them as a safer and more secure way to use AI to …
Read the full story at The Verge.
|
|
|
Poke makes using AI agents as easy as sending a text | TechCrunch |
techcrunch |
08.04.2026 21:05 |
0.629
|
| Embedding sim. | 0.7002 |
| Entity overlap | 0.0244 |
| Title sim. | 0.2072 |
| Time proximity | 0.9698 |
| NLP type | product_launch |
| NLP organization | poke |
| NLP topic | ai agents |
| NLP country | united states |
Open original
Is Poke an OpenClaw for the rest of us? That’s the idea coming from a new startup offering an AI agent that you can access via iMessage, SMS, Telegram, and, in some markets, WhatsApp.
The AI agent Poke launched publicly in March, allowing consumers to access a personal assistant that can take action on their behalf through a familiar interface. Today, Poke can help with everyday needs, like daily planning, managing your calendar, tracking your health and fitness, controlling your smart home, editing your photos, and more, all via text message.
Image Credits: Poke/The Interaction Company of California
While you may still interact with a general-purpose AI chatbot like ChatGPT or Claude when you have questions or want to do research, you’d turn to Poke when you want to get something done quickly, or when you want to automate some task to save you time.
For instance, you could ask Poke to alert you to specific emails (like those from your family or your boss), or remind you in the morning if you need to take an umbrella with you. It could help you track your health and fitness goals, or let you know the score to last night’s game. Poke could send daily medication reminders, or catch you up on the day’s news, and more, since users can write their own automations in plain text and then share them with friends.
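The plain-text automations described above amount to parsing a trigger and an action out of a sentence. A toy sketch in Python (purely illustrative; Poke has not published its actual automation syntax, and the "when X, Y" format here is an invented stand-in):

```python
import re


def parse_automation(text: str) -> dict:
    """Parse a toy 'when <trigger>, <action>' plain-text rule.

    Hypothetical format for illustration only; the lazy quantifier
    makes the trigger stop at the first comma.
    """
    m = re.match(r"when (.+?), (.+)", text.strip(), re.IGNORECASE)
    if not m:
        raise ValueError("expected 'when <trigger>, <action>'")
    return {"trigger": m.group(1), "action": m.group(2)}


rule = parse_automation(
    "When it rains in the morning, remind me to take an umbrella"
)
print(rule["trigger"])  # it rains in the morning
print(rule["action"])   # remind me to take an umbrella
```

A real assistant would hand the trigger to a scheduler or event listener and the action to a tool call, but the point is that the user-facing format stays ordinary prose.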
Backed by Spark Capital, General Catalyst, and other angels, the 10-person startup has more recently added another $10 million to its coffers, on top of last year’s $15 million seed round. It’s now valued at $300 million, post-money.
Poke's launch post on X reads: "Starting today, personal superintelligence is just one tap away. No download, no signup. Text Poke for free now: https://t.co/VIWYU64dUI 🌴" — Poke (@interaction), March 19, 2026. The attached video (pic.twitter.com/LHLFRVgahk) is chaptered: 0:00 What's Poke?; 0:50 Introducing Poke Recipes; 1:25 Create a Recipe in 10 seconds; 1:43 Earn on Poke; 2:44 Build with npx…
The tool arrives as demand for agentic AI systems is spiking, leading OpenAI to snap up OpenClaw’s creator , and Nvidia’s CEO Jensen Huang to warn that every company needs its own OpenClaw strategy when announcing Nvidia’s enterprise-grade alternative.
But for those less technically inclined, the prospect of having to install software through the terminal, manage dependencies, and troubleshoot errors is daunting. Plus, systems like OpenClaw raise security concerns due to their deep system access.
For many people, then, OpenClaw and other agentic systems still feel out of reach. The team behind Poke wants to change that.
Image Credits: Poke/The Interaction Company of California
Marvin von Hagen , co-founder of The Interaction Company of California , the Palo Alto-based startup behind the new AI agent, tells TechCrunch that Poke emerged from watching how beta testers were using the company’s earlier product, an AI assistant for email, built around a year ago.
“What we noticed there was that people wanted to use Poke for everything… Even though it was only meant for email, people started asking Poke to remind them to take their medication. They asked Poke about sports results — ‘Hey Poke, tell me every morning if I need a jacket or not,’” explains von Hagen. “And at that time, we didn’t have a lot of this functionality, but we noticed how we needed to become general-purpose much more quickly, because people just like the personality and the humanness of it so much.”
The team then partially pivoted, focusing on making Poke more useful, more proactive, and more personable.
Unlike OpenClaw, getting started with Poke is easy. You simply visit Poke.com, click “Get Started,” and enter your phone number. There’s no app to install, as the assistant operates over text messaging.
Image Credits: Poke/The Interaction Company of California
Under the hood, Poke turns to the AI model that best fits the task, whether that’s a model from one of the big AI providers or an open source model.
“I think this is also one of our main strengths in the long run: that almost all of our competitors are just big tech and labs that are bound to a specific provider. Like Meta AI will only ever be able to use Meta models, and ChatGPT will only ever be able to use OpenAI models,” von Hagen points out.
To work over messaging platforms like iMessage, Poke also leverages Linq, a solution that enables AI assistants to live within messaging apps. The app can run through SMS and Telegram, too, but WhatsApp support is currently limited as Meta barred other general-purpose chatbots last fall.
That could change, however. Regulators in the EU, Italy, and Brazil have opened antitrust probes into that decision, which has already brought Poke back to WhatsApp in Brazil, and which will hopefully also allow Poke to work on WhatsApp in the EU once Meta brings its costs down. (Meta has seen pushback over the high fees it’s charging; von Hagen calls it a form of “malicious compliance” that he believes will soon be addressed.)
Image Credits: Poke/The Interaction Company of California
At launch, Poke offers a variety of “recipes” — or pre-made tools that help you automate various aspects of your life or work. These span categories like health and wellness, productivity, finance, scheduling, travel, home, school, email, community, and, for those who are technical, developer tools. Installing them requires a click of a button and then a standard authorization process, if needed.
These recipes are designed to work with apps and services you already know, like Gmail, Google Calendar, Outlook, Notion, Linear, Granola, and others. There are health and fitness “recipes” that work with Strava, Withings, Oura, Fitbit, and more, as well as those that work with smart home devices from companies like Philips Hue and Sonos.
Developers using Poke can also automate parts of their workflow via integrations with tools like PostHog, Webflow, Supabase, Vercel, Devin, Sentry, GitHub, Cursor Cloud Agents, and others.
Poke’s security model is multi-layered, including regular penetration testing, security checks, various tooling, and limited permissions for both agents and human employees. By default, the team can’t see anything inside the tokens unless the user flips a switch in their settings to opt into sharing a log file or analytics. (TechCrunch has not performed its own security audit, to be clear.)
Image Credits: Poke screenshot/TechCrunch
Over the past couple of weeks, Poke’s users have created thousands more recipes and automations, which the company plans to add to its recipes directory for discovery in the near future. It’s also encouraging creators to build these shareable recipes by offering to pay somewhere between 10 cents and a dollar (based on geography) for every user who signs up for Poke via the recipe.
Image Credits: Poke.com screenshot/TechCrunch
Poke is surprisingly affordable: it’s free to start, and pricing is flexible after that. During the beta tests, users actually had to negotiate with the AI agent what price they’d pay per month, which ranged from $10 to $30 (or so Poke told us in response to this question).
Von Hagen says that, now, pricing is based on how the AI agent is being used. If you’re asking for things that don’t require real-time data, you could probably use Poke for free. What costs Poke money is real-time inference, like automations that run on every incoming email or real-time flight check-ins. To set prices, the company gave Poke guidance on how expensive things are, which allows it to determine personalized pricing.
While the company has managed to make Poke more efficient to reduce costs, the goal right now isn’t profitability, von Hagen notes.
Image Credits: Poke screenshot/TechCrunch
“We really don’t want to make money, but we really want to grow. We want to build a product for a billion people and monetization is really secondary,” he says. “The goal for the next weeks and months now is to bring Poke into everyday life.” To do so, it will look to creators and influencers to showcase how they’re using Poke.
I built "tastebuds", an MCP server on @interaction ’s Poke Recipes to crowdsource opinions on food.
When you’re looking for a place to eat, Poke recommends food spots in your area that other Poke users have enjoyed before by storing their reviews, all running quietly in the…
— Dani (@daniticow) April 8, 2026
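The “tastebuds” recipe described in the tweet is essentially a crowdsourced review store: users submit ratings for food spots, and Poke recommends the best-rated spots in a given area. As a rough sketch of that idea only (the class, method names, and ranking logic here are hypothetical illustrations, not the recipe’s actual code), the core data structure might look like:

```python
from collections import defaultdict


class ReviewStore:
    """Hypothetical in-memory store for crowdsourced food reviews."""

    def __init__(self):
        # area -> list of (spot, rating) pairs submitted by users
        self._reviews = defaultdict(list)

    def add_review(self, area: str, spot: str, rating: int) -> None:
        """Record one user's rating for a spot, keyed by normalized area name."""
        self._reviews[area.lower()].append((spot, rating))

    def recommend(self, area: str, top_n: int = 3) -> list[str]:
        """Return the spots in an area ranked by average rating, best first."""
        ratings = defaultdict(list)
        for spot, rating in self._reviews[area.lower()]:
            ratings[spot].append(rating)
        return sorted(
            ratings,
            key=lambda s: sum(ratings[s]) / len(ratings[s]),
            reverse=True,
        )[:top_n]
```

In an actual Poke recipe, a store like this would presumably be exposed to the agent as MCP tools, so Poke can record reviews and fetch recommendations on users’ behalf during a conversation.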
The company, co-founded by Felix Schlegel, isn’t sharing how many customers have signed up, beyond noting that the figure has 10x’ed over the past couple of months. (However, we did spot Poke at the top of Vercel’s AI Gateway leaderboard, for whatever that’s worth.)
In addition to its main institutional investors, Spark Capital and General Catalyst, the startup has attracted the attention of a number of angels, including John and Patrick Collison (Stripe founders), Jake and Logan Paul, Logan Kilpatrick from DeepMind, Joanne Jang of OpenAI, and Scott Wu and Walden Yan (Cognition founders).
The group also included Vercel co-founder Guillermo Rauch, PayPal co-founder Ken Howery, Dropbox co-founder Arash Ferdowsi, Mercor co-founder Brendan Foody, Hugging Face co-founder Thomas Wolf, Flapping Airplanes co-founder Ben Spector, and several others.
Sarah Perez
Consumer News Editor
© 2026 TechCrunch Media LLC.
Syneron Bio Announces Completion of $150 Million Series B Financing
News provided by
Syneron Bio
Apr 03, 2026, 03:02 ET
BEIJING, April 3, 2026 /PRNewswire/ -- Syneron Bio, a leader in intelligent platform-driven macrocyclic peptide drug discovery, announced the successful closing of its Series B financing on March 31. The round was led by an international life-science fund, with co-lead participation from Decheng Capital and CDH VGC. Additional investors included a wholly owned subsidiary of the Abu Dhabi Investment Authority ("ADIA"), True Light Capital (an independent, wholly owned subsidiary of Temasek), Qiming Venture Partners, BioTrack Capital, and well-known industrial investors. Existing shareholders, including AstraZeneca, LAV, Sinovation Capital, 5Y Capital, GL Ventures, Biotech Development Fund, and Lenovo Capital, among others, also continued their support.
Proceeds from this financing will be primarily used to further advance the intelligent evolution of the company's proprietary macrocyclic peptide discovery platform, Synova™, and to accelerate the progression of its diversified innovative pipeline into clinical development.
Dr. Frank Zhang, Founder and Chief Executive Officer of Syneron Bio, commented:
"We are honored to receive support and recognition in this financing round from a distinguished group of leading global healthcare funds, tech venture capitals, reputable multi-national corporations, and prominent sovereign wealth funds. The completion of this financing further strengthens our cash position, enabling us to accelerate the development of multiple core pipeline programs across oncology, autoimmune, metabolic, and rare diseases, with the goal of benefiting patients worldwide. At the same time, our high-throughput, intelligence-driven Synova™ platform will continue to scale and evolve, significantly improving the efficiency and success rate of macrocyclic peptide drug discovery. With the continued support of our capital and industry partners, we look forward to delivering innovative therapies and addressing critical unmet medical needs for patients around the globe."
Contact Person: Corrine Chan
Email: [email protected]
Website: https://www.syneronbio.com/
Telephone: 1801606886
Company: Syneron Bio
City: Beijing
SOURCE Syneron Bio
Skyhawk Therapeutics Announces its Founding Scientific Advisory Board Chairman Receives the American Cancer Society's 2026 Medal of Honor
News provided by
Skyhawk Therapeutics
Mar 30, 2026, 12:34 ET
Dr. Tyler Jacks, founding chairman of Skyhawk's Scientific Advisory Board, receives the American Cancer Society's highest award, the Medal of Honor, first awarded in 1949, for his tireless commitment to addressing the complexities of cancer.
BOSTON, March 30, 2026 /PRNewswire/ -- Skyhawk Therapeutics, Inc., a clinical-stage biotechnology company developing novel small molecule therapies to modulate critical RNA targets for a series of challenging diseases, announces that the American Cancer Society (ACS) has awarded the 2026 Medal of Honor to Dr. Tyler Jacks, founding Chairman of Skyhawk's Scientific Advisory Board, founder of the David H. Koch Institute for Integrative Cancer Research at the Massachusetts Institute of Technology (MIT) and professor of biology at MIT. The Medal of Honor is the most prestigious award given by ACS to distinguished individuals who have made valuable contributions in basic, clinical, translational, or population science and whose work has led to advancement in cancer prevention, diagnosis, treatment, or survivorship to improve the lives of patients.
"Tyler Jacks is a truly extraordinary scientist and leader across many fields and organizations. His recognition from ACS is richly deserved," said Dr. Benjamin L. Ebert, MD, PhD, co-founding member of Skyhawk's SAB and President and CEO of Dana-Farber Cancer Institute. "As founding Chairman of Skyhawk's SAB, Tyler has been a deft guide of the scientific programs at Skyhawk, addressing both cancers and other diseases with great unmet need for patients. We are grateful for all of his prodigious contributions."
"Tyler has worked for decades with creativity, passion, talent and a blessed knack for collaboration to reduce the burden cancer and other hideous diseases impose on patients and their families. I'm delighted to see his dazzling accomplishments as scientist and institution builder recognized by the ACS, America's most well-known cancer institution," said Skyhawk's Co-founder and CEO Bill Haney. "Skyhawk's pioneering work as a leader in the RNA revolution has been transformed by Tyler's insight—and I know the list of other firms, non-profits and universities who have likewise benefited from Tyler's judgment and generosity of spirit is a long one."
About Dr. Tyler Jacks
Dr. Jacks is a pioneer in the development and use of genetically engineered mouse models to study difficult-to-treat human cancers. These models help scientists understand how tumors start and evolve, how closely they resemble human cancers, and how tools used in early cancer detection can be improved. In his MIT lab, Dr. Jacks and his group study how key genes, including tumor suppressors, oncogenes, and genes related to DNA repair, help cancers resist treatment and promote growth. Dr. Jacks has also conducted extensive research into how the immune system sees, and also fails to see, developing cancers.
Author of more than 300 scientific papers, Dr. Jacks has served as chair of the National Cancer Advisory Board and the Board of Scientific Advisors of the National Cancer Institute and as past president and board member of the American Association of Cancer Research (AACR). His many awards include the AACR Outstanding Achievement Award, the Paul Marks Prize for Cancer Research, and MIT's James R. Killian Jr. Faculty Achievement Award. He is an elected member of the National Academy of Sciences and the National Academy of Medicine.
Dr. Jacks graduated magna cum laude with highest honors in biology from Harvard College. He earned his doctorate at the University of California, San Francisco, where he studied under Dr. Harold Varmus, who won the 1989 Nobel Prize in Physiology or Medicine for research on oncogenes and cancer growth.
About Skyhawk Therapeutics
Skyhawk Therapeutics is a clinical-stage biotechnology company that uses its proprietary platform, SKYSTAR®, to discover and develop small molecule RNA-modulating therapies for the world's most intractable diseases. For more information, visit www.skyhawktx.com.
Skyhawk Contact
Maura McCarthy
Head of Corporate Development
[email protected]
SOURCE Skyhawk Therapeutics