| Creating with Sora Safely | openai | 23.03.2026 00:00 | 1 |
| Embedding sim. | 1 |
| Entity overlap | 1 |
| Title sim. | 1 |
| Time proximity | 1 |
| NLP type | product_launch |
| NLP organization | |
| NLP topic | video generation |
| NLP country | |
Open original
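Each match in this digest carries four component scores (embedding similarity, entity overlap, title similarity, time proximity) alongside an overall score. The digest's actual formula isn't given; as a purely illustrative sketch under that caveat, a weighted average could combine such components (the function name and weights here are assumptions, not the digest's real scoring):

```python
def combined_score(components, weights=None):
    """Weighted average of similarity components, each assumed to lie in [0, 1]."""
    if weights is None:
        weights = {name: 1.0 for name in components}  # equal weighting by default
    total = sum(weights[name] for name in components)
    return sum(value * weights[name] for name, value in components.items()) / total

# Component values taken from one of the match tables in this digest.
score = combined_score({
    "embedding_sim": 0.937,
    "entity_overlap": 0.4211,
    "title_sim": 0.3488,
    "time_proximity": 0.9854,
})
print(score)
```

Equal weights give roughly 0.67 for this example, whereas the digest reports 0.844 for the same match, so the real aggregation evidently weights the components differently (or is nonlinear); the sketch only shows the general shape of such a score.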
To address the novel safety challenges posed by a state-of-the-art video model as well as a new social creation platform, we’ve built Sora 2 and the Sora app with safety at the foundation. Our approach is anchored in concrete protections.
| PR Newswire Sets the Record Straight on AI Visibility: "Be the Source" | prnewswire | 10.04.2026 16:06 | 1 |
| Embedding sim. | 1 |
| Entity overlap | 1 |
| Title sim. | 1 |
| Time proximity | 0.9983 |
| NLP type | other |
| NLP organization | PR Newswire |
| NLP topic | generative ai |
| NLP country | United States |
Open original
News provided by PR Newswire, Apr 10, 2026, 12:06 ET
New insights from GEO webinar highlight how brands can win in the era of AI summaries through authority, consistency and Multichannel Amplification™
NEW YORK, April 10, 2026 /PRNewswire/ -- PR Newswire today released key insights from its recent webinar, "GEO: Owning the AI Summary," reinforcing a critical shift in communications strategy: in the age of generative AI, visibility is no longer about clicks; it's about being cited as a trusted source.
As AI-powered search and summaries reshape how audiences discover information, PR Newswire emphasized that brands must move beyond optimization frameworks and instead focus on building authoritative, consistent and multichannel narratives that AI systems trust and reference.
"You can optimize content, or you can be the source AI trusts," said Jeff Hicks, Chief Product & Technology Officer at PR Newswire. "The brands that win in this next era won't just structure content well. They'll build durable authority across every channel with a consistent brand voice."
From Clicks to Citations: A Shift in What Matters
Insights shared during the webinar highlighted a fundamental evolution:
AI search is merging with traditional search, not replacing it.
Visibility is increasingly driven by citations, not rankings.
AI systems prioritize structured, authoritative and consistent content.
Brand narratives now have a long shelf life, with AI referencing content years after publication.
"What good content looked like 10 years ago still applies today," said Glenn Frates, RVP of Distribution at PR Newswire. "Now your audience includes machines, and they expect clarity, authority and consistency, just like your human audiences."
Key Takeaways from the Webinar
Authority is cumulative: Earned media, owned content and press releases work together to build AI trust.
Consistency beats volume: A steady narrative outperforms one-off announcements.
Structure matters: Headlines, bullet points and section headers help both humans and AI parse content.
Multichannel Amplification™ is essential: Press releases, blogs, social media and earned coverage reinforce each other.
AI has a "long memory": Older content continues shaping brand perception.
"AI isn't just citing what's visible. It's informed by everything beneath the surface," said Scott Newton, Director of Solutions Consulting at Cision and Brandwatch. "That underlying narrative you build over time is what ultimately shapes how your brand shows up in AI answers."
FAQ: Real Questions from the Live GEO Webinar
Q1: What counts as an "authoritative source" in AI search – an SME quote or a C-suite voice? A: Authority comes from relevance and expertise, not just title. A subject matter expert often provides more valuable, context-rich insight than a generic executive quote. AI systems prioritize depth, clarity and expertise over hierarchy.
Q2: Does keeping content behind a paywall hurt AI visibility? A: It can limit discoverability. While premium content strategies remain valuable, brands should ensure some authoritative, indexable content exists publicly to inform AI systems and support citation potential.
Q3: If older content still gets cited, does deleting archives hurt GEO performance? A: Yes. AI systems frequently reference older content, especially in answers to nuanced or topic-specific queries. Removing historical content can weaken your long-term narrative authority and visibility.
Q4: Should brands publish everything at once or spread content over time? A: Both strategies have value, but consistency is key. A steady cadence across channels reinforces narrative strength more effectively than isolated bursts. Think of it as a "drumbeat," not a spike.
Q5: How do you measure how your brand shows up in AI platforms? A: Measurement requires actively testing prompts across platforms like ChatGPT, Gemini and others. Tools like PR Newswire's AEO & GEO Brand Report help brands track citation frequency, sentiment and share of voice across AI-generated responses.
Q6: Do integrations like photos, videos and IMC campaigns impact AI visibility? A: Yes. Multimedia content enhances engagement and can be cited (e.g., YouTube), while integrated campaigns reinforce consistent messaging – strengthening both human and AI discoverability.
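The measurement approach in Q5 (repeatedly prompting AI assistants and tracking how often a brand is cited) can be approximated once response texts have been collected. A minimal sketch, assuming the responses are already gathered as plain strings; the helper name and sample texts below are hypothetical, not part of any PR Newswire tool:

```python
import re

def mention_stats(responses, brands):
    """Fraction of responses mentioning each brand (case-insensitive substring match)."""
    stats = {}
    for brand in brands:
        pattern = re.compile(re.escape(brand), re.IGNORECASE)
        hits = sum(1 for text in responses if pattern.search(text))
        stats[brand] = hits / len(responses) if responses else 0.0
    return stats

# Hypothetical AI-assistant answers collected for the same prompt.
responses = [
    "According to PR Newswire, press releases remain a primary source of record.",
    "Several wire services distribute releases; Business Wire is one example.",
    "PR Newswire's GEO webinar covered AI summaries in depth.",
]
print(mention_stats(responses, ["PR Newswire", "Business Wire"]))
```

Real tracking (such as the AEO & GEO Brand Report mentioned in the answer) would also cover sentiment and share of voice; this sketch only captures raw citation frequency.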
A New Standard for AI-Era Communications
While new frameworks and acronyms continue to emerge in the market, PR Newswire emphasized that success in AI search is not about reinventing communications but about executing fundamentals at scale and with precision.
Build trustworthy, authoritative content.
Maintain consistent storytelling over time.
Leverage Multichannel Amplification™.
Focus on becoming a primary source of truth.
"AI doesn't browse – it cites," added Frates. "If your brand isn't part of the source layer, it won't be part of the answer."
Additional resources
PR Newswire Launches AEO & GEO Report for AI Brand Visibility
Why FAQs are Built for AI
On-demand webinar - GEO: Owning the AI Summary
About PR Newswire
PR Newswire is the industry's leading press release distribution partner with an unparalleled global reach of more than 500,000 newsrooms, websites, direct feeds, journalists and influencers and is available in more than 170 countries and 40 languages. From our innovative AI-powered PR Newswire Amplify™ platform, award-winning Content Services offerings, integrated media newsroom and microsite products, Investor Relations suite of services, paid placement and social sharing tools, PR Newswire has a comprehensive Multichannel Amplification™ catalogue of solutions to solve the modern-day challenges PR and communications teams face. For more than 70 years, PR Newswire has been the preferred destination worldwide for brands to share their most important news stories.
About Cision
Cision is the global leader in consumer and media intelligence, engagement, and communication solutions. We equip PR and corporate communications, marketing, and social media professionals with the tools they need to excel in today's data-driven world. Our deep expertise, exclusive data partnerships, and award-winning products, including CisionOne, Brandwatch, PR Newswire, and Trajaan, enable over 75,000 companies and organizations, including 84% of the Fortune 500, to see and be seen, understand and be understood by the audiences that matter most to them.
For questions, contact the team at [email protected] .
| Intel signs on to Elon Musk's Terafab chips project | techcrunch | 07.04.2026 18:10 | 0.844 |
| Embedding sim. | 0.937 |
| Entity overlap | 0.4211 |
| Title sim. | 0.3488 |
| Time proximity | 0.9854 |
| NLP type | partnership |
| NLP organization | Intel |
| NLP topic | ai hardware |
| NLP country | United States |
Open original
Intel will join SpaceX and Tesla in an effort to build a new U.S. semiconductor factory in Texas, although the scope of its contributions is unclear.
“Our ability to design, fabricate, and package ultra-high-performance chips at scale will help accelerate Terafab’s aim to produce 1 TW/year of compute to power future advances in AI and robotics,” Intel said in a corporate post on X. Intel hasn’t shared any more information.
Elon Musk announced in March a team-up between the two tech companies he leads to develop chips for AI compute, satellites, and SpaceX’s mooted space data center and to support the possibility of autonomous Tesla vehicles and robots.
However, building a chip fab is one of the most difficult and expensive corporate infrastructure projects out there, typically requiring years of time and more than $20 billion to create a facility with a huge clean room for thousands of ultra-precise machines to carve silicon. It wasn’t obvious how SpaceX and Tesla, two companies with no experience in the sector, could team up to execute the project efficiently.
Now we have a better idea: Intel will do it. The company has been hunting for large anchor customers to support its foundry business, and now it has two. Still, if investors thought that Terafab would be a greenfield approach based on SpaceX’s and Tesla’s unique approach to engineering, that may not play out.
Once the leading U.S. silicon producer, Intel has seen rivals Nvidia and AMD take the lead in developing advanced processors and adopt the “fabless” business model where chip designers outsource the manufacturing of their semiconductors. Intel’s stock rose more than 3% on the news today. It was trading at $52.28, about 2.9% higher than its opening bell price, at 2 p.m. ET.
Intel declined to comment on the partnership, while SpaceX didn’t respond to TechCrunch’s query.
© 2026 TechCrunch Media LLC.
| OpenAI just bought TBPN | the_verge_ai | 02.04.2026 17:40 | 0.825 |
| Embedding sim. | 0.925 |
| Entity overlap | 0.2222 |
| Title sim. | 0.3636 |
| Time proximity | 0.9573 |
| NLP type | acquisition |
| NLP organization | OpenAI |
| NLP topic | enterprise ai |
| NLP country | |
Open original
OpenAI has purchased TBPN, an online talk show that often interviews AI executives and other tech leaders. The show goes live every weekday at 2PM PT, often running three hours. Past guests include OpenAI CEO Sam Altman and executives from Meta, Microsoft, Palantir, and Andreessen Horowitz, and the show counts Bloomberg, CNBC, and Fox Business among its competitors.
TBPN's livestream is primarily available on X and YouTube, but many users watch it on X. OpenAI's purchase comes as a lawsuit between Altman and Elon Musk, who was a co-founder of OpenAI before splitting from the project and now owns X, is headed to trial later this mont …
Read the full story at The Verge.
| Why OpenAI really shut down Sora | techcrunch | 30.03.2026 03:09 | 0.822 |
| Embedding sim. | 0.9317 |
| Entity overlap | 0.4667 |
| Title sim. | 0.32 |
| Time proximity | 0.7669 |
| NLP type | other |
| NLP organization | OpenAI |
| NLP topic | generative ai |
| NLP country | United States |
Open original
OpenAI’s decision last week to shut down Sora, its AI video-generation tool, just six months after releasing it to the public, raised immediate suspicions. The app had invited users to upload their own faces — so was this some kind of elaborate data grab? According to a new WSJ investigation, the real explanation is considerably more boring: Sora was a money pit that nobody was using, and keeping it alive was costing OpenAI the AI race.
So what happened? After a splashy launch, Sora’s worldwide user count peaked at around a million and then collapsed to fewer than 500,000. Meanwhile, the app was burning through roughly $1 million every day — not because people loved it but because video generation is so costly to run. Every user who dropped themselves into a fantastical scene was drawing down a finite supply of AI chips.
While a whole team inside OpenAI was focused on making Sora work, Anthropic was quietly winning over the software engineers and enterprises that drive revenue. Claude Code, in particular, was eating OpenAI’s lunch.
So CEO Sam Altman made the call: kill Sora, free up compute, and refocus. If you want to understand just how sudden this was, consider what happened to Disney, per the WSJ: The entertainment giant had committed $1 billion to the partnership, yet found out Sora was being shut down less than an hour before the public. The deal died with it.
| OpenAI acquires TBPN, the buzzy founder-led business talk show | techcrunch | 02.04.2026 19:21 | 0.804 |
| Embedding sim. | 0.9262 |
| Entity overlap | 0.2 |
| Title sim. | 0.1646 |
| Time proximity | 0.99 |
| NLP type | acquisition |
| NLP organization | OpenAI |
| NLP topic | artificial intelligence |
| NLP country | United States |
Open original
OpenAI has acquired popular tech industry talk show TBPN — Technology Business Programming Network — making this the AI giant’s first acquisition of a media company. The show will report to OpenAI’s chief political operative, Chris Lehane.
TBPN, hosted by former tech founders John Coogan and Jordi Hays, is a daily live show that airs on YouTube and X for three hours, focusing on tech, business, AI, and defense.
The show has gained a cult following in Silicon Valley as a safe space where industry power players can speak candidly and be questioned by fellow insiders. It has a reputation for being something of a SportsCenter for the tech industry, a place where top tech CEOs like Mark Zuckerberg, Satya Nadella, Marc Benioff, and, yes, Sam Altman, come to chop it up, react to the news of the day, and occasionally make some of their own.
TBPN will continue to live on as its own brand, which OpenAI will help scale. Not that it necessarily needed help on that front; TBPN has grown into an empire that’s on track to pull in more than $30 million this year, according to The Wall Street Journal.
OpenAI already has its own podcast for long-form conversations with the people building tech at the company.
OpenAI will also tap the founders’ “amazing comms and marketing instincts” outside the show, according to OpenAI’s head of AGI deployment, Fidji Simo, who said TBPN will “bring AI to the world in a way that helps people understand the full impact of this technology on their daily lives.”
Simo went even further, noting that TBPN’s prowess is necessary for an atypical company like OpenAI where “the standard communications playbook just doesn’t apply.”
She said TBPN will have editorial independence and continue to “run their programming, choose their guests, and make their own editorial decisions.”
Still, the acquisition might give some pause. After all, OpenAI is a valuable AI lab on the brink of an IPO buying a buzzy talk show that often discusses the company and its competitors. And once the deal closes, TBPN will operate under OpenAI’s strategy team and report to Chris Lehane, the man who invented the phrase “vast right-wing conspiracy” as a tool to deflect press scrutiny of the Clinton White House.
Lehane, who has been described as a master of the “political dark arts,” is also behind the crypto industry super PAC Fairshake, which spent hundreds of millions to kneecap anti-crypto candidates in the 2024 election. He joined OpenAI that same year and has been in President Trump’s ear ever since, whispering recommendations for sweeping and controversial policies like preventing states from regulating AI and easing environmental restrictions that might slow data center construction.
OpenAI CEO Sam Altman, who said in a social media post that TBPN is his favorite tech show, seems to believe the acquisition won’t change TBPN’s commentary and even criticism of the company.
“I don’t expect them to go any easier on us, am sure I’ll do my part to help enable that with occasional stupid decisions,” he wrote.
TBPN, meanwhile, sees the acquisition as a means to do more than just commentary.
“While we’ve been critical of the industry at times, after getting to know Sam and the OpenAI team, what stood out most was their openness to feedback and commitment to getting this right,” Hays said in a statement. “Moving from commentary to real impact in how this technology is distributed and understood globally is incredibly important to us.”
Got a tip or documents about the AI industry? From a non-work device, contact Rebecca Bellan confidentially at rebecca.bellan@techcrunch.com or Signal: rebeccabellan.491.
Rebecca Bellan is a senior reporter at TechCrunch covering the business, policy, and emerging trends shaping artificial intelligence.
| VCs are betting billions on AI's next wave, so why is OpenAI killing Sora? | techcrunch | 27.03.2026 15:40 | 0.794 |
| Embedding sim. | 0.8877 |
| Entity overlap | 0.65 |
| Title sim. | 0.1415 |
| Time proximity | 0.9871 |
| NLP type | other |
| NLP organization | OpenAI |
| NLP topic | artificial intelligence |
| NLP country | United States |
Open original
When an 82-year-old Kentucky woman was offered $26 million from an AI company that wanted to build a data center on her land, she said no. Sure, that same company can try to rezone 2,000 acres nearby anyway, but as AI infrastructure stretches further into the real world, the real world is starting to push back.
That tension is everywhere this week, from OpenAI shutting down its Sora app to courts finally starting to hold social platforms accountable. On this episode of TechCrunch’s Equity podcast, Kirsten Korosec, Anthony Ha, and Sean O’Kane dig into what it looks like when the AI hype cycle meets reality.
Listen to the full episode to hear about:
Why rival prediction market CEOs of Kalshi and Polymarket are co-investing in a $35M VC fund
How drone startups like Zipline, Lucid Bots, and Brinc are finding real traction where other robotics plays have stalled
What Kleiner Perkins’ $3.5B raise says about where the biggest VC firms think the next AI wave is going
Why two separate court verdicts against Meta in the same week could be the “tobacco moment” for social media
Subscribe to Equity on YouTube, Apple Podcasts, Overcast, Spotify and all the casts. You can also follow Equity on X and Threads at @EquityPod.
Equity is produced by TechCrunch audio producer Theresa Loconsolo.
| Why OpenAI killed Sora | the_verge_ai | 28.03.2026 12:00 | 0.791 |
| Embedding sim. | 0.911 |
| Entity overlap | 0.4211 |
| Title sim. | 0.3265 |
| Time proximity | 0.5989 |
| NLP type | other |
| NLP organization | OpenAI |
| NLP topic | generative ai |
| NLP country | |
Open original
On Tuesday morning, everything was business as usual at OpenAI. By the end of the day, the company had announced that it would scrap its video-generation app, Sora, and reverse plans for video generation inside ChatGPT; it would wind down a $1 billion Disney deal; it would shuffle the role of a high-level executive; and it would raise an additional $10 billion from investors, adding up to more than $120 billion total for its latest funding round.
OpenAI is now in a frenzy to turn a profit, or at least lose less money. Since its launch, Sora seems to have taken up a massive amount of compute without the financial return to justify it. Indus …
Read the full story at The Verge.
| OpenAI’s AGI boss is taking a leave of absence | the_verge_ai | 03.04.2026 20:22 | 0.789 |
| Embedding sim. | 0.8803 |
| Entity overlap | 0.25 |
| Title sim. | 0.3059 |
| Time proximity | 0.9956 |
| NLP type | leadership_change |
| NLP organization | OpenAI |
| NLP topic | enterprise ai |
| NLP country | |
Open original
OpenAI is undergoing another round of C-suite changes, according to an internal memo viewed by The Verge.
Fidji Simo, OpenAI's CEO of AGI deployment, who was until recently the company's CEO of applications, says in the memo that she will be stepping away on medical leave "for the next several weeks" due to a neuroimmune condition. While she's out, OpenAI president Greg Brockman will be in charge of product, including leading OpenAI's super app efforts. On the business side, CSO Jason Kwon, CFO Sarah Friar, and CRO Denise Dresser will take charge.
OpenAI's CMO, Kate Rouch, has also decided to step down in order to focus on her health, …
Read the full story at The Verge.
| OpenAI's $122B in funding comes at a perilous moment | the_register_ai | 01.04.2026 16:26 | 0.788 |
| Embedding sim. | 0.9056 |
| Entity overlap | 0.2564 |
| Title sim. | 0.2091 |
| Time proximity | 0.8868 |
| NLP type | funding |
| NLP organization | OpenAI |
| NLP topic | large language models |
| NLP country | United States |
Open original
OpenAI gets $122B to 'just build things' as the world blows them up
War, oil shocks, and market nerves could yet knock the AI boom off course
Lindsay Clark, Wed 1 Apr 2026, 16:26 UTC
Opinion: OpenAI has secured an additional $122 billion in capital from a diverse group of investors and reached a nominal $852 billion valuation, the highest of any pre-IPO tech company.
The poster child of the LLM era argues it needs the money to help the world "just build things," it said in an online missive.
Backers include its regular partners Amazon, Nvidia, SoftBank, and Microsoft, as well as an all-star cast of venture capitalists.
Curiously for a company that is yet to go public, it is also getting public money. Through "bank channels," it is raising more than $3 billion from individual investors and will be included in "several exchange-traded funds managed by ARK Invest, further broadening ownership and giving more people the opportunity to share in the upside economics of OpenAI and the AI era."
OpenAI's revolving credit has grown to around $4.7 billion supported by a global syndicate including some of the biggest names in the banking sector.
OpenAI has hit 900 million weekly active users and over 50 million subscribers among consumers, but OpenAI thinks half of its revenue will come from enterprise offerings by the end of the year. Business users will be expected to pull their weight to help these ravenous investors get their handsome gains.
OpenAI's business APIs process more than 15 billion tokens per minute while its Codex developer tool now serves over 2 million weekly users, up five times in the past three months, the company said.
To meet that demand for its software and LLMs, OpenAI is working with an "infrastructure portfolio across multiple cloud partners and multiple chip platforms."
OpenAI's infrastructure partners include Microsoft, Oracle, AWS, CoreWeave, and Google Cloud, while its chip suppliers include Nvidia, AMD, AWS again with Trainium, Cerebras, and Broadcom. Its datacenter partners are Oracle, SBE, and SoftBank.
Commentators have been quick to point out that the list has significant crossover with its investors. OpenAI will build datacenters with Nvidia's chips, but Nvidia is also an investor in OpenAI. Microsoft invests in OpenAI, but OpenAI also spends heavily on Azure .
Other linked companies are borrowing heavily. Oracle has increased its borrowing by $50 billion to help build datacenters for OpenAI in a $300 billion cloud deal .
How will all this debt and investment be paid back? "A unified AI superapp" is part of the answer.
"Users do not want disconnected tools," OpenAI said. "They want a single system that can understand intent, take action, and operate across applications, data, and workflows. Our superapp will bring together ChatGPT, Codex, browsing, and our broader agentic capabilities into one agent-first experience."
Perhaps OpenAI is hoping its superapp also has superpowers, because it might need them.
As things stand, some observers predict OpenAI will not make a profit until 2030 . While the latest funding round is a statement of investor confidence, it's worth considering whether the AI boom belongs in the first half of the decade. A lot might happen in the next few years. The second half of the decade, as determined by the US president, seems more characterized by the Iran conflict and the oil price. Whatever the messy conclusion and whenever it comes, the aftermath will be with us for a while.
S&P Global is already predicting a hit to the AI boom. Tech giants have so far not shown any signs of reducing the staggering capital investments – possibly $1.6 trillion on datacenters to largely meet AI demand by 2030, according to Omdia – but they might.
If the Iran conflict creates a more permanently high oil price, it could see those companies cut spending in the next two quarters as energy costs rise, producing a "really meaningful correction in all equity markets," Melissa Otto, head of research at S&P Global Visible Alpha, told Reuters .
Before the conflict, jitters already abounded. Despite announcing surging profits, Microsoft saw its share price drop 6 percent as investors voiced concerns about the rampant capex growth it needs to support AI demand.
The world doesn't just want to build things. It demonstrably also wants to blow them up. OpenAI's long list of investors and cloud providers might now wonder if they are standing in the blast zone. ®
|
|
|
5 Burning Questions About Elon Musk’s Terafab Chip Partnership with Intel |
wired |
08.04.2026 17:13 |
0.781
|
| Embedding sim. | 0.8676 |
| Entity overlap | 0.3214 |
| Title sim. | 0.383 |
| Time proximity | 0.8482 |
| NLP type | partnership |
| NLP organization | Intel |
| NLP topic | ai infrastructure |
| NLP country | United States |
Open original
Lauren Goode and Paresh Dave
Business
Apr 8, 2026 1:13 PM
5 Burning Questions About Elon Musk’s Terafab Chip Partnership with Intel
Intel’s role in Elon Musk’s ambitious chip venture is still murky, raising questions about what the partnership actually entails—and whether it can work at all.
Photo-Illustration: Darrell Jackson; Getty Images
Intel CEO Lip-Bu Tan said Tuesday that the chipmaker will “work closely” with Elon Musk to support the billionaire entrepreneur's Terafab project, a potentially massive chip development and fabrication operation that will be jointly developed by SpaceX and Tesla. A photo posted by Intel’s official X account shows the two executives shaking hands last weekend in front of a large Intel sign. Musk’s 1-terawatt, ultrahigh performance chip-fabrication facility, which may span multiple locations, could cost billions of dollars.
“Terafab represents a step change in how silicon logic, memory, and packaging will get built in the future,” Tan said in a social media post . “Intel is proud to be a partner and work closely with Elon on this highly strategic project.”
Exactly how Tan and Musk plan to execute such an ambitious venture remains unclear. Musk has been talking about the need to develop a so-called Terafab for months, viewing the endeavor as a way to produce the vast number of chips his companies will need for cars , robots , and data centers . Some chip-industry analysts are highly skeptical that Musk can pull off such a complex and capital-intensive venture.
Intel, meanwhile, has been attempting to make a mighty comeback after years of stagnation, and part of its efforts include pitching its capacity to manufacture advanced semiconductors to tech companies hungry for chips to power the AI boom. As WIRED recently reported , Intel’s ability to secure these outside customers is critical to its success. And Musk could be a huge whale of a customer.
Musk did not respond to WIRED’s questions about the partnership. A spokesperson for Intel referred WIRED to the company’s posts about the deal on social media and declined to comment further. For now, here are five outstanding questions about how Intel’s involvement could affect Terafab’s chances of success.
How Big Is The “Deal”?
Hard to say. Neither Intel nor Tesla has filed any paperwork with the US Securities and Exchange Commission, which is typically required if a new partnership or deal materially changes the capital investment or manufacturing capacity of a public company.
For example, when chipmaker AMD and Meta announced a “multiyear, multi-generation” partnership in February to deploy up to 6 gigawatts of AMD GPUs for Meta’s AI services, AMD disclosed the deal in an SEC filing. As of publishing, no such forms have been filed yet by Intel or Tesla. That indicates Tan and Musk’s agreement may be mostly handshakes and vibes at the moment. As one chip-industry insider put it, “It makes quite a headline for a couple days, no?”
What Is Intel Actually Contributing?
Intel’s public statement about the mash-up with Musk is almost comically vague. The company said that its “ability to design, fabricate, and package ultrahigh-performance chips at scale” will help accelerate Terafab’s goal of producing 1 terawatt of computing power a year to support “future advances in AI and robotics.”
Pat Moorhead, a longtime chip-industry analyst and founder of Moor Insights & Strategy, predicts that Musk will lean on Intel for its advanced packaging capabilities to start. He notes that Tesla “doesn’t need [chip] design engineering; they’re already very capable of that.” Moorhead adds that Musk may also want to license Intel’s chip architecture, which Terafab could build upon and customize.
Intel handling advanced packaging is a safe bet in the near term because it gives all of the companies involved a chance to test their partnership without alienating TSMC, which runs the world’s biggest fabs, Moorhead says. “If you do packaging first, you’re not going to infuriate TSMC as much as you would if you used Intel for wafers,” he says. (Tesla has existing chip partnerships with TSMC and Samsung.)
Moorhead says that Musk’s long-term goal is probably still to own as much of the chip-making stack as possible, from design to fabrication, as well as developing new ways to create wafers. But Moorhead and other analysts have expressed skepticism that a brand-new fab that consolidates every stage of the chip development and fabrication process and at massive scale is even a possibility.
How Much Customization Will Musk Want?
Tesla’s track record on chips suggests that the answer will be a lot. Last year, Tesla signed a $16.5 billion deal with Samsung to produce the automaker’s next-generation A16 chip at its factory in Texas. But Tesla designed the chip itself, to ensure it was tailored to the company’s line of autonomous vehicles and humanoid robots, and, according to Musk, “Samsung agreed to allow Tesla to assist in maximizing manufacturing efficiency.”
“This is a critical point, as I will walk the line personally to accelerate the pace of progress. And the fab is conveniently located not far from my house,” Musk posted on X around the time of the deal.
Chip experts say that Intel is likely going to reach a similar customization agreement with Musk. “Technically, as a fabless chip designer, Elon and team could customize their chips to their hearts’ desire,” says Austin Lyons, author of the newsletter Chipstrat and a semiconductor analyst at Creative Strategies. “But the question is whether Elon will want to somehow customize the process itself across wafers and packaging. And, knowing [Musk], I’m sure he’ll be pushing on the end-to-end processes, and surely on aggressive cadences.”
Who Controls the Intellectual Property?
Intel has been struggling in recent years, but it still has a number of fabrication plants around the world and decades of experience. Musk will have to license that manufacturing know-how.
According to Moorhead, that means Intel will likely own the intellectual property produced at the Terafab. Musk would be able to create his own “recipe” for chip manufacturing, but until his companies are in a place to buy up their own chip-making equipment—such as advanced lithography machines—he will still be licensing a manufacturing process or special-process design kit from another foundry.
Who Will Actually Build It?
Worker shortages may add to the challenges Musk faces turning his vision for the Terafab into reality. He has yet to announce where the new plant will be built, but construction is underway on a 2-million-square-foot chip-design lab on the Tesla automotive campus near Austin, and the state has increasingly become Musk’s home and central to his sprawling corporate universe.
Texas and much of the US are facing a shortage of tradespeople like plumbers and electricians needed to build data centers and semiconductor fabs. The data center industry is proving to have the deepest pockets to recruit workers, says Chap Thornton, business manager at the UA Plumbers & Pipefitters Local 286 union for the Austin area. They “aren’t afraid to pay for the labor they need to get it done on their timelines,” he says. “Any of this other stuff that pops up is going to be a bidding war.”
Construction that began in 2020 on Tesla's 10-million-square-foot so-called Gigafactory required demanding schedules and resulted in numerous injuries and at least one death, according to safety regulators and published worker accounts. It's possible that workers with plentiful alternatives may not want to suit up for Musk again. "Everybody that wants to work is employed," Thornton says of his union's roughly 2,000 active members.
Intel’s involvement may be a benefit for Musk to counter some of the safety concerns. A few years ago, the chipmaker was one of the first in the state to back away from having construction crews work seven days a week. “Productivity goes to heck in a handbasket when you’re working seven days a week,” Thornton says. “Intel definitely has that track record of safety on their sites.”
|
|
|
Rebellions eyes global expansion with rack-scale AI platform |
the_register_ai |
30.03.2026 13:01 |
0.767
|
| Embedding sim. | 0.8623 |
| Entity overlap | 0.4375 |
| Title sim. | 0.1557 |
| Time proximity | 0.9999 |
| NLP type | funding |
| NLP organization | Rebellions |
| NLP topic | ai hardware |
| NLP country | South Korea |
Open original
Systems
South Korean AI chip startup Rebellions eyes new shores for rack-scale invasion
Funding round comes ahead of planned IPO
Tobias Mann
Mon 30 Mar 2026 // 13:01 UTC
GPU-makers like Nvidia and AMD may dominate the AI infrastructure market, but there are still more than a few AI chip startups knocking around.
One of them is Rebellions, which after establishing a foothold on its home turf in South Korea, aims to bring its tech to the rest of the world, beginning with a new rack-scale compute platform that won't require enterprises to adopt liquid cooling or ultra-power dense racks.
Founded in late 2020, the startup produces AI accelerators that have been deployed in numerous applications in the South Korean domestic market.
Initially, "we focused a great deal on telcos, service providers, and enterprise-end users within the Korean market," Rebellions chief business officer Marshall Choy told El Reg . "We built up use cases around everything from call centers and customer service to CCTV surveillance for the national highway system."
"We're in a very strong position to take those learnings, capabilities, and improvements we've done over the years and bring that out to other regions, outside of Korea, as less of a fresh start, but more of a rinse and repeat type of motion," he added.
Following the introduction of its Rebel Quad accelerators, since rebranded as the Rebel100, the company has turned its attention to the rest of the world. Over the past few months, Rebellions has opened offices in Japan, Saudi Arabia, Taiwan, and the US, where it hopes to win over enterprises with its new RebelRack and RebelPods.
Before looking at the racks, let's talk about the chips themselves. Our sibling site The Next Platform dug into the Rebel100 last winter, but at a high level, the chip looks quite similar to Nvidia's H200 accelerators from late 2023.
According to Rebellions, the processor is capable of a petaFLOP of dense 16-bit floating point math or double that at FP8. However, unlike the H200, which used a monolithic compute die fabbed at TSMC, Rebellions' latest processor uses a chiplet architecture with four compute dies manufactured and packaged by Samsung.
That processor is fed by four HBM3e stacks totaling 144 GB of capacity and 4.8 TB/s of aggregate bandwidth.
While the smaller compute dies and reliance on Samsung should help with yields and avoid competing for TSMC's limited fab and packaging capacity, Rebellions still needs to source HBM from somewhere. Memory is already in short supply and HBM is among the scarcest.
This is where being a South Korean company with close ties to both the SK chaebol and Samsung comes in handy. SK Hynix and Samsung are the largest suppliers of HBM in the world. Last we heard, Rebellions was sourcing its HBM from Samsung, but in a pinch it shouldn't have to fight that hard to get SK Hynix to kick in some capacity.
The chip itself is currently being packaged as a PCIe card with a 600 watt TDP, rather than the OAM or SXM modules we've become accustomed to.
Rebellions' reference design calls for eight of these cards to be crammed into a single air-cooled node.
High-efficiency, standard form factors such as 19-inch chassis and air cooling were key design points for Rebellions as it meant the system could be deployed into existing enterprise datacenters, something that can't be said of Nvidia's latest generation of liquid-cooled Rubin GPUs.
The RebelRack will feature four of these nodes, each connected via quad-400 Gbps networking, for a total of 32 accelerators and 64 petaFLOPS of FP8 compute, 4.6 TB of HBM3e, and 153.6 TB/s of aggregate memory bandwidth.
For larger deployments, Rebellions is also developing what it calls the RebelPod, which can scale from eight to 128 nodes, each with eight Rebel100 accelerators interconnected using 800 Gbps Ethernet.
"Right now, people think of rack level. I think we're going to be thinking, in a few days from now, about row level and datacenter level," Choy said.
Compared to GPU systems, this isn't a lot of networking. Most HGX systems now feature at least one 800 Gbps NIC per GPU. Choy tells us that going forward, the network fabric is going to be a major focus for the company.
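The rack-level figures quoted above follow directly from multiplying out the per-card Rebel100 specs. A quick sanity check in Python (all per-card numbers are taken from the article; the aggregation itself is just arithmetic, and the pod-level compute total is not quoted in the piece, so only the accelerator count is checked there):

```python
# Per-card Rebel100 specs as quoted in the article
CARD_FP8_PFLOPS = 2.0   # 1 petaFLOP dense FP16, doubled at FP8
CARD_HBM_GB = 144       # four HBM3e stacks per card
CARD_HBM_BW_TBS = 4.8   # aggregate memory bandwidth per card

# RebelRack: four nodes of eight PCIe cards each
cards_per_rack = 4 * 8                                 # 32 accelerators
rack_fp8_pflops = cards_per_rack * CARD_FP8_PFLOPS     # 64 petaFLOPS FP8
rack_hbm_tb = cards_per_rack * CARD_HBM_GB / 1000      # 4.608 TB, quoted as 4.6 TB
rack_bw_tbs = cards_per_rack * CARD_HBM_BW_TBS         # 153.6 TB/s aggregate

# RebelPod tops out at 128 nodes of eight cards
pod_max_cards = 128 * 8                                # 1,024 accelerators

print(rack_fp8_pflops, round(rack_hbm_tb, 1), round(rack_bw_tbs, 1), pod_max_cards)
```

The article's "4.6 TB" is just the rounded 4.608 TB (32 × 144 GB); everything else matches exactly.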
As we've seen with other rack-scale systems from AMD and Nvidia, compute and networking are only two pieces of the puzzle; you also need software that can stitch everything together cohesively.
Rebellions' software stack is nothing exotic. We're told the platform runs on open source frameworks like vLLM, PyTorch, and Triton. For disaggregated inference, it's using llm-d, another open source framework that enables compute-heavy prefill operations on one set of accelerators and memory bandwidth-heavy decode operations on another.
"Everything's open source, from vLLM compiler all the way up to the very highest level of stack, Red Hat, OpenShift, and everything in between," Choy said. "If you've used any of these technologies in any other context, you already know how to use Rebellions."
We've heard similar claims from chipmakers before that haven't ended up being quite so easy to use. However, Rebellions is a member of the PyTorch Foundation, something that can't be said of many AI chip startups.
Of course, none of this is cheap, but Rebellions isn't hurting for cash. On Monday the startup raised $400 million in a pre-IPO funding round led by Mirae Asset Financial Group and the Korea National Growth Fund, both to support its expansion westward and to further the development of more capable and efficient AI accelerators and systems.
According to recent reports , the company could file for an IPO as soon as this year or early next year. ®
|
|
|
OpenAI Buys Some Positive News |
wired |
02.04.2026 19:29 |
0.764
|
| Embedding sim. | 0.8718 |
| Entity overlap | 0.2059 |
| Title sim. | 0.1702 |
| Time proximity | 0.9891 |
| NLP type | acquisition |
| NLP organization | OpenAI |
| NLP topic | enterprise ai |
| NLP country | |
Open original
Maxwell Zeff
Business
Apr 2, 2026 3:29 PM
OpenAI Buys Some Positive News
OpenAI is acquiring TBPN , a business talk show that’s popular among Silicon Valley elites, as it continues to battle its negative public image.
Photograph: Anna Moneymaker/Getty Images
OpenAI announced Thursday that it had acquired the online business talk show TBPN for an undisclosed sum. The move comes as OpenAI struggles with its public image, which has taken a significant hit in recent months.
Since launching in 2024, TBPN has risen in popularity among Silicon Valley circles by offering a daily livestream about the technology industry that’s seen as more tech-friendly than traditional outlets. The show's two hosts, John Coogan and Jordi Hays, offer real-time commentary on breaking news, cycle through viral social media posts, and interview executives from companies including Meta, Salesforce, Palantir, and OpenAI. It’s become especially popular among OpenAI staff and other AI researchers, many of whom are addicted to the social media platform X.
It’s hard to see how a media startup fits into OpenAI’s core business of selling ChatGPT, Codex, and a new super app in development to consumers and enterprises. In March, OpenAI’s CEO of applications, Fidji Simo, told staff in an all-hands meeting that the company needed to cancel its side projects and refocus around its core businesses.
In a memo to staff announcing the acquisition, Simo said the typical communications playbook does not apply to OpenAI. “We're not a typical company,” she said in the memo, which was also published as a blog. “We're driving a really big technological shift. And with the mission of bringing AGI to the world comes a responsibility to help create a space for a real, constructive conversation about the changes AI creates—with builders and people using the technology at the center.”
TBPN is a small business compared to OpenAI. The media firm says it generated $5 million in ad revenue last year and was on track to make more than $30 million in revenue in 2026, according to The Wall Street Journal . The show reportedly reaches around 70,000 viewers per episode across a variety of platforms. A source close to OpenAI says the company doesn’t expect TBPN to contribute financially to the business, though it will help with OpenAI’s communications strategy.
OpenAI has fallen under increased public scrutiny in recent months. After the company signed a deal with the Department of Defense in February, Anthropic’s Claude surged in downloads and claimed the top spot among Apple’s free apps. OpenAI’s leaders are also dealing with a growing QuitGPT movement , which is made up of people who vow to never use OpenAI’s products. OpenAI president Greg Brockman cited AI’s popularity issues as a core reason for his increased political spending .
The acquisition makes OpenAI the latest Silicon Valley player to try owning and operating a news business. In recent decades, there have been several notable examples of technology leaders purchasing media firms, including Jeff Bezos buying The Washington Post, Marc Benioff buying Time magazine, and Robinhood buying the newsletter company MarketSnacks. In each case, the acquisitions raised immediate questions about whether the outlets would remain truly independent. In her memo, Simo told staff that TBPN will retain editorial independence.
“ TBPN is my favorite tech show. We want them to keep that going and for them to do what they do so well,” said OpenAI CEO Sam Altman in a post on X. “I don't expect them to go any easier on us, [and I] am sure I'll do my part to help enable that with occasional stupid decisions.”
OpenAI said TBPN will continue to “run their programming, choose their guests, and make their own editorial decisions,” according to Simo’s memo. The company also said that TBPN will report directly to OpenAI’s VP of global affairs, Chris Lehane. WIRED previously reported how an economic research team under Lehane had struggled to report on AI’s negative impacts on the economy .
“Over the past year, we’ve had a front-row seat not just to OpenAI but to the entire ecosystem, covering the daily news, announcements, and launches in real time,” said Jordi Hays, cofounder and host of TBPN , in a statement. “While we’ve been critical of the industry at times, after getting to know Sam and the OpenAI team, what stood out most was their openness to feedback and commitment to getting this right. Moving from commentary to real impact in how this technology is distributed and understood globally is incredibly important to us.”
This TBPN deal comes just a week after OpenAI shuttered its AI social video product , Sora, as part of the company’s broader effort to focus its resources and leadership.
|
|
|
Sora’s shutdown could be a reality check moment for AI video | TechCrunch |
techcrunch |
29.03.2026 16:30 |
0.758
|
| Embedding sim. | 0.8693 |
| Entity overlap | 0.3636 |
| Title sim. | 0.2673 |
| Time proximity | 0.6964 |
| NLP type | other |
| NLP organization | OpenAI |
| NLP topic | video generation |
| NLP country | United States |
Open original
OpenAI announced this week that it’s shutting down its Sora app and related video models just six months after launching the app.
On the latest episode of TechCrunch’s Equity podcast , Kirsten Korosec, Sean O’Kane, and I debated what the decision means for OpenAI and for the industry more broadly. To some extent, the move seems consistent with what we’ve been hearing about OpenAI as it focuses on enterprise and productivity tools ahead of a possible IPO.
In fact, Kirsten suggested that OpenAI’s decision to shutter Sora was “a sign of maturity that was nice to see in an AI lab.”
But Sora’s shutdown — along with ByteDance’s reported delay in launching its Seedance 2.0 video model worldwide — could also be a reality check moment for the makers of AI video tools, and for evangelists who claim these tools will be replacing Hollywood anytime soon.
Read a preview of our conversation, edited for length and clarity, below.
Anthony: I think it’s worth highlighting that it’s not just the app. I mean, the app was particularly unappealing to me, at least, and I think to other people, because it was this idea of a social network without people, where it’s just nothing but slop.
But beyond the app, it seems like OpenAI is basically winding down pretty much everything it’s doing with video. According to The Wall Street Journal, which broke some of this news, it’s really about this idea that OpenAI is — in advance of potentially going public — really trying to focus on business products, enterprise products, programming products. [So] this consumer social app, [and] more broadly video, is not a priority right now.
Sean: Yeah, I never really used [the app]. The idea of it turned me off for a number of different reasons. And you know, it was a good reminder that OpenAI — and I don’t mean this to knock them down in really any way — but I think this was a reminder, probably, for them internally, of the element of luck […] in how successful ChatGPT became.
Clearly, there is something that is valuable there to people, I don’t want to take away from that, because you do not get to the usage numbers that we’ve heard reported from them without there being something that is working right — and even more so that it’s been kept up over a number of years and developed into something that stays meaningful to people.
But there was an element of Sora, when it came out, of like, “We built the most successful consumer product ever, and now we’re doing it again. And we’re going to bring in Disney and all this stuff.” I think this is just a really harsh reminder of like it’s not always going to be an absolute shortcut to the top of the greatest consumer products ever and that there really needs to be something that people feel like they’re getting some meaning out of it for it to stick around.
Kirsten: Yeah, I actually want to give OpenAI props for this decision, because we sometimes make fun of the whole idea of “move fast and break things,” but I think that there is some value [to] companies that can iterate very quickly and then kill off products that are not working and not feel a sense of failure behind it. I mean, there was real money that was lost. If you were to look at the deal with Disney, that was a billion-dollar deal, but if you look at — and we don’t have the insight into this because we’re not seeing their balance sheets — but what were they spending on this and what was the long-term value for the company?
And I think that while, sure, it was interesting to see what they could create, their decision to shutter it, to me, showed a sign of maturity that was nice to see in an AI lab.
Anthony: In terms of what it means for OpenAI, it seems very consistent with everything that we’ve been hearing about their strategy going forward. It doesn’t seem like a huge blow or anything like that in terms of how we think about the future of generative AI.
Particularly in video, it’s interesting because it also comes at this time that there’s been reporting around Seedance, which is the ByteDance generative AI model [for video]. There’s reports that [Seedance 2.0 has] been delayed because there’s engineering and legal questions and basically [figuring out], “Can we build IP protections into this?” Which apparently they hadn’t taken as seriously before.
And so, it’s this reality check moment. There were these really hyperbolic statements, including from people within Hollywood that [were] like, “We’re done, this is the future, it’s just typing in prompts and making feature films.” And it turns out that for all kinds of technical and legal reasons, it is not that easy and we are very, very far from that happening.
Sean: And the last thing I think we should say about this, too, is this is one of a number of decisions that appear to be happening after Fidji Simo came in [and began] sort of running the day-to-day operations. That’s just a huge dynamic that’s changed inside of OpenAI. And I think the further we get away from that moment of her being tapped to run the show, and especially these consumer products and decide the fate of them, the easier it’ll be to look back at this moment in time and think about how big a moment that was for this company.
Anthony Ha
Anthony Ha is TechCrunch’s weekend editor. Previously, he worked as a tech reporter at Adweek, a senior editor at VentureBeat, a local government reporter at the Hollister Free Lance, and vice president of content at a VC firm. He lives in New York City.
|
|
|
OpenAI executive shuffle includes new role for COO Brad Lightcap to lead 'special projects' | TechCrunch |
techcrunch |
03.04.2026 20:35 |
0.746
|
| Embedding sim. | 0.8459 |
| Entity overlap | 0.3529 |
| Title sim. | 0.12 |
| Time proximity | 0.9988 |
| NLP type | leadership_change |
| NLP organization | OpenAI |
| NLP topic | artificial intelligence |
| NLP country | United States |
Open original
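Each related-item block above pairs a composite score (e.g. 0.746) with four component signals. The aggregator's actual formula is not given anywhere in this dump; the following is only a minimal sketch assuming a simple weighted average, with illustrative weights that are pure assumptions:

```python
# Hypothetical composite relevance score for a related-article match.
# The weights below are assumptions for illustration only; the
# aggregator's real formula and normalization are not documented here.

def composite_score(embedding_sim, entity_overlap, title_sim, time_proximity,
                    weights=(0.6, 0.15, 0.1, 0.15)):
    """Weighted average of the four per-item similarity signals."""
    signals = (embedding_sim, entity_overlap, title_sim, time_proximity)
    return round(sum(w * s for w, s in zip(weights, signals)), 4)

# Signals from the related item above (Embedding sim., Entity overlap,
# Title sim., Time proximity):
score = composite_score(0.8459, 0.3529, 0.12, 0.9988)
```

With these made-up weights the sketch lands near, but not exactly on, the listed 0.746, which is expected since the real weighting is unknown.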
A handful of OpenAI executives are transitioning into new roles, according to a report from Bloomberg . An OpenAI spokesperson confirmed the personnel changes to TechCrunch.
CEO of AGI development Fidji Simo announced in a memo that Brad Lightcap, OpenAI’s COO, has a new job leading “special projects,” which will involve “complex deals and investments across the company.” He will report directly to CEO Sam Altman.
Denise Dresser, the former Slack CEO who recently joined OpenAI as chief revenue officer, will take over some of Lightcap’s commercial duties.
NEW: OpenAI’s Fidji Simo announced exec changes to staff today: she is taking medical leave for several weeks, COO Brad Lightcap is transitioning to a new role, and CMO Kate Rouch is stepping down to focus on her cancer recovery.
More here: https://t.co/EfAqZI7jN3 pic.twitter.com/KmWoXUG0Iu
— Shirin Ghaffary (@shiringhaffary) April 3, 2026
Simo also had news of her own to share: She will be taking medical leave for the next several weeks to navigate a neuroimmune condition.
“I have done everything possible to avoid it, but sadly my body isn’t cooperating,” Simo wrote in the memo obtained by Bloomberg.
“The timing is maddening because we have such an exciting roadmap ahead that the team is executing on, and I hate to miss even a minute of it,” she said.
While she is on leave, OpenAI co-founder and president Greg Brockman will manage product.
Kate Rouch, OpenAI’s marketing head, will also be stepping down from her role to focus on cancer recovery, but will return to a “different, more narrowly scoped role when her health allows,” the memo said. The company plans to search for a new CMO.
“We have a strong leadership team focused on our biggest priorities: advancing frontier research, growing our global user base of nearly 1 billion users, and powering enterprise use cases,” OpenAI told TechCrunch in a statement. “We’re well-positioned to keep executing with continuity and momentum.”
Update, 4/3/26, 6:25 PM ET with additional context around Dresser’s role.
Topics
AI , Brad Lightcap , Fidji Simo , OpenAI
Amanda Silberling
Senior Writer
Amanda Silberling is a senior writer at TechCrunch covering the intersection of technology and culture. She has also written for publications like Polygon, MTV, the Kenyon Review, NPR, and Business Insider. She is the co-host of Wow If True, a podcast about internet culture, with science fiction author Isabel J. Kim. Prior to joining TechCrunch, she worked as a grassroots organizer, museum educator, and film festival coordinator. She holds a B.A. in English from the University of Pennsylvania and served as a Princeton in Asia Fellow in Laos.
You can contact or verify outreach from Amanda by emailing amanda@techcrunch.com or via encrypted message at @amanda.100 on Signal.
|
|
|
OpenAI acquires TBPN |
openai |
02.04.2026 10:30 |
0.742
|
| Embedding sim. | 0.8815 |
| Entity overlap | 0.2 |
| Title sim. | 0.102 |
| Time proximity | 0.7292 |
| NLP type | acquisition |
| NLP organization | OpenAI |
| NLP topic | artificial intelligence |
| NLP country | |
Open original
OpenAI acquires TBPN to accelerate global conversations around AI and support independent media, expanding dialogue with builders, businesses, and the broader tech community.
|
|
|
OpenAI alums have been quietly investing from a new, potentially $100M fund | TechCrunch |
techcrunch |
06.04.2026 21:54 |
0.741
|
| Embedding sim. | 0.925 |
| Entity overlap | 0.0577 |
| Title sim. | 0.2791 |
| Time proximity | 0.1399 |
| NLP type | funding |
| NLP organization | Zero Shot |
| NLP topic | venture capital |
| NLP country | United States |
Open original
A new venture capital fund with deep ties to OpenAI has made its first close on its $100 million goal, the founders tell TechCrunch. The partners have already written a couple of checks.
The fund is called Zero Shot (a play on the AI training term) and its co-founding team includes several OpenAI OGs who found themselves becoming VCs almost by serendipity.
Three of the founding partners hail from OpenAI. Evan Morikawa, the former head of applied engineering during the launch of DALL·E and ChatGPT through Codex, is now at robotics startup Generalist. Andrew Mayne, OpenAI’s original prompt engineer, is well-known as the host of The OpenAI podcast. Mayne also founded Interdimensional , an AI deployment consultancy. And Shawn Jain is an engineer and former researcher at OpenAI, who then became a VC and is a founder of his own GenAI startup, Synthefy.
The alums are joined by VC Kelly Kovacs, previously a founding partner at 01A, the growth-stage venture firm founded by Dick Costolo and Adam Bain. The fifth founding member of the fund is Brett Rounsaville, formerly of Twitter and Disney, who is also CEO at Mayne’s Interdimensional.
Zero Shot fund founders, from left to right: Evan Morikawa, Shawn Jain, Andrew Mayne, Kelly Kovacs, and Brett Rounsaville. Image Credits: Zero Shot
The OpenAI alums have “been friends for years,” Mayne told TechCrunch, having worked together at the model maker from before it released ChatGPT through its wildest growth years.
After leaving, they all found themselves constantly being hit up to consult for VCs about emerging AI tech, and by founder friends wanting advice. That’s what propelled Mayne to start his consulting company.
“Some of our friends were coming out of OpenAI and interested in doing companies,” Mayne said.
The alums saw gaping holes between the many AI startups being funded and what the market really needed.
“Maybe we should do our own fund, because we think we have a pretty good sense of where things are headed, and we have this great access to people who we think are incredible builders,” Mayne said, recalling the decision.
After conversations with institutions and family offices and closing the first $20 million, the partners set their sights on a $100 million initial fund. They’ve already written a few checks.
Zero Shot backed early OpenAI product manager Angela Jiang and her startup Worktrace AI. The startup is developing an AI-based management software platform to help enterprises automate tasks by first discovering what should be automated. Worktrace AI raised a $10 million seed round from notables like Mira Murati and OpenAI’s Fund, PitchBook estimates.
The team also invested in Foundry Robotics, a startup working on next-gen, AI-enhanced factory robotics. It recently raised a $13.5 million seed , led by Khosla Ventures. Zero Shot has already invested in a third startup, too, which is still in stealth.
The AI bets they’re skipping
Zero Shot’s founders say they understand the direction of AI better than many a VC. That helps them pick startups to back, but also identify which ideas to avoid.
Mayne, for instance, is bearish on most iterations of vibe coding because he foresees that the model makers, with their coding expertise, are going to quickly make subscriptions to such platforms feel unnecessary.
Morikawa tells TechCrunch that, with his deep knowledge of AI and robotics, he’s not a fan of the many “ergo-centric video data companies right now in robotics.” Those are startups working on embodiment training data for robotics.
“There’s a lot of hoping and praying going on right now that someone in the research world will figure out how to transfer the embodiment gap,” Morikawa said of such video data, but “that’s nowhere near possible.”
Mayne is equally skeptical of most startups doing “digital twins.” He’s done due diligence on a few, including building a reasoning model to test them, and has concluded that a regular LLM model works just as well, he said.
“There is a real skill in knowing how to predict where these models will be going next, because it’s extremely not obvious. It’s not linear,” Morikawa said.
In addition to the investing founders, Zero Shot has some recognizable names who have agreed to be advisors, and will get a share of the “carried interest” that the fund returns. The advisors include Diane Yoon, OpenAI’s former head of people; Steve Dowling, the former head of communications at OpenAI and Apple; and Luke Miller, former product leader at OpenAI.
Topics
AI , Exclusive , OpenAI , TC , Venture , venture capital , zero shot
Julie Bort
Venture Editor
Julie Bort is the Startups/Venture Desk editor for TechCrunch.
You can contact or verify outreach from Julie by emailing julie.bort@techcrunch.com or via @Julie188 on X.
|
|
|
OpenAI kills Sora, becomes product assassin |
the_register_ai |
25.03.2026 16:36 |
0.738
|
| Embedding sim. | 0.8445 |
| Entity overlap | 0.2692 |
| Title sim. | 0.1724 |
| Time proximity | 0.8852 |
| NLP type | other |
| NLP organization | OpenAI |
| NLP topic | generative ai |
| NLP country | |
Open original
OpenAI now gets to decide which type of product assassin it will become
AWS, Google, Broadcom, or Netscape?
Simon Sharwood
Wed 25 Mar 2026 //
16:36 UTC
OpenAI on Wednesday announced the death of its controversial Sora video creation tool, just two days after publishing a guide on how to use it well.
Like so many AI products, Sora was capable of creating revolting content and blatant copyright abuse. OpenAI tidied up those messes and then signed a deal with Disney that saw the House of Mouse promise to inject $1 billion into the AI upstart and explore using its tools.
On Monday, OpenAI was still promoting safe use of Sora on its website. On Tuesday it used a less visible channel, an X post, to announce “We’re saying goodbye to the Sora app ... We’ll share more soon, including timelines for the app and API and details on preserving your work.”
Disney then bailed on its deal.
Thus endeth OpenAI’s video generation efforts, for now at least.
The death of Sora follows last week’s Wall Street Journal report that OpenAI intends to refocus on business users.
It’s also OpenAI’s second recent rapid reversal, after the January decision to deprecate the GPT-4o model just nine months after releasing it, and with just two weeks’ notice.
Technology buyers know that their suppliers sometimes kill products.
Google has often been cast as a product-slaying villain, even prompting the creation of killedbygoogle.com to mourn its murderous ways.
The Chocolate Factory sometimes kills products because they’re just bad and nobody uses them – hello, Wave – but on other occasions just decides they’re not needed any more, as was the case with the basic HTML version of Gmail . On at least one other occasion Google killed a product because it wanted users to start paying for it. In that case, users angrily pointed out that Google had promised the legacy version of its Workspace suite would always be free, and it reversed its decision.
Overall, however, Google has not often disrupted its business customers.
AWS has been more inconvenient, launching multiple overlapping products and sometimes killing the least popular.
Users of deprecated Google and AWS products face inconvenience, but generally get fair warning and have alternatives.
Broadcom, and its spiritual sibling Cloud Software Group, offers a muddier example of product death strategies, by keeping software alive but only offering it in bundles – completely changing the way it is sold and telling users they’ll be better off this way. Atlassian has made the same argument when killing on-prem products.
And then there was Netscape, which in its late 1990s rush to dominate the twin markets for enterprise-grade web infrastructure software and consumer browsers created a fog of vaporware and got lost trying to bring it all to market. The company infamously announced products and changed strategy before it could finish them, in the process creating a codebase so convoluted it had to rewrite its core offerings. Trying to do so left it dead in the water as rivals, especially Microsoft, surged ahead.
Netscape offers a warning to OpenAI. The company’s leadership was arguably more experienced and accomplished than the AI upstart’s, and it faced only one direct foe rather than the flotilla of AI aspirants in today’s global software market. The company still struggled to build a disciplined engineering organization, never managed to overcome Microsoft’s ability to block its access to users, and eventually collapsed under the weight of its own ambitions.
OpenAI has its own problems with Redmond, which is both a collaborator and a competitor. The AI pioneer has also spread itself thin: the Stargate datacenter project, making OpenClaw fit for mass consumption, brain-computer interfaces, and maybe even consumer hardware.
The demise of Sora shows OpenAI’s leadership knows they must become product-killers, and that they are currently happy to leave a bloody mess behind them by failing to provide a plan for the future of users’ content – while also losing one of the biggest-name customers on the planet. ®
|
|
|
[Translation] OpenAI, Disney, $1B, and a collapse in 4 months. What happened to Sora? |
habr_ai |
01.04.2026 06:10 |
0.737
|
| Embedding sim. | 0.8449 |
| Entity overlap | 0.3 |
| Title sim. | 0.1028 |
| Time proximity | 0.9477 |
| NLP type | other |
| NLP organization | OpenAI |
| NLP topic | generative ai |
| NLP country | |
Open original
Billions of dollars burned, and almost nothing to show for it in revenue. That is the legacy of Sora, announced by OpenAI in December 2025 and officially shut down on March 24, 2026.
This is not just the collapse of another AI project. The fallout could be serious for Sam Altman, for OpenAI, and for the AI bubble as a whole.
Read more
|
|
|
OpenAI, not yet public, raises $3B from retail investors in monster $122B fund raise | TechCrunch |
techcrunch |
31.03.2026 21:25 |
0.736
|
| Embedding sim. | 0.8568 |
| Entity overlap | 0.2273 |
| Title sim. | 0.0545 |
| Time proximity | 0.9499 |
| NLP type | funding |
| NLP organization | OpenAI |
| NLP topic | foundation models |
| NLP country | United States |
Open original
OpenAI has closed a deal to raise $122 billion at an $852 billion valuation, its largest funding round to date as the company is expected to hit the public markets this year.
The round will add to OpenAI’s war chest as it spends enormous amounts of money on AI chips, data center buildouts, and hiring top talent.
SoftBank co-led the round alongside Andreessen Horowitz, D.E. Shaw Ventures, MGX, TPG, and T. Rowe Price Associates, with participation from Amazon, Nvidia, and Microsoft.
About $3 billion came from individual investors via bank channels. OpenAI is also going to be included in several ETFs managed by ARK Invest, giving more people access to the private company’s stock to broaden its shareholder base in advance of its reportedly upcoming IPO .
OpenAI also said it expanded its revolving credit facility to about $4.7 billion, supported by several of the top global banks. The facility remains undrawn, the company said, which suggests it’s bolstering its financial flexibility as it ramps spending on compute and infrastructure, rather than responding to near-term liquidity needs.
The company’s press release on the raise reads less like a typical blog post than a draft of an S-1; it’s heavy on the flywheel metaphors, digs into revenue per compute unit, and offers the kind of TAM-justifying language that institutional investors drool over.
OpenAI included updates on revenue and user numbers, claiming it’s generating $2 billion in revenue per month and taking a shot at competitors: “At this stage, we are growing revenue four times faster than the companies who defined the Internet and mobile eras, including Alphabet and Meta.”
The company also said it has more than 900 million weekly active users in consumer AI and over 50 million subscribers, with search usage nearly tripling in the last year. OpenAI said its ads pilot is bringing in more than $100 million in annual recurring revenue in under six weeks, opening up a serious potential revenue stream for the company that built its user base without ads.
The AI giant claims momentum is mirrored on the business side, which now makes up 40% of its revenue (up from around 30% last year ) and is “on track to reach parity with consumer by the end of 2026.” Its growth across agentic workflows, the company said, is driven by its newest model GPT-5.4.
Finally, OpenAI also called itself an “AI superapp,” making it clear that it wants to own the primary interface for how people use AI.
All of it adds up to a single message: OpenAI is building its public market narrative in real time, and this round is as much about anchoring IPO expectations as it is about the capital itself.
Topics
AI , Andreessen Horowitz , Fundraising , OpenAI , openai fundraise , Softbank
Rebecca Bellan
Senior Reporter
Rebecca Bellan is a senior reporter at TechCrunch where she covers the business, policy, and emerging trends shaping artificial intelligence. Her work has also appeared in Forbes, Bloomberg, The Atlantic, The Daily Beast, and other publications.
You can contact or verify outreach from Rebecca by emailing rebecca.bellan@techcrunch.com or via encrypted message at rebeccabellan.491 on Signal.
|
|
|
[Translation] Disney canceled its $1B investment. Sora is shut down. Has the bubble started to burst? |
habr_ai |
07.04.2026 08:29 |
0.733
|
| Embedding sim. | 0.879 |
| Entity overlap | 0.625 |
| Title sim. | 0.2655 |
| Time proximity | 0.1291 |
| NLP type | other |
| NLP organization | OpenAI |
| NLP topic | generative ai |
| NLP country | |
Open original
When Altman launched it at the end of 2024, the internet was flooded with a wave of 20-second videos: technically impressive, but practically useless for anything serious.
Despite the obvious limitations, technology enthusiasts declared this was enough to transform the entire video production industry. The gap between actual capabilities and expectations was colossal.
Now, a little over a year later, OpenAI has shut down the Sora model and its companion app. What's more, Disney has canceled its planned $1 billion investment to license intellectual property for use with Sora.
Many point to Sora's fate as a sign that the AI bubble is starting to deflate. Is it? The answer is both yes and no. Let me explain.
Read more
|
|
|
Intel will help build Elon Musk’s Terafab AI chip factory |
the_verge_ai |
07.04.2026 15:43 |
0.732
|
| Embedding sim. | 0.8587 |
| Entity overlap | 0.1944 |
| Title sim. | 0.1091 |
| Time proximity | 0.8202 |
| NLP type | partnership |
| NLP organization | Intel |
| NLP topic | ai hardware |
| NLP country | United States |
Open original
Elon Musk's Terafab AI chip project in Austin, Texas, is gaining a crucial new partner: Intel.
On Tuesday, the American chipmaker announced it was signing on to help design and build the sprawling facility, which would supply AI chips to Musk's two companies, SpaceX ( newly merged with xAI ) and Tesla. Musk needs AI chips to power his plans to build a "robot army" that includes self-driving cars and humanoid robots, as well as for the data centers he plans on launching into space. SpaceX plans on making its initial public offering later this year.
"Terafab will close the gap between today's chip production and the future's demand - a future …
Read the full story at The Verge.
|
|
|
OpenAI shuts down Sora while Meta gets shut out in court | TechCrunch |
techcrunch |
27.03.2026 13:30 |
0.727
|
| Embedding sim. | 0.8488 |
| Entity overlap | 0.12 |
| Title sim. | 0.2903 |
| Time proximity | 0.618 |
| NLP type | other |
| NLP organization | OpenAI |
| NLP topic | ai infrastructure |
| NLP country | United States |
Open original
When an 82-year-old Kentucky woman was offered $26 million from an AI company that wanted to build a data center on her land, she said no. Sure, that same company can try to rezone 2,000 acres nearby anyway, but as AI infrastructure stretches further into the real world, the real world is starting to push back.
That tension is everywhere this week, from OpenAI shutting down its Sora app to courts finally starting to hold social platforms like Meta accountable . On this episode of TechCrunch’s Equity podcast, Kirsten Korosec, Anthony Ha, and Sean O’Kane dig into what it looks like when the AI hype cycle meets reality.
Subscribe to Equity on YouTube , Apple Podcasts , Overcast , Spotify and all the casts. You also can follow Equity on X and Threads , at @EquityPod.
Topics
AI , app shutdown , drones , Kleiner Perkins , Meta , meta lawsuit , OpenAI , Roundup , Social , sora , Startups
Theresa Loconsolo
Audio Producer
Theresa Loconsolo is an audio producer at TechCrunch focusing on Equity, the network’s flagship podcast. Before joining TechCrunch in 2022, she was one of 2 producers at a four-station conglomerate where she wrote, recorded, voiced and edited content, and engineered live performances and interviews from guests like lovelytheband. Theresa is based in New Jersey and holds a bachelors degree in Communication from Monmouth University.
You can contact or verify outreach from Theresa by emailing theresa.loconsolo@techcrunch.com .
|
|
|
NVIDIA Extreme Co-Design Delivers New MLPerf Inference Records |
nvidia_dev_blog |
01.04.2026 15:00 |
0.726
|
| Embedding sim. | 0.8324 |
| Entity overlap | 0.3333 |
| Title sim. | 0.0382 |
| Time proximity | 0.9999 |
| NLP type | other |
| NLP organization | |
| NLP topic | ai infrastructure |
| NLP country | |
Open original
Co-designed hardware, software, and models are key to delivering the highest AI factory throughput and lowest token cost. Measuring this goes far beyond peak chip specifications. Rigorous AI inference performance benchmarks are critical to understanding real-world token output, which drives AI factory revenue. MLPerf Inference v6.0 is the latest in a series of industry benchmarks that measure…
|
|
|
The Biggest AI-as-a-Service Company in History |
ai_supremacy |
10.04.2026 09:33 |
0.721
|
| Embedding sim. | 0.8288 |
| Entity overlap | 0.2093 |
| Title sim. | 0.1529 |
| Time proximity | 0.8836 |
| NLP type | product_launch |
| NLP organization | Anthropic |
| NLP topic | enterprise ai |
| NLP country | |
Open original
Yeah, it’s Anthropic.
Anthropic is in the Zeitgeist, no longer a glorified AI research lab startup. Now a full-on product explosion is happening. This is what AI supremacy in coding models, agentic AI, and maybe even reasoning models looks like (revenue growth is staggering).
“Anthropic: (in philosophy and cosmology) involving or concerning the existence of human life, especially as a constraint on theories of the universe.
“anthropic reasoning is only interesting in a scientific sense if it can help us understand how the universe came to be the way it is”
(typically of environmental degradation) caused by human beings; anthropogenic.
“anthropic climate change”
What is even Anthropic?
It’s mid-April 2026. Everywhere I look recently I see articles about Anthropic and its many Claude updates. I get it: Anthropic is making a huge push into Enterprise AI, will IPO in about six months, and just released a preview of one of the most impressive models we’ve seen, called Mythos . Anthropic is surging ahead of OpenAI, with repercussions for the entire tech stack and for incumbents. Tech layoffs feel different this year. (Oracle, Disney, Epic Games, Atlassian (look at the stock), Block, Workday, etc.)
Fastest Growing Enterprise AI Company Ever
Anthropic is arguably the fastest-scaling software company ever, with compute diversified across Nvidia GPUs as well as Amazon and Google AI chips. It’s the foundational Agentic AI-as-a-Service company of the Generative AI era (2023–2026, so far). Its API benefits from the explosive growth of Cursor, and it’s landing many more Enterprise customers with contracts over $1 million.
What is this List?
But since most of my readers are primarily AI enthusiasts, Anthropic’s recent product release timeline has been a bit daunting to follow. I wanted to list some of the best article resources and guides I could find on this platform. This is essentially a Friday listicle, mostly of Claude guides. Clicking on one of these links supports some of the platform’s top AI navigators, educators, analysts, and enthusiasts.
In a staggering window between February and April 2026, Anthropic's run-rate jumped from $14 billion to $30 billion.
This week Anthropic announced its Managed Agents solution.
Read the Blog
According to Deedy Das, a partner at Menlo Ventures, which invests in Anthropic:
“Claude Mythos just obliterated every single benchmark in AI.”
It might take some time for us to get our hands on the model, since they are prioritizing trust and safety.
Claude’s Good Times Begin
Hilariously, it’s turned every AI creator I know into a booster for this company. Instead of simply repeating another guide, I wanted to give you, my readers, a kind of directory of some of the best content I could find around Claude’s various new products.
Remember, Claude Code only became widely available to users by May 22, 2025, not even a year ago.
What are Managed Agents?
Just a small note on the latest product:
Claude Managed Agents is a suite of composable APIs for building and deploying cloud-hosted agents at scale , handling sandboxed code execution, checkpointing, credential management, scoped permissions, and end-to-end tracing for you. These are suitable for businesses of all sizes, but especially large companies.
It's best for workloads that need long-running execution (tasks that run for minutes or hours with multiple tool calls) and secure, containerized cloud infrastructure, while requiring minimal infrastructure work on your end.
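As a purely hypothetical illustration of the concepts the announcement lists (checkpointing, scoped permissions, resumable long-running execution), here is how such an agent loop might be modeled in plain Python. None of these class or method names come from Anthropic's actual API; everything below is invented for illustration.

```python
import json
import tempfile
from pathlib import Path

class ManagedAgent:
    """Toy agent runner: executes a list of steps, checkpointing state
    after each one so a crashed or interrupted run can resume where it
    left off, and refusing tools outside its permission scope."""

    def __init__(self, checkpoint_path: Path, allowed_tools: set[str]):
        self.checkpoint_path = checkpoint_path   # durable state (checkpointing)
        self.allowed_tools = allowed_tools       # scoped permissions

    def _load(self) -> dict:
        # Resume from an existing checkpoint if one is present.
        if self.checkpoint_path.exists():
            return json.loads(self.checkpoint_path.read_text())
        return {"completed": [], "results": {}}

    def run(self, steps: list[tuple[str, callable]]) -> dict:
        state = self._load()
        for name, tool in steps:
            if name in state["completed"]:
                continue                         # already done: skip on resume
            if name not in self.allowed_tools:
                raise PermissionError(f"tool {name!r} not permitted")
            state["results"][name] = tool()
            state["completed"].append(name)
            # Persist after every step so progress survives a crash.
            self.checkpoint_path.write_text(json.dumps(state))
        return state
```

The design point the sketch tries to capture is that for minutes- or hours-long agent tasks, durable per-step state and an explicit tool allowlist are what make "managed" execution safe to offer as a service.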
So that’s my intro; here’s my list. (I will try to be brief, but also give you some weekend reading if you want to catch up on your AI skills.)
Click the links to learn more. (I try to keep paid articles to a minimum in the below list, for learning purposes)
Search Claude AI on Substack.
A Short Claude Learning Directory:
I think if you read and scan through some of these articles, it’s fair value for your time. Keeping up to date with Claude AI in practice is likely a good idea.
Scan these titles carefully 👀 (These were all hand picked by me over a few days)
Why Anthropic believes its latest model is too dangerous to release , by (Understanding AI)
The Non-Technical Person’s Complete Guide to Claude Code , by
How to Onboard to Claude Without the Learning Curve , by (Build to Launch)
AI Tooling for Software Engineers in 2026, by (The Pragmatic Engineer)
Anthropic Just Passed OpenAI in Revenue . Spending 4x Less, by (The AI Corner) 🔥
How to setup Claude Code (without coding), by (How to AI)
39 Claude Skills Examples to Transform How You Work , by (AI Blew my Mind)
Make the Most of Claude Code: 15 Projects From Your First Prompt to an AI System That Runs Itself , by (Build to Launch)
Turn Claude Cowork Into Your Personal COO 🧠, by
Claude Just Changed Content Creation Forever! (Tutorial) , by
Claude Dispatch and the Power of Interfaces, by (One Useful Thing)
The Ultimate Guide to Building Your Agentic AI Workflow With Claude Cowork , by (The AI Maker)
Guide to Claude Cowork (without coding) by (How to AI)
Claude Cowork: The Ultimate Guide for PMs , by (The Product Compass)
How to Build Product Strategy in the Age of AI: Step-by-Step with Claude Code , by (Product Growth)
Head of Claude Code : What happens after coding is solved | Boris Cherny, by
Claude Cowork Guide for Power Users: 50+ Tested Tips on Plugins, Skills , Sub-Agents, and Memory, by
How to set up Claude Cowork Projects , by (Artificial Corner)
We Tried 100 Claude Skills. These Are The Best , by (Multiple authors, Artificial Corner).
Claude Skills Are Taking the AI Community by Storm, by
Claude Cowork: 10 Use Cases I Tested + 67 More by Profession , by (AI Blew my Mind)
10 Claude Cowork Workflows That Actually Work , by (The AI Corner)
You’re Using Claude Wrong! Here’s How to Be Ahead of 99% of Users , by (Artificial Corner).
Everyone should be using Claude Code more, by (Lenny’s Newsletter)
Forward-deployed AI consulting: the new world of AI Agents , by (AI Opportunity)
Why Anthropic Thinks AI Should Have Its Own Computer (a Podcast interview)— Felix Rieseberg of Claude Cowork & Claude Code Desktop, by
Anthropic’s New AI Report Accidentally Reveals an Industry-Sized Weak Spot , by (The Algorithmic Bridge)
Turn Claude Into Your Personal AI Operating System , by (AI Blew my Mind)
Before You Use Claude Cowork, Build This First , by
How to Set up Claude for Non-Coders , by 🔥
Perplexity Computer vs Claude Code vs Cowork vs Manus: Tested Side by Side, by
The Claude Dispatch Guide : 48 Hours Running AI Agents From My Phone, by (The Product Compass)
The PM’s Guide to Agent Distribution: MCP Servers, CLIs, and AGENTS.md , by (Product Growth)
How I Connected NotebookLM to Claude and Changed How I Do Research Forever, by (The AI Maker)
Claude Code and Codex bet on different harnesses , by
How to build an AI boardroom in Claude Code , by (Insanely Human)
How to use Claude Code to build a board room of world class Product Mentors , (Department of Product)
Claude Team is Shipping Like Crazy: 74 Releases in 52 Days , by (AI Product Management)
How to use Claude Cowork as a Competitive Intelligence System , by (Department of Product)
The Guide to Claude Code for PMs , by (The Product Compass)
The Pentagon is making a mistake by threatening Anthropic , by (Understanding AI)
On Claude Mythos : “New Sages Unrivalled”, by (Hyperdimensional)
Is Claude Cowork safe ? by & (wonderingabout.ai)
Claude.md Full guide , by (Product Market Fit and AI Opportunity)
The Meaning of Anthropic vs the Pentagon , by (Hyperdimensional, Persuasion) 🔥
Claude Managed Agents Explained , by (Emerging AI)
Claude Skills: The Feature That Saves You 200 Hours , by (The VC Corner)
Anthropic Responsible Scaling Policy v3: Dive Into The Details by (Don’t Worry about the Vase)
The One-Person Unicorn 🦄, by
AI Just Entered Its Manhattan Project Era , (The Algorithmic Bridge)
Are AI agents actually slowing us down ? By (The Pragmatic Engineer)
AI #163: Mythos Quest By (Don’t Worry about the Vase)
Components of A Coding Agent , by (Ahead of AI)
Model Context Protocol (MCP) Explained , by
How to Create Interactive Chats with Claude AI , by (How I AI)
Anthropic just solved the hardest part of building AI agents , by (The AI Corner)
A Disruptive Moment in Time… by (Megatrends)
What should we take from Anthropic’s (possibly) terrifying new report on Mythos? by (Marcus on AI)
Claude Mythos and misguided open-weight fearmongering , by (Interconnects)
AI’s Acceleration Paradox , by
Connectors vs Skills vs Projects vs Custom MCP : What Each Layer Is Actually For , by
How to Install Claude Excel , by (How to AI)
A beginner’s guide to using Claude , By (Really Rich)
MCP - A Deep Dive, by (The System Design Newsletter)
6 Ways I Cut My Claude Token Usage in Half! by
If someone you know could use this list, please share it for the benefit of others and the above writers.
Share
Illustration by Daniel Liévano
If this is the fastest growing AI company on Earth and in human history, I guess I understand why guides for its new products are in hot demand. 🔥
The Claude Learning Explosion
Many lucrative AI creators I know are (almost exclusively) writing about Claude now. I guess Anthropic owns the limelight ✨ (or, as the Twitter AI bros used to say: “what a time to be alive” 😂).
Click on the images to visit the Newsletter.
How to AI
The Product Compass
The AI Corner
The Artificial Corner
Michael Crist
AI Blew my Mind
Prosper
Product with Attitude
The AI Maker
Build to Launch
I guess it’s profitable just to write about what’s trending. 🤷🏻‍♂️ Also, why not go to the source:
Anthropic Academy for learning with Claude AI
Claude for Work
Claude for Personal | Daily Tasks
Claude API Development Guide
Go Deeper with Anthropic Courses
Course: Introduction to Claude Cowork
Course: Claude Code in Action
Best Practices for Claude Code
Extend Claude with Skills
Claude Code Docs: Getting Started
What other podcasts, YouTube channels, courses or resources have you found that help with learning about Claude AI and enable you to wrap your mind around AI developments?
Leave a comment
I’m not a huge YouTube watcher but you can go deeper with some of your favorite authors and analysts:
YouTube Channels of note related to AI
- The Dwarkesh Patel YouTube
Sebastian Raschka - by
- How I AI
B. Jones - AI News and Strategy Daily (also a famous TikTok AI educator)
ChinaTalk with Sc
No Priors - A VC podcast, but usually good.
Department of Product , with
Hard Fork , featuring Casey Newton (who’s the other guy again?)
Machine Learning Street Talk (kind of legendary)
Yannic Kilcher - usually breakdowns of AI Papers
Underfitted -
SemiAnalysis -
SAIL -
Jasmine Sun
Asianometry (another legend)
AI Proem , with
High Capacity , with
How to understand the “Layers of Claude”:
By Ruben Hassid, of How to AI Newsletter.
Anyway, I hope you found some value in these lists. This onslaught of Claude guides in April 2026 is way bigger than the January 2025 DeepSeek moment. See you next time.
Edit: It’s my prediction that Anthropic becomes the biggest “AI-as-a-Service company” in the world. One could argue today that’s Amazon or Nvidia. Anthropic and OpenAI won’t just make their own custom chips with Broadcom; they will start their own cloud computing services as hyperscalers in their own right.
|
|
|
The next phase of enterprise AI |
openai |
08.04.2026 14:00 |
0.717
|
| Embedding sim. | 0.799 |
| Entity overlap | 0.3636 |
| Title sim. | 0.1633 |
| Time proximity | 0.9988 |
| NLP type | product_launch |
| NLP organization | OpenAI |
| NLP topic | enterprise ai |
| NLP country | |
Open original
OpenAI outlines the next phase of enterprise AI, as adoption accelerates across industries with Frontier, ChatGPT Enterprise, Codex, and company-wide AI agents.
|
|
|
The vibes are off at OpenAI |
the_verge_ai |
08.04.2026 13:47 |
0.716
|
| Embedding sim. | 0.8532 |
| Entity overlap | 0.0789 |
| Title sim. | 0.0909 |
| Time proximity | 0.7626 |
| NLP type | other |
| NLP organization | OpenAI |
| NLP topic | foundation models |
| NLP country | |
Open original
OpenAI is in a relatively precarious position. The company is and has been a funding behemoth - just over a week ago, it closed $122 billion in funding at a post-money valuation of $852 billion. It's potentially planning for an IPO later this year. ChatGPT's longtime lead in consumer-facing AI led it to name-brand status akin to "Kleenex" for tissues. But in recent months, a slew of executive reshufflings, discontinued projects, and other news has raised questions about how stable the company really is - and how long it may be able to stay on top.
OpenAI's current batch of public controversies started early in the year. At the end of Febru …
Read the full story at The Verge.
|
|
|
SpaceX’s AI Endgame: Owning the Infrastructure Layer of Intelligence |
ai_supremacy |
06.04.2026 09:31 |
0.713
|
| Embedding sim. | 0.8391 |
| Entity overlap | 0.0426 |
| Title sim. | 0.0833 |
| Time proximity | 0.8938 |
| NLP type | funding |
| NLP organization | SpaceX |
| NLP topic | ai infrastructure |
| NLP country | United States |
Open original
SpaceX
Good Morning,
With NASA’s Artemis II mission (April 1st to 11th, 2026) going around the Moon right now, there’s a lot of buzz for the aspirational Lunar economy that’s coming. But what does this have to do with AI?
From historic space missions to historic IPOs: SpaceX’s IPO, the inclusion of xAI, and likely, in 2027, Tesla merging into the company as well, is going to be a sight to behold. This could also help fund Elon Musk’s AI and vast orbital datacenter ambitions.
The Space race has AI characteristics now . The race to energy, compute and new resources.
Artemis II could be the last dance of NASA in a manner of speaking. Follow live.
Why it Matters?
These three IPOs: SpaceX , OpenAI and Anthropic in the next year could help define the future of AI for the next decade.
The size of the IPOs and the hype behind them is unprecedented in the history of technology. The Elon Musk vs. Sam Altman story is about to get a lot more interesting.
Axios
I remember when it was a rarity for a company to be worth over $2 trillion (that wasn’t too long ago). And yet, SpaceX boosted its target IPO valuation above $2 trillion, Bloomberg News reported on Thursday. SpaceX with xAI is projected to generate only between $24 billion and $30 billion in total revenue for 2026, the bulk of that (as of early 2026) made by Starlink. That would suggest a price-to-sales ratio easily over 64, more than twice Nvidia’s historical peak in 2024. And that’s being optimistic. This would also mean SpaceX would be valued above every S&P 500 company except Nvidia, Apple, Alphabet, Microsoft, and Amazon.
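As a quick sanity check on the figures quoted above (a roughly $2 trillion target valuation against $24–30 billion in projected 2026 revenue), the implied price-to-sales multiples work out like this:

```python
# Back-of-envelope check using the post's own figures, in $ billions.
valuation_bn = 2_000                      # ~$2T target IPO valuation
revenue_low_bn, revenue_high_bn = 24, 30  # projected 2026 revenue range

ps_at_high_rev = valuation_bn / revenue_high_bn  # ~66.7x (best case)
ps_at_low_rev = valuation_bn / revenue_low_bn    # ~83.3x (worst case)
```

So even at the top of the revenue range, the multiple clears 64x, consistent with the "easily over 64" claim.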
Tesla likely to Merge with SpaceX Post IPO
If we assume Tesla will be merged into SpaceX in 2027, the combined company could in theory be worth over $3 trillion. Tesla’s market cap at the time of writing is still over $1 trillion. Then there are the stark ambitions of TeraFab, a mega $20–$25 billion joint venture between Tesla, SpaceX, and xAI to create a massive, vertically integrated AI chip manufacturing factory.
SpaceX is aiming to raise at least approximately $75 billion in its initial public offering. It is targeting a June 2026 IPO date; that will be tough to make, but it would put the company around six months ahead of OpenAI, which will likely IPO in December 2026 or early 2027. Anthropic is likely to IPO in November 2026. Even as Elon Musk’s companies struggle to remain competitive, perhaps SpaceX has a role to play as some kind of future catalyst.
The complexity and feasibility of both TeraFab and orbital datacenters at scale deserve their own deep dives and are way beyond the scope of this article. The magnitude of this vision dwarfs Tesla robotaxis and Optimus robots, neither of which has manifested in a timely fashion thus far.
Teslarati. Tesla seems confused about its future. Now TeraFab and Orbital Datacenter further complicates future promises.
The enormity of Musk’s projections for TeraFab, orbital datacenters, humanoid robotics, a viable (“self-growing”) Lunar colony, and Mars plans is truly the stuff of science fiction. He’s not alone; Blue Origin should IPO next year (2027 or early 2028) as well. The speculative nature of Elon Musk’s vision for SpaceX, a company founded in 2002, is not extremely realistic. This is not a startup; this is a 25-year first-mover rocket company that suddenly has a lot of moving parts. The proceeds of the June (or later) 2026 IPO of SpaceX will presumably fund the following:
Starship & Starlink Scaling
Orbital datacenters
xAI datacenters and compute
TeraFab
Scaling Optimus humanoid robots
A Lunar Colony / Outpost
The eventual colonization of Mars
SpaceX’s Y & Zs 💥
SpaceX’s IPO narrative is making Tesla robotaxis look like a walk in the park. With such lofty goals, SpaceX is taking on serious execution risk.
SpaceX’s IPO valuation is going to get some scrutiny: barely any ARR, a lot of debt, high cash-burn rates.
The valuation compared to Amazon is fairly nonsensical whether the IPO values the company at $1.75 trillion or more.
SpaceX’s valuation compared to Amazon is hilarious. What does a $2 Trillion company even look like?
Share
To give you a comparison, though: Amazon is also a $2 trillion company, but it made $717 Bn. in 2025. Say SpaceX makes $31 Bn. in 2026; that’s 23x less revenue for the same market cap. And Amazon has AWS, advertising revenue, and many other diversified streams including e-commerce, third-party sellers, subscriptions, etc. It’s not clear how SpaceX will ever make money as more rocket companies, not just Rocket Lab, become viable within even the next five years. All of this while xAI burns at least $1 Bn. a month.
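The Amazon comparison is the same back-of-envelope arithmetic, using the revenue figures the post cites:

```python
# Revenue gap at a similar ~$2T market cap, in $ billions (post's figures).
amazon_2025_revenue_bn = 717   # Amazon's 2025 revenue
spacex_2026_revenue_bn = 31    # the post's assumed SpaceX 2026 revenue

revenue_gap = amazon_2025_revenue_bn / spacex_2026_revenue_bn  # ~23x
```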
I teamed up with from Croatia, who writes:
Cyclop SpaceTech
A research-driven Newsletter that explores early stage companies, startups, market trends, industry leaders and recent news in the space industry.
Cyclop SpaceTech is a research-driven Substack exploring startups, companies, market trends, and innovations in the space industry. It delivers clear, data-backed insights for investors and operators tracking the future of commercial space.
By Matej Pretković
Cyclop SpaceTech provides:
In-depth analysis of companies and startups at all stages
Market and technology trend reports
Insights into emerging products and services
Connections to industry experts and thought leaders
Identification of the critical problems space tech must solve next
Matej Pretković is the Founder and CEO of Cyclop Corp, a Zagreb-based consulting and research company supporting the European space economy with independent research and investment facilitation for space-related startups.
The motto of xAI is to “understand the universe”, and we’ll learn a lot more about orbital datacenters in the lead up to this IPO in June or later. Blue and Starcloud have also applied to put a lot of satellites into space, among others.
Can a $75 Billion cash infusion make Tesla great again and turn SpaceX into an orbital compute giant with a terrestrial integrated Fab and enable it to become an energy & robotics leader in the 2030s and 2040s?
Getty images, NPR.
TeraFab looks like a fun idea on paper:
“Tesla, SpaceX, and xAI are launching the most epic chip-building effort ever - combining logic, memory and advanced packaging under one roof.” - says the TeraFab project
Elon Musk hopes the SpaceX IPO proceeds can help fund orbital datacenters and the very expensive TeraFab project. The U.S. Government is pushing corporations to build settlements and Fabs on the Moon before China does. Presumably they will get funding help to do this.
LA Times. A pedestrian walks past SpaceX in Hawthorne in 2024.(Genaro Molina/Los Angeles Times).
Solar Arrays in Space ☀️
Global data-center power consumption (the energy for AI infra) is expected to roughly double to nearly 1,000 terawatt-hours by the end of the decade, according to an estimate by the International Energy Agency . Against that backdrop, solar arrays in space, on the Moon, or around the Moon beaming energy back to Earth aren’t as crazy as they sound. SpaceX might end up being more of an energy company (with Tesla’s help) than a rocket company as competition heats up. All of this, presumably, to power the future of AI as well.
SpaceX carries a lot of Debt and Expenses amid a Highly speculative Future 🌊
SpaceX has to carry a lot: X still carries roughly $12 billion in acquisition debt while making 35% less revenue than in 2022 . TeraFab as a future project makes even less sense than most of Elon Musk’s projects. It was announced on March 21, 2026. A full-stack Fab for orbital compute? The project is designed to bring every stage of chip manufacturing—design, lithography, fabrication, memory production, advanced packaging, and testing—under one roof to enable "recursive loops" of rapid iteration.
What would a SpaceX-led lunar colony look like? If Elon Musk becomes Earth’s first trillionaire, I’m guessing he’ll want his space legacy to flourish. But what will be the price tag? Meanwhile, xAI is being rebuilt in a massive pivot of its own. Tesla’s sales declined 14.3% from the previous quarter, with deliveries improving only 6% from a year ago, when the backlash to his politics had hit. Just two models, 3 and Y, accounted for 97% of the company’s deliveries last year. From Optimus robots to a space-age full-stack chips Fab, it always seems Elon Musk is starting over. The proceeds from the IPO will at least help with the mounting debt. But the specs look disorienting.
TeraFab is targeting 1 terawatt (10^12 watts) of total annual AI compute capacity, at the 2-nanometer (2nm) process node. A prototype facility is currently being built at Giga Texas in Austin, with a larger-scale facility planned for a yet-to-be-determined location. They ain’t no TSMC.
📚 Cyclop SpaceTech Articles:
If you enjoy speculating on the future of space and technology, check out some of the articles of the guest contributor:
The Rise of Data Centers in Space : Solving Earth’s Growing Digital Demand
Top 5 Emerging SpaceTech Trends to Watch in 2026
SpaceTech Stock Performance: Public Equities and Top Subsector Winners
Elon Musk’s SpaceX: The Triple Engine Behind Success
The 2026 Space Investment Landscape: Key Capital Shifts
What sectors of Space-Tech will be the most profitable for investors? How much will Aerospace companies of the future intersect with national defense?
Cyclop SpaceTech
Top Space-Tech Areas to Invest?
Analysts project the global space market will grow from approximately $630 billion in 2023 to over $1.1 trillion by 2030. With capital also coming from higher National Defense budgets Space-tech seems poised to have considerable geopolitical implications in the future of technology.
Space-Tech as the intersection of National Defense
"The Golden Dome" & Hypersonic Defense
Proliferated Warfighter Space Architecture (PWSA)
In-Space Infrastructure & Servicing
Satellite Data & AI-Driven Analytics
Satellite Communications
Lunar Economy & Deep Space Logistics
Aerospace National Defense: Tactically Responsive Space (TacRS)
Become a Paying Subscriber to Cyclop SpaceTech
If you consider this topic important, consider going deeper with the contributing writer’s expertise. Learn more about what to expect.
Upgrade
How Bullish are you on SpaceX’s Futuristic Ambitions? ⏳
Guys we still have the deep dive to get into:
SpaceX’s AI Endgame: Owning the Infrastructure Layer of Intelligence
By , March, 2026.
The strategy is simple: don’t just join the AI race - own the infrastructure behind it.
1. SpaceX is becoming the infrastructure powerhouse of the 21st century
SpaceX is “quietly” becoming the defining infrastructure layer of the 21st century, as critical to the coming decade as fiber cables were to the 1990s internet boom.
In the past four months alone, the company completed the largest private merger in history, filed to launch one million orbital data center satellites, pivoted from Mars to the Moon in response to a $185 billion government defense program and confirmed an IPO targeting mid-2026 at a potential $1.5 trillion valuation.
SpaceX is not a company that AI is happening to. It is a company actively positioning itself as the physical stack on which the next phase of AI gets built.
The xAI merger, the orbital data center FCC filing and the Starlink global connectivity network are not separate bets. They are three parts of the same thesis.
Read more
|
|
|
The AI industry’s race for profits is now existential |
the_verge_ai |
09.04.2026 14:00 |
0.711
|
| Embedding sim. | 0.8362 |
| Entity overlap | 0 |
| Title sim. | 0.099 |
| Time proximity | 0.8942 |
| NLP type | other |
| NLP organization | Anthropic |
| NLP topic | ai agents |
| NLP country | |
Open original
Today on Decoder , let’s talk about the looming AI monetization cliff, and whether some of the biggest companies in the space can become real, profitable businesses before they careen right off it.
My guest today is Hayden Field, who’s our senior AI reporter here at The Verge . She’s been keeping close tabs on both Anthropic and OpenAI, and how these two companies in particular tell us a whole lot about the AI industry in 2026.
You’ve certainly heard a version of the monetization cliff story before. The biggest AI firms are built off the back of hundreds of billions in capital investment, and they’re linked to even greater amounts of forward-looking investment in data center build-out, chips, and other infrastructure spend. At some point, the profits have to materialize, or the bubble pops. Maybe AGI arrives, maybe the economy crashes, who knows.
You’ve heard me ask some version of this question to scores of CEOs here on this show, and a majority of them have hinted toward the bubble popping — they think some companies will fail in spectacular fashion, some will succeed, and the opportunities, especially the money, are simply too big to ignore. We’re doing this, whether we want to or not — the market depends on it.
Verge subscribers, don’t forget you get exclusive access to ad-free Decoder wherever you get your podcasts. Head here . Not a subscriber? You can sign up here .
So these last few weeks have felt like a very important inflection point, as both Anthropic and OpenAI have started to react to the reality of needing to go public and needing to make money.
The catalyst for this change is AI agents, and products like Claude Code and Cowork, as well as the open-source OpenClaw and OpenAI’s Codex, have radically changed how these companies are thinking about their resources. And this is starting to affect how they behave — the products they support or suddenly kill, the restrictions they impose on customers, and the money they’re willing to burn toward their next big milestone.
That’s because agents are valuable to customers right now, but agents also use far more compute. So the way people are using agents is burning tokens at a rate way faster than these companies anticipated, and that’s causing them to make hard decisions.
We saw this most evidently last month when OpenAI abruptly killed its video-generation app Sora , ditching a $1 billion Disney licensing deal in the process. Why? It costs too much to run, and OpenAI needs the compute for Codex. We saw it again just last week, when Anthropic decided it would no longer let Claude users burn through compute resources using the OpenClaw agent framework through a standard subscription plan, instead forcing those users onto pay-as-you-go plans , which cost substantially more.
As you’ll hear Hayden explain here, these are glimmers of a make-or-break moment for the AI industry, as both Anthropic and OpenAI barrel toward two of the biggest IPOs in history. And the pressure on these companies to make money has never been this intense.
The projections these companies have made, which just this week were leaked to the Wall Street Journal , tell a story of mind-boggling growth, to the tune of hundreds of billions in revenue and profitability by the end of the decade. But the most important questions now are can the AI companies pull this off, and what compromises will they make to reach that goal and avoid crashing and burning?
Okay: Verge senior policy reporter Hayden Field on the AI monetization cliff and the race to profitability. Here we go.
If you’d like to read more about what we discussed in this episode, check out these links:
The vibes are off at OpenAI | The Verge
Anthropic essentially bans OpenClaw from Claude | The Verge
Why OpenAI killed Sora | The Verge
OpenAI just bought TBPN | The Verge
National poll shows voters like AI less than ICE | The Verge
The spiraling cost of making AI | WSJ
OpenAI’s Fidji Simo taking leave amid exec shake-up | Wired
OpenAI raises another $122B at $850B valuation | The Verge
Questions or comments about this episode? Hit us up at decoder@theverge.com. We really do read every email!
|
|
|
The 70-Person AI Image Startup Taking on Silicon Valley's Giants |
wired |
09.04.2026 18:00 |
0.711
|
| Embedding sim. | 0.8179 |
| Entity overlap | 0.0952 |
| Title sim. | 0.1188 |
| Time proximity | 0.9762 |
| NLP тип | other |
| NLP организация | Black Forest Labs |
| NLP тема | generative ai |
| NLP страна | Germany |
Открыть оригинал
Maxwell Zeff
Business
Apr 9, 2026 2:00 PM
The 70-Person AI Image Startup Taking on Silicon Valley's Giants
Black Forest Labs has long punched above its weight in the AI image generation space. Its next move? Powering physical AI.
Photo-Illustration: WIRED Staff; Matthias Balk/Getty Images
Standing inside the HumanX conference in San Francisco’s Moscone Center, it’s hard not to feel like you’re at the center of the AI universe. Technology leaders swarm the building, and the headquarters of OpenAI and Anthropic are just down the block. But a 70-person startup headquartered 5,000 miles away in Germany’s Black Forest—a region famous for its ham—has become a top competitor to Silicon Valley’s leading labs in AI image generation.
In December, Black Forest Labs raised funds at a $3.25 billion valuation, after signing deals to power AI image-generation features in Adobe and the graphic design platform Canva. It has even struck agreements with major AI labs like Microsoft, Meta, and xAI to power similar features in their products.
Nearly two years after launch, Black Forest Labs can afford to be picky about who it works with. In 2024, Elon Musk’s xAI tapped Black Forest Labs to power Grok’s first image generator . That partnership put Black Forest Labs on the map but generated a lot of controversy due to the chatbot’s limited safeguards. It ended months later when xAI developed an in-house AI image model.
In recent months, xAI approached Black Forest Labs about licensing the startup's technology again, sources familiar with the matter tell WIRED. This time around, Black Forest Labs declined, the sources said, deeming it too operationally difficult to partner with xAI, which has a famously chaotic work environment. xAI did not immediately respond to WIRED’s request for comment.
In September, Black Forest Labs struck a $140 million multiyear deal to give Meta access to its AI image-generation technology.
These AI labs want to work with Black Forest Labs because its image generators are among the world's best, ranking just below OpenAI and Google's offerings on the third-party firm Artificial Analysis' benchmarks . The startup also offers some of the most downloaded text-to-image models on Hugging Face , indicating that a lot of AI image tools on the market are likely powered by a free version of Black Forest Labs’ technology.
It’s particularly impressive since the company has historically had far fewer resources than its competitors. That constraint pushed it toward a more efficient line of research called latent diffusion, in which a model first sketches out a rough, compressed blueprint of an image and then paints in the detail.
Latent diffusion “enabled us to put out very powerful models that took orders of magnitude less resources than our competitor’s models,” said cofounder Andreas Blattmann in an interview with WIRED onstage at HumanX this week.
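To make the "blueprint first, detail later" intuition concrete, here is a toy, purely illustrative sketch of the latent-diffusion structure (this is not Black Forest Labs' code, and all function names are invented): the expensive iterative denoising runs on a 4x-smaller latent representation, and only the final decode returns to pixel resolution.

```python
import random

def encode(image: list[float], factor: int = 4) -> list[float]:
    """Compress pixels into a latent 1/factor the size (block averages)."""
    return [sum(image[i:i + factor]) / factor for i in range(0, len(image), factor)]

def decode(latent: list[float], factor: int = 4) -> list[float]:
    """Expand the latent back to pixel resolution (nearest-neighbor)."""
    return [v for v in latent for _ in range(factor)]

def denoise_step(latent: list[float], target: list[float], t: float) -> list[float]:
    """One reverse-diffusion step: nudge the noisy latent toward the
    model's prediction of the clean latent (here, a fixed target stands
    in for a learned denoiser)."""
    return [l + t * (g - l) for l, g in zip(latent, target)]

def generate(target_latent: list[float], steps: int = 50) -> list[float]:
    rng = random.Random(0)
    latent = [rng.gauss(0, 1) for _ in target_latent]  # start from pure noise
    for _ in range(steps):
        latent = denoise_step(latent, target_latent, t=0.2)
    return decode(latent)  # decode to pixels only once, at the end
```

The efficiency argument is visible in the loop: each of the 50 denoising steps touches a quarter as many values as a pixel-space diffusion model would, which is (in spirit) why latent diffusion needed "orders of magnitude less resources."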
Despite its success, Black Forest Labs believes image generation is just the beginning. Blattmann said the startup plans to unveil a robot powered by one of its AI models later this year. (He did not reveal what company is making the hardware.) The push is part of a larger opportunity the company sees to build AI that can perceive and take actions in the physical world.
“Visual intelligence is so much more than content creation. Content creation is just the first segue into this entire technology,” said Blattmann. “What I’m personally super excited about—and that’s a pattern throughout this conference—is physical AI.”
Black Forest Labs is also in talks with a handful of hardware companies, to power features in products like smart glasses and robots, sources tell WIRED.
Building in the Black Forest
Blattmann and his cofounders, Robin Rombach and Patrick Esser, made a name for themselves publishing some groundbreaking research on AI image models in 2021. In 2022, they were hired by Stability AI and released Stable Diffusion, a popular open source AI image generator based on their prior research. But two years later, they announced their departure and launched Black Forest Labs.
Rather than move to San Francisco, the trio decided to maintain a headquarters near their hometowns in Freiburg, Germany. Blattmann said the decision has been key to the company’s success.
“It can be a huge asset to not be where everyone else is,” he added. “Everyone who has ever run a startup knows that it’s a lot about the ability to focus and work on what matters. Whenever I’m here in SF I love it, but it’s also very hard to focus because there’s so much stuff going on.”
It’s clear that several American AI labs have struggled with focus in recent years. The most top-of-mind example is OpenAI, which recently killed off its AI video generation app Sora to prioritize core business efforts. (It then bought the popular tech talk show TBPN a few weeks later, though.) Black Forest Labs has been one of the more disciplined AI labs thus far, but as it expands into physical AI, the company’s focus might be tested.
|
|
|
Integrate Physical AI Capabilities into Existing Apps with NVIDIA Omniverse Libraries |
nvidia_dev_blog |
08.04.2026 16:00 |
0.71
|
| Embedding sim. | 0.8406 |
| Entity overlap | 0.037 |
| Title sim. | 0.0813 |
| Time proximity | 0.8452 |
| NLP type | product_launch |
| NLP organization | NVIDIA |
| NLP topic | robotics |
| NLP country | |
Open original
Physical AI—AI systems that perceive, reason, and act in physically grounded simulated environments—is changing how teams design and validate robots and industrial systems, long before anything ships to the factory floor. At GTC 2026, NVIDIA highlighted physical AI as a key direction for robotics and digital twins, where policies are trained and validated against physically grounded environments.
Source
|
|
|
Accelerate Token Production in AI Factories Using Unified Services and Real-Time AI |
nvidia_dev_blog |
01.04.2026 15:00 |
0.707
|
| Embedding sim. | 0.8193 |
| Entity overlap | 0.1111 |
| Title sim. | 0.1596 |
| Time proximity | 0.8452 |
| NLP type | other |
| NLP organization | |
| NLP topic | ai infrastructure |
| NLP country | |
Open original
In today’s AI factory environment, performance is not theoretical. It is economic, competitive, and existential. A 1% drop in usable GPU time can mean millions of tokens lost per hour. Minutes of congestion can cascade into hours of recovery. A rack-level power oversubscription can lead to stranded power and reduced tokens per watt, silently eroding factory output at scale. As AI factories scale…
Source
|
|
|
Shifting to AI model customization is an architectural imperative |
mit_tech_review |
31.03.2026 14:12 |
0.706
|
| Embedding sim. | 0.808 |
| Entity overlap | 0.1333 |
| Title sim. | 0.1176 |
| Time proximity | 0.9928 |
| NLP type | other |
| NLP organization | Mistral AI |
| NLP topic | large language models |
| NLP country | Southeast Asia |
Open original
In the early days of large language models (LLMs), we grew accustomed to massive 10x jumps in reasoning and coding capability with every new model iteration. Today, those jumps have flattened into incremental gains. The exception is domain-specialized intelligence, where true step-function improvements are still the norm.
When a model is fused with an organization’s proprietary data and internal logic, it encodes the company’s history into its future workflows. This alignment creates a compounding advantage: a competitive moat built on a model that understands the business intimately. This is more than fine-tuning; it is the institutionalization of expertise into an AI system. This is the power of customization.
Intelligence tuned to context
Every sector operates within its own specific lexicon. In automotive engineering, the “language” of the firm revolves around tolerance stacks, validation cycles, and revision control. In capital markets, reasoning is dictated by risk-weighted assets and liquidity buffers. In security operations, patterns are extracted from the noise of telemetry signals and identity anomalies.
Custom-adapted models internalize the nuances of the field. They recognize which variables dictate a “go/no-go” decision, and they think in the language of the industry.
Domain expertise in action
The transition from general-purpose to tailored AI centers on one goal: encoding an organization’s unique logic directly into a model’s weights.
Mistral AI partners with organizations to incorporate domain expertise into their training ecosystems. A few use cases illustrate customized implementations in practice:
Software engineering and assisting at scale: A network hardware company with proprietary languages and specialized codebases found that out-of-the-box models could not grasp their internal stack. By training a custom model on their own development patterns, they achieved a step function in fluency. Integrated into Mistral’s software development scaffolding, this customized model now supports the entire lifecycle—from maintaining legacy systems to autonomous code modernization via reinforcement learning. This turns once-opaque, niche code into a space where AI reliably assists at scale.
Automotive and the engineering copilot: A leading automotive company uses customization to revolutionize crash test simulations. Previously, specialists spent entire days manually comparing digital simulations with physical results to find divergences. By training a model on proprietary simulation data and internal analyses, they automated this visual inspection, flagging deformations in real time. Moving beyond detection, the model now acts as a copilot, proposing design adjustments to bring simulations closer to real-world behavior and radically accelerating the R&D loop.
Public sector and sovereign AI: In Southeast Asia, a government agency is building a sovereign AI layer to move beyond Western-centric models. By commissioning a foundation model tailored to regional languages, local idioms, and cultural contexts, they created a strategic infrastructure asset. This ensures sensitive data remains under local governance while powering inclusive citizen services and regulatory assistants. Here, customization is the key to deploying AI that is both technically effective and genuinely sovereign.
The blueprint for strategic customization
Moving from a general-purpose AI strategy to a domain-specific advantage requires a structural rethinking of the model’s role within the enterprise. Success is defined by three shifts in organizational logic.
1. Treat AI as infrastructure, not an experiment. Historically, enterprises have treated model customization as an ad hoc experiment—a single fine-tuning run for a niche use case or a localized pilot. While these bespoke silos often yield promising results, they are rarely built to scale. They produce brittle pipelines, improvised governance, and limited portability. When the underlying base models evolve, the adaptation work must often be discarded and rebuilt from scratch.
In contrast, a durable strategy treats customization as foundational infrastructure. In this model, adaptation workflows are reproducible, version-controlled, and engineered for production. Success is measured against deterministic business outcomes. By decoupling the customization logic from the underlying model, firms ensure that their “digital nervous system” remains resilient, even as the frontier of base models shifts.
2. Retain control of your own data and models. As AI migrates from the periphery to core operations, the question of control becomes existential. Reliance on a single cloud provider or vendor for model alignment creates a dangerous asymmetry of power regarding data residency, pricing, and architectural updates.
Enterprises that retain control of their training pipelines and deployment environments preserve their strategic agency. By adapting models within controlled environments, organizations can enforce their own data residency requirements and dictate their own update cycles. This approach transforms AI from a service consumed into an asset governed, reducing structural dependency and allowing for cost and energy optimizations aligned with internal priorities rather than vendor roadmaps.
3. Design for continuous adaptation. The enterprise environment is never static: regulations shift, taxonomies evolve, and market conditions fluctuate. A common failure is treating a customized model as a finished artifact. In reality, a domain-aligned model is a living asset subject to model decay if left unmanaged.
Designing for continuous adaptation requires a disciplined approach to ModelOps. This includes automated drift detection, event-driven retraining, and incremental updates. By building the capacity for constant recalibration, the organization ensures that its AI does not just reflect its history, but it evolves in lockstep with its future. This is the stage where the competitive moat begins to compound: the model’s utility grows as it internalizes the organization’s ongoing response to change.
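The automated drift detection the paragraph above calls for is standard ModelOps machinery rather than anything Mistral-specific. A minimal sketch of one common drift score, the Population Stability Index, where the feature, sample sizes, and retraining threshold are all invented for illustration:

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between a reference sample (what the model
    was trained on) and a live sample of the same feature."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf        # catch out-of-range live values
    e = np.histogram(expected, bins=edges)[0] / len(expected)
    a = np.histogram(actual, bins=edges)[0] / len(actual)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)  # avoid log(0)
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(7)
reference = rng.normal(0.0, 1.0, 5_000)      # training-time distribution
live_ok = rng.normal(0.0, 1.0, 1_000)        # production traffic, unchanged
live_drifted = rng.normal(1.5, 1.0, 1_000)   # a taxonomy or market shift

RETRAIN_THRESHOLD = 0.2  # a common rule of thumb; tune per feature in practice
for name, sample in [("ok", live_ok), ("drifted", live_drifted)]:
    score = psi(reference, sample)
    if score > RETRAIN_THRESHOLD:
        print(f"{name}: PSI={score:.2f} -> trigger event-driven retraining")
    else:
        print(f"{name}: PSI={score:.2f} -> model still aligned")
```

In a pipeline, the threshold breach would emit an event that kicks off an incremental retraining job rather than a print statement.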
Control is the new leverage
We have entered an era where generic intelligence is a commodity, but contextual intelligence is a scarcity. While raw model power is now a baseline requirement, the true differentiator is alignment—AI calibrated to an organization’s unique data, mandates, and decision logic.
In the next decade, the most valuable AI won’t be the one that knows everything about the world; it will be the one that knows everything about you. The firms that own the model weights of that intelligence will own the market.
This content was produced by Mistral AI. It was not written by MIT Technology Review’s editorial staff.
|
|
|
The Sora shutdown isn't the end of the AI bubble. It's the bubble growing up |
habr_ai |
31.03.2026 21:23 |
0.706
|
| Embedding sim. | 0.8341 |
| Entity overlap | 0.25 |
| Title sim. | 0.0682 |
| Time proximity | 0.7486 |
| NLP type | other |
| NLP organization | OpenAI |
| NLP topic | generative ai |
| NLP country | |
Open original
On March 24, OpenAI announced it was shutting down Sora, and many immediately filed it as proof that the AI bubble has started to deflate: the hype is over, the economics didn't work out, and this is the beginning of the end.
A week earlier, Jensen Huang, founder and CEO of NVIDIA, made a claim from the GTC 2026 stage that the market will be digesting for a long time: if an engineer on a $500k salary spends less than $250k a year on tokens, that's a warning sign. If they spend $5k, it's an outright disaster.
At first glance it isn't obvious how this relates to the Sora shutdown. One of the most influential people in AI says "spend more tokens," and almost immediately afterwards OpenAI shuts down one of its most resource-hungry AI products.
But in fact the two stories make the same point: you should spend more tokens, but only where they deliver real returns. ☝️
Read more
|
|
|
Suits won't quit AI spending, even if they can't prove ROI |
the_register_ai |
10.04.2026 12:10 |
0.704
|
| Embedding sim. | 0.8359 |
| Entity overlap | 0.4091 |
| Title sim. | 0.087 |
| Time proximity | 0.5777 |
| NLP тип | other |
| NLP организация | KPMG |
| NLP тема | ai adoption |
| NLP страна | United Kingdom |
Открыть оригинал
Suits won't quit AI spending, even if they can't prove it's working
Forget about investment value! Call it a 'strategic enabler for enterprise-wide transformation,' says KPMG
Lindsay Clark
Fri 10 Apr 2026 // 12:10 UTC
Most UK business leaders will keep AI at the top of their spending priorities, with 65 percent planning to maintain investment whether they see immediate measurable returns or not.
As debate rages about the need for proving return on investment (ROI) before tech departments open their wallets to buy AI platforms, agents, or enterprise software add-ons, research from KPMG shows the notion is sliding down the priority list for business leaders.
In a survey of 2,110 business leaders globally, the consultancy found 70 percent of UK business leaders think AI will remain high on their spending agendas even in the face of an economic downturn. Ninety-four percent plan to use AI agents in their businesses, but their experience varies.
The poll, conducted in February and March, found ROI is not a primary driver of AI investment for many organizations, although they can measure it in specific areas. Most said they could measure ROI in productivity (76 percent), quality and performance of work (71 percent), speed and accuracy of decision-making (67 percent), and profitability (64 percent).
However, just 14 percent were confident in measuring business value from improved analytics used by the C-suite for business decision-making.
Leanne Allen, head of AI at KPMG, said businesses are changing the way they see AI investment. "This shift in mindset from viewing AI as something that must deliver an immediate return to one that sees AI as a long-term investment, recognizing it as a strategic enabler for enterprise‑wide transformation, is an important milestone."
Some techies running the department for their employer might be forgiven for thinking they are getting mixed messages.
Software vendors and cloud providers are currently bearing the burden of the expected increase in AI spending this year, with investment forecast to hit $2.52 trillion for 2026, according to Gartner. In the long run, however, enterprise customers and consumers will pay one way or the other.
At the enterprise level, John-David Lovelock, distinguished VP analyst at Gartner, told The Register in January that the conversation had gone from some board-appointed special group saying "get me something AI" to a more cautious approach.
"We're starting to see the end of the investment line. We had a thousand flowers blooming, now it's time to prune the garden. We are getting to the point where we go from 'that was a great idea' to 'where's my revenue?' That's a normal part of any new technology," he said.
KPMG's findings come against a backdrop of companies struggling to justify AI spending. In February, a survey of almost 6,000 corporate execs across the US, UK, Germany, and Australia found that more than 80 percent detect no discernible impact from AI on either employment or productivity , even though 69 percent of businesses currently use some form of AI.
A Gartner report last week found only 28 percent of use cases for AI in technology infrastructure fully succeed and offer ROI.
According to a Harris Poll study commissioned by Dataiku, 98 percent of tech leaders said they were coming under increasing pressure from the board to demonstrate ROI, while 71 percent of the CIOs surveyed believed their AI budget would likely face cuts or a freeze if targets were not met by the end of the first half of this year. ®
|
|
|
Sora is shutting down: why it happened, and the alternatives in 2026 |
habr_ai |
03.04.2026 11:22 |
0.704
|
| Embedding sim. | 0.8244 |
| Entity overlap | 0.25 |
| Title sim. | 0.1863 |
| Time proximity | 0.6311 |
| NLP type | other |
| NLP organization | |
| NLP topic | generative ai |
| NLP country | |
Open original
On March 25, 2026, the Sora team posted a short farewell message on X: "We're saying goodbye to Sora." No exact dates, no explanations. Just a promise to explain later how to preserve the content users had created. And that is how the story of one of the most hyped AI tools of the past two years came to an end.
Read more
|
|
|
AI chip startup Rebellions raises $400 million at $2.3B valuation in pre-IPO round | TechCrunch |
techcrunch |
30.03.2026 13:00 |
0.704
|
| Embedding sim. | 0.7805 |
| Entity overlap | 0.1042 |
| Title sim. | 0.2823 |
| Time proximity | 0.9881 |
| NLP type | funding |
| NLP organization | Rebellions |
| NLP topic | ai hardware |
| NLP country | South Korea |
Open original
Fresh off a successful Series C funding round in November, the South Korean fabless AI chip startup Rebellions has raised an additional $400 million.
The latest funding infusion, which comes before a planned IPO later this year, was led by Mirae Asset Financial Group and the Korea National Growth Fund. It also comes at the same time that the company is engaging in an aggressive expansion effort — with recently announced plans to grow its presence not only in Asia but also in the Middle East and the U.S.
Founded in 2020, Rebellions develops and designs AI chips while outsourcing their fabrication. The startup’s chips are designed for inference — the compute necessary for AI models to respond to user queries. Inference has grown in importance as LLMs have matured and begun to see widespread commercial deployment.
The company closed $124 million in a Series B in 2024. Then, in November, Rebellions raised an additional $250 million during its Series C. As of today, the company’s total fundraising haul now stands at $850 million — $650 million of which was raised in the last six months. Meanwhile, the startup’s valuation sits at approximately $2.34 billion, the company said Monday.
In addition to the funding round, Rebellions also announced the release of two new products: RebelRack and RebelPOD, which are described as AI infrastructure platforms. POD represents a production-ready unit of inference compute, while Rack “integrates multiple racks into a scalable cluster designed for large-scale AI deployment,” the company said.
In a conversation with TechCrunch, Rebellions’ Chief Business Officer Marshall Choy — who is leading the company’s global expansion efforts — said it had recently established entities in the U.S., Japan, Saudi Arabia, and Taiwan. Choy said the company was building out its ecosystem of technology partners in the U.S., where it plans to court cloud providers, government agencies, telecom operators, and neoclouds. He declined to comment on IPO timing.
“AI is now measured by its ability to operate in the real world at scale, under power constraints, and with clear economic return,” said Sunghyun Park, co-founder and CEO of Rebellions. “That shifts the center of gravity toward inference infrastructure and software that makes that infrastructure usable.”
Rebellions is one of a new generation of chip startups that have sought to challenge Nvidia’s once iron-clad dominance within the chip industry. As that dominance has begun to wane, other major tech companies like AWS , Meta, and Google — along with the new generation of startups — have also sought to produce their own chips.
Lucas Ropek
Senior Writer, TechCrunch
|
|
|
Accelerating the next phase of AI |
openai |
31.03.2026 13:00 |
0.698
|
| Embedding sim. | 0.8339 |
| Entity overlap | 0.4545 |
| Title sim. | 0.037 |
| Time proximity | 0.5655 |
| NLP type | funding |
| NLP organization | OpenAI |
| NLP topic | foundation models |
| NLP country | |
Open original
OpenAI raises $122 billion in new funding to expand frontier AI globally, invest in next-generation compute, and meet growing demand for ChatGPT, Codex, and enterprise AI.
|
|
|
Decentralized Training Can Help Solve AI’s Energy Woes |
ieee_spectrum_ai |
07.04.2026 14:00 |
0.695
|
| Embedding sim. | 0.8137 |
| Entity overlap | 0.0294 |
| Title sim. | 0.0309 |
| Time proximity | 0.9954 |
| NLP type | other |
| NLP organization | Nvidia |
| NLP topic | ai infrastructure |
| NLP country | |
Open original
Artificial intelligence harbors an enormous energy appetite. Such constant cravings are evident in the hefty carbon footprint of the data centers behind the AI boom and the steady increase over time of carbon emissions from training frontier AI models .
No wonder big tech companies are warming up to nuclear energy , envisioning a future fueled by reliable, carbon-free sources. But while nuclear-powered data centers might still be years away, some in the research and industry spheres are taking action right now to curb AI’s growing energy demands. They’re tackling training as one of the most energy-intensive phases in a model’s life cycle, focusing their efforts on decentralization.
Decentralization allocates model training across a network of independent nodes rather than relying on one platform or provider. It allows compute to go where the energy is—be it a dormant server sitting in a research lab or a computer in a solar-powered home. Instead of constructing more data centers that require electric grids to scale up their infrastructure and capacity, decentralization harnesses energy from existing sources, avoiding adding more power into the mix.
Hardware in harmony
Training AI models is a huge data center sport, synchronized across clusters of closely connected GPUs . But as hardware improvements struggle to keep up with the swift rise in size of large language models , even massive single data centers are no longer cutting it.
Tech firms are turning to the pooled power of multiple data centers—no matter their location. Nvidia, for instance, launched the Spectrum-XGS Ethernet for scale-across networking, which “can deliver the performance needed for large-scale single job AI training and inference across geographically separated data centers.” Similarly, Cisco introduced its 8223 router designed to “connect geographically dispersed AI clusters.”
Other companies are harvesting idle compute in servers, sparking the emergence of a GPU-as-a-Service business model. Take Akash Network, a peer-to-peer cloud computing marketplace that bills itself as the “Airbnb for data centers.” Those with unused or underused GPUs in offices and smaller data centers register as providers, while those in need of computing power are considered as tenants who can choose among providers and rent their GPUs.
“If you look at [AI] training today, it’s very dependent on the latest and greatest GPUs,” says Akash cofounder and CEO Greg Osuri. “The world is transitioning, fortunately, from only relying on large, high-density GPUs to now considering smaller GPUs.”
Software in sync
In addition to orchestrating the hardware, decentralized AI training also requires algorithmic changes on the software side. This is where federated learning, a form of distributed machine learning, comes in.
It starts with an initial version of a global AI model housed in a trusted entity such as a central server. The server distributes the model to participating organizations, which train it locally on their data and share only the model weights with the trusted entity, explains Lalana Kagal, a principal research scientist at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) who leads the Decentralized Information Group. The trusted entity then aggregates the weights, often by averaging them, integrates them into the global model, and sends the updated model back to the participants. This collaborative training cycle repeats until the model is considered fully trained.
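The cycle Kagal describes (distribute, train locally, average the weights, repeat) is the classic federated-averaging loop. A minimal sketch, with an invented linear-regression task and synthetic per-participant datasets standing in for private data:

```python
import numpy as np

rng = np.random.default_rng(42)

def local_train(weights, X, y, lr=0.1, steps=20):
    """One participant: a few gradient steps on its private data
    (plain least-squares regression here)."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w  # only the weights leave the participant, never X or y

# Three participants, each with its own private dataset drawn from the same task.
true_w = np.array([1.0, -2.0, 3.0])
datasets = []
for _ in range(3):
    X = rng.normal(size=(100, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    datasets.append((X, y))

global_w = np.zeros(3)
for round_ in range(10):  # the repeated distribute/train/aggregate cycle
    local_ws = [local_train(global_w, X, y) for X, y in datasets]
    global_w = np.mean(local_ws, axis=0)  # the "trusted entity" averages weights

print(np.round(global_w, 2))  # close to true_w, with no raw data ever shared
```

The communication cost the article mentions is visible here: every round ships a full weight vector in each direction, which is exactly what later schemes like DiLoCo try to amortize.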
But there are drawbacks to distributing both data and computation. The constant back and forth exchanges of model weights, for instance, result in high communication costs. Fault tolerance is another issue.
“A big thing about AI is that every training step is not fault-tolerant,” Osuri says. “That means if one node goes down, you have to restore the whole batch again.”
To overcome these hurdles, researchers at Google DeepMind developed DiLoCo, a distributed low-communication optimization algorithm. DiLoCo forms what Google DeepMind research scientist Arthur Douillard calls “islands of compute,” where each island consists of a group of chips. Every island holds a different chip type, but chips within an island must be of the same type. Islands are decoupled from each other, and synchronizing knowledge between them happens once in a while. This decoupling means islands can perform training steps independently without communicating as often, and chips can fail without having to interrupt the remaining healthy chips. However, the team’s experiments found diminishing performance after eight islands.
An improved version dubbed Streaming DiLoCo further reduces the bandwidth requirement by synchronizing knowledge “in a streaming fashion across several steps and without stopping for communicating,” says Douillard. The mechanism is akin to watching a video even if it hasn’t been fully downloaded yet. “In Streaming DiLoCo, as you do computational work, the knowledge is being synchronized gradually in the background,” he adds.
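The decoupling Douillard describes (many local steps per island, with only occasional outer synchronization) can be caricatured in a few lines. The toy per-island objective and all constants below are invented for illustration; DiLoCo itself uses AdamW for the inner steps and a Nesterov-momentum outer optimizer:

```python
import numpy as np

H = 50  # inner steps per island between syncs (vs. every step in classic data parallelism)

def inner_steps(w, island_grad, lr=0.01):
    """An island runs H optimizer steps on its own shard without talking to anyone."""
    for _ in range(H):
        w = w - lr * island_grad(w)
    return w

# Toy objective per island: each island pulls toward its own target;
# the consensus optimum is the mean of the targets.
targets = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([2.0, 2.0])]
grads = [lambda w, t=t: 2 * (w - t) for t in targets]

w_global = np.zeros(2)
for outer in range(20):  # communication happens only here, once per outer round
    local_ws = [inner_steps(w_global.copy(), g) for g in grads]
    # Outer update: average the "pseudo-gradients" (how far each island moved).
    pseudo = np.mean([w_global - w_l for w_l in local_ws], axis=0)
    w_global = w_global - 1.0 * pseudo  # outer lr of 1.0 reduces to plain averaging

print(np.round(w_global, 2))  # converges to the mean of the island targets
```

With H inner steps per sync, the islands exchange weights H times less often than a fully synchronous setup, which is the bandwidth saving the article is describing.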
AI development platform Prime Intellect implemented a variant of the DiLoCo algorithm as a vital component of its 10-billion-parameter INTELLECT-1 model trained across five countries spanning three continents. Upping the ante, 0G Labs, makers of a decentralized AI operating system, adapted DiLoCo to train a 107-billion-parameter foundation model under a network of segregated clusters with limited bandwidth. Meanwhile, popular open-source deep learning framework PyTorch included DiLoCo in its repository of fault tolerance techniques.
“A lot of engineering has been done by the community to take our DiLoCo paper and integrate it in a system learning over consumer-grade internet,” Douillard says. “I’m very excited to see my research being useful.”
A more energy-efficient way to train AI
With hardware and software enhancements in place, decentralized AI training is primed to help solve AI’s energy problem. This approach offers the option of training models “in a cheaper, more resource-efficient, more energy-efficient way,” says MIT CSAIL’s Kagal.
And while Douillard admits that “training methods like DiLoCo are arguably more complex, they provide an interesting tradeoff of system efficiency.” For instance, you can now use data centers across far apart locations without needing to build ultrafast bandwidth in between. Douillard adds that fault tolerance is baked in because “the blast radius of a chip failing is limited to its island of compute.”
Even better, companies can take advantage of existing underutilized processing capacity rather than continuously building new energy-hungry data centers. Betting big on such an opportunity, Akash created its Starcluster program. One of the program’s aims involves tapping into solar-powered homes and employing the desktops and laptops within them to train AI models. “We want to convert your home into a fully functional data center,” Osuri says.
Osuri acknowledges that participating in Starcluster will not be trivial. Beyond solar panels and devices equipped with consumer-grade GPUs, participants would also need to invest in batteries for backup power and redundant internet to prevent downtime. The Starcluster program is figuring out ways to package all these aspects together and make it easier for homeowners, including collaborating with industry partners to subsidize battery costs.
Backend work is already underway to enable homes to participate as providers in the Akash Network, and the team hopes to reach its target by 2027. The Starcluster program also envisions expanding into other solar-powered locations, such as schools and local community sites.
Decentralized AI training holds much promise to steer AI toward a more environmentally sustainable future. For Osuri, such potential lies in moving AI “to where the energy is instead of moving the energy to where AI is.”
|
|
|
Achieving Single-Digit Microsecond Latency Inference for Capital Markets |
nvidia_dev_blog |
02.04.2026 16:00 |
0.694
|
| Embedding sim. | 0.8063 |
| Entity overlap | 0.2 |
| Title sim. | 0.0896 |
| Time proximity | 0.8512 |
| NLP type | other |
| NLP organization | |
| NLP topic | deep learning |
| NLP country | |
Open original
In algorithmic trading, reducing response times to market events is crucial. To keep pace with high-speed electronic markets, latency-sensitive firms often use specialized hardware like FPGAs and ASICs. Yet, as markets grow more efficient, traders increasingly depend on advanced models such as deep neural networks to enhance profitability. Because implementing these complex models on low-level…
Source
|
|
|
OpenAI’s vision for the AI economy: public wealth funds, robot taxes, and a four-day work week | TechCrunch |
techcrunch |
06.04.2026 15:55 |
0.693
|
| Embedding sim. | 0.7958 |
| Entity overlap | 0.0455 |
| Title sim. | 0.2158 |
| Time proximity | 0.8456 |
| NLP type | regulation |
| NLP organization | OpenAI |
| NLP topic | ai policy |
| NLP country | United States |
Open original
As governments grapple with how to manage the economic fallout of superintelligent machines, OpenAI has released a set of policy proposals outlining the ways wealth and work could be reshaped in an “intelligence age.” The ideas blend traditionally left-leaning mechanisms like public wealth funds and expanded social safety nets with a fundamentally capitalist, market-driven economic framework.
OpenAI’s proposals are essentially a wish list, a public declaration that helps elected officials, investors, and the public understand how the $852 billion company sees the world shifting in an age where artificial intelligence transforms labor and the economy.
The proposals were released amid intensifying anxiety around AI , which has been colored by concerns over job displacement , wealth concentration, and data center buildouts across the country. They’ve also arrived as the Trump administration moves toward a national AI framework and in the run-up to the midterm elections, signaling an attempt at bipartisan positioning. That effort sits alongside a more direct political push: OpenAI president Greg Brockman — who has donated millions to President Donald Trump — and other tech billionaires have funneled hundreds of millions into super PACs supporting light-touch AI policies.
OpenAI’s proposed framework centers on three stated goals: distributing AI-driven prosperity more broadly, building safeguards to reduce systemic risks, and ensuring widespread access to AI capabilities so that economic power and opportunity don’t become too concentrated.
OpenAI has proposed shifting the tax burden from labor to capital. The company stops short of specifying a corporate tax rate — which Trump dropped to 21% from 35% during his first term. But OpenAI warns that AI-driven growth could hollow out the tax base that funds Social Security, Medicaid, SNAP, and housing assistance as corporate profits expand and reliance on labor income shrinks.
“As AI reshapes work and production, the composition of economic activity may shift — expanding corporate profits and capital gains while potentially reducing reliance on labor income and payroll taxes,” OpenAI wrote.
The company suggests higher taxes on corporate income, AI-driven returns, or capital gains at the top — a category of policy that pushed Marc Andreessen to back Trump after Biden proposed taxing unrealized capital gains in 2024. OpenAI also floats a potential robot tax, something Microsoft founder Bill Gates proposed in 2017 , which involved the robot paying the same amount of taxes into the system as the human it replaced.
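The Gates-style robot tax described above has a simple arithmetic core: the automating firm owes roughly the payroll tax that the displaced worker's wages would have generated. A toy illustration, with the wage and the 15.3% combined US FICA rate used only as an example of such a rule, not as figures from any actual proposal:

```python
# Toy sketch of a Gates-style "robot tax": the automating firm pays
# roughly the payroll tax the displaced worker's wages would have
# generated. The wage and rate here are illustrative assumptions.

PAYROLL_TAX_RATE = 0.153  # combined US FICA rate (Social Security + Medicare)

def robot_tax(displaced_annual_wage: float,
              rate: float = PAYROLL_TAX_RATE) -> float:
    """Annual tax owed for one automated position."""
    return displaced_annual_wage * rate

# A $60,000 job automated away would owe roughly $9,180 per year.
print(round(robot_tax(60_000.0), 2))
```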
The document also includes a proposal to create a Public Wealth Fund to give Americans an automatic public stake in AI companies and AI infrastructure, even if they’re not invested in the market. Any returns would be distributed directly to citizens. The prospect may appeal to Americans who have watched AI inflate the market without seeing any of those gains themselves.
Several of OpenAI’s proposals were also more labor-focused, including one to subsidize a four-day workweek with no loss in pay — a proposal that aligns with the tech industry’s promises that AI will give humans better work-life balance. OpenAI also suggests that companies boost retirement matches or contributions, cover a larger share of healthcare costs, and subsidize child or eldercare. Notably, OpenAI frames these as corporate responsibilities rather than government ones, leaving out the people AI is most likely to displace. If automation eliminates your job, your employer-subsidized healthcare and retirement match may go with it.
That said, OpenAI does separately propose portable benefit accounts that follow workers across jobs, but these still likely depend on employer or platform contributions and stop short of the government-backed universal coverage that would actually protect people AI displaces entirely.
OpenAI acknowledges that the risks of AI go beyond job loss, including misuse by governments or bad actors and the possibility of systems operating beyond human control. To mitigate those threats, it proposes containment plans for dangerous AI, new oversight bodies, and targeted safeguards against high-risk uses like cyberattacks and biological threats.
But with the safety nets and guardrails come the growth proposals, including expanding electricity infrastructure to support AI’s power demands and accelerating AI infrastructure buildouts by offering subsidies, tax credits, or equity stakes. OpenAI says AI should be treated like a utility, and to that end, suggests industry and government work together to ensure AI remains affordable and widely available, rather than controlled by just a few firms.
OpenAI’s framework comes six months after rival Anthropic released its policy blueprint, which laid out a range of possible responses to AI-driven disruption.
“We are entering a new phase of economic and social organization that will fundamentally reshape work, knowledge, and production,” OpenAI wrote. This, the company says, requires a “new industrial policy agenda that ensures superintelligence benefits everyone.”
OpenAI was founded as a nonprofit premised on AI benefiting all of humanity. It became a for-profit company last year, a shift that has led critics to question whether its stated mission is compatible with its need to grow and fulfill its fiduciary duty to shareholders.
The company cited previous ages of economic upheaval like the Industrial Age, pointing to how new economic and financial movements like the New Deal ensured “growth translated into broader opportunity and greater security” by “building new public institutions, protections, and expectations about what a fair economy should provide, including labor protections, safety standards, social safety nets, and expanded access to education.”
“The transition to superintelligence will require an even more ambitious form of industrial policy, one that reflects the ability of democratic societies to act collectively, at scale, to shape their economic future so that superintelligence benefits everyone,” OpenAI wrote.
Rebecca Bellan
Senior Reporter
Rebecca Bellan is a senior reporter at TechCrunch where she covers the business, policy, and emerging trends shaping artificial intelligence. Her work has also appeared in Forbes, Bloomberg, The Atlantic, The Daily Beast, and other publications.
You can contact or verify outreach from Rebecca by emailing rebecca.bellan@techcrunch.com or via encrypted message at rebeccabellan.491 on Signal.
© 2026 TechCrunch Media LLC.
|
|
|
How Gartner will help accelerate and scale your AI strategy |
the_register_ai |
09.04.2026 16:00 |
0.691
|
| Embedding sim. | 0.8225 |
| Entity overlap | 0.2667 |
| Title sim. | 0.0412 |
| Time proximity | 0.6978 |
| NLP type | other |
| NLP organization | Gartner |
| NLP topic | enterprise ai |
| NLP country | United States |
Open original
Feeling stuck with your AI initiatives? Gartner has the answer
David Gordon
Thu 9 Apr 2026 //
16:00 UTC
Sponsored Post. Most enterprise application and software engineering leaders have an AI strategy , but moving from a boardroom vision to a working production environment can be challenging. Many companies are feeling the pressure to deliver results.
What are these hurdles? Many leaders attending the event share the same challenges:
Cultural change is a must if companies are to drive AI through the organization. That takes leadership.
Deciding when to build or buy AI application capabilities — necessary to future-proof enterprise applications — requires a comprehensive market overview and deep understanding of the current applications landscape.
The skills gap around the use of AI, plus adapting to changes in job roles as AI reshapes them, presents a significant obstacle.
Governance is critical. Balancing AI deployment risks such as bias and data privacy against the potential commercial rewards is a difficult line to walk.
Architectural issues are also top-of-mind. Data must be accessible, and any new development must be designed with an eye on AI.
Cost optimization while modernizing remains an issue, especially during a period of dynamic economic shifts.
If you're facing issues like these, the Gartner Application Innovation & Business Solutions Summit 2026, 2–4 June, Las Vegas , will help you turn your AI vision into reality.
The conference theme (Deliver the future: Innovate, Lead, Act) sets the tone; applications and software engineering leaders are at a turning point. It's time to grapple with practical problems and get to market.
What you'll learn
The summit focuses on the practical application of AI, exploring how to turn those boardroom visions into reality. So you can expect actionable strategies and roadmaps for GenAI adoption and application modernization.
Sessions cover best practices for balancing AI innovation with business goals and IT alignment. They will deliver insights into modernizing app portfolios and streamlining software delivery with integrated DevOps platforms. These are all grounded in real-world experience. Peer-led discussions will explore strategies for leading cultural change and operational transformation in the age of AI.
Enterprise applications, software engineering, and enterprise architecture leadership are interconnected roles, especially when building wide-ranging AI-driven solutions. Attending as a team gives leaders a great opportunity to cover a wide set of event sessions and bring back valuable insights to their business.
New for 2026
Attendees can look forward to three spotlight tracks offering actionable guidance to lay the groundwork for AI, scale adoption, and optimize costs. There's also the Senior Leadership Circle , offering a curated executive experience to the senior-most enterprise applications and software engineering leaders (application required to participate).
Returning this year, the Bake-Off offers attendees a unique opportunity to watch Gartner experts guide solution providers through a live, competitive challenge based on a shared use case — delivering both insight and entertainment.
What does all this translate to? Over 50 actionable AI-focused sessions, 25+ interactive workshops and roundtables, along with three keynotes from Gartner and industry leaders.
Attendees can share experiences and strategies with 2,300+ fellow senior leaders and get answers to their toughest questions in one-on-one sessions with Gartner experts.
The agenda is live now. Early bird ends 10 April 2026 and registering before saves you $450 on the standard ticket. View the full agenda and secure your spot.
Sponsored by Gartner.
|
|
|
The AI gold rush is pulling private wealth into riskier, earlier bets | TechCrunch |
techcrunch |
07.04.2026 13:00 |
0.689
|
| Embedding sim. | 0.795 |
| Entity overlap | 0.0469 |
| Title sim. | 0.1471 |
| Time proximity | 0.9102 |
| NLP type | funding |
| NLP organization | Arena Private Wealth |
| NLP topic | ai investment |
| NLP country | United States |
Open original
For decades, buying stock in a hot startup meant being allowed to invest in the funds run by the top VCs. But with the AI boom causing an investment frenzy, more family offices and private wealth are skipping the VC middlemen to get directly onto the cap table.
“Companies are staying private longer, and there are fewer IPOs now than we’ve seen historically,” Mitch Stein, founder of Arena Private Wealth, an investment advisory firm for high-net-worth individuals, told TechCrunch on a recent episode of Equity. “A lot of money is being made well before companies go public, and right now the private markets are dominated by a lot of these AI names. The family offices who are allocating [directly into AI startups] are right on.”
Arena recently co-led a $230 million round into AI chip startup Positron, an investment that earned the midwestern firm a board seat. Stein says that’s part of a deliberate shift away from being passive allocators and toward becoming “active participants in the capital markets.”
The urgency amongst today’s family offices is real.
“The world’s AI infrastructure is being built now, so you’re either going to get in early and have an opportunity to do more primary investing…and really build a portfolio, or you’re going to miss it and be taking random bets,” Ari Schottenstein, Arena’s head of alternatives, told TechCrunch.
Stein put it more bluntly: “Your biggest risk is not having exposure to AI, not what could happen to your AI investments.”
The numbers reflect this sentiment. In February, family offices made 41 direct investments into startups, nearly all of them tied to AI. Among those are high-profile names like Laurene Powell Jobs’ Emerson Collective into World Labs, Azim Premji’s family office into Runway, and Eric Schmidt’s Hillspire into Goodfire. According to BNY Wealth research, 83% of family offices say AI is a top strategic priority over the next five years, and more than half have AI exposure through investments.
Some are going further still. A growing number of family offices are incubating their own AI companies, seeding the first several million, taking on operational roles, and deploying the same entrepreneurial instincts that built their wealth in the first place, according to Schottenstein. Jeff Bezos’ decision to serve as CEO of his own robotics company , which raised an initial $6.2 billion last year at a nearly $30 billion valuation, is a high-profile example of the model.
On a smaller scale, Stein pointed to Tyson Tuttle, an Austin-based angel investor and former CEO of Silicon Labs — which agreed to be acquired by Texas Instruments for $7.5 billion. Tuttle co-founded Circuit, a startup using AI to improve manufacturing and distribution, raising a $30 million angel round that includes $5 million from his own family office.
Not everyone coming to the table has founded a company, though. Arena’s team comes from institutional finance, and they argue that rigorous due diligence is what earns them the right to lead rounds.
“We take our time, we’re a very slow ‘yes,’ we say ‘no’ a lot,” Schottenstein said. “We definitely invest in the sources and experts and people necessary to make sure that a company is what it says it is and can do what it says it will do.”
For the Positron deal, that meant working with third-party experts to validate the technology, but also reading the cap table itself as a signal: “If Arm is coming into a deal, we’d like to think your technology is real,” Schottenstein said. Arena also knew Oracle was a major customer, making Positron one of the only AI chips deployed into a hyperscaler not named Nvidia or AMD.
That selectivity shapes how Arena participates once it’s in. Unlike a typical VC spreading risk across a portfolio, Arena makes a small handful of direct deals per year, which changes the stakes entirely. When they’re in, they’re all in; Positron is their one and only AI inference chip investment.
“When we participate in single asset direct deals and only do a small handful every year, our stakes are incredibly high,” Stein said. “We are not managing portfolio-level returns. We don’t model in failure on a single asset transaction. We are taking a tremendous amount of risk with concentrated client capital. We’re taking on reputational risk as a firm. We’re allocating a tremendous amount of time and resources. There’s an alignment there that founders appreciate.”
|
|
|
Exclusive: Runway launches $10M fund, Builders program to support early-stage AI startups | TechCrunch |
techcrunch |
31.03.2026 14:00 |
0.688
|
| Embedding sim. | 0.7953 |
| Entity overlap | 0.1379 |
| Title sim. | 0.0427 |
| Time proximity | 0.994 |
| NLP type | funding |
| NLP organization | Runway |
| NLP topic | generative ai |
| NLP country | United States |
Open original
Runway is moving beyond building AI video models and into shaping what gets built on top of them.
The AI video-generation startup has launched a $10 million venture fund to invest in early-stage companies building across AI, media, and world simulation, the company’s founders told TechCrunch. It’s also rolling out a Builders program offering seed to Series C startups free API credits, a move that suggests Runway wants to create an ecosystem around what it calls “video intelligence.”
Runway has become one of the leading players in AI video generation, with its tools used across film, advertising, and marketing. But with the launch of its “general world models” last December, the company is now pushing beyond creative tooling into broader applications. And it’s looking to tap startups as a way to explore use cases it can’t pursue alone.
“We think that through video, we’re going to get to video intelligence, and it’s going to open a wider set of use cases in different industries that we can’t double down on today, but that maybe we can support with our research,” Alejandro Matamala-Ortiz, Runway’s co-founder and chief innovation officer, told TechCrunch.
Runway’s thesis for the fund is divided into three buckets:
Technical teams that are pushing the frontier of AI and building new kinds of architecture.
Builders creating the application layer on top of foundation models and bringing AI to new use cases.
Companies experimenting with new forms of media creation, storytelling, and distribution.
For the past year and a half, Runway has quietly backed a handful of early-stage founders and companies, Matamala-Ortiz said. Those include LanceDB , which builds databases for AI applications, and Tamarind Bio , which uses AI to design new proteins for drug discovery. Some startups, like real-time audio-generation firm Cartesia , are working on products that complement its own.
“The next generation of AI models will be built on multimodal data – video, audio, images, text together,” Chang She, co-founder and CEO of LanceDB, told TechCrunch in a statement. “LanceDB is building the infrastructure layer that makes that possible, and Runway is one of the few investors who understands why that matters.”
Runway has raised close to $860 million to date from backers like Nvidia and Qatar Investment Authority, and is valued at around $5.3 billion post-money . It seeded the $10 million fund with existing investors and close partners, with plans to write checks of up to $500,000 for pre-seed and seed-stage startups.
Runway isn’t the only AI startup that’s turning around to invest in companies just starting out on their journeys. OpenAI is the OG with its Startup Fund, and AI search startup Perplexity launched its own $50 million venture fund last year for seed-stage startups. CoreWeave also launched CoreWeave Ventures in September to back AI companies.
“Many companies like ours are investing heavily on the primitives that will unlock a new set of applications or new types of companies,” Matamala-Ortiz said. “Companies like ours that are still fairly small with only 150 people can’t focus on everything. But we do see opportunities in partnering very early with new teams that can benefit from what we’re doing.”
Building with Characters
Image: A sample Character made by Runway (image credits: Runway AI)
That same philosophy is what is driving Runway’s new program for builders. Eligible early-stage startups can start applying for the program to get 500,000 API credits and access to Characters , Runway’s recently released real-time video agent API that’s powered by its new family of general world models.
Characters lets users interact with generative AI agents in real time, giving them a face and a voice that can range from cartoonish to photorealistic. The Builders program is designed, in part, to see what startups build with the technology.
“Until [recently], we didn’t have the possibilities of talking to a real-time video agent, so we are really trying to see which teams see the potential and positive impacts of this technology,” Matamala-Ortiz said.
The program is already live, with a founding cohort that includes Cartesia, MSCHF, Oasys Health, Spara, Subject, and Supersonik. They’re using Characters to power things like AI customer support agents, interactive brand characters, personalized onboarding experiences, real-time sales assistants, and synthetic media tools.
Matamala-Ortiz said he’s excited about the potential for telemedicine and education. And since entertainment is Runway’s bread and butter, Matamala-Ortiz said he expects Characters to be used in gaming and new kinds of entertainment experiences.
“This is part of our general world models, which is what we’re pushing for next: a set of models that are interactive, real-time, and immersive,” Matamala-Ortiz said. “When you start combining all of these pieces, you can imagine that you will be able to generate and simulate entire environments, and participate and have conversations with the characters in these worlds.”
Other startups like Inworld and Charisma are also building interactive AI characters for games and storytelling, while companies like StoReel are experimenting with AI-generated shows users can engage with directly. Some, like Character AI , are already popular for their AI characters you can talk to.
“We do really believe that there’s a new kind of internet that’s going to be more personalized, more immersive, and in real time,” Matamala-Ortiz said.
Correction: An earlier version of this article misstated the title and surname of Alejandro Matamala-Ortiz. He is the chief innovation officer, not the chief design officer. Additionally, his last name is hyphenated; he should be referred to as Matamala-Ortiz, not Ortiz.
|
|
|
Google and Intel deepen AI infrastructure partnership | TechCrunch |
techcrunch |
09.04.2026 18:27 |
0.685
|
| Embedding sim. | 0.806 |
| Entity overlap | 0.2105 |
| Title sim. | 0.1215 |
| Time proximity | 0.698 |
| NLP type | partnership |
| NLP organization | Google Cloud |
| NLP topic | ai infrastructure |
| NLP country | United States |
Open original
Google and Intel announced an expanded multiyear partnership on Thursday for Google Cloud to continue utilizing Intel AI infrastructure and to keep developing processors together.
Google Cloud will use Intel’s Xeon processors, including Intel’s latest Xeon 6 chips, for AI, cloud, and inference tasks. The company has used Intel’s various Xeon processors for decades.
The companies will also expand the co-development of custom infrastructure processing units (IPUs), which help accelerate and manage data center tasks by offloading them from CPUs.
This chip development partnership, which started in 2021, will focus on custom ASIC-based IPUs.
Intel declined to share any information regarding pricing for the deal.
This expansion comes as the industry is hungry for CPUs. While GPUs are used for developing and training AI models, CPUs are crucial for running AI models and for general AI infrastructure.
“AI is reshaping how infrastructure is built and scaled,” Intel chief executive Lip-Bu Tan said in a company press release . “Scaling AI requires more than accelerators — it requires balanced systems. CPUs and IPUs are central to delivering the performance, efficiency and flexibility modern AI workloads demand.”
More companies have been turning their focus to CPUs in recent months as there is a growing shortage of the chips.
SoftBank-owned Arm Holdings recently announced the Arm AGI CPU , the first chip that the semiconductor giant has produced itself, amid a worldwide crunch for CPUs.
|
|
|
In Japan, the robot isn't coming for your job; it's filling the one nobody wants | TechCrunch |
techcrunch |
05.04.2026 14:00 |
0.685
|
| Embedding sim. | 0.8082 |
| Entity overlap | 0.0263 |
| Title sim. | 0.0638 |
| Time proximity | 0.8751 |
| NLP type | other |
| NLP organization | Ministry of Economy, Trade and Industry |
| NLP topic | robotics |
| NLP country | Japan |
Open original
Physical AI is emerging as one of the next major industrial battlegrounds, with Japan’s push driven more by necessity than anything else. With workforces shrinking and pressure mounting to sustain productivity, companies are increasingly deploying AI-powered robots across factories, warehouses, and critical infrastructure.
Japan’s Ministry of Economy, Trade and Industry said in March 2026 that it aims to build a domestic physical AI sector and capture a 30% share of the global market by 2040. The country already holds a strong position in industrial robotics, with Japanese manufacturers accounting for about 70% of the global market in 2022, according to the ministry.
Based on conversations with investors and industry executives, TechCrunch explored what’s driving that shift, how Japan’s approach differs from the U.S. and China, and where value is likely to emerge as the technology matures.
Driven by labor shortages
Several factors are driving adoption in Japan, including cultural acceptance of robotics, labor shortages driven by demographic pressures, and deep industrial strength in mechatronics and hardware supply chains, Woven Capital managing director Ro Gupta told TechCrunch.
“Physical AI is being bought as a continuity tool: how do you keep factories, warehouses, infrastructure, and service operations running with fewer people?” Hogil Doh, Global Brain general partner, also said. “From what I’m seeing, labor shortages are the primary driver.”
Japan’s demographic crunch is accelerating. The population declined for a 14th straight year in 2024; those of working age make up just 59.6% of the total, a share projected to shrink by nearly 15 million over the next 20 years, Doh pointed out. It’s already reshaping how companies operate: a 2024 Reuters/Nikkei survey found labor shortages are the main force pushing Japanese firms to adopt AI.
“The driver has shifted from simple efficiency to industrial survival,” Sho Yamanaka, a principal with Salesforce Ventures, said in an interview with TechCrunch. “Japan faces a physical supply constraint where essential services cannot be sustained due to a lack of labor. Given the shrinking working-age population, physical AI is a matter of national urgency to maintain industrial standards and social services.”
Japan is stepping up efforts to advance automation across manufacturing and logistics, according to Mujin CEO and co-founder Issei Takino. The government has been promoting automation to address structural challenges such as labor shortages. Mujin, a Japanese company, has built software that lets industrial robots handle picking and logistics tasks autonomously. Mujin’s approach centers on software — specifically robotics control platforms — that allows existing hardware to perform more autonomously and efficiently, Takino said.
Hardware strength, system risk
Where Japan has historically excelled is in the physical building blocks of robotics. Whether that advantage translates into the AI era is a more open question. The country continues to demonstrate strength in core robotics components such as actuators, sensors and control systems, according to Japan-based venture capitalists, while the U.S. and China are moving more quickly to develop full-stack systems that integrate hardware, software and data.
“Japan’s expertise in high-precision components – the critical physical interface between AI and the real world – is a strategic moat,” Yamanaka said. “Controlling this touchpoint provides a significant competitive advantage in the global supply chain. The current priority is to accelerate system-level optimization by integrating AI models deeply with this hardware.”
Hardware capabilities are strongest in China and Japan, with Japan particularly strong in robot motion control, while the U.S. leads in the service layer and market development, Takino said. Historically, many U.S. companies have leveraged their software strengths to build integrated businesses – similar to Apple – pairing strong software platforms with high-quality hardware sourced from Asia. However, this model may not fully translate to the emerging world of physical AI, Takino said.
“In robotics, and especially in Physical AI, it is critical to have a deep understanding of the physical characteristics of hardware,” Takino said. “This requires not only software capabilities, but also highly specialized control technologies, which take significant time to develop and involve high costs of failure.”
WHILL, a Tokyo- and San Francisco-based startup that makes autonomous personal mobility vehicles, is drawing on Japan’s “monozukuri,” or craftsmanship heritage, as it takes a broader, full-stack approach to global expansion, CEO Satoshi Sugie told TechCrunch. The company has developed an integrated platform combining electric vehicles, onboard sensors, navigation systems and cloud-based fleet management for short-distance and autonomous transport. The company is leveraging both Japan and the U.S. for development, using Japan to refine hardware and address aging population needs, and the U.S. to accelerate software development and test large-scale commercial models, Sugie noted.
From pilots to real-world deployment
The government is putting money behind the push. Under Prime Minister Sanae Takaichi, Japan has committed about $6.3 billion to strengthen core AI capabilities, advance robotics integration and support industrial deployment.
The shift from experimentation to real deployment is already underway. Industrial automation remains the most advanced segment, with Japan installing tens of thousands of robots each year, particularly in the automotive sector. Newer applications are also beginning to gain traction, Doh said.
“The signal is simple – customer-paid deployments rather than vendor-funded trials, reliable operation across full shifts, and measurable performance metrics such as uptime, human intervention rates and productivity impact,” Doh said.
In logistics, companies are deploying automated forklifts and warehouse systems, while in facilities management, inspection robots are being used in data centers and industrial sites.
Companies like SoftBank are already applying physical AI in practice, combining vision-language models with real-time control systems to enable robots to interpret environments and execute complex tasks autonomously.
In defense, where autonomous systems are becoming foundational, competitiveness will depend not just on platforms but on operational intelligence powered by physical AI, Terra Drone CEO Toru Tokushige told TechCrunch. Tokushige added that by combining operational data with AI, Terra Drone is working to enable autonomous systems to function reliably in real-world environments and support the advancement of Japan’s defense infrastructure.
Investment is shifting beyond hardware, with companies allocating more capital to orchestration software, digital twins, simulation tools and integration platforms, according to investors and industry sources.
The rise of hybrid ecosystems
Japan’s physical AI ecosystem is also evolving in ways that differ from traditional tech disruption models. Rather than a winner-take-all dynamic, industry participants expect a hybrid model, with established companies providing scale and reliability, while startups drive innovation in software and system design.
Large incumbents, including Toyota Motor Corporation, Mitsubishi Electric, and Honda Motor, retain significant advantages in manufacturing scale, customer relationships, and deployment capabilities. But startups are carving out critical roles in emerging areas such as orchestration software, perception systems, and workflow automation.
“The relationship between startups and established corporations is a mutually complementary ecosystem,” Yamanaka said. “Robotics requires heavy hardware development, deep operational know-how, and significant capital expenditure. By fusing the vast assets and domain expertise of major corporations with the disruptive innovation of startups, the industry can strengthen its collective global competitiveness.”
Japan’s defense ecosystem is also shifting away from dominance by large corporations toward greater collaboration with startups, the Terra Drone CEO said. Large companies remain focused on platforms, scale and integration, while startups are driving development in smaller systems, software and operations, with speed and adaptability becoming key competitive factors.
Companies like Mujin are developing platforms that sit above hardware, enabling multi-vendor automation and faster deployment across industries. Others, including Terra Drone, are applying similar approaches to autonomous systems, combining AI and operational data to support real-world applications at scale.
“The most defensible value will sit with whoever owns deployment, integration, and continuous improvement,” Doh said.
Kate Park
Reporter, Asia
Kate Park is a reporter at TechCrunch, with a focus on technology, startups and venture capital in Asia. She previously was a financial journalist at Mergermarket covering M&A, private equity and venture capital.
|
|
|
OpenAI made economic proposals — here’s what DC thinks of them |
the_verge_ai |
08.04.2026 20:14 |
0.685
|
| Embedding sim. | 0.812 |
| Entity overlap | 0 |
| Title sim. | 0.1719 |
| Time proximity | 0.6887 |
| NLP type | other |
| NLP organization | |
| NLP topic | |
| NLP country | |
Open original
Happy ceasefire day and welcome to Regulator , a newsletter for Verge subscribers about Big Tech's rocky journey through the world of politics. If you're not a subscriber yet, you can do so here , but my only request is that you sign up before Donald Trump decides to revisit his previous threats toward Iran and kickstart World War III.
I'm back after being waylaid last week by the deadly combo of a moderate cold and the beginning of pollen season. (Twenty-one percent of the District's acreage is taken up by public green space, and DC is consistently ranked the best city park system in America . Unfortunately, I am allergic to every tree and gr …
Read the full story at The Verge.
|
|
|
The Future of AI Is Open and Proprietary |
nvidia_blog |
25.03.2026 19:00 |
0.684
|
| Embedding sim. | 0.7856 |
| Entity overlap | 0.0213 |
| Title sim. | 0.125 |
| Time proximity | 0.9858 |
| NLP type | other |
| NLP organization | NVIDIA |
| NLP topic | foundation models |
| NLP country | |
Open original
The Future of AI Is Open and Proprietary
AI leaders — including the CEOs of Mistral, Perplexity, Cursor, Reflection AI and Thinking Machines Lab — agree that open model efforts are beneficial for innovation across the AI ecosystem.
March 25, 2026 by Kari Briski
AI is the defining technology of our time, quickly becoming core business infrastructure. It’s fueled by a diverse ecosystem of models: large and small, open and proprietary, generalist and specialist.
This variety is essential for a future where every application will be powered by AI, every country will build it and every company will use it. And it’s not a debate between open versus closed innovation.
As NVIDIA founder and CEO Jensen Huang told attendees at a special session on open frontier models at NVIDIA GTC , “Proprietary versus open is not a thing. It’s proprietary and open.”
That’s why the future of AI innovation isn’t about a single massive model. Every industry — healthcare, finance, manufacturing — tackles its own unique challenges. They all need AI that can reason about their data and workflows in various ways. And that requires systems of models, tuned and specialized for different modalities, domains and organizations, working together to solve a specific business problem.
NVIDIA is a major contributor to open source AI: it’s now the largest organization on Hugging Face, with nearly 4,000 team members. And at GTC, the company announced the NVIDIA Nemotron Coalition, a first-of-its-kind global collaboration of model builders and AI labs working to advance open, frontier-level foundation models through shared expertise, data and compute.
The first project stemming from the coalition will be a base model codeveloped by Mistral AI and NVIDIA, with coalition members contributing data, evaluations and domain expertise to support the model’s post-training and continued development. It’ll be shared with the open ecosystem and underpin the next generation of NVIDIA Nemotron models, which have been downloaded more than 45 million times from Hugging Face.
Several Nemotron Coalition members joined other leaders building and consuming open models for a back-to-back panel session at GTC.
The first panel featured LangChain cofounder and CEO Harrison Chase, Thinking Machines Lab founder and CEO Mira Murati, Perplexity CEO and cofounder Aravind Srinivas, Cursor CEO and cofounder Michael Truell, and Reflection AI cofounder and CEO Misha Laskin. The second included Mistral cofounder and CEO Arthur Mensch, OpenEvidence CEO Daniel Nadler, and Black Forest Labs cofounder and CEO Robin Rombach, alongside Hanna Hajishirzi, senior director of natural language processing at Ai2, and Anjney Midha, founder of AMP PBC.
Five key points stood out from the conversation:
1. AI agents are becoming highly capable coworkers.
“We’re soon going to see agents really be coworkers that can take on tasks that take many hours or many days, and do incredibly complex workloads,” said Cursor’s Truell.
2. AI is not a single model — it’s an orchestrated system.
“What you want is a multimodal, multi-model and multi-cloud orchestra,” said Perplexity’s Srinivas. “All you’ve got to do is delegate your task. You don’t have to worry about which model is good at what — it’s for the orchestration system to figure it out.”
3. Openness fuels innovation across the model ecosystem.
“Models are fundamental knowledge infrastructure, and fundamental knowledge infrastructure yearns for openness,” said Reflection AI’s Laskin. “There’s a flourishing ecosystem of powerful, closed models but equally capable open models that are going to be coming over the next couple years.”
This combination of open and proprietary models drives advancements at frontier AI companies as well as in academia.
“There’s a lot of study to be done, and it cannot be done completely in the large labs,” said Thinking Machines Lab’s Murati. “This is where openness can be very helpful…it advances the science of AI, the science of intelligence.”
From left to right: NVIDIA founder and CEO Jensen Huang, LangChain cofounder and CEO Harrison Chase, Thinking Machines Lab founder and CEO Mira Murati, Perplexity CEO and cofounder Aravind Srinivas, Cursor CEO and cofounder Michael Truell, and Reflection AI cofounder and CEO Misha Laskin.
4. Open systems are trustworthy and accessible.
“At the end of the day, you’re delegating trust…and it’s much easier to trust an open system,” said AMP PBC’s Midha.
With a trusted system, developers can deploy long-running AI agents that can tackle virtually any task.
“The models and the systems orchestrating the models are going to get much more capable,” said LangChain’s Chase. “And so you’ll be able to have personal productivity agents that can take on more complex tasks that run for longer.”
Open ecosystems also foster collaboration, helping democratize access to AI.
“We believe that open-weight models should be the basis for building all the AI software in the world,” said Mistral’s Mensch. “By having an open ecosystem of people that have aligned incentives to create assets that are going to be great for humanity, we can accelerate progress and make sure that everybody gets access in a fair way across the world to artificial intelligence.”
From left to right: NVIDIA founder and CEO Jensen Huang; Mistral cofounder and CEO Arthur Mensch; OpenEvidence CEO Daniel Nadler; Hanna Hajishirzi, senior director of natural language processing at Ai2; Black Forest Labs cofounder and CEO Robin Rombach; and Anjney Midha, founder of AMP PBC.
5. Society needs generalist and specialist AI to provide value.
“You have to sort of shape AI the way you shape society,” said OpenEvidence’s Nadler, describing how hospitals are organized into generalists working alongside world-class specialists. “I think the shape of AI is going to reflect that.”
Specialized AI is on the rise because it lets organizations combine open foundations with their own proprietary data. That unique data is where they unlock real, differentiated value across business and academia.
“These days you might argue that progress in AI is getting limited into a few closed labs, but it’s actually very important to the vast majority of academia and researchers, or nonprofit and other places who want to also be part of this progress,” said Ai2’s Hajishirzi. “And we’ve seen that all this progress already has happened by everything being open.”
“It’s actually one of the most exciting times to work on either the frontier models, the big models or more specialized open models that then get deployed on device,” said Black Forest Labs’ Rombach. “There’s so many different frontiers, and all of them should have some open component.”
NVIDIA CEO Jensen Huang, sporting a custom leather jacket from Cursor, meets with open model ecosystem leaders before a panel discussion at GTC.
Watch the GTC session highlights on YouTube and start building with NVIDIA Nemotron open models.
|
|
|
Rebrand automation as 'zero-token architecture' to master AI |
the_register_ai |
08.04.2026 20:21 |
0.683
|
| Embedding sim. | 0.7995 |
| Entity overlap | 0.1304 |
| Title sim. | 0.0978 |
| Time proximity | 0.8147 |
| NLP type | other |
| NLP organization | Google |
| NLP topic | ai adoption |
| NLP country | United States |
Open original
Call your existing automation ‘zero-token architecture’ to become an instant agentic AI wiz
Kubernetes luminary Kelsey Hightower thinks IT pros need to get smart about thriving in a world that’s trying to hide deep tech
Simon Sharwood
Wed 8 Apr 2026, 20:21 UTC
As businesses drink the agentic AI Kool-Aid and go looking for productivity enhancements, IT professionals can deliver by rebranding their existing automations as “zero-token architecture,” according to Kelsey Hightower, a former Google distinguished engineer and a notable early promoter of Kubernetes.
Speaking at Nutanix’s .NEXT conference in Chicago on Wednesday, Hightower said he’s spoken to IT professionals who say they intend to use agentic AI to automatically handle requests for new passwords by parsing users’ requests typed into Slack messages.
“The agent will burn $2 trillion worth of tokens and call an API,” he joked. He also shared a four-letter acronym he recommends as an alternative: B-A-S-H, for the Bash command-line tool, which he pointed out can use the data transfer tool cURL to automate password resets.
Hightower suggested describing the combination of Bash and cURL as “the zero-token architecture,” because many organizations are starting to introduce token consumption quotas to control their AI bills.
He also recommended continuing to use automation tools like Puppet, Ansible, and Chef, and techniques like storing configuration files in etc/cron.d, but rebranding them.
“Just rename that to etc/agent.d, and you’ll have all these agents doing all these automatic things using the zero-token architecture,” he suggested.
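Hightower’s quip can be sketched concretely. The script below is a minimal illustration of the idea, not anything from his talk: the Slack-style message parsing, the identity-provider URL, and the JSON payload are all hypothetical placeholders.

```shell
#!/usr/bin/env bash
# A minimal sketch of the "zero-token architecture": plain Bash plus cURL
# handling a password-reset request, no LLM tokens consumed. The identity
# provider endpoint and payload below are hypothetical placeholders.

# Extract a username from a Slack-style message such as
# "please reset password for alice".
parse_user() {
  printf '%s\n' "$1" | sed -n 's/.*reset password for \([a-z0-9._-]*\).*/\1/p'
}

# POST the reset request; --fail makes curl exit non-zero on HTTP errors.
reset_password() {
  local user="$1"
  curl --fail -s -X POST "https://idp.example.com/api/v1/password-reset" \
       -H "Content-Type: application/json" \
       -d "{\"username\": \"${user}\"}"
}

parse_user "please reset password for alice"   # → alice
```

Dropped into a file under etc/cron.d (or, per the joke, etc/agent.d), the same script runs on a schedule with zero tokens consumed.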
On a more serious note, Hightower suggested cheeky rebranding can help IT pros to keep their jobs as AI enables more, and more sophisticated, automations – if they first develop deep technical skills.
To show how, he said some careers build 20 years of work on one year of learning.
“They learn how to install Linux, and make no progress after that, or learn how to manage the switches and made no progress after that,” he said.
Hightower thinks IT professionals need to go deeper and develop an understanding of technology platform fundamentals – so-called “hard skills.” But he also thinks soft skills are increasingly important, because the aim of AI and other automation tools is to reduce reliance on hard skills by learning from IT professionals’ experience.
“We train the machines,” he said. “It's your real-life experiences, every bug you fix, everything you share with other people on GitHub, all of that became the training data.”
Automation can’t replace soft skills like intuition, expressing a style, or sharing an opinion based on human experience.
“You know that piece of software is going to crash in the middle of the night, because that's when the backups run,” he said.
“I guarantee you in 10 years, hopefully most of you will still be here,” Hightower added. “Maybe your job will be slightly different. But I guarantee you those that understand the fundamentals will be the most creative among us,” he said. “The people who know the underlying parts of the stack, those are the people who create the new programming languages, they create the new abstractions, because they understand the details below.”
And their soft skills will remain important too, to help IT pros do clever things like knowing when to deploy a zero-token architecture. ®
|
|
|
The AI Data Centers That Fit on a Truck |
ieee_spectrum_ai |
30.03.2026 14:00 |
0.682
|
| Embedding sim. | 0.7839 |
| Entity overlap | 0.0625 |
| Title sim. | 0.3571 |
| Time proximity | 0.5988 |
| NLP type | product_launch |
| NLP organization | Duos Edge AI |
| NLP topic | ai infrastructure |
| NLP country | South Korea |
Open original
A traditional data center protects the expensive hardware inside it with a “shell” constructed from steel and concrete. Constructing a data center’s shell is inexpensive compared to the cost of the hardware and infrastructure inside it, but it’s not trivial. It takes time for engineers to consider potential sites, apply for permits, and coordinate with construction contractors.
That’s a problem for those looking to quickly deploy AI hardware, which has led companies like Duos Edge AI and LG CNS to respond with a more modular approach. They use pre-fabricated, self-contained boxes that can be deployed in months instead of years. The boxes can operate alone or in tandem with others, providing the option to add more if required.
“I just came back from Nvidia’s GTC, and a lot of [companies] are sitting on their deployment because their data centers aren’t ready, or they can’t find the space,” said Doug Recker, CEO of Duos Edge AI. “We see the demand there, and we can deploy faster.”
GPUs shipped straight to you
Duos Edge AI’s modular compute pods are 55 feet long and 12.5 feet wide. Though they look similar to a shipping container, they’re actually a bit larger and designed primarily for transportation by truck. Each compute pod contains racks of GPUs much like those used in other data centers. Duos recently entered a deal with AI infrastructure company Hydra Host to deploy four pods with 576 GPUs per pod. That’s a total of 2,304 GPUs, with the option to later double the deployment to 4,608 GPUs.
Modular data centers aren’t new for Duos; the company previously deployed edge data centers for rural customers, such as the Amarillo, Texas school district. However, the pods for the Hydra Host deployment will be upgraded to handle more intense AI workloads. They’ll contain more racks, draw more power, and use liquid cooling to keep the GPUs running efficiently.
Across the Pacific, Korean technology giant LG is taking a similar approach. The company’s CNS subsidiary, which provides IT infrastructure and services, has announced the AI Modular Data Center, which, like the Duos unit, contains racks of GPUs and supporting hardware in a pre-fabricated enclosure.
Also like Duos’ deployment, LG’s AI Modular Data Center contains 576 Nvidia GPUs with the option to scale up in the future. “We are currently developing an expanded version that can support more than 4,600 GPUs within a single unit, with a service launch planned within this year,” said Heon Hyeock Cho, vice president and head of the datacenter business unit at LG CNS. LG’s first Modular Data Center will roll out in the South Korean port city of Busan, where it could deploy up to 50 units.
LG and Duos are not alone. Hewlett Packard Enterprise, Vertiv, and Schneider Electric now have modular data centers available or in development. A report from market research firm Grand View Research estimates that the market for modular data centers could more than double by 2030.
On the grid, but under the radar
A modular data center site is quite different from a traditional data center’s because there’s no need to construct a large steel-and-concrete shell. Instead, the site can be made ready by pouring a concrete pad. The pre-fabricated modules are delivered by truck, placed on the pad where desired, and then networked on-site.
Duos’ deployments, for instance, include power modules placed alongside the compute pods, and the pods are networked together with redundant fiber connections that allow the pods to operate in unison. Recker compared it to lining up school buses in a parking lot. “Everything is built off-site at a factory, and we can put it together like a jigsaw puzzle,” he said.
That simplicity is the point. Both Duos and LG CNS expect a modular data center can be deployed in about six months, compared to the roughly two or three years a conventional data center requires. Recker said that, for Duos, the turnaround is so quick that building the pre-fabricated unit isn’t always the constraint. While it’s possible to construct a pre-fabricated unit in 60 or 90 days, site preparation extends the timeline “because you can’t get the permits that fast.”
Modular data centers may also provide good value. Recker said a five-megawatt modular deployment can be built for about $25 million, and that Duos’ cost per megawatt is roughly half what larger facilities charge. For Duos, savings are possible in part because its modular data centers can target smaller deployments where the permitting is less complex. Smaller, modular deployments also meet less resistance from local governments, which are increasingly skeptical about data center construction.
While Duos targets smaller deployments, LG hopes to go big. Its planned Busan campus of 50 AI Modular Data Centers suggests an ambition to achieve deployments that rival the capacity of conventional facilities. A site with 50 units would bring the total number of GPUs to over 28,000. Here, the benefits of a modular approach could stem mostly from scalability, as a modular data center could start small and grow as required.
“By adopting a modular approach, the AI Modular Data Center can be incrementally expanded through the combination of dozens of AI Boxes,” Cho said. “It’s enabling the construction of even hyperscale-level AI data centers.”
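The scaling figures quoted in this piece are easy to sanity-check. A quick sketch; the quantities come from the article, but the per-megawatt division is my own arithmetic, not a number the companies gave:

```shell
# Recheck the deployment figures with shell arithmetic.
gpus_per_pod=576
echo $((4 * gpus_per_pod))      # → 2304 (Hydra Host deal, four pods)
echo $((50 * gpus_per_pod))     # → 28800 ("over 28,000" for a 50-unit campus)
echo $((25000000 / 5))          # → 5000000 (dollars per megawatt at $25M for 5 MW)
```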
|
|
|
OpenAI Backs Bill That Would Limit Liability for AI-Enabled Mass Deaths or Financial Disasters |
wired |
10.04.2026 00:19 |
0.681
|
| Embedding sim. | 0.7872 |
| Entity overlap | 0.0606 |
| Title sim. | 0.104 |
| Time proximity | 0.9385 |
| NLP type | regulation |
| NLP organization | OpenAI |
| NLP topic | ai regulation |
| NLP country | United States |
Open original
Maxwell Zeff
Business
Apr 9, 2026 8:19 PM
OpenAI Backs Bill That Would Limit Liability for AI-Enabled Mass Deaths or Financial Disasters
The ChatGPT-maker testified in favor of an Illinois bill that would limit when AI labs can be held liable—even in cases where their products cause “critical harm.”
Photograph: Hwawon Ceci Lee/Getty Images
OpenAI is throwing its support behind an Illinois state bill that would shield AI labs from liability in cases where AI models are used to cause serious societal harms, such as death or serious injury of 100 or more people or at least $1 billion in property damage.
The effort seems to mark a shift in OpenAI’s legislative strategy. Until now, OpenAI has largely played defense, opposing bills that could have made AI labs liable for their technology’s harms. Several AI policy experts tell WIRED that SB 3444—which could set a new standard for the industry—is a more extreme measure than bills OpenAI has supported in the past.
The bill would shield frontier AI developers from liability for “critical harms” caused by their frontier models as long as they did not intentionally or recklessly cause such an incident and have published safety, security, and transparency reports on their websites. It defines a frontier model as any AI model trained using more than $100 million in computational costs, a threshold that would likely cover America’s largest AI labs, including OpenAI, Google, xAI, Anthropic, and Meta.
“We support approaches like this because they focus on what matters most: Reducing the risk of serious harm from the most advanced AI systems while still allowing this technology to get into the hands of the people and businesses—small and big—of Illinois,” said OpenAI spokesperson Jamie Radice in an emailed statement. “They also help avoid a patchwork of state-by-state rules and move toward clearer, more consistent national standards.”
Under its definition of critical harms, the bill lists a few common areas of concern for the AI industry, such as a bad actor using AI to create a chemical, biological, radiological, or nuclear weapon. If an AI model engages in conduct on its own that, if committed by a human, would constitute a criminal offense and leads to those extreme outcomes, that would also be a critical harm. If an AI model were to commit any of these actions under SB 3444, the AI lab behind the model could not be held liable, so long as the harm wasn’t intentional and the lab had published its reports.
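As reported, the bill's shield reduces to a simple predicate. The sketch below is an illustrative simplification of the description above, not the statutory text; the function name and boolean inputs are assumptions for illustration:

```python
def shielded_under_sb3444(training_cost_usd: float,
                          intentional_or_reckless: bool,
                          published_reports: bool) -> bool:
    """Illustrative reading of SB 3444 as reported: a frontier developer
    is shielded from liability for critical harms unless it acted
    intentionally or recklessly, or failed to publish safety, security,
    and transparency reports."""
    is_frontier = training_cost_usd > 100_000_000
    return is_frontier and not intentional_or_reckless and published_reports

# A developer of a $200M-training-cost model that published its reports
# and did not act intentionally or recklessly would be shielded.
print(shielded_under_sb3444(200e6, False, True))  # True
```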
Federal and state legislatures in the US have yet to pass any laws specifically determining whether AI model developers, like OpenAI, could be held liable for these types of harm caused by their technology. But as AI labs continue to release more powerful AI models that raise novel safety and cybersecurity challenges, such as Anthropic’s Claude Mythos, these questions feel increasingly pressing.
In her testimony supporting SB 3444, a member of OpenAI’s Global Affairs team, Caitlin Niedermeyer, also argued in favor of a federal framework for AI regulation. Niedermeyer struck a message that’s consistent with the Trump administration’s crackdown on state AI safety laws, claiming it’s important to avoid “a patchwork of inconsistent state requirements that could create friction without meaningfully improving safety.” This is also consistent with the broader view of Silicon Valley in recent years, which has generally argued that it’s paramount for AI legislation not to hamper America’s position in the global AI race. While SB 3444 is itself a state-level safety law, Niedermeyer argued that such laws can be effective if they “reinforce a path toward harmonization with federal systems.”
“At OpenAI, we believe the North Star for frontier regulation should be the safe deployment of the most advanced models in a way that also preserves US leadership in innovation,” Niedermeyer said.
Scott Wisor, policy director for the Secure AI project, tells WIRED he believes this bill has a slim chance of passing, given Illinois' reputation for aggressively regulating technology. “We polled people in Illinois, asking whether they think AI companies should be exempt from liability, and 90 percent of people oppose it. There’s no reason existing AI companies should be facing reduced liability,” Wisor says.
He notes that lawmakers in Illinois have also submitted bills increasing liability on AI model developers. Last August, the state became the first in the country to pass legislation limiting the use of AI in mental health services. Illinois was also early to regulate biometric data collection, passing the Biometric Information Privacy Act in 2008.
While SB 3444 focuses on mass casualty events and large financial disasters, AI labs are also facing questions about the harms their AI models can cause on an individual level. Several family members of children who died by suicide after allegedly developing unhealthy relationships with ChatGPT have sued OpenAI in the last year.
The federal AI legislation Niedermeyer advocates for in her testimony remains an elusive goal for Congress. While the Trump administration has issued executive orders and published frameworks in an attempt to catalyze some federal AI legislation, talks about actually passing such a measure don’t seem to be going anywhere. In the absence of federal guidance, states including California and New York have passed bills, such as SB 53 and the RAISE Act, which require AI model developers to submit safety and transparency reports.
Years into the AI boom, there’s still an open legal question around what happens if an AI model causes a catastrophic event.
|
|
|
Only 28% of AI infrastructure projects fully pay off |
the_register_ai |
07.04.2026 13:13 |
0.68
|
| Embedding sim. | 0.7759 |
| Entity overlap | 0.0732 |
| Title sim. | 0.2333 |
| Time proximity | 0.8351 |
| NLP type | other |
| NLP organization | Gartner |
| NLP topic | ai adoption |
| NLP country | United States |
Open original
Only 28% of AI infrastructure projects fully pay off, survey finds
ITSM the area most likely to offer wins, according to Gartner research
Lindsay Clark
Tue 7 Apr 2026 //
13:13 UTC
Tech leaders hoping AI might help save money and improve efficiency in IT infrastructure should know that only 28 percent of use cases fully succeed and offer return on investment (ROI).
According to new figures from Gartner, one in five AI projects in IT infrastructure and operations (I&O) fail outright.
Its survey of 782 I&O managers conducted in November and December last year found that 57 percent have suffered at least one failure in applying AI to their area.
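For scale, the reported percentage implies a rough headcount. This is derived here as an approximation; Gartner published only the percentages:

```python
# Rough count implied by the Gartner figures quoted above.

respondents = 782                              # I&O managers surveyed
with_any_failure = round(0.57 * respondents)
print(with_any_failure)                        # 446 managers saw >= 1 AI failure

# Note: the 28 percent success and 20 percent failure rates apply to
# use cases/projects, not to respondents, so they are not converted here.
```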
Melanie Freeze, research director at Gartner, said many AI initiatives flopped because of unrealistic expectations.
"They assumed AI would immediately automate complex tasks, cut costs, or fix long‑standing operational issues," she said. "When expectations are not realistically set and the results don't appear quickly, confidence drops and projects stall."
"The 20 percent failure rate is largely driven by AI initiatives that are either overly ambitious or poorly scoped. AI that doesn't fit into the organization's operations simply can't deliver ROI."
I&O leaders most frequently observe AI failures in auto-remediation, self-healing infrastructure, and agent-led management of workflows within and between systems, Gartner found.
Among I&O leaders who faced setbacks, 38 percent said persistent skill gaps continue to hamper AI success. The same proportion cited poor data quality or limited data availability as a direct cause of AI project failure.
The research found tech managers were more successful where the technology was more mature, such as applying GenAI to IT service management (ITSM) and cloud operations, with 53 percent of I&O leaders reporting success in those areas.
However, there were challenges in getting funding for using AI in tech infrastructure. "Many AI initiatives are still funded by individual business units," Freeze said. "However, as AI infrastructure spending continues to rise, CEOs and CFOs need to play a more active role in setting funding criteria and approving major investments."
The findings come against a backdrop of companies struggling to justify AI spending. In February, a survey of almost 6,000 corporate execs across the US, UK, Germany, and Australia found that more than 80 percent detected no discernible impact from AI on either employment or productivity, even though 69 percent of businesses currently use some form of AI.
Another study from Harris Poll, commissioned by Dataiku, found tech leaders would come under pressure to show returns on AI investment in 2026: 98 percent said there was increasing pressure from the board to demonstrate ROI, while 71 percent of the CIOs surveyed believed their AI budget would likely face cuts or a freeze if targets were not met by the end of the first half of the year. ®
|
|
|
Why SoftBank’s new $40B loan points to a 2026 OpenAI IPO | TechCrunch |
techcrunch |
27.03.2026 21:44 |
0.669
|
| Embedding sim. | 0.7553 |
| Entity overlap | 0.1154 |
| Title sim. | 0.1667 |
| Time proximity | 0.9638 |
| NLP type | funding |
| NLP organization | SoftBank Group |
| NLP topic | artificial intelligence |
| NLP country | Japan |
Open original
SoftBank has taken on a new $40 billion loan to help it cover its $30 billion commitment to invest in OpenAI as part of the AI model maker’s record-breaking $110 billion raise last month, the Japanese conglomerate said on Friday.
Most striking is that the loan is unsecured and has a 12-month term, meaning it must be repaid or refinanced by next year. This could be a signal that the lenders believe OpenAI’s highly anticipated public listing will indeed come later this year, as some outlets, like CNBC, have reported. The loan is provided by JPMorgan Chase, Goldman Sachs, and four Japanese banks.
Since OpenAI’s IPO is bound to be one of the largest listings ever, if it does happen this year, that would presumably give SoftBank the liquidity to settle the debt in such a short time span. SoftBank’s new $30 billion investment in OpenAI brings its total bet on the ChatGPT maker to over $60 billion.
|
|
|
DFRobot Showcases AI Maker Projects at Robot Hokoten in Akihabara |
prnewswire |
05.04.2026 13:54 |
0.667
|
| Embedding sim. | 0.7801 |
| Entity overlap | 0.0417 |
| Title sim. | 0.0813 |
| Time proximity | 0.8757 |
| NLP type | product_launch |
| NLP organization | DFRobot |
| NLP topic | edge computing |
| NLP country | China |
Open original
DFRobot Showcases AI Maker Projects at Robot Hokoten in Akihabara
News provided by
DFRobot
Apr 05, 2026, 09:54 ET
SHANGHAI, April 5, 2026 /PRNewswire/ -- DFRobot, a global leader in open-source hardware, recently participated in the Robot Hokoten @ Akihabara event in Tokyo, appearing at the DigiKey booth. The company presented two AI-driven projects based on open-source hardware—an "Electronic Nose" gas recognition system and an AI-powered cell recognition teaching system—demonstrating how AI and open hardware can be effectively applied in STEAM education and maker scenarios.
Electronic Nose: Integrating TinyML with On-Device AI
The "Electronic Nose" project combines edge AI with embedded hardware. It uses four MEMS gas sensors connected to an ESP32 running a TinyML model for real-time odor analysis.
During the demonstration, the sensor probe was placed above a glass of beer. Within 20 to 30 seconds, the system completed odor sampling and analysis. The results were then transmitted to the LattePanda Sigma, a compact x86 computing module, which generated descriptive content or tasting notes using a locally deployed language model. The entire process was executed on-device, without relying on network connectivity.
Xia Qing, Senior Engineer at DFRobot, commented: "This demonstration shows how makers can combine TinyML-based sensing with local AI models to transform sensor data into intuitive insights. Potential applications include coffee flavor analysis, fermentation monitoring, and food freshness detection."
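The pipeline described above (four gas sensors, a TinyML classifier on the ESP32, then a local language model on the LattePanda Sigma) can be sketched roughly as below. Every function here is a hypothetical stand-in for illustration, not a DFRobot API:

```python
# Hypothetical sketch of the "Electronic Nose" pipeline: sensor readings
# -> tiny on-device classifier -> locally generated text notes.

def read_gas_sensors():
    # Stand-in for sampling four MEMS gas sensors over 20-30 seconds.
    return [0.42, 0.13, 0.88, 0.05]

def classify_odor(readings):
    # Stand-in for the TinyML model running on the ESP32.
    return "beer" if readings[2] > 0.5 else "unknown"

def describe(label):
    # Stand-in for the language model deployed on the LattePanda Sigma.
    return f"Detected odor profile: {label}."

label = classify_odor(read_gas_sensors())
print(describe(label))  # Detected odor profile: beer.
```

The point of the structure is that every stage runs locally, matching the article's claim that no network connectivity is required.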
AI Cell Recognition: Bringing AI into the STEAM Classroom
Another featured project focused on educational applications. DFRobot presented an AI-powered cell recognition teaching system designed to integrate artificial intelligence into middle school biology education. The system is built using the HUSKYLENS 2 AI vision sensor and the UNIHIKER K10 development board.
Powered by the K230 processor with up to 6 TOPS of AI computing performance, HUSKYLENS 2 can efficiently run both pre-trained and user-trained models with low latency. In the demonstration, the system performed real-time identification and classification of cells under a microscope, making abstract AI and machine learning concepts tangible through hands-on interaction.
The project showcases the complete AI workflow—from data collection and model training to edge inference—highlighting its practical applicability in educational settings.
Partnering with DigiKey to Expand the Open-Source Hardware Ecosystem
DFRobot and DigiKey jointly showcased at Robot Hokoten to promote open-source hardware and AI education. The two parties will continue collaborating on technical content, global marketing, and educational solutions, lowering the barrier to AI and open hardware adoption, and accelerating the transition from maker projects to real STEAM classroom applications.
SOURCE DFRobot
|
|
|
Can orbital data centers help justify a massive valuation for SpaceX? | TechCrunch |
techcrunch |
05.04.2026 15:40 |
0.667
|
| Embedding sim. | 0.8303 |
| Entity overlap | 0.0208 |
| Title sim. | 0.281 |
| Time proximity | 0.115 |
| NLP type | other |
| NLP organization | spacex |
| NLP topic | ai infrastructure |
| NLP country | united states |
Open original
SpaceX has reportedly filed confidential paperwork for an initial public offering in which the company would raise $75 billion at a $1.75 trillion valuation. And according to CEO Elon Musk, orbital data centers will be a big part of SpaceX’s future.
On the latest episode of TechCrunch’s Equity podcast, Kirsten Korosec, Sean O’Kane, and I discussed Musk’s vision, as well as other companies that are pursuing similar goals.
It will take significant tech development and massive capital spending to make orbital data centers a reality, but as Sean noted, with “opposition happening around the country to data centers in general,” executives like Musk and Jeff Bezos may be thinking, “The engineering challenge may be less than the social challenge back here” on Earth.
Read a preview of our conversation, edited for length and clarity, below.
Sean: This has been a trend — I would say a rapidly forming trend — over the last half year to a year, and we have different examples of it. We have SpaceX; I feel like in some ways, Elon Musk was late on this trend. And for the moment, let’s set aside the actual mechanics and the viability of data centers in space. We could talk about that in a second if we want, but —
Kirsten: We have a really good story we’ll link to in the show notes, by the way. One of our most recent hires, Tim Fernholz, is amazing. He writes all about the physics and the constraints of that.
Sean: Yeah, I think it’s a really interesting engineering challenge. It’s a really interesting physics challenge. It’s a really interesting orbital mechanics challenge. But it’s something that clearly a bunch of companies and people are going to try and chase. [There’s] going to be SpaceX doing it, with a kind of variance of what they’re already working on with their Starlink network.
There’s a startup that had come out of Y Combinator, originally called Starcloud, that was really one of the first ones out there trying to build a huge business around this. It just raised $170 million this week, a round that tipped its valuation into unicorn territory.
Jeff Bezos is trying to go after this as well. This is a next generation version of the competition that we’ve seen happening between Starlink and Amazon’s Leo satellite network, and Blue Origin has its own satellite network coming online as well in the next couple of years.
So there’s going to be a whole bunch of this happening, and it feels like it wasn’t happening a year ago. I know the way that Elon Musk pitches it is — we know he’s allergic to red tape, he’s built a data center in Memphis, too. Maybe now he knows the challenges and the risks you have to take to sidestep that red tape.
There’s a lot of opposition happening around the country to data centers in general. And these people say, “We have access to space, so let’s just try and do it up there.” The engineering challenge may be less than the social challenge back here on our [planet].
Kirsten: And it also creates excitement, right? If a company is about to go [public] and they’re working on data centers in space, this is something that people can have expectations about in a positive way and ignore the constraints. It feels like a company that is working on something that’s not old and outdated, but signals the future. And it’s really a great strategy when you think about it.
Anthony: Not that Elon Musk is the only one who does this, but it seems like he’s incredibly successful at being like, “Don’t judge my companies based on how much money they’re making now, judge them based on these grand visions that I can spin out about what will happen in the future.”
And going back to a point that Sean was making, I think that part of what’s interesting is to [ask]: How does this fit in with the broader data center rollout? How does it fit in with opposition and the idea that maybe people are not going to be able to build as many data centers as they want to?
I don’t think any of us are engineers who can really assess the viability of these plans. It does certainly have a tinge of fantasy to it, but even when they do lay out these plans, it feels like just a drop in the bucket in terms of compute capabilities compared to what they want to build out on Earth. So it feels like there’s not a scenario where this replaces a whole bunch of new data centers on Earth. It’s just sort of a […] supplement to it.
Sean: The last two things I’ll point out that are really front and center for me is, one, we’ve seen a backing off in some ways [from] data centers — not just because of opposition, but because maybe we don’t need as much, right? We see a bunch of jockeying from some of the AI labs about, “Well, maybe we don’t need to lease this much from this company,” or whatever. And if that becomes a thing that is more true than it was five months ago, do you all of a sudden lose all that momentum to do something as crazy as putting the data centers in space? Providing that it works, even.
The other thing is that the idea of building these massive data centers in space, with all these satellites that make up the quote unquote “data center,” is business for SpaceX. And I think this is unique to them compared to these other companies: They are a launch company primarily, even though they generate a bunch of revenue from Starlink. They are the vehicle that gets the data centers to space. They get to book that as revenue for SpaceX.
And so it becomes this thing where, of course [Musk] wants — whether or not it works, he would eventually have to prove it — but of course he wants to send more and more satellites into space because it’s more revenue for SpaceX. And that makes SpaceX look better as a public company. And then you just kind of tumble down the path until he finds something else to pitch the investors on.
Anthony Ha
Anthony Ha is TechCrunch’s weekend editor. Previously, he worked as a tech reporter at Adweek, a senior editor at VentureBeat, a local government reporter at the Hollister Free Lance, and vice president of content at a VC firm. He lives in New York City.
|
|
|
Nomadic raises $8.4 million to wrangle the data pouring off autonomous vehicles | TechCrunch |
techcrunch |
31.03.2026 15:00 |
0.667
|
| Embedding sim. | 0.7693 |
| Entity overlap | 0.0189 |
| Title sim. | 0.1727 |
| Time proximity | 0.8504 |
| NLP type | funding |
| NLP organization | NomadicML |
| NLP topic | autonomous systems |
| NLP country | United States |
Open original
To build the autonomous machines of the future, sometimes your model needs a model.
Companies developing self-driving cars, robots manipulating the physical environment, or autonomous construction equipment collect thousands, if not millions, of hours of video data for evaluation and training.
Organizing and cataloging that video is now a job for humans, who have to watch all of it. Even fast-forwarding, that doesn’t scale. NomadicML, a startup founded by CEO Mustafa Bal and CTO Varun Krishnan, wants to solve problems for customers who have 95% of their fleet data sitting in archives.
The challenge becomes harder when looking for edge cases — the most valuable data depicts events that rarely occur and can befuddle inexperienced physical AI models.
Nomadic is working to solve that problem with a platform that turns footage into a structured, searchable dataset through a collection of vision language models. That, in turn, allows for better fleet monitoring and the creation of unique datasets for reinforcement learning and faster iteration.
The company announced an $8.4 million seed round Tuesday at a post-money valuation of $50 million. The round was led by TQ Ventures, with participation from Pear VC and Jeff Dean, and will allow the company to onboard more customers and continue refining its platform. Nomadic also won first prize at Nvidia GTC’s pitch contest last month.
The two founders, who met as Harvard computer science undergrads, “kept running into the same technical challenges again and again at our jobs” at companies like Lyft and Snowflake, Bal told TechCrunch.
“We are providing folks insight on their own footage, whatever drives their own AVs [and] robots,” he said. “That is what moves these autonomous systems builders forward, not random data.”
Imagine, for example, trying to fine-tune an AV’s understanding that it can run a red light if a police officer is directing it to do so, or isolating every time that vehicles drive under a specific type of bridge. Nomadic’s platform allows these incidents to be identified both for compliance purposes, and to be fed directly into training pipelines.
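A query over such a structured dataset might look roughly like the sketch below; the clip schema and event labels are hypothetical illustrations, not Nomadic's actual API:

```python
# Illustrative sketch: once footage is turned into structured, searchable
# records, edge cases become simple set queries over event labels.

clips = [
    {"id": "a1", "events": {"red_light", "officer_directing"}},
    {"id": "a2", "events": {"red_light"}},
    {"id": "a3", "events": {"bridge_type_7"}},
]

def find(required: set) -> list:
    # Return clips whose annotated events include all required labels.
    return [c["id"] for c in clips if required <= c["events"]]

# Every clip where a vehicle ran a red light under police direction:
print(find({"red_light", "officer_directing"}))  # ['a1']
```

The queried clips can then be routed to compliance review or fed into training pipelines, as the paragraph above describes.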
Customers like Zoox, Mitsubishi Electric, Natix Network, and Zendar are already using the platform to develop intelligent machines. Antonio Puglielli, the VP of Engineering at Zendar, said that Nomadic’s tool allowed the company to scale up its work much faster than the alternative of outsourcing, and that its domain expertise set it apart from other competitors.
This kind of model-based, auto-annotation tool is emerging as a key workflow for physical AI. Established data labeling firms like Scale, Kognic, and Encord are developing AI tools to do this work, while Nvidia has released a family of open source models, Alpamayo , that can be adapted to tackle the problem.
Krishnan argues that his company’s tool is more than a labeler; it is an “agentic reasoning system: you describe what it needs and it figures out how to find it,” using multiple models to understand the action taking place and put it in context. Nomadic’s backers expect the startup’s focus on this specific infrastructure to win out.
“It’s the same reason Salesforce doesn’t build its own cloud and Netflix doesn’t build its own [content distribution facilities],” Schuster Tanger, a partner at TQ Ventures who led the round, told TechCrunch. “The second an autonomous vehicle company tries to build Nomadic internally, they’re distracted from what makes them win, which is the robot itself.”
Tanger praises Nomadic’s talent, noting that Krishnan is an international chess master ranked as the world’s 1,549th-best player. Krishnan, meanwhile, brags that all of the company’s dozen or so engineers have published scientific papers.
Now, they’re hard at work developing specific tools, like one that understands the physics of lane changes from camera footage, or another that derives more precise locations for a robot’s grippers in a video. The next challenge, from the point of view of Nomadic and its customers, is to develop similar tools for non-visual data like lidar sensor readings, or to integrate sensor data across multiple modes.
“Juggling around terabytes of video, slamming that against hundreds of 100 billion-plus parameter models, and then extracting their accurate insights, is really insanely difficult,” Bal said.
Tim Fernholz
Tim Fernholz is a journalist who writes about technology, finance and public policy. He has closely covered the rise of the private space industry and is the author of Rocket Billionaires: Elon Musk, Jeff Bezos and the New Space Race. Formerly, he was a senior reporter at Quartz, the global business news site, for more than a decade, and began his career as a political reporter in Washington, D.C.
|
|
|
OpenAI shuts down Sora, MiniMax trains itself, Nvidia bets on Mira Murati: the main AI events of March |
habr_ai |
02.04.2026 11:13 |
0.665
|
| Embedding sim. | 0.7803 |
| Entity overlap | 0.1111 |
| Title sim. | 0.1029 |
| Time proximity | 0.7748 |
| NLP type | product_launch |
| NLP organization | OpenAI |
| NLP topic | generative ai |
| NLP country | |
Open original
Usually by mid-spring the working rhythm settles into a familiar string of tasks, calls and commits. The one thing that disrupts this stability is the steady stream of technology updates, which force you to refresh your toolkit yet again. This month was no exception: OpenAI released GPT-5.4 with native computer access, Google answered with the speedy Gemini 3.1 Flash-Lite, and Anthropic gave Claude even more freedom on the desktop.
But chatbot updates were only part of the story. In this issue we look at how Google made embeddings multimodal, whether MiniMax M2.7 really took part in its own development, and how Claude surprised Donald Knuth himself. And for dessert, the traditional roundup of new utilities and fresh research. Let's see together what March has brought us!
Read more
|
|
|
Mercor says it was hit by cyberattack tied to compromise of open-source LiteLLM project | TechCrunch |
techcrunch |
01.04.2026 01:42 |
0.664
|
| Embedding sim. | 0.7652 |
| Entity overlap | 0.069 |
| Title sim. | 0.162 |
| Time proximity | 0.8418 |
| NLP type | other |
| NLP organization | Mercor |
| NLP topic | ai security |
| NLP country | United States |
Open original
Mercor , a popular AI recruiting startup, has confirmed a security incident linked to a supply chain attack involving the open source project LiteLLM.
The AI startup told TechCrunch on Tuesday that it was “one of thousands of companies” affected by a recent compromise of the LiteLLM project, which has been linked to a hacking group called TeamPCP. Confirmation of the incident comes as the extortion hacking group Lapsus$ claimed it had targeted Mercor and gained access to its data.
It’s not immediately clear how the Lapsus$ gang obtained the stolen data from Mercor as part of TeamPCP’s cyberattack.
Founded in 2023, Mercor works with companies, including OpenAI and Anthropic, to train AI models by contracting specialized domain experts such as scientists, doctors, and lawyers from markets, including India. The startup says it facilitates more than $2 million in daily payouts and was valued at $10 billion following a $350 million Series C round led by Felicis Ventures in October 2025.
Mercor spokesperson Heidi Hagberg confirmed to TechCrunch that the company had “moved promptly” to contain and remediate the security incident.
“We are conducting a thorough investigation supported by leading third-party forensics experts,” said Hagberg. “We will continue to communicate with our customers and contractors directly as appropriate and devote the resources necessary to resolving the matter as soon as possible.”
Earlier, Lapsus$ claimed responsibility for the apparent data breach on its leak site and shared a sample of data allegedly taken from Mercor, which TechCrunch reviewed. The sample included material referencing Slack data and what appeared to be ticketing data, as well as two videos purportedly showing conversations between Mercor’s AI systems and contractors on its platform.
Hagberg declined to answer follow-up questions on whether the incident was connected to claims by Lapsus$, or whether any customer or contractor data had been accessed, exfiltrated, or misused.
The compromise of LiteLLM originally surfaced last week after malicious code was discovered in a package associated with the Y Combinator-backed startup’s open source project. While the malicious code was identified and removed within hours, the incident drew scrutiny because of LiteLLM’s widespread use: the library is downloaded millions of times per day, according to security firm Snyk. The incident also prompted LiteLLM to make changes to its compliance processes, including shifting from controversial startup Delve to Vanta for compliance certifications.
It remains unclear how many companies were affected by the LiteLLM-related incident or whether any data exposure occurred, as investigations continue.
Topics
AI , Lapsus$ , LiteLLM , Mercor , Security , Startups , United States
Jagmeet Singh
Reporter
Jagmeet covers startups, tech policy-related updates, and all other major tech-centric developments from India for TechCrunch. He previously worked as a principal correspondent at NDTV.
You can contact or verify outreach from Jagmeet by emailing mail@journalistjagmeet.com .
View Bio
|
|
|
National Robotics Week — Latest Physical AI Research, Breakthroughs and Resources |
nvidia_blog |
04.04.2026 17:00 |
0.661
|
| Embedding sim. | 0.8216 |
| Entity overlap | 0.0714 |
| Title sim. | 0.0707 |
| Time proximity | 0.4047 |
| NLP type | other |
| NLP organization | NVIDIA |
| NLP topic | robotics |
| NLP country | |
Open original
This National Robotics Week , NVIDIA is highlighting the breakthroughs that are bringing AI into the physical world — as well as the growing wave of robots transforming industries, from agriculture and manufacturing to energy and beyond.
Advancements in robot learning, simulation and foundation models are accelerating development, enabling robots to move from training in virtual environments to real-world deployment faster than ever.
With NVIDIA platforms for simulation , synthetic data and AI-powered robot learning , developers now have the tools to build machines that can perceive, reason and act in complex environments.
Check back here all week for coverage on the latest NVIDIA physical AI technologies.
Building the Next Generation of AI Robots
At NVIDIA GTC last month, a new wave of technologies was introduced to accelerate the development of AI-powered robots.
At the core is a full-stack, cloud-to-robot workflow that connects simulation, robot learning and edge computing — making it faster to build, train and deploy intelligent machines.
https://blogs.nvidia.com/wp-content/uploads/2026/04/GTC26-Robots_16x9_v3-2-1.mp4
Key announcements include:
New NVIDIA Isaac GR00T open models enable robots to understand natural language instructions and perform complex, multistep tasks using vision language action reasoning.
New NVIDIA Cosmos world models for generating synthetic data and training robots at scale help systems learn more efficiently and generalize across environments.
The general availability of open source physics engine Newton 1.0 provides a fast and reliable foundation for dexterous robot manipulation with accurate collision detection, realistic object contact and stable simulation of complex systems with both rigid and flexible parts.
Expanded simulation capabilities with the general availability of NVIDIA Isaac Sim 6.0 , Isaac Lab 3.0 and Omniverse NuRec technologies allow developers to model real-world scenarios and validate robotic systems before deployment.
Watch on-demand sessions from the NVIDIA GTC global AI conference to catch up on recent breakthroughs in robotics, showcased by leading experts in the field.
Driving Breakthroughs in Surgical Precision
PeritasAI is advancing a new generation of surgical robotics by integrating physical AI into real-world operating environments. Using NVIDIA Isaac for Healthcare and the Rheo blueprint for hospital automation, the company is developing multi-agent intelligence that can sense, coordinate and act in real time.
In collaboration with Lightwheel and Advent Health Hospitals, this work brings embodied intelligence into the operating room — supporting surgical teams with situational awareness, sterile coordination and intelligent management of instruments, implants and workflows.
From Words to Motion: NVIDIA NemoClaw Brings Natural Language Commands to Isaac Sim
NVIDIA Omniverse developer Umang Chudasama has integrated NVIDIA NemoClaw with NVIDIA Isaac Sim to navigate a Nova Carter autonomous robot using plain natural language commands — no manual coding required. NemoClaw translates text instructions (like “move two meters forward”) into executable Python scripts, which are then sent to Isaac Sim via a custom REST application programming interface in real time.
https://blogs.nvidia.com/wp-content/uploads/2026/04/nemoclaw-isaacsim.mp4
The entire system runs within Isaac Sim, giving the robot a realistic, physics-accurate warehouse environment to operate in before ever touching the real world. Pairing Isaac Sim with NemoClaw means faster development, safer testing and a smarter path to deployment. Rather than programming robots line by line, developers can now simply talk to them, marking a meaningful shift toward truly collaborative, language-driven robotics.
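The pipeline described above (a free-text command, generated code, and a REST call into the simulator) can be sketched with a toy command parser. Everything below is a hypothetical illustration: the command grammar, the `parse_command` helper, and the `/api/v1/command` endpoint are invented for the example and are not NemoClaw's actual interface, which generates full Python scripts rather than structured payloads.

```python
import re

# Hypothetical tables for a toy natural-language motion command parser.
# This sketch maps one simple sentence pattern to a structured action
# that a simulator's REST endpoint could accept.
UNIT_TO_METERS = {"meter": 1.0, "meters": 1.0, "cm": 0.01, "centimeters": 0.01}
WORDS_TO_NUM = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5}

def parse_command(text: str) -> dict:
    """Turn e.g. 'move two meters forward' into a structured action dict."""
    m = re.match(r"move (\w+) (\w+) (forward|backward|left|right)$", text.strip().lower())
    if not m:
        raise ValueError(f"unrecognized command: {text!r}")
    qty_raw, unit, direction = m.groups()
    qty = WORDS_TO_NUM.get(qty_raw)
    if qty is None:
        qty = float(qty_raw)  # also allow digits: "move 2 meters forward"
    scale = UNIT_TO_METERS.get(unit)
    if scale is None:
        raise ValueError(f"unknown unit: {unit!r}")
    return {"action": "move", "distance_m": qty * scale, "direction": direction}

def to_payload(text: str) -> dict:
    """Wrap the parsed action in the body a hypothetical /api/v1/command route might take."""
    return {"endpoint": "/api/v1/command", "body": parse_command(text)}
```

The appeal of the real system is exactly what this toy version lacks: an LLM can handle open-ended phrasing instead of a fixed regex grammar, while the simulator still receives something executable.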
OceanSim: A GPU-Accelerated Underwater Robot Perception Simulation Framework
Underwater simulators are crucial for developing reliable perception systems, but they still struggle with accurate physics‑based sensor modeling and fast rendering.
Helping close this gap is OceanSim , a GPU‑accelerated, high‑fidelity simulator developed by researchers at the University of Michigan. It uses advanced physics‑based rendering techniques to make synthetic underwater images look more realistic. Using GPUs, the simulator can render imaging sonar in real time and generate synthetic data quickly.
OceanSim uses NVIDIA Isaac Sim and plugs into NVIDIA Omniverse libraries, creating a seamless link between robot‑learning research and underwater robotics. This integration lets developers easily develop and deploy embodied AI techniques for underwater applications.
RoboLab: Benchmarking the Next Generation of Generalist Robots
RoboLab is a high-fidelity simulation benchmark for developing and evaluating generalist robot policies — powering systems designed to perform diverse tasks across environments.
https://blogs.nvidia.com/wp-content/uploads/2026/04/Put_the_onion_in_the_wood_bowl_0_viewport_3X.mp4
Built on NVIDIA Isaac and NVIDIA Omniverse simulation technologies, RoboLab taps into photorealistic environments and physics-based modeling to train and test robotic policies at scale. This enables researchers to measure how well behaviors learned in simulation transfer to the real world as tasks grow in complexity.
By combining advanced simulation with structured evaluation, RoboLab accelerates the path from virtual training to real-world deployment.
RoboLab features will be incorporated into the roadmap of NVIDIA Isaac Lab-Arena , an open source framework for large-scale policy setup and evaluation.
Smarter Palletizing With AI-Driven Reasoning
In warehouse environments, palletizing robots typically follow fixed rules — handling boxes the same way regardless of contents, condition or fragility. A project developed by Doosan Robotics introduces a more adaptive approach using NVIDIA Cosmos Reason .
By analyzing a single camera image, the system can infer box contents, detect damage and adjust how each item is handled — such as placement, speed and grip — based on estimated weight and fragility. This reduces common issues like incorrectly stacking damaged or fragile goods.
To build robots that understand the physical world before they ever deploy in it, robotics researchers and developers are building policy models powered by NVIDIA Cosmos world foundation models (WFMs). Toyota Research Institute customizes Cosmos WFMs for its own world model, achieving state-of-the-art results across dynamic view synthesis, teleoperation data augmentation and navigation world models.
https://blogs.nvidia.com/wp-content/uploads/2026/04/nvidia_cosmos_accelerates_ai_training_for_robotics.mp4
Mimic robotics takes a different angle with mimic-video, a video-action model that pairs a pretrained internet-scale video model with a flow-matching action decoder, replacing the static image-language backbones of traditional VLAs with video-learned physical dynamics — achieving 10x better sample efficiency and 2x faster convergence on real-world manipulation tasks.
Together, both teams demonstrate a fundamental shift: robots trained on world models that capture physics and causality need dramatically less real-world data to perform reliably in conditions they’ve never seen.
Open, Intelligent Robotics on NVIDIA Jetson: Community Innovations Powering the Next Wave of Physical AI
This National Robotics Week, OpenClaw running on the NVIDIA Jetson platform showcases how quickly open source innovation is evolving into real-world, intelligent robotics.
From practical applications to innovative projects, the robotics community is building what’s next — and fast.
Developers are pushing the boundaries of autonomy — including hardware-in-the-loop testing powered by Jetson Thor , evaluating camera streams from NVIDIA Isaac Sim and even building systems that can generate their own code to complete tasks.
https://blogs.nvidia.com/wp-content/uploads/2026/04/oss-on-jetson-models-nrw.mp4
In addition, OpenClaw now running entirely locally on NVIDIA Jetson Thor — powered by optimized NVIDIA Nemotron open models and the vLLM open inference library — marks a major leap toward private, low-latency edge AI for robotics. And innovations like the NVIDIA NemoClaw stack on Jetson are expanding what’s possible at the intersection of open source and high-performance robotics platforms.
Training and Refining Movement in Simulation
Gennady Plyushchev, a robotics creator known as Skyentific, is documenting the process of building a walking bipedal robot, from simulation and design to real-world deployment — showcasing a simulation-first approach to robot development.
By using NVIDIA Isaac based simulation workflows alongside NVIDIA Jetson for on-device AI and control, the project demonstrates how developers can rapidly iterate in virtual environments before deploying to physical systems.
The result highlights a broader shift in robotics: using AI, simulation and edge computing to accelerate development and bring increasingly capable humanoid robots to life.
University of Maryland Researchers Develop Robots for Complex Household Tasks
To bring robots into everyday life, researchers at the University of Maryland , recipients of a grant from the NVIDIA Academic Grant Program , are developing AI-powered humanoid systems capable of performing complex household tasks with greater autonomy.
The project centers on building robot foundation models that unify perception, planning and control. Using the NVIDIA Isaac open robotics development platform, researchers can create photorealistic, high-fidelity virtual home environments populated with diverse objects and layouts, allowing robots to practice millions of task variations and safely test rare or complex scenarios.
NVIDIA RTX PRO 6000 Blackwell GPUs for training large models and NVIDIA Jetson AGX Thor developer kits for efficient deployment on physical robots help bridge the gap between research and real-world applications.
By combining advancements in generative AI, sequential decision-making and scalable computing, the work represents a key step toward general-purpose robots that can support people in homes, healthcare settings and beyond.
Announcing the MassRobotics Fellowship
The second cohort of the Amazon Web Services (AWS) MassRobotics fellowship comprises startups recognized for compelling industrial use cases harnessing robotics and computer vision. They will receive access to technical resources and AWS cloud credits.
The cohort includes NVIDIA Inception members Burro, Config Intelligence, Deltia, Haply Robotics, Luminous Robotics, Roboto AI, Telexistence, Terra Robotics and WiRobotics, each developing technologies spanning humanoid robotics, industrial automation, haptics and agricultural systems.
Burro creates autonomous agricultural robots for tasks like grape harvesting and crop scouting.
Config Intelligence builds data infrastructure for general-purpose bimanual robotics to enable reliable two-handed tasks in real-world settings.
Deltia provides AI-driven manufacturing intelligence that optimizes assembly lines using computer vision and analytics.
Haply Robotics designs haptic control devices that serve as “steering wheels” for physical AI systems across industries.
Luminous Robotics deploys AI-powered robotic systems for fast, low-cost solar-panel installation and maintenance.
Roboto AI offers a data-analytics platform that accelerates robot development by managing and analyzing robotics data.
Telexistence develops AI-powered humanoid robots and remote-controlled systems for retail and logistics.
Terra Robotics develops laser-weeding agricultural robots to automate sustainable farming.
WiRobotics creates wearable walking-assist and humanoid robots to enhance mobility and physical interaction, using training data from assisted products to train its humanoids.
Accelerating How Utility-Scale Solar Projects Are Built in the Field
Maximo , a solar robotics business incubated within The AES Corporation, recently completed a 100-megawatt solar installation using its robot fleet. Developed with NVIDIA accelerated computing, NVIDIA Omniverse libraries and the NVIDIA Isaac Sim framework , Maximo demonstrated that autonomous installations can operate reliably for utility-scale projects.
https://blogs.nvidia.com/wp-content/uploads/2026/04/maximo-nrw.mp4
The solution improves installation speed, safety and consistency, helping close the gap between rising demand for faster time to power and construction capacity.
As solar expansion faces ongoing labor constraints and rising demand, AI-driven field robotics systems like Maximo are helping accelerate infrastructure buildout, reduce costs and redefine how energy projects are delivered.
Aigen Advances Sustainable Farming With Agricultural Robotics
To help regenerate the Earth, Aigen’s solar-powered autonomous robots are breaking farmers’ dependency on chemicals through precision weed control powered by vision AI.
The NVIDIA Inception startup is building a new kind of farming system that’s powered by clean energy and continuously enriched by data. Aigen’s fleet of solar-driven rovers uses advanced computer vision to identify and remove weeds, dramatically reducing the need for herbicides.
Farming has no standard environment. Every field is different — different crops, different soil, different equipment, weeds, growth stages and geographies. That fragmentation makes real-world data collection slow, expensive and inconsistent. By post-training NVIDIA Cosmos open world foundation models on its specialized data and harnessing NVIDIA Isaac Sim pipelines, Aigen is building a system that generalizes across millions of agricultural scenarios.
On the ground, each rover runs inference using an NVIDIA Jetson Orin edge AI module to distinguish crops from weeds in real time.
https://blogs.nvidia.com/wp-content/uploads/2026/04/Aigen_Element_Weeding_1.mp4
Using these rovers, farmers can grow crops more sustainably and profitably, using regenerative practices that heal the land and foster ecological balance.
|
|
|
PR Newswire Sets the Record Straight on AI Visibility: "Be the Source" |
prnewswire |
10.04.2026 15:49 |
0.658
|
| Embedding sim. | 0.7586 |
| Entity overlap | 0.0625 |
| Title sim. | 0.15 |
| Time proximity | 0.8463 |
| NLP type | other |
| NLP organization | PR Newswire |
| NLP topic | generative ai |
| NLP country | United States |
Open original
PR Newswire Sets the Record Straight on AI Visibility: "Be the Source"
News provided by
PR Newswire
Apr 10, 2026, 11:49 ET
New insights from GEO webinar highlight how brands can win in the era of AI summaries through authority, consistency and Multichannel Amplification™
NEW YORK, April 10, 2026 /PRNewswire/ -- PR Newswire today released key insights from its recent webinar, "GEO: Owning the AI Summary," reinforcing a critical shift in communications strategy: in the age of generative AI, visibility is no longer about clicks – it's about being cited as a trusted source.
As AI-powered search and summaries reshape how audiences discover information, PR Newswire emphasized that brands must move beyond optimization frameworks and instead focus on building authoritative, consistent and multichannel narratives that AI systems trust and reference.
"You can optimize content, or you can be the source AI trusts," said Jeff Hicks, Chief Product & Technology Officer at PR Newswire. "The brands that win in this next era won't just structure content well. They'll build durable authority across every channel with a consistent brand voice."
From Clicks to Citations: A Shift in What Matters
Insights shared during the webinar highlighted a fundamental evolution:
AI search is merging with traditional search – not replacing it.
Visibility is increasingly driven by citations, not rankings .
AI systems prioritize structured, authoritative and consistent content.
Brand narratives now have a long shelf life, with AI referencing content years after publication.
"What good content looked like 10 years ago still applies today," said Glenn Frates, RVP Distribution at PR Newswire. "Now your audience includes machines – and they expect clarity, authority and consistency, just like your human audiences."
Key Takeaways from the Webinar
Authority is cumulative: Earned media, owned content and press releases work together to build AI trust.
Consistency beats volume: A steady narrative outperforms one-off announcements.
Structure matters: Headlines, bullet points and section headers help both humans and AI parse content.
Multichannel Amplification™ is essential: Press releases, blogs, social media and earned coverage reinforce each other.
AI has a "long memory": Older content continues shaping brand perception.
"AI isn't just citing what's visible. It's informed by everything beneath the surface," said Scott Newton, Director of Solutions Consulting at Cision and Brandwatch. "That underlying narrative you build over time is what ultimately shapes how your brand shows up in AI answers."
FAQ: Real Questions from the live GEO webinar
Q1: What counts as an "authoritative source" in AI search – an SME quote or a C-suite voice? A: Authority comes from relevance and expertise, not just title. A subject matter expert often provides more valuable, context-rich insight than a generic executive quote. AI systems prioritize depth, clarity and expertise over hierarchy.
Q2: Does keeping content behind a paywall hurt AI visibility? A: It can limit discoverability. While premium content strategies remain valuable, brands should ensure some authoritative, indexable content exists publicly to inform AI systems and support citation potential.
Q3: If older content still gets cited, does deleting archives hurt GEO performance? A: Yes. AI systems frequently reference older content, especially in answers to nuanced or topic-specific queries. Removing historical content can weaken your long-term narrative authority and visibility.
Q4: Should brands publish everything at once or spread content over time? A: Both strategies have value, but consistency is key. A steady cadence across channels reinforces narrative strength more effectively than isolated bursts. Think of it as a "drumbeat," not a spike.
Q5: How do you measure how your brand shows up in AI platforms? A: Measurement requires actively testing prompts across platforms like ChatGPT, Gemini and others. Tools like PR Newswire's AEO & GEO Brand Report help brands track citation frequency, sentiment and share of voice across AI-generated responses.
Q6: Do integrations like photos, videos and IMC campaigns impact AI visibility? A: Yes. Multimedia content enhances engagement and can be cited (e.g., YouTube), while integrated campaigns reinforce consistent messaging – strengthening both human and AI discoverability.
A New Standard for AI-Era Communications
While new frameworks and acronyms continue to emerge in the market, PR Newswire emphasized that success in AI search is not about reinventing communications – but executing fundamentals at scale and with precision.
Build trustworthy, authoritative content.
Maintain consistent storytelling over time.
Leverage Multichannel Amplification™.
Focus on becoming a primary source of truth.
"AI doesn't browse – it cites," added Frates. "If your brand isn't part of the source layer, it won't be part of the answer."
Additional resources
PR Newswire Launches AEO & GEO Report for AI Brand Visibility
Why FAQs are Built for AI
On-demand webinar - GEO: Owning the AI Summary
About PR Newswire
PR Newswire is the industry's leading press release distribution partner with an unparalleled global reach of more than 500,000 newsrooms, websites, direct feeds, journalists and influencers and is available in more than 170 countries and 40 languages. From our innovative AI-powered PR Newswire Amplify™ platform, award-winning Content Services offerings, integrated media newsroom and microsite products, Investor Relations suite of services, paid placement and social sharing tools, PR Newswire has a comprehensive Multichannel Amplification™ catalogue of solutions to solve the modern-day challenges PR and communications teams face. For more than 70 years, PR Newswire has been the preferred destination worldwide for brands to share their most important news stories.
About Cision
Cision is the global leader in consumer and media intelligence, engagement, and communication solutions. We equip PR and corporate communications, marketing, and social media professionals with the tools they need to excel in today's data-driven world. Our deep expertise, exclusive data partnerships, and award-winning products, including CisionOne, Brandwatch, PR Newswire, and Trajaan, enable over 75,000 companies and organizations, including 84% of the Fortune 500, to see and be seen, understand and be understood by the audiences that matter most to them.
For questions, contact the team at [email protected] .
SOURCE PR Newswire
|
|
|
After data breach, $10B-valued startup Mercor is having a month | TechCrunch |
techcrunch |
09.04.2026 19:33 |
0.657
|
| Embedding sim. | 0.8059 |
| Entity overlap | 0.2222 |
| Title sim. | 0.2456 |
| Time proximity | 0.1542 |
| NLP type | other |
| NLP organization | Mercor |
| NLP topic | ai security |
| NLP country | |
Open original
Six months ago, Mercor was flying high after raising a massive $350 million Series C that valued the AI data training startup at $10 billion. But after admitting on March 31 that it was the target of a data breach , the company has been facing a world of trouble.
Since then, a hacker group has claimed to have obtained 4TB of stolen data from Mercor’s systems, including candidate profiles, personally identifiable information, employer data, source code, and API keys. Mercor has not commented on the authenticity of the data, reiterating only that it is investigating and “will continue to communicate with our customers and contractors directly as appropriate and devote the resources necessary to resolving the matter as soon as possible.”
Mercor said its data breach was the result of a hack of the open source tool LiteLLM , which is so popular that it’s downloaded millions of times a day. For 40 minutes, the tool harbored credential-harvesting malware — rogue software that steals login credentials. Those credentials were used to gain access to more software and accounts, which attackers then used to harvest still more credentials, and so on.
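The cascade begins with a tampered release of a trusted package. One standard, generic mitigation (illustrative only; the article does not say what tooling the affected companies use) is pinning dependency artifacts to known digests recorded at a trusted point in time, the behavior behind pip's `--require-hashes` mode. A minimal standard-library sketch of the check:

```python
import hashlib

def artifact_digest(data: bytes) -> str:
    """sha256 digest of a package artifact, in the 'sha256:<hex>' form used for hash pins."""
    return "sha256:" + hashlib.sha256(data).hexdigest()

def verify_artifact(name: str, data: bytes, pinned: dict) -> bool:
    """Accept an artifact only if its bytes match the digest recorded at pin time.

    A release tampered with after pinning, as in the supply chain compromise
    described above, yields a different digest and is rejected.
    """
    expected = pinned.get(name)
    return expected is not None and expected == artifact_digest(data)
```

In practice the pinned digests live in a lockfile generated before the compromise, so even a 40-minute window of malicious upstream code fails installation rather than propagating credentials downstream.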
While there have been no formal acknowledgments of how much data was scooped up from Mercor, there have been repercussions all the same. Meta has paused its contracts with Mercor indefinitely, sources told Wired . (Mercor declined to comment to TechCrunch about this.)
Like other contract AI data training companies, Mercor handles some of the model makers’ biggest trade secrets: the custom data sets and processes they use to teach their models. This is so important to them that even after Meta spent $14.3 billion on Mercor’s competitor Scale AI , it continued working with Mercor.
In a spot of good news for Mercor (maybe…we’ll see): OpenAI also confirmed to Wired that it was investigating its exposure in Mercor’s breach, but said it had not paused or ended its contracts at the time. However, TechCrunch has heard from multiple sources that other large model makers may also be weighing their relationships with Mercor after the breach, although we have not confirmed enough details to name names as of yet.
In the meantime, five of Mercor’s contractors have filed lawsuits, Business Insider reports , over their alleged personal data exposure. Whether these suits represent a serious threat or are just opportunistic and a nuisance remains to be seen. (Mercor declined to comment.)
One lawsuit, reviewed by TechCrunch, even named LiteLLM and Delve as defendants. This is wild, and perhaps a stretch, but here’s the connection: LiteLLM used AI compliance startup Delve to obtain its security certifications. Delve has been accused by an anonymous whistleblower of allegedly faking data for security certifications and using rubber-stamping auditors.
A security certification does not directly prevent hackers from launching successful attacks, but it is intended to ensure that companies have processes in place to minimize such threats.
Although Delve has denied those allegations while simultaneously instituting operational changes, it has been in a world of hurt of its own, to the point where Y Combinator severed ties with the company.
LiteLLM ditched Delve and is now working with another AI compliance startup to obtain its security certifications again. LiteLLM also published a complete report on the security incident.
But Mercor itself was not a Delve customer, the company confirmed to TechCrunch. If, however, the fallout for Mercor continues, a lot of revenue could be at stake. The company was reportedly on pace to hit over $1 billion in annualized revenue earlier this year before the data leak, an anonymous source told The Information.
Julie Bort
Venture Editor
© 2026 TechCrunch Media LLC.
|
|
|
Oracle: AI agents decide and act. Liability question remains |
the_register_ai |
25.03.2026 17:47 |
0.657
|
| Embedding sim. | 0.7577 |
| Entity overlap | 0.0556 |
| Title sim. | 0.0538 |
| Time proximity | 0.993 |
| NLP type | product_launch |
| NLP organization | Oracle |
| NLP topic | ai agents |
| NLP country | United Kingdom |
Open original
Oracle: AI agents can reason, decide and act - liability question remains
Fusion Agentic Applications promise autonomous enterprise decisions. Gartner urges caution
Lindsay Clark
Wed 25 Mar 2026 //
17:47 UTC
Oracle says it's building a suite of AI agents into its cloud-based enterprise applications, claiming they can make and execute decisions autonomously within business processes. But analysts are urging caution given unresolved questions around data integration and liability.
Unveiled in London this week, Fusion Agentic Applications will be integrated with the Oracle Fusion Cloud Applications suite, covering financials, ERP, HR, payroll and supply chain management. Oracle argues it has a structural advantage here: the data needed to train and run these agents already lives inside its enterprise applications.
"Applications that can reason, decide, and act in pursuit of defined business objectives," is how Big Red's application development executive veep Steve Miranda framed the shift, a move away from process-focused software toward outcome-driven automation.
Oracle, for example, promises a Design-to-Source Workspace Agentic Application, which it says can work across engineering, supplier, and sourcing decisions to create one "coordinated and continuous process."
However, Balaji Abbabatulla, Gartner vice president and vendor lead for Oracle, was more measured, pointing to unanswered questions about how the technology will be implemented in an enterprise setting.
"Our position is that this sounds good, but be cautious. It doesn't necessarily look as glittery as it sounds. There are challenges under the hood which are not being overcome right now, but maybe over time," he said.
In January, Gartner said boards of global businesses are putting tech teams under pressure to implement AI agents . Application, database, service layer, and cloud vendors are all scrabbling over the expected bonanza, trying to build influence over enterprise AI strategy.
Oracle’s pitch is to house AI agents within its enterprise application suite, and sell AI Agent Studio for Fusion Applications to help organizations build, connect, and run AI automation and agentic applications. Oracle has also launched an AI Data Platform to integrate data from different sources to build AI agents.
Gartner's Abbabatulla said that, via the AI Data Platform, Oracle wants to connect non-Oracle repositories and legacy applications - such as SharePoint repositories - and extract information from them. Although Big Red provides tools for data or technology experts to do that, the process is not automated.
“There's no kind of autonomous way of synchronizing these different data repositories in the background,” he said.
Building agents to run application-based processes will require a lot of work – and most likely spending money with Oracle to get the right engineering expertise, he added.
That's a hurdle for some large enterprises already invested in data platforms from Databricks, Snowflake, Cloudera or other vendors, with some initiatives harking back to the "big data" investment era. Abbabatulla sees Oracle's pitch as partly defensive, using data-in-context as an incentive to keep customers within its ecosystem.
“The transition overhead is massive, because these are investments people have made for years now,” Abbabatulla said. “This is unlikely to actually attract them to let go of this investment, but I'm sure there'll be organizations willing to try this in addition to some of those other investments they have made.”
Oracle and other vendors must still answer the question of who takes responsibility for AI decision-making should it go wrong, a problem The Register has been raising for a couple of years .
If an AI agent makes a bad decision at scale and speed, cascading errors could spread before anyone notices. Oracle's answer so far is monitoring and audit tooling, but Abbabatulla is unconvinced: "I don't see a clear response from any vendor on the liability issue."
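Vendors so far describe monitoring and audit tooling only in general terms. One generic pattern for containing agent blast radius, sketched here purely as an illustration (the threshold, action names, and class design are hypothetical, not Oracle's product), is to auto-approve low-impact actions while escalating high-impact ones for human review, keeping an audit trail either way:

```python
# Illustrative guardrail for autonomous agent actions: anything under a
# cost threshold executes automatically; anything above it is queued for
# human approval. Thresholds and action names are hypothetical.
from dataclasses import dataclass, field


@dataclass
class AgentAction:
    name: str
    amount_usd: float


@dataclass
class Guardrail:
    auto_approve_limit_usd: float = 1_000.0
    pending_review: list = field(default_factory=list)
    audit_log: list = field(default_factory=list)

    def submit(self, action: AgentAction) -> str:
        if action.amount_usd <= self.auto_approve_limit_usd:
            self.audit_log.append((action.name, "auto-approved"))
            return "executed"
        # High-impact action: record it and wait for a human decision.
        self.pending_review.append(action)
        self.audit_log.append((action.name, "escalated"))
        return "needs-human-approval"


g = Guardrail()
print(g.submit(AgentAction("reorder-parts", 400.0)))        # executed
print(g.submit(AgentAction("switch-supplier", 250_000.0)))  # needs-human-approval
```

The point of the sketch is the audit log: even auto-approved actions leave a record, which is the minimum any liability conversation would require.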
Mickey North Rizza, IDC group vice president for enterprise software, was more bullish, calling the launch a "significant shift" toward agentic systems that continuously complete work within the enterprise software system.
“Overall, this is a great move for Oracle positioning it as a market shaper towards the Agents as Apps. It won’t be the app with the best UI that does well, but rather the agent that reliably completes outcomes that are at scale, with trust and bring sustained economic leverage," she said.
With boards pressuring tech teams to deploy agents, Oracle, like every major platform vendor, is fighting for a piece of that pie. ®
|
|
|
ScaleOps raises $130M to improve computing efficiency amid AI demand | TechCrunch |
techcrunch |
30.03.2026 13:52 |
0.657
|
| Embedding sim. | 0.7355 |
| Entity overlap | 0.0351 |
| Title sim. | 0.2047 |
| Time proximity | 0.9829 |
| NLP type | funding |
| NLP organization | scaleops |
| NLP topic | ai infrastructure |
| NLP country | united states |
Open original
AI may be booming, but behind the scenes, companies are wasting vast amounts of expensive compute. GPUs sit idle, workloads are over-provisioned, and cloud costs continue to climb. ScaleOps believes the problem isn’t a shortage — it’s mismanagement.
The startup, which builds software that automatically manages and reallocates computing resources in real time, has raised $130 million at an $800 million valuation, ScaleOps said Monday. The Series C funding round was led by Insight Partners, with participation from existing investors, including Lightspeed Venture Partners, NFX, Glilot Capital Partners, and Picture Capital. The company says its software reduces cloud and AI infrastructure costs by as much as 80%.
ScaleOps was co-founded in 2022 by Yodar Shafrir, a former engineer at Run:ai, a GPU orchestration startup acquired by Nvidia , after seeing firsthand how difficult it was for companies to manage increasingly complex AI workloads. While tools like Kubernetes help run applications across large clusters of machines, they often rely on static configurations that struggle to keep up with fast-changing demand, leading to underused GPUs, performance issues, and costly inefficiencies.
“As part of my role [at Run:ai], I met many customers, especially DevOps teams,” Shafrir, who is the company’s CEO, told TechCrunch. “While they really liked what Run:ai provided, they still struggled to manage their production workloads, especially as inference workloads became more common in the AI era. When I zoomed out, I realized the problem wasn’t just GPUs. It extended to compute, memory, storage, and networking. The same patterns kept repeating; teams were failing to manage resources efficiently.”
DevOps teams often found themselves chasing down multiple stakeholders to resolve issues, and too often, those efforts fell short. Most existing tools offered visibility into problems, but stopped short of delivering actual solutions. That gap revealed a significant market opportunity.
ScaleOps connects application needs with infrastructure decisions in real time and provides a fully autonomous solution that manages infrastructure end-to-end, Shafrir said.
“Kubernetes is a great system. It’s flexible and highly configurable. But that’s also the problem,” Shafrir said. “Kubernetes relies heavily on static configurations. Applications today are highly dynamic, which requires constant manual work across teams. You need something that understands the context of each application — what it needs, how it behaves, and how the environment is changing.”
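ScaleOps has not published its algorithms, but the gap Shafrir describes between static configurations and dynamic usage can be illustrated with a toy right-sizing rule: derive a container's CPU request from a high percentile of observed usage plus headroom, rather than a fixed guess. The percentile, headroom, and numbers below are hypothetical, not ScaleOps' actual method:

```python
# Toy right-sizing: recommend a CPU request (in millicores) from
# observed usage samples instead of a static configuration.
# Percentile and headroom values are illustrative only.
import math


def recommend_cpu_request(samples_mcores, percentile=0.95, headroom=1.2):
    if not samples_mcores:
        raise ValueError("need at least one usage sample")
    ordered = sorted(samples_mcores)
    # Index of the requested percentile in the sorted samples.
    idx = min(len(ordered) - 1, math.ceil(percentile * len(ordered)) - 1)
    # Round up: for capacity planning we prefer to over- rather than under-size.
    return math.ceil(ordered[idx] * headroom)


usage = [120, 150, 180, 200, 210, 950]  # one spike among steady usage
static_request = 2000                   # a typical over-provisioned guess
recommendation = recommend_cpu_request(usage)
print(f"recommend {recommendation} mcores vs static {static_request}")
```

Even this naive rule cuts the request roughly in half versus the static guess; the hard part, and presumably the product, is doing this continuously and safely in production.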
Image Credits: ScaleOps
There are several players in this space, including Cast AI , Kubecost and Spot . While many companies have introduced automation tools, they often operate without full context, which can lead to performance issues and even downtime, limiting trust among teams running production environments, according to the CEO.
The startup says its platform was built specifically for production from the ground up. It is fully autonomous, context-aware, and works out of the box without requiring manual configuration — capabilities the company believes differentiate ScaleOps from competitors.
The New York-headquartered company serves enterprise customers globally, particularly those operating Kubernetes-based infrastructure, with a footprint that spans large organizations as well as companies across Europe and India. ScaleOps says its platform is used by a range of enterprise clients, including Adobe, Wiz, DocuSign, Salesforce, and Coupa.
The Series C funding comes roughly a year and a half after ScaleOps raised $58 million in its Series B round in November 2024. Since then, the team has seen strong demand for autonomous solutions to manage cloud infrastructure, Shafrir said, adding that it is still in the early stages of its growth. The company’s total funding is about $210 million, according to a spokesperson.
ScaleOps said it has seen more than 450% year-over-year growth and that it has tripled its headcount over the past 12 months, with plans to more than triple it again by year-end.
With the new capital, ScaleOps plans to roll out new products and expand its platform. As AI drives demand for compute, managing that infrastructure is becoming increasingly critical. The startup said it will continue building toward fully autonomous infrastructure.
Kate Park
Reporter, Asia
|
|
|
Meta Pauses Work With Mercor After Data Breach Puts AI Industry Secrets at Risk |
wired |
03.04.2026 21:28 |
0.656
|
| Embedding sim. | 0.7859 |
| Entity overlap | 0.1667 |
| Title sim. | 0.0861 |
| Time proximity | 0.5967 |
| NLP type | other |
| NLP organization | Meta |
| NLP topic | ai security |
| NLP country | |
Open original
By Maxwell Zeff, Zoë Schiffer, and Lily Hay Newman
Business
Apr 3, 2026 5:28 PM
Meta Pauses Work With Mercor After Data Breach Puts AI Industry Secrets at Risk
Major AI labs are investigating a security incident that impacted Mercor, a leading data vendor. The incident could have exposed key data about how they train AI models.
Photo-Illustration: WIRED Staff; Getty Images
Meta has paused all its work with the data contracting firm Mercor while it investigates a major security breach that impacted the startup, two sources confirmed to WIRED. The pause is indefinite, the sources said. Other major AI labs are also reevaluating their work with Mercor as they assess the scope of the incident, according to people familiar with the matter.
Mercor is one of a few firms that OpenAI , Anthropic , and other AI labs rely on to generate training data for their models. The company hires massive networks of human contractors to generate bespoke, proprietary datasets for these labs, which are typically kept highly secret as they’re a core ingredient in the recipe to generate valuable AI models that power products like ChatGPT and Claude Code . AI labs are sensitive about this data because it can reveal to competitors—including other AI labs in the US and China—key details about the ways they train AI models. It’s unclear at this time whether the data exposed in Mercor’s breach would meaningfully help a competitor.
While OpenAI has not stopped its current projects with Mercor, it is investigating the startup’s security incident to see how its proprietary training data may have been exposed, a spokesperson for the company confirmed to WIRED. The spokesperson says that the incident in no way affects OpenAI user data, however. Anthropic did not immediately respond to WIRED’s request for comment.
Mercor confirmed the attack in an email to staff on March 31. “There was a recent security incident that affected our systems along with thousands of other organizations worldwide,” the company wrote.
A Mercor employee echoed these points in a message to contractors on Thursday, WIRED has learned. Contractors who were staffed on Meta projects cannot log hours until—and if—the project resumes, meaning they could functionally be out of work, a source familiar with the matter claims. The company is working to find additional projects for those impacted, according to internal conversations viewed by WIRED.
Mercor contractors were not told exactly why their Meta projects were being paused. In a Slack channel related to the Chordus initiative—a Meta-specific project to teach AI models to use multiple internet sources to verify their responses to user queries—a project lead told staff that Mercor was “currently reassessing the project scope.”
An attacker known as TeamPCP appears to have recently compromised two versions of the AI API tool LiteLLM. The breach exposed companies and services that incorporate LiteLLM and installed the tainted updates. There could be thousands of victims, including other major AI companies, but the breach at Mercor illustrates the sensitivity of the compromised data.
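The tainted-update vector described here is a classic supply-chain attack. A standard, generic defense (not something the article attributes to LiteLLM or its consumers) is to pin dependencies to known-good artifact digests and verify them before use; the filename and payload below are hypothetical placeholders:

```python
# Minimal supply-chain integrity check: verify a downloaded artifact
# against a pinned SHA-256 digest before trusting it. The filename and
# payload are placeholders for illustration.
import hashlib


def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Stream in chunks so large artifacts don't need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_artifact(path: str, pinned_digest: str) -> bool:
    return sha256_of(path) == pinned_digest


# Example: write a known payload, pin its digest, then verify.
with open("artifact.bin", "wb") as f:
    f.write(b"trusted release payload")
pinned = hashlib.sha256(b"trusted release payload").hexdigest()
print(verify_artifact("artifact.bin", pinned))  # True
```

Package managers expose the same idea natively (e.g. pip's `--require-hashes` mode); the point is that a silently swapped update no longer matches the pinned digest and fails closed.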
Mercor and its competitors—such as Surge, Handshake, Turing, Labelbox, and Scale AI—have developed a reputation for being incredibly secretive about the services they offer to major AI labs. It’s rare to see the CEOs of these firms speaking publicly about the specific work they offer, and they internally use codenames to describe their projects.
Adding to the confusion around the hack, a group going by the well-known name Lapsus$ claimed this week that it had breached Mercor. In a Telegram account and on a BreachForums clone, the actor offered to sell an array of alleged Mercor data, including a 200-plus GB database, nearly 1 TB of source code, and 3 TB of video and other information. But researchers say that many cybercriminal groups now periodically take up the Lapsus$ name and that Mercor’s confirmation of the LiteLLM connection means that the attacker is likely TeamPCP or an actor connected to the group.
TeamPCP appears to have compromised the two LiteLLM updates as part of an even larger supply chain hacking spree in recent months that has been gaining momentum, catapulting TeamPCP to prominence. And while launching data extortion attacks and working with ransomware groups, such as the group known as Vect, TeamPCP has also strayed into political territory, spreading a data wiping worm known as “CanisterWorm” through vulnerable cloud instances with Farsi as their default language or clocks set to Iran’s time zone.
“TeamPCP is definitely financially motivated,” says Allan Liska, an analyst for the security firm Recorded Future who specializes in ransomware. “There might be some geopolitical stuff as well, but it’s hard to determine what’s real and what’s bluster, especially with a group this new.”
Looking at the dark-web posts of the alleged Mercor data, Liska adds, “There is absolutely nothing that connects this to the original Lapsus$.”
|
|
|
Elon Musk’s last co-founder reportedly leaves xAI | TechCrunch |
techcrunch |
28.03.2026 16:11 |
0.655
|
| Embedding sim. | 0.7649 |
| Entity overlap | 0 |
| Title sim. | 0.0247 |
| Time proximity | 0.9751 |
| NLP type | leadership_change |
| NLP organization | xAI |
| NLP topic | artificial intelligence |
| NLP country | United States |
Open original
Earlier this month, it looked like all but two of Elon Musk’s 11 co-founders at his AI startup xAI had departed the company . Now, according to Business Insider, the remaining two co-founders, Manuel Kroiss and Ross Nordeen, have left as well .
Business Insider said on Wednesday that Kroiss had told people that he’s leaving xAI, then reported that Nordeen left the company on Friday .
Musk recently claimed xAI “was not built right [the] first time around,” so it’s now “being rebuilt from the foundations up.” The company was recently acquired by Musk’s SpaceX , bringing SpaceX, xAI, and X (formerly Twitter) together under one corporate umbrella, all as SpaceX is reportedly planning to go public .
Kroiss and Nordeen both reported directly to Musk, according to Business Insider, with Kroiss leading the company’s pretraining team, while Nordeen was Musk’s “right-hand operator.” Nordeen reportedly came to xAI from Tesla and was involved in planning major layoffs at Twitter after Musk acquired the company in 2022.
TechCrunch has reached out to xAI for comment.
|
|
|
‘Thank You for Generating With Us!’ Hollywood's AI Acolytes Stay on the Hype Train |
wired |
01.04.2026 18:13 |
0.654
|
| Embedding sim. | 0.7646 |
| Entity overlap | 0.125 |
| Title sim. | 0.0654 |
| Time proximity | 0.832 |
| NLP type | other |
| NLP organization | Runway |
| NLP topic | generative ai |
| NLP country | United States |
Open original
John Semley
Culture
Apr 1, 2026 2:13 PM
‘Thank You for Generating With Us!’ Hollywood's AI Acolytes Stay on the Hype Train
Star Wars producer Kathleen Kennedy was one of the few skeptics at the Runway AI Summit, where AI was compared to fire and the printing press just a week after Sora’s death.
Photo-Illustration: WIRED Staff; Getty Images
Kathleen Kennedy, the Hollywood super-producer behind culture-defining megahits like Jurassic Park and the Star Wars franchise , recently put a question to the head of the American Film Institute: “How are you going to teach taste?”
As Kennedy told an audience of industry insiders who gathered in Manhattan this week for the Runway AI Summit, the venerable LA film academy has been incorporating certain artificial intelligence tools into their curriculum. Kennedy says she asked the institute’s dean how the school would continue to raise generations of not just prompt-generators but discerning filmmakers with a distinct point of view. “Taste is fundamental,” Kennedy, 72, told the crowd. “It does define the choices you’re making.”
In other words, how could the AFI ensure that these AI tools were being used to make work that is, you know, good?
It’s a great question. And the sort that was in short supply during this industry confab, which New York-based AI company Runway hosted less than a week after OpenAI killed its video app Sora , disrupting the company’s $1 billion deal with Disney. Despite that blow to early prophecies that Sora would remake Hollywood , the hype machine was working overtime Tuesday, as executives labeled AI a technological feat on par with the harnessing of fire.
“AI has become the conversation,” Runway’s cofounder and CEO Cristóbal Valenzuela told the audience at the event while an AI-generated video showed an old man on the subway reading a newspaper bearing the big bold headline “AI Has Become the Conversation.” In addition to offering a suite of text-to-video generation and VFX tools for “creatives,” Runway also operates an annual AI-generated film competition . It’s positioned itself at the forefront of the creative revolution in AI. As I discovered at the event, that also involves trying to make “generate” happen. As in popularizing the verb. Summit guests were offered free T-shirts exclaiming “Thank You For Generating With Us!” in the iconic Bookman font of those “ Thank You For Shopping With Us! ” plastic bags.
“We’re living in magic times,” Valenzuela told the crowd, in a tone-setting, 10 am keynote titled “The Normalization of Magic: AI and What’s Ahead of Us.” The title was a nod to sci-fi giant Arthur C. Clarke’s “three laws” outlined in a 1962 essay, the third and most famous of which claims that "Any sufficiently advanced technology is indistinguishable from magic.” As if to prove the point, another AI-generated image was projected on big screens spread across an enormous high-rise ballroom, showing Apple Computer cofounder Steve Jobs striding the ancient Athenian agora with a be-toga’d sage (Socrates, I’d guess). “We are literally here!” Valenzuela beamed.
Well, not literally. But you know what he means.
By and large, Runway’s AI summit was marked by this sort of wild, declarative enthusiasm. Early in the day, Paramount’s chief technology officer, Phil Wiser, cautioned that he wanted to describe the benefits of AI without being “hypey or hyperbolic.” He then immediately went so far as to claim that generative AI ranks among the top 10–and maybe even top five–“technology trends of all time,” right alongside the printing press and fire.
The mood at these kinds of events brings to mind one of the only funny Bluesky posts : “CEO of Oreo cookies: The Oreo cookie is as important as oxygen.” Another speaker compared AI’s revolutionary potential to that of the printing press (again), the photographic film camera, and Adobe Photoshop (she, incidentally, heads up Adobe’s new AI business ventures). An executive from video game studio Electronic Arts boasted that AI was able to “close the gap between imagination and creation.”
While this type of hype is predictable at industry-led events, again and again summit attendees were reminded that generative AI isn’t just another flash-in-the-pan techno-bauble, like VR headsets, the “metaverse,” or NFTs. It’s actually revolutionary.
The insistence betrays the measure of anxiety one might expect at a confab celebrating a power-hungry industry staring down an energy crisis . And the shuttering of a video-generating tool from one of the biggest companies in the game. And protests against the data centers necessary for the technology to work.
Indeed, there was plenty of talk about how AI— despite concerns about how its great many “efficiencies” may change, or render totally redundant, the work of those toiling in creative fields—is not an affront to human creativity.
Everyone seemed in agreement that what AI cannot do—yet, anyway—is “generate” its own ideas. “The origin of creativity is the human mind,” said EA’s Mihir Vaidya. Adobe’s Hannah Elsakr offered similar sentiments, projected onscreen as an equation: (Humanity x Creativity) AI = Unlimited Possibility . We were told that “stories are human” and that, in this brave new world of unlimited possibility, “human judgment” will be key. But AI’s promise of instant gratification misunderstands the very core of human creativity.
AI boosters see human beings as almost purely idealized, creative engines: prime movers in an increasingly technologized process. In reality, creativity is revealed in work and the toil of figuring things out. One learns to play guitar by stumbling through Green Day power chords. One learns to write by writing, and rewriting, and futzing around with the shape and structure of sentences. You can’t learn to write by just thinking about writing. Or “generate” a killer guitar riff by imagining it. Creativity is not just some commodity, trapped in the imagination, that can be tapped and sieved by technology. It is a skill that must be learned, not just unleashed. The dreaded “gap between imagination and creation” is not some inefficiency that can be ironed out by a computer program. It is where creativity itself emerges.
The other nagging issue is the results. A lot of the images demo’d at the summit looked plain awful. They are conspicuously synthetic, digital, inhuman. Yet everyone applauded them, as if they actually looked good. In another session, Rob Wrubel, founder and managing director of AI studio Silverside, bragged about how his company used the tech to make a completely AI-generated holiday ad for Coca-Cola. Maybe I, too, live in a bubble, but I recall that spot being widely despised and mocked. This, of course, was never mentioned.
The suffocating hype-o-rama made Kennedy’s fireside chat a healthy dose of reality.
In addition to stressing the importance of human virtues like taste, and even basic ability, she outlined a few instances in which technological advances had failed her productions. Kennedy, who stepped down as head of Lucasfilm earlier this year, cited a recent Star Wars film—the forthcoming The Mandalorian and Grogu , one presumes—where 3-D printed props began breaking after a few takes. Because they were not built by skilled prop masters, whose experience grants them intuitions about how objects will behave, and not just how they look, they turned out flimsy and subpar.
She stressed the importance of uniquely human experiences like chance and accident to the creative process and underlined the value of the sort of “thinking time” that other speakers onsite seemed keen to streamline or eliminate altogether. “I’m gonna sound like a traditionalist!” she said. And to be fair, she did. Refreshingly so.
If AI is indeed a useful tool for the blockbuster filmmaking process, then it’s little surprise that someone with a CV as long and impressive as Kennedy’s would have more sophisticated thoughts about how to use this tool. The younger, jumpier AI upstarts seem more eager to use technology as an end run around a creative process they talk about as sacred.
Maybe it's the sort of circumspection that experience, combined with tremendous success, brings. Or maybe it's just the difference between knowing how to create something and merely being able to generate it.
|
|
|
Vertiv to Expand Ohio Manufacturing to Boost U.S. Production of Critical Thermal Management Technologies for AI Data Centers |
prnewswire |
30.03.2026 20:00 |
0.654
|
| Embedding sim. | 0.7775 |
| Entity overlap | 0.087 |
| Title sim. | 0.1783 |
| Time proximity | 0.5631 |
| NLP type | product_launch |
| NLP organization | Vertiv Holdings Co |
| NLP topic | ai infrastructure |
| NLP country | United States |
Open original
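Each match in this list carries four component similarities plus one overall relevance score (0.654 for the Vertiv item below). The page does not publish its combination rule, so the following is only a minimal sketch of one plausible scheme, a weighted mean with hypothetical, illustrative weights:

```python
# Sketch of how the per-match component similarities shown in the
# sidebar (embedding, entity overlap, title, time proximity) could be
# folded into a single relevance score. The weights here are
# hypothetical; the page does not publish its actual formula.

def composite_score(components, weights=None):
    """Weighted mean of similarity signals, each assumed to lie in [0, 1]."""
    if weights is None:
        weights = {name: 1.0 for name in components}  # equal weighting by default
    total = sum(weights[name] for name in components)
    return sum(components[name] * weights[name] for name in components) / total

# Component scores taken from the Vertiv match in this list.
match = {
    "embedding_sim": 0.7775,
    "entity_overlap": 0.087,
    "title_sim": 0.1783,
    "time_proximity": 0.5631,
}
score = composite_score(match)
```

With equal weights this yields roughly 0.40 for the Vertiv components, which does not reproduce the 0.654 shown, so the real pipeline presumably uses non-uniform weights or a different aggregation entirely.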
Vertiv to Expand Ohio Manufacturing to Boost U.S. Production of Critical Thermal Management Technologies for AI Data Centers
News provided by
Vertiv Holdings Co
Mar 30, 2026, 16:00 ET
Ironton, Ohio facility expansion strengthens supply chain and increases Vertiv's liquid cooling and chilled water system capacity for high-density computing
COLUMBUS, Ohio , March 30, 2026 /PRNewswire/ -- Vertiv (NYSE: VRT ), a global leader in critical digital infrastructure, today announced an investment of ~$50 million to expand its manufacturing presence in Ironton, Ohio, and its headquarters campus in Westerville, Ohio. The projects are expected to create hundreds of new jobs through 2029 and strengthen Vertiv's ability to support growing customer demand for AI, high-density computing, and other critical digital infrastructure applications.
Vertiv’s Ironton facility will expand manufacturing capacity for advanced liquid cooling and chilled water systems, strengthening supply chains and supporting high-density AI infrastructure and next-generation data centers.
The Ironton expansion, which is expected to be operational in the second quarter of 2027, is planned to increase production capacity for Vertiv liquid cooling and chilled water systems used in advanced thermal management applications. With the expansion, total capacity at the facility is expected to increase by ~45% for these systems, helping Vertiv expand regional production, improve responsiveness to customer demand, and shorten supply chains.
As AI adoption accelerates and compute densities continue to rise, customers are requiring more advanced thermal management solutions to support next-generation GPU clusters, large-scale model training, and other high-performance workloads. Vertiv's investment is designed to expand the manufacturing, engineering, sales, services, and logistics capabilities needed to help customers deploy and scale this infrastructure more efficiently.
"Ohio operations remain integral to Vertiv's strategy," said Giordano (Gio) Albertazzi, CEO of Vertiv. "This investment expands our manufacturing capacity and strengthens the engineering, sales, service, and logistics capabilities that support customers building the next generation of digital infrastructure. It also reflects our confidence in the talent, commitment, and long-standing support we continue to see across Ohio and within the communities where we operate."
Vertiv has a rich history in Ohio, founded more than 60 years ago as Liebert Corporation, a pioneer in data center precision cooling. Today, Vertiv's Ohio footprint spans 14 facilities, including manufacturing, research and development, testing labs, service and sales offices, customer experience centers, a training facility, and its global headquarters. By expanding both its manufacturing footprint and headquarters capabilities in the state, Vertiv is further positioning its U.S. operations to serve customers with greater scale, speed, and operational resilience.
Vertiv delivers end-to-end infrastructure , from grid to chip and chip to heat reuse, where power, cooling, IT, and services operate in unison and are built for multiple compute generations ahead. With global reach and a portfolio of innovative industry-leading technologies and services, Vertiv is enabling customers to deploy efficiently and scale seamlessly, helping customers manage the challenges associated with modern digital infrastructure.
To learn more about Vertiv, visit Vertiv.com .
About Vertiv Vertiv (NYSE: VRT ) brings together hardware, software, analytics and ongoing services to enable its customers' vital applications to run continuously, perform optimally and grow with their business needs. Vertiv solves the most important challenges facing today's data centers, communication networks and commercial and industrial facilities with a portfolio of power, cooling and IT infrastructure solutions and services that extends from the cloud to the edge of the network. Headquartered in Westerville, Ohio, USA, Vertiv does business in more than 130 countries. For more information, and for the latest news and content from Vertiv, visit Vertiv.com .
Forward-looking statements This release contains forward-looking statements within the meaning of the Private Securities Litigation Reform Act of 1995, Section 27 of the Securities Act, and Section 21E of the Securities Exchange Act. These statements are only a prediction. Actual events or results may differ materially from those in the forward-looking statements set forth herein. Readers are referred to Vertiv's filings with the Securities and Exchange Commission, including its most recent Annual Report on Form 10-K and any subsequent Quarterly Reports on Form 10-Q for a discussion of these and other important risk factors concerning Vertiv and its operations. Vertiv is under no obligation to, and expressly disclaims any obligation to, update or alter its forward-looking statements, whether as a result of new information, future events or otherwise.
CONTACT [email protected]
SOURCE Vertiv Holdings Co
|
|
|
Japan relaxes privacy laws to make AI development easy |
the_register_ai |
08.04.2026 04:48 |
0.652
|
| Embedding sim. | 0.7886 |
| Entity overlap | 0.0571 |
| Title sim. | 0.0645 |
| Time proximity | 0.6262 |
| NLP type | regulation |
| NLP organization | Government of Japan |
| NLP topic | ai regulation |
| NLP country | Japan |
Open original
Public Sector
Japan relaxes privacy laws to make itself the ‘easiest country to develop AI’
Opting out of personal data use won't be an option because Minister says that's a 'very big obstacle' to AI adoption
Simon Sharwood
Wed 8 Apr 2026 // 04:48 UTC
Japan’s Minister for Digital Transformation Hisashi Matsumoto has declared the nation will become the easiest place in the world to develop AI apps, thanks to legal changes that mean organizations won’t need to secure consent to use some personal information.
To make that happen, Japan’s government on Tuesday approved amendments to the nation’s Personal Information Protection Act that remove the requirement for opt-in consent before sharing personal data.
The changes only apply to data that poses little risk of infringing individuals’ rights, and when developers use it to compile statistics for research purposes. Even health-related data comes under the amendments, if it can improve public health.
Facial scans are also fair game. The amendments require those who acquire facial images to explain how they handle the data, but offering a chance to opt out won’t be mandatory.
Collecting the image of a child aged under 16 will require parental approval, and a “best interests” test will apply when considering use of data that describes minors.
Organizations that collect the wrong data, or maliciously use it to harm citizens, will face fines equivalent to the profit they make from improperly using data. Japan’s government will also implement fines for obtaining data through fraudulent means.
But in the event of a data leak, organizations will not need to notify impacted citizens if there is little risk of harm to individuals.
Minister Matsumoto said Japan needs this legislative tweak because current laws represent “a very big obstacle to the development, and utilization of AI in Japan.”
“We must prevent this from happening,” he said, because without access to data Japan will struggle to develop and deploy useful AI.
Despite its reputation as a hotbed of technology, Japan has been markedly slow to digitize government services. These amendments are aimed, in part, at making sure Japan is not slow to catch the AI wave. ®
|
|
|
AIxCrypto Co-CEO Jerry Wang Shares Weekly Investor Update: AI Agent Development and Internal Testing Progress |
prnewswire |
07.04.2026 03:42 |
0.652
|
| Embedding sim. | 0.7464 |
| Entity overlap | 0.0233 |
| Title sim. | 0.1125 |
| Time proximity | 0.9655 |
| NLP type | other |
| NLP organization | AIxCrypto Holdings, Inc. |
| NLP topic | ai agents |
| NLP country | United States |
Open original
AIxCrypto Co-CEO Jerry Wang Shares Weekly Investor Update: AI Agent Development and Internal Testing Progress
News provided by
AIxCrypto Holdings, Inc.
Apr 06, 2026, 23:42 ET
LOS ANGELES , April 6, 2026 /PRNewswire/ -- AIxCrypto Holdings, Inc. (NASDAQ: AIXC ) ("AIxC" or the "Company"), a technology company focused on infrastructure for the emerging Embodied AI (EAI) and on-chain asset ecosystem, today shared a weekly business update from Co-CEO Jerry Wang highlighting continued progress in the Company's AI Agent strategy, including ongoing development across key modules and the start of initial internal enterprise testing aimed at supporting workflow optimization and practical use case refinement.
Advancing AI Agent-Related Development Efforts
This week, AIxC continued to advance its broader AI Agent strategy, with development work across key modules and supporting infrastructure remaining on track. As part of this effort, the Company has begun initial internal enterprise testing of certain AI Agent-related capabilities in order to evaluate workflow integration, identify optimization opportunities, and further refine vertical use cases within its own operating environment. AIxC believes that this stage of internal testing is an important step in moving selected Agent-related development from architecture and feature buildout toward more practical, real-world applications. The Company remains focused on advancing this work in a disciplined manner and within appropriate operational, regulatory, and compliance boundaries.
Expanding Strategic Visibility Through Broader Market Dialogue
Jerry Wang recently participated as a speaker in an X Space discussion focused on Bitcoin and broader digital asset market themes. The Company believes that participation in relevant industry dialogue may help broaden awareness of AIxC's strategic priorities and support ongoing communication with market participants. AIxC views this type of engagement as consistent with its broader effort to communicate its long-term positioning around tokenization infrastructure and the convergence of digital assets and real-world applications.
About AIxCrypto:
AIxCrypto Holdings, Inc. ("AIxCrypto") is a U.S.-Nasdaq listed company dedicated to building a world-leading ecosystem that integrates AI and blockchain while bridging Web2 and Web3.
FORWARD LOOKING STATEMENTS:
This press release contains "forward-looking statements", including statements regarding AIxCrypto Holdings, Inc. ("AIxCrypto") within the meaning of the "safe harbor" provisions of the Private Securities Litigation Reform Act of 1995. All of the statements in this press release, including financial projections, whether written or oral, that refer to expected or anticipated future actions and results of AIxCrypto are forward-looking statements. In addition, any statements that refer to expectations, projections, or other characterizations of future events or circumstances are forward-looking statements. These forward-looking statements reflect our current projections and expectations about future events as of the date of this presentation. AIxCrypto cannot give any assurance that such forward-looking statements and financial projections will prove to be correct.
The information provided in this press release does not identify or include any risk or exposures of AIxCrypto that would materially and adversely affect the performance or risk of the company. By their nature, forward-looking statements and financial projections involve numerous assumptions, known and unknown risks and uncertainties, both general and specific, that contribute to the possibility that the predictions, forecasts, projections and other forward-looking information will not occur, which may cause the Company's actual performance and financial results in future periods to differ materially from any estimates or projections of future performance or results expressed or implied by such forward-looking statements and financial projections. Important factors that could cause actual results to differ materially from expectations include, but are not limited to: business, economic and capital market conditions; the heavily regulated industry in which AIxCrypto carries on business; current or future laws or regulations and new interpretations of existing laws or regulations; the inherent volatility and regulatory uncertainty associated with cryptocurrency investments; legal and regulatory requirements; market conditions and the demand and pricing for our products; our relationships with our customers and business partners; our ability to successfully define, design and release new products in a timely manner that meet our customers' needs; our ability to attract, retain and motivate qualified personnel; competition in our industry; failure of counterparties to perform their contractual obligations; systems, networks, telecommunications or service disruptions or failures or cyber-attack; ability to obtain additional financing on reasonable terms or at all; litigation costs and outcomes; our ability to successfully maintain and enforce our intellectual property rights and defend third party claims of infringement of their intellectual property rights; and our 
ability to manage our growth. Readers are cautioned that this list of factors should not be construed as exhaustive.
All information contained in this press release is provided as of the date of the press release issuance and is subject to change without notice. Neither AIxCrypto, nor any other person undertakes any obligation to update or revise publicly any of the forward-looking statements and financial projections set out herein, whether as a result of new information, future events or otherwise, except as required by law. This is presented as a source of information and not an investment recommendation. This press release does not take into account, nor does it provide any tax, legal or investment advice or opinion regarding the specific investment objectives or financial situation of any person. AIxCrypto reserves the right to amend or replace the information contained herein, in part or entirely, at any time, and undertakes no obligation to provide the recipient with access to the amended information or to notify the recipient thereof.
Readers are advised not to place undue reliance on forward-looking statements, as there is no guarantee that the plans, intentions, or expectations they are based on will be realized. While management believes these statements are reasonable at the time of preparation, actual results may differ materially. These forward-looking statements reflect the Company's expectations as of the date of this presentation and are subject to change without notice. The Company is not obligated to update or revise these statements, unless required by law.
Forward-looking statements are often identified by words such as "may," "could," "would," "might," or "will," indicating possible future actions, events, or outcomes. These statements involve known and unknown risks, uncertainties, and other factors that could cause actual results to differ significantly from what is expected.
Actual results may differ materially due to factors such as the ability to secure financing, complete transactions, meet exchange requirements, consumer demand, competition, and unexpected costs. These forward-looking statements are based on assumptions that may prove incorrect, and the Company does not assume any obligation to update them except as required by law. Given the uncertainties involved, readers should not place undue reliance on these statements.
You are cautioned not to place undue reliance on these forward-looking statements, which are made only as of the date of this news release. The Company disclaims any intent or obligation to update these forward-looking statements beyond the date of this news release, except as required by law. This caution is made under the safe harbor provisions of the Private Securities Litigation Reform Act of 1995.
SOURCE AIxCrypto Holdings, Inc.
|
|
|
Spain's Xoople raises $130 million Series B to map the Earth for AI | TechCrunch |
techcrunch |
06.04.2026 13:00 |
0.652
|
| Embedding sim. | 0.744 |
| Entity overlap | 0.0508 |
| Title sim. | 0.1057 |
| Time proximity | 0.9793 |
| NLP type | funding |
| NLP organization | Xoople |
| NLP topic | artificial intelligence |
| NLP country | Spain |
Open original
Space data companies have argued for years that the private sector needs their products, but the real uptake has been from government buyers. Now, with artificial intelligence top of mind for business, one Spanish startup is trying to become the go-to source of ground truth for enterprise.
Xoople (said like “zoople”) is developing a satellite constellation to collect precise data aimed at deep learning models. The startup was founded in 2019 and has spent the last seven years developing its tech stack around data collected by government spacecraft, and integrating with cloud providers.
CEO and co-founder Fabrizio Pirondini told TechCrunch that the company has closed a $130 million Series B led by Nazca Capital. Other investors include MCH Private Equity, CDTI (a tech development fund backed by the Spanish government), Buenavista Equity Partners, and Endeavor Catalyst.
The startup also announced Monday a deal with U.S. space and defense contractor L3Harris Technologies to begin building sensors for Xoople’s spacecraft, which are designed to collect “a stream of data that is going to be two orders of magnitude better than existing monitoring systems,” Pirondini told TechCrunch.
L3Harris has built some of the most advanced commercial imaging systems on orbit. However, Pirondini wouldn’t share any details about the satellites, not even how many the company wants to build, except that the sensors will collect optical data. Those systems aren’t cheap, and the company continues to raise capital to fund its full development.
Pirondini declined to share his firm’s valuation after the current fundraising round, except to note that “we are in unicorn territory.” The company has raised $225 million in total.
The company’s focus on data quality is a key differentiator. Still, Xoople is entering a crowded space with several mature competitors, including Vantor, Planet, BlackSky, and Airbus in Europe, that are already operating satellites on orbit and developing AI-focused datasets.
The twist for Xoople is its focus on enterprise platforms.
“Our business model is all about embedding our data and our solutions directly to the ecosystem of those so that they can provide those services directly to their customers,” Pirondini said.
Pirondini described use cases, including government agencies tracking transportation networks and damage from natural disasters, agribusiness monitoring crop health, or large firms keeping an eye on infrastructure projects or supply chains.
Aravind Ravichandran, the CEO of Earth observation sector consultancy TerraWatch Space, told TechCrunch that Xoople’s decision to prepare its distribution strategy before it has its own data is intriguing. For now, it relies on publicly available data, like that collected by the European Space Agency’s Sentinel-2 spacecraft.
“They laid the distribution pipes before having their own data supply — embedding into Microsoft and Esri, the two platforms where enterprise, government and most GIS buyers already live, but neither has proprietary EO data,” Ravichandran said. “Google’s head start on geospatial AI models is the benchmark they’ll be measured against.”
It’s not clear what balance Xoople will strike between providing raw data and developing its own analysis tools, but Pirondini hopes to build “Earth’s System of Record,” a project he expects will ultimately include the development of a true AI world model alongside partners.
Tim Fernholz
Tim Fernholz is a journalist who writes about technology, finance and public policy. He has closely covered the rise of the private space industry and is the author of Rocket Billionaires: Elon Musk, Jeff Bezos and the New Space Race. Formerly, he was a senior reporter at Quartz, the global business news site, for more than a decade, and began his career as a political reporter in Washington, D.C.
|
|
|
‘Uncanny Valley’: Iran’s Threats on US Tech, Trump’s Plans for Midterms, and Polymarket’s Pop-up Flop |
wired |
02.04.2026 21:04 |
0.649
|
| Embedding sim. | 0.7453 |
| Entity overlap | 0.1346 |
| Title sim. | 0.036 |
| Time proximity | 0.9906 |
| NLP type | other |
| NLP organization | Polymarket |
| NLP topic | ai security |
| NLP country | United States |
Open original
Brian Barrett, Zoë Schiffer, Leah Feiger, Makena Kelly, Kate Knibbs
Security
Apr 2, 2026 5:04 PM
Uncanny Valley: Iran’s Threats on US Tech, Trump’s Plans for Midterms, and Polymarket’s Pop-up Flop
In this episode, we discuss Iran’s threats to target US tech firms, gear up for the midterm elections, and get a scene report from the Polymarket pop-up bar in DC.
Photo-Illustration: WIRED Staff; Getty Images
The team is back this week to discuss how top US tech companies are increasingly finding themselves as targets in the ongoing war with Iran. They also give an inside view into how Polymarket’s pop-up bar in DC went sideways. Plus, our hosts go through the steps that the Trump administration is taking to control the upcoming midterm elections.
Articles mentioned in this episode:
Iran Threatens to Start Attacking Major US Tech Firms on April 1
Polymarket’s Coming-Out Party in Washington Was a Disaster
This Is How Trump Is Already Threatening the Midterms
You can follow Brian Barrett on Bluesky at @brbarrett , Zoë Schiffer on Bluesky at @zoeschiffer , and Leah Feiger on Bluesky at @leahfeiger . Write to us at [email protected] .
How to Listen
You can always listen to this week's podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here's how:
If you're on an iPhone or iPad, open the app called Podcasts, or just tap this link . You can also download an app like Overcast or Pocket Casts and search for “uncanny valley.” We’re on Spotify too.
Transcript
Note: This is an automated transcript, which may contain errors.
Brian Barrett: Hey, it's Brian. Zoë, Leah, and I have really enjoyed being your new hosts these past few weeks, and we want to hear from you. If you like the show and have a minute, please leave us a review in the podcast or app of your choice. It really helps us reach more people, and for any questions and comments, you can always reach us at [email protected] . Thank you for listening. On to the show.
Zoë Schiffer: Leah, did you make it to Chicago?
Leah Feiger: Honestly, barely. I have spent more time in airports in the last three days than I care to admit.
Brian Barrett: You told me yesterday that you were excited to be flying out of Newark instead of the other—which is the first time I've heard that.
Leah Feiger: I was young when I said that.
Zoë Schiffer: Welcome to WIRED's Uncanny Valley . I'm Zoë Schiffer, director of business and industry.
Brian Barrett: I'm Brian Barrett, executive editor.
Leah Feiger: And I'm Leah Feiger, senior politics editor.
Zoë Schiffer: This week on the show, we have a pretty well-rounded episode for you all. A little bit of international politics as Iran threatens to target US tech firms. There's also election news as we're tracking Trump's attempts to control the midterms, and a scene report from our DC colleague who had the great assignment of hitting up the Polymarket pop-up bar, which was a bit of a Fyre Fest situation.
Brian Barrett: To let people in behind the scenes of the magic of the Uncanny Valley podcast, we're recording this on a Wednesday. It's going to come out on Thursday. That's the magic. So, things could happen between then, but on this Wednesday, yesterday, Tuesday, Iran's Islamic Revolutionary Guard Corps warned that it planned to begin attacking more than a dozen American companies across the Middle East if more of Iran's leaders are killed during the ongoing war. They'd made this threat before, but what was different is that they set a deadline to it. They said on April 1st, we are going to start targeting companies in these regions. There are 18 total companies on the list they actually gave, including Apple, Microsoft, Google, Meta, IBM, Tesla, Palantir, and a bunch more. As of now, that hasn't happened other than an attack that we can talk about later that sort of affects Amazon Web Services. But it does seem to be another one of these escalations. And I'm really curious what is going on with these companies, what obligations they have to their employees to protect them, what it means for all kinds of investment in that region, which has been increasingly important. It feels like it opens up a lot of serious questions. Regardless of whether these attacks go through—hopefully they don't—but it's really an escalatory time.
Leah Feiger: I was pretty struck by parts of this where, you know, calling on employees of these tech firms in the region to distance themselves from workplaces, for residents living near offices of these companies to move away to a safe place, this is a very serious warning. And so much of this reminds me that what's happening here, this war that is very much becoming a war with a capital W, is not Trump's childhood wars. We are in a globalized world where he is not going to be able to remove himself from the blowback if American companies are indeed attacked. This is very different than military. This is an impact that I think would be very hard to escape from beyond the fact that it's horrible, it's sad. These are people's lives.
Zoë Schiffer: Yeah. I mean, we reached out to every company on the list. It turns out that they don't largely want to comment on their feelings about this or what they're doing. I actually was kind of surprised. I was like, I don't know, you're on a target list and you don't want to say anything?
Leah Feiger: No, they don't want to share what their plans are. They don't want to say if they've moved out employees. No, but they don't even want to say if they're taking it seriously, because if they're taking it seriously, they're not trusting that the US government and its military is going to be able to handle it. This is a lose-lose situation. You have Trump on one side posting messages about how the US is winning, and if it's not, then they're going to rain hellfire. You have to at least pretend, if you're in charge of these companies, that you believe that, while putting, of course, thousands of your employees at risk.
Brian Barrett: And the US did say after this latest thing, there was a comment that someone gave that was basically like, "Well, we'll respond if they do something here," which is a little bit like, "We will definitely put up a stop sign after someone gets run over." I alluded to this before. Iran is willing to do this. They've already had two strikes on Amazon Web Services data centers last month and damaged another one. It is sort of the first publicly confirmed attack on American-owned hyperscale cloud infrastructure. And I guess my question is, how much do we think the targets are sort of the symbolic headquarters, even if they're empty, versus actual critical infrastructure powering the cloud, manufacturing facilities, whatever is there? I'm curious what shaped those targets. Then again, hopefully it's all bluster and this will move on.
Zoë Schiffer: I mean, but it does come on the heels of Sam Altman's trip to the Middle East with members of the Trump administration where he was there striking deals and presumably setting up what will become large scale data centers. So, he and other AI leaders have been eyeing that region as a really lucrative place to begin doing business or expanding business. And I think that that is something that, for example, Dario Amodei said, "Hey, we should be wary about putting data centers in the Middle East." And I think they're taking that seriously. It's been interesting though. I will say, I've reached out to people at Anthropic and sources at OpenAI being like, "What do you think of the war in Iran? What is top of mind for you right now?" On the whole, people who are working in these companies in San Francisco are like, "What war?" They are just focused on what is happening here at home and do not seem to be paying an enormous amount of attention. I don't think that's true for the executives, but the rank and file are shrugging.
Brian Barrett: I'm a little surprised by that, because a knock-on effect of all of this is a stock market that is way down, including tech companies, which have been really, really hit, down 20 percent in some cases. Nvidia is really pretty far down, Meta too. So, I'm a little surprised in that I feel like the IPO climate is going to be less hospitable to a lot of these companies who are looking for that for their exit. And a lot of people who work at or invested in these companies have options, and they're seeing their value dwindle by the day. So, it's a shame that it takes hitting their wallet to get people to pay attention, but presumably at some point it will.
Zoë Schiffer: And I think if the effect on their wallets continues, we will see these people really, really care. I'm sure we will see chatter in Slack about this, but I think they're pretty used to the ups and downs. And so, while this is a pretty dramatic drop for some of the public companies, especially because when we're talking about say OpenAI, the thought was that they were eyeing an IPO near the end of the year. So, I think at least from the people I've talked to, which of course is a handful of the overall employee base, it's kind of like, well, a lot could change.
Leah Feiger: This week is going to be a real bellwether as well. Trump delivers an address Wednesday night about Iran, but regardless in some ways of what Trump says, Iran has indicated that it feels the exact opposite. Trump says the war is over in two weeks. Iran says that the war is over when they say it's over, when they have won. So, we have backed them against the wall in a very serious way, and it doesn't really appear that there's an end in sight, especially if these are the kinds of companies on a target list, which are so near and dear to the Trump administration's heart.
Brian Barrett: I'll say just one more thing on this off of what you just said, Leah, which is that there was an amazing trifecta of quotes over the last couple of days where Trump said something like, "Negotiations are going great. We're making a lot of progress." Iran said, "We haven't even started negotiations, it’s not going to happen." And then Pete Hegseth jumped in and said—
Pete Hegseth, archival audio: We see ourselves as part of this negotiation as well. We negotiate with bombs.
Brian Barrett: We'll negotiate with bombs. And that really kind of sums up where we're at.
Leah Feiger: And of course, the Iran war is going to continue to be a point of contention probably going into the midterms. We have months to go here, but we are deep in primary season right now, which as you guys know, is one of my favorite times of the year because it's a moment where really we get our political crazies out. I love it, but we really do have to talk about all of the ways that the Trump administration is already making moves that threaten the integrity of the elections. David Gilbert, senior politics reporter at WIRED published a really good writeup on this this week. And one of the main things that this reporting draws attention to is the SAVE America Act. Are you guys super familiar with that?
Brian Barrett: I'm familiar-ish in that it seems bad.
Zoë Schiffer: Strong take.
Leah Feiger: This was a strong take. This is why people come to Uncanny Valley . It's for these kinds of takes.
Brian Barrett: OK, how about these? It seems really bad.
Leah Feiger: Good. Even better. Yes, that's what I was looking for. Look, it's basically the Republican response to the debunked conspiracy theory that millions of immigrants are flooding polling stations every election, voting for Democrats, making lives really, really bad for Republicans, and stealing elections around the country. This act would disenfranchise millions of people because it would require anyone trying to vote to produce a passport or a birth certificate, which is something that a lot of voting-eligible Americans do not have access to. It narrowly passed the House. Democrats are still trying very, very hard to block its passage in the Senate. It has come up in conversation a lot. I feel like this is something that I and other politics-focused people were talking a lot about a few weeks ago where everyone else would go, "What is that?" But now, with everything with the TSA not getting funded and partial government shutdowns, et cetera, Trump has made this a core point of his administration. It's like, "We have to pass the SAVE Act." So, this is just one of the ways that the Trump administration is putting a lot of pressure on and trying to make this happen, which would result in a very inequitable midterm for all. And amongst that, they have a bunch of other things that they're working on as well, like the war against mail-in voting. Trump historically hates it, even though he loves to vote by mail himself.
Brian Barrett: And has benefited from mail-in voting, I'd say.
Leah Feiger: Oh, yeah.
Brian Barrett: I feel like it is weird to me that Trump seems to think that mail-in voting is only a Democratic thing. A lot of Republican voters use mail-in voting.
Leah Feiger: Yes. And if anything, all of the pushing against mail-in voting has frankly hurt their bottom line a little bit, because all these Republicans are like, "Oh God, screw mail-in voting." And then they're not mail-in voting. It's very messy, and that one is a very strange one, but they've continued to work on that. Election deniers are across government right now, recruited because they were boosting election conspiracy theories back when Trump was out of office. They have not stopped doing so since being appointed. They're all over the government in a variety of agencies. And even right down to possible concerns on the day itself, the administration has suggested the possibility of sending ICE agents to election sites. So, there's a lot here. And I was really taken, I think, with some of the comments on wired.com or online, just about how people were like, "Oh yes, this is horrible." I knew that the Trump administration was doing all these things. I think people were a little bit shocked by how many hands were in so many different pockets. This is a very comprehensive approach. How do you change an entire vision of elections? This is it. It's an unbelievable roadmap. I've got to hand it to them.
Brian Barrett: Well, and then it continues, right? It's a dynamic thing. On Tuesday, Trump signed an executive order that is sort of part of that war on mail-in voting that we talked about. It would require states to give a list of eligible voters to the US government 60 days before the election in order to have the right to have the Postal Service deliver those mail-in ballots. One thing that I want to make really clear is that elections in the US are, inherently, structurally, and for good reasons, very, very localized. You have your local election, and that is by design; there is a reason the federal government doesn't control elections in the way that Trump seems to want to. For obvious reasons, one central authority having that much power over elections could do a lot of harm. Leah, my question for you is, how likely is any of this to actually get through? The SAVE Act is stalled and the Senate doesn't want to go for it. The executive order is probably going to get shot down in the courts, although who knows? We know what Trump wants to do. And again, it's really bad, as my analysis showed, but how much can he actually do? What's the appetite to actually push this stuff through, and what are the mechanisms for that?
Leah Feiger: That's a really good question. And it's a little bit hard to say right now while we're in this primary period, which is why this approach, which is just throw a ton of spaghetti at the wall and see what sticks, has worked for them very well in the past. So, they're making a lot of really educated guesses on what that would mean for them and what that would mean for the very specific voting breakdowns. When you say, for example, registering 60 days beforehand to vote, and the election is in early November, that brings us to early September. That cuts out college students who are registering to vote on their college campuses. That's who they're looking at. So, it's a very specific, targeted approach. Also, I guess to be clear, there's a very decent chance that the Republicans do very, very badly come November. Things are not looking good for them polling-wise, and they know they're not going to do great. And so, they want to amp up their populace into being like, "If we lose, it's because of cheating." If the war in Iran continues, if we all continue to not be able to travel, or some of us spend eight hours straight in the New York airport—
Brian Barrett: Leah is a single issue voter now.
Leah Feiger: I'm a single issue voter, and it's about funding TSA and making weather better. It's all to say that we're hitting an era where they're hedging their bets and going like, "If we lose, we need to figure out who to blame it on." And it's certainly not going to be Republican voters or Republican strategists.
Brian Barrett: Coming up after the break, we're going to take you inside Polymarket's pop-up bar in DC. Stay with us.
So, a few weeks ago, Polymarket, which is the online prediction market where people bet on the outcomes of real-world events—it's insanely popular, you might have an account—decided to create an in-real-life experience in the form of a pop-up bar, which they called the Situation Room, not to be confused with Wolf Blitzer's Situation Room; no betting there. The space was outfitted with tons of bright TV screens showing everything from the news to stock quotes, even a Bloomberg terminal. So, you got to monitor global events while you bet and drink. What could go wrong?
Makena Kelly, archival audio: So, I just got in. We're waiting outside for an hour and a half almost.
Brian Barrett: It was a one weekend only type of thing. So, we sent our DC-based reporter Makena Kelly to check it out.
Makena Kelly, archival audio: Nothing is working. There's a couple of tablets set up. I'm seeing some people playing what looks like a video game of—
Brian Barrett: It seems to have been a messy experiment, to say the least. Hey, Makena.
Makena Kelly: Good to be here.
Brian Barrett: Good to have you. And here's Kate Knibbs, our in-house expert on all things prediction markets, who also has some thoughts to share. Hey, Kate.
Kate Knibbs: Hi, thanks for having me. And I'm truly sad that I missed going there in real life.
Makena Kelly: I don't know.
Kate Knibbs: Yes, but it's only because I hate myself.
Brian Barrett: OK. So, take us there. When you went, you visited the pop-up, what were you expecting and what did you actually see? I feel like there's a gap there.
Makena Kelly: Yeah. So, from the promotional materials that Polymarket put on X and blasted in the press release, my expectations were really high. They had this orb in the promotional images. There were all these Bloomberg terminals. People were supposed to be downing drinks and placing bets and wandering around in this kind of highly fluorescent room where there were just endless screens, endless content to be monitoring whatever situation you wanted to monitor, whether that was who the next Republican presidential nominee was going to be or the war in Iran or things like that. And so, when I got there, it was supposed to open at 5:00 PM. It was pouring rain and we all waited outside for about an hour and a half, getting soaked, getting drinks handed to us outside by a very apologetic Polymarket. And when doors opened, about an hour and a half after they were originally scheduled to, nothing worked. Absolutely nothing worked. And really the only promise that they kept was a free night of drinks for anyone who showed up.
Zoë Schiffer: What was the ratio of reporters to—
Makena Kelly: Yeah. So, as for the ratio of reporters to everyone else: there were a lot of people leaving the line because it was starting to feel like this place was never going to open. And so, anyone who was a casual, "I'm going to show up here for a drink and gawk at the spectacle," for the most part left, and all the reporters who were assigned to this for the most part stuck around. And later in the night, after things had opened up, more people continued to come in. Some of the DOGE guys were there—
Brian Barrett: Of course.
Makena Kelly: —that we saw mixing around because they were part of the same kind of social circle—
Brian Barrett: Of course.
Makena Kelly: —as these folks. I saw some guys wearing Palantir hoodies, and a shirt that said—this didn't make it into the story because I had completely forgotten—but he was wearing this shirt that said, "Surveillance is the new sovereignty, Palantir something, something."
Zoë Schiffer: Wow.
Brian Barrett: Before we get too deep in the night, I do want to say not to make you relive this, but we do have tape of Josh Tucker, chief marketing officer of Polymarket. At the moment he let everyone know how badly things were going. Can we play that real quick?
Josh Tucker, archival audio: As a result of an electrical issue earlier tonight, we had to reset all of the TVs. With that being said, we want you all to have a great evening tonight. There are drinks, food, passed apps. We are here to answer any questions. Overnight, we will remedy it so that the situation can be properly monitored tomorrow. Appreciate you all coming out. We're so excited to meet you. Thank you all. And let's go back to you.
Makena Kelly, archival audio: Yes. Could I get a glass of white, please?
Unidentified speaker, archival audio: Yeah. Did you want a red or a white?
Makena Kelly, archival audio: A white please.
Unidentified speaker, archival audio: Yeah. Do you like a sauvignon blanc?
Makena Kelly, archival audio: Sauvignon blanc's great. Thanks so much.
Brian Barrett: I love that we also got your drink order in there.
Leah Feiger: Was it good? Was the alcohol at least top shelf? That's the part to me that I'm like, this company makes so much money and they couldn't even put their screens together. I don't know. This is so messy.
Makena Kelly: So, yes, I had a glass of white wine. It was totally fine, but the apps that were passed around, I was a little bit disappointed by, because the things that I saw were some pretzel bites and then these little skewers of strawberry and pineapple, and that was about it for the most part. On the second night, they brought in pizza from somewhere else and put it in these boxes that said Pentagon Pizza on them. So, there was more food the second night. But yeah, I've got to say I was a little disappointed—
Kate Knibbs: So, you went twice?
Makena Kelly: Yes, Kate. I went twice.
Kate Knibbs: I missed that.
Zoë Schiffer: Wait, is the Pentagon Pizza thing a joke about the pizza predicting the war?
Makena Kelly: Yeah.
Zoë Schiffer: Oh, my God.
Makena Kelly: Because they had these Pentagon Pizza trackers up. When I returned the second night—yes, I came back the second night—everything was working for the most part. There were still some screens that were turned off, but I never saw any actual Bloomberg terminals. There were some monitor-y, Bloomberg-type terminal things that it looked like Polymarket had developed themselves, but a real $50,000 Bloomberg terminal was nowhere to be found. And yeah, the second night, again, it was mostly people looking to gawk at the event, except I did find a couple of people who had placed some bets on platforms like Polymarket and Kalshi. One was named William, and he said he was a member of the military; he wouldn't give me his full name. And last year he got involved in this for the first time by putting in, I think, all of his tax return into Oklahoma City sports betting.
Makena Kelly, archival audio: So, you used Kalshi?
William, archival audio: Yes .
Makena Kelly, archival audio: When did you first start using the service?
William, archival audio: Probably when I got my tax return back.
Makena Kelly, archival audio: OK .
William, archival audio: So, I filed my taxes pretty early and I was like, "Oh, sweet. I got my tax return. What am I going to do with it?" So, I was like, "I'm going to just put it on Kalshi."
Makena Kelly: He said that he goes up and down 100 dollars, but he hasn't made any major winnings. Some of the stuff that we've heard is about people making crazy insider bets, making millions and millions of dollars. This is just a guy who was interested in this and plays it for fun, it sounds like.
Brian Barrett: Kate, what do you see when you see a pop-up like this and Polymarket trying to—is it an attempt to legitimize itself to just a marketing stunt? And how does it tie into what you're seeing with these companies anyway, that there's the explosive growth that they've got trying to reach out to so many people and getting so many people hooked on what they're offering?
Kate Knibbs: I mean, this particular event definitely seems like a very bald effort to woo DC-based journalists, if nothing else. One thing that Makena said sort of encapsulates what's going on right now, the thing about the guys in the Palantir hoodies. So, I think it was the same week that this bar opened. Polymarket announced a partnership with Palantir and Palantir is helping them protect the integrity of their sports market. So, Palantir is going to be basically attempting to help Polymarket catch insider traders and market manipulators in all the sports games, which is kind of wild. I actually asked Polymarket last week whether they had any other deals with Palantir when I was trying to get them to say anything about whether they were investigating the Iran bets that have been raising a lot of eyebrows. And they said that Palantir was only helping them with sports, which I thought was freaking weird. And it speaks to how they're rapidly expanding, but doing so in this really messy ad hoc way that doesn't really make a lot of sense. Because I was like, "If you're going to get Palantir involved, why wouldn't you have them do this geopolitical stuff instead of March Madness?" Yeah, wild, wild times.
Leah Feiger: It does all feel quite piecemeal, but taking a big step back, what does all of this say to you guys? Makena, you have now spent two, count them, two nights with all of these people. Kate, this is your beat.
Brian Barrett: That we know of.
Makena Kelly: Just two guys, just two.
Leah Feiger: That we know of. Maybe, yeah.
Makena Kelly: And I went back for brunch on Sunday morning.
Leah Feiger: What does this say about this increasing popularity, the power of prediction markets, and how this is becoming really a cultural phenomenon. The fact that they were able to get folks to come out for this, to get excited about this, even as it all vaguely blew up in their faces, their name is out there. The power, the cultural capital here that didn't exist a year ago is very much present. What does this say to you?
Kate Knibbs: I mean, I don't think it's going to subside anytime soon, at least not while the Trump administration is in power. The Trump administration is so, so friendly to this industry. Donald Trump Jr. is an advisor to both Polymarket and Kalshi. The Trump family is still allegedly prepping its own prediction market, Truth Predict, although I haven't actually heard anything about that since late 2025. Got to check on that. But yeah, this is definitely, I think, the beginning of something, for better or for worse. The reason I was just telling you guys about my ill-fated travels home to Chicago from New York is that I was at this Kalshi conference last week. It wasn't open to the public, so it was different from the Polymarket bar, but who it was for was really interesting to me. It was very, very focused on basically highlighting the ways that Kalshi is already super entrenched in the global financial system and has all of these big finance players involved. And it was really eye-opening how far they've already gone down that road. They announced that day that they had gotten approval for margining, and that basically means we're going to see a lot of big institutions putting way more money into these markets sooner rather than later.
Makena Kelly: Yeah. I think my main takeaway from the event, and the thing that stood out to me the most and that I put in the piece, is that the guy who was running it, Josh Tucker, who made that announcement that we played the tape of earlier—his last job was at Mr. Beast doing viral marketing. And when we talk about where Polymarket is right now, even with people who are familiar with the name and all that, it's very much a spectacle. This is very much playing on that and trying to grow its name. The bar, too, was only a block away from the CFTC, on K Street, which colloquially in DC is known as lobbyist central. So they're a block away from the one regulator that regulates them, throwing this big party and saying, "This is our coming-out party and we're here to have this conversation." They're there for the spectacle of blowing all this stuff up. But when it comes to actually following through with the party and the planning, or having that kind of productive discussion, that really was nowhere to be seen. The promotional material on X promoting this event made it out to be a very highly produced event that was going to be kind of otherworldly and highly technical. But after spending several hours there, the whole thing was kind of janky.
Kate Knibbs: It's super bonkers to me too, because Polymarket had that whole party in DC, and as of now, most of Polymarket's markets you're still not legally allowed to bet on from the US. So, they're really focusing on getting their name out there over people actually using the product.
Brian Barrett: Just to recap, we had Palantir, DOGE, Donald Trump Jr., Mr. Beast, and an absent Bloomberg terminal. I feel like we're checking a lot of boxes with this one event.
Leah Feiger: This is WIRED Mad Libs in every way, shape, or form. Yeah.
Zoë Schiffer: That's our show for today. We'll link to all the stories we spoke about in the show notes. Uncanny Valley is produced by Kaleidoscope Content. Adriana Tapia produced this episode. It was mixed by Amar Lal at Macro Sound. Pran Bandi is our New York studio engineer. Mark Leyda is our San Francisco studio engineer. Kimberly Chua is our digital production senior manager. Kate Osborn is our executive producer, and Katie Drummond is WIRED's global editorial director.
AI startup Rocket offers vibe McKinsey-style reports at a fraction of the cost | TechCrunch |
techcrunch |
07.04.2026 05:30 |
Indian startup Rocket is betting that the next big opportunity is the part before vibe coding: having AI help people decide what to build. It has launched a platform that produces consulting-style product strategies.
The startup, based in Surat, India, on Tuesday launched its platform, Rocket 1.0, which connects research, product building, and competitive intelligence in a single workflow. The platform generates detailed product strategy documents — including pricing, unit economics, and go-to-market recommendations.
As AI-powered coding tools proliferate — from platforms like Cursor , Replit , and Lovable to features such as Claude Code and Codex — writing code has become significantly easier and faster. “Everyone can generate the code now … it has become a commodity. But what to build is something which everyone is missing,” said Rocket co-founder and CEO Vishal Virani (pictured above), adding that “running a business and just building a codebase are two different things.”
TechCrunch briefly tested Rocket’s platform ahead of its launch and found that it generated product requirement documents in PDF format from simple prompts. These documents resemble consulting-style reports rather than vibe-coding tools or chatbots, which largely focus on features and execution.
However, some of the analysis appeared to be synthesized from existing data — combining known pricing models, user behavior patterns, and competitive insights — rather than based on independently verifiable information. This suggests users may still need to validate outputs before making business decisions. Virani said the platform can offer human support when users encounter issues.
Rocket’s platform generates consulting-style reports based on text prompts given by users. Image Credits: Rocket
The product can also track competitors, including changes to their websites and traffic trends. Rocket draws on more than 1,000 data sources for its analysis, including Meta’s ad libraries, Similarweb’s API, and its own crawlers, Virani said.
Rocket’s subscription plans range from $25 per month for building applications to $250 for strategy and research capabilities, and up to $350 for the full platform, including competitive intelligence.
The $250 plan can generate two to three “McKinsey-grade” research reports alongside product builds, Virani told TechCrunch, positioning its higher-tier offerings as a lower-cost alternative to traditional consulting, which often costs thousands of dollars for similar strategy work.
Rocket raised a $15 million seed round in September from Accel, Salesforce Ventures, and Together Fund. Since then, the startup says it has grown from 400,000 to over 1.5 million users across 180 countries. It also reported an annualized average revenue per user in the $4,000 range, though it did not disclose detailed paying customer numbers. The startup said it operates at gross margins of over 50%, with 20% to 30% of its customers being small- and medium-sized businesses.
Rocket has a team of 57 employees and is headquartered in Surat, with operations in Palo Alto.
Jagmeet Singh
Reporter
The latest in data centers, AI, and energy |
the_verge_ai |
27.03.2026 18:35 |
Massive new data centers are the physical foundation for tech companies’ hopes and dreams for AI. But the rush to expand warehouses full of energy-hungry servers has also kicked up fights across the world over their impact on power grids, utility bills, nearby communities, and the environment.
From audacious plans to launch data centers into space to the latest legal battles over pollution, The Verge has the biggest news and reporting surrounding data centers.
Senators are pushing to find out how much electricity data centers actually use
How the spiraling Iran conflict could affect data centers and electricity costs
Seven tech giants signed Trump’s pledge to keep electricity costs from spiking around data centers
Trump claims tech companies will sign deals next week to pay for their own power supply
Anthropic says it’ll try to keep its data centers from raising electricity costs
How an ‘icepocalypse’ raises more questions about Meta’s biggest data center project
Microsoft wants to rewire data centers to save space
New York is considering two bills to rein in the AI industry
Elon Musk is merging SpaceX and xAI to build data centers in space — or so he says
It’s a new heyday for gas thanks to data centers
Meta is spending millions to convince people that data centers are cool and you like them
The winter storm tested power grids straining to accommodate AI data centers
OpenAI says its data centers will pay for their own energy and limit water usage
Microsoft scrambles to quell fury around its new AI data centers
Communities are rising up against data centers — and winning
Billionaires want data centers everywhere, including space
AI’s water and electricity use soars in 2025
Racks of AI chips are too damn heavy
The scramble to launch data centers into space is heating up
Data center construction moratorium is gaining steam
Data centers in Oregon might be helping to drive an increase in cancer and miscarriages
Google is turning on the gas for its data centers
Tech companies ‘be on alert,’ NAACP says with new guiding principles for data centers
TurboQuant is a big deal, but it won’t end the memory crunch |
the_register_ai |
01.04.2026 22:17 |
0.647
|
| Embedding sim. | 0.7352 |
| Entity overlap | 0.25 |
| Title sim. | 0.056 |
| Time proximity | 0.9566 |
| NLP type | product_launch |
| NLP organization | Google |
| NLP topic | large language models |
| NLP country | |
Open original
AI + ML
Google's TurboQuant saves memory, but won't save us from DRAM-pricing hell
Chocolate Factory’s compression tech clears the way to cheaper AI inference, not more affordable memory
Tobias Mann
Wed 1 Apr 2026 //
22:17 UTC
When Google unveiled TurboQuant, an AI data compression technology that promises to slash the amount of memory required to serve models, many hoped it would ease a memory shortage that has seen prices triple since last year. Not so much.
TurboQuant isn't the savior you might be hoping for. Having said that, the underlying technology is still worth a closer look as it has major implications for model devs and inference providers.
What the heck is TurboQuant
Detailed by Google researchers in a recent blog post, TurboQuant is essentially a method of compressing data used in generative AI from higher to lower precisions, an approach commonly referred to as quantization.
According to researchers, TurboQuant has the potential to cut memory consumption during inference by at least 6x, a bold claim at a time when DRAM and NAND prices are at record highs.
However, unlike most quantization methods, TurboQuant doesn't shrink the model. Instead it aims to reduce the amount of memory required to store the key value (KV) caches used to maintain context during LLM inference.
In a nutshell, the KV cache is a bit like the model's short-term memory. During a chat session, for example, the KV cache is how the model keeps track of your conversation.
Where things get tricky is that these KV caches can pile up quite quickly, often consuming more memory than the model itself.
Usually, these KV caches are stored at 16-bit precision, so cutting them to eight or even four bits reduces the memory required by a factor of 2x to 4x.
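That arithmetic is easy to sketch. Here is a minimal Python estimate of per-sequence KV-cache size, assuming an illustrative Llama-style model shape (32 layers, 8 KV heads, head dimension 128) rather than any specific Google model:

```python
def kv_cache_bytes(seq_len, n_layers=32, n_kv_heads=8, head_dim=128, bits=16):
    """Memory for one sequence's KV cache across all layers.

    The model shape defaults are illustrative (roughly a Llama-style
    7-8B model), not taken from the TurboQuant paper.
    """
    # Factor of 2 = one key tensor plus one value tensor per layer.
    elems = 2 * n_layers * n_kv_heads * head_dim * seq_len
    return elems * bits / 8

ctx = 1_000_000  # a million-token context window
fp16 = kv_cache_bytes(ctx, bits=16)
int4 = kv_cache_bytes(ctx, bits=4)
print(f"16-bit KV cache: {fp16 / 2**30:.1f} GiB")
print(f"4-bit KV cache:  {int4 / 2**30:.1f} GiB ({fp16 / int4:.0f}x smaller)")
```

At a million tokens this toy model already needs over 100 GiB of cache at 16-bit precision, which is why cache quantization matters more than model quantization for long-context serving.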
While TurboQuant has certainly brought attention to KV cache quantization, the overarching idea isn't new. In fact, it's quite common for inference engines to store KV caches at FP8 for these reasons.
However, this kind of quantization isn't free. Lower precision means fewer bits to store key values and therefore less memory. These quantization methods also tend to introduce their own performance overheads.
This is really where TurboQuant's innovations lie. Google claims that it can achieve quality similar to BF16 using just 3.5 bits, while also mitigating those pesky overheads. At 4 bits, they claim as much as an 8x speedup on H100s when computing attention logits used to decide what in the context is or isn't important to the request.
And the researchers didn't stop there. In testing, they found they could crush the KV caches to 2.5 bits with minimal quality loss, which is where the claimed 6x memory reduction appears to have come from.
How does it work
TurboQuant is able to achieve this feat by combining two mathematical approaches: Quantized Johnson-Lindenstrauss (QJL) and PolarQuant.
PolarQuant works by mapping KV-cache vectors, which are just high-dimensional mathematical expressions of magnitude and direction, onto a circular grid that uses polar rather than Cartesian coordinates.
"This is comparable to replacing 'Go 3 blocks east, 4 blocks north' with 'go 5 blocks total at a 37-degree angle,'" Google's blog post explains.
Using this approach, the vector's magnitude and direction are now represented by its radius and angle, which the search giant explains eliminates the memory overhead associated with data normalization as each vector now shares a common reference point.
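As a toy illustration of that coordinate change (emphatically not Google's actual algorithm, which operates on high-dimensional vectors and pairs this step with QJL error correction), a 2-D sketch can store a vector as an exact radius plus a coarsely quantized angle:

```python
import math

def polar_quantize(vec, angle_bits=4):
    """Store a 2-D vector as (radius, quantized angle) instead of (x, y).

    Toy version of the polar idea: the radius is kept exact and only
    the angle is bucketed into 2**angle_bits levels.
    """
    x, y = vec
    r = math.hypot(x, y)                     # "5 blocks total"
    theta = math.atan2(y, x)                 # "...at this angle"
    levels = 2 ** angle_bits
    q = round((theta + math.pi) / (2 * math.pi) * levels) % levels
    return r, q

def polar_dequantize(r, q, angle_bits=4):
    """Reconstruct an approximate (x, y) from the polar code."""
    levels = 2 ** angle_bits
    theta = q / levels * 2 * math.pi - math.pi
    return r * math.cos(theta), r * math.sin(theta)

# "Go 3 blocks east, 4 blocks north" becomes "5 blocks at some angle".
r, q = polar_quantize((3.0, 4.0))
x, y = polar_dequantize(r, q)
```

Note how the magnitude survives quantization exactly while the direction picks up a bounded error, which is the kind of error the QJL step described below is meant to correct.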
In addition to PolarQuant, Google also employs QJL to correct any errors introduced during the first phase and preserve the accuracy of the attention score used by the model to determine what information is or isn't important to serving a request.
The result is that these vectors can be stored using a fraction of memory. And this tech isn't limited to KV caches either. According to Google, the technology also has implications for vector databases used by search engines.
Why TurboQuant won't deliver us from memory mayhem
With a claimed compression ratio of 6:1, it's not surprising that many on Wall Street tied the slide in memory makers' share prices to TurboQuant's introduction.
But while the tech is likely to make AI inference clusters more efficient and therefore less expensive to operate, it's unlikely to curb demand for the NAND flash and DRAM memory used to store those KV-caches.
A year ago, open weights models like DeepSeek R1 offered context windows ranging from 64,000 to 256,000 tokens. Today, it's not uncommon to find open models sporting context windows exceeding one million tokens.
TurboQuant could allow an inference provider to make do with less memory, or let them serve up models with larger context windows. With code assistants and agentic frameworks like OpenClaw driving demand for larger context windows, the latter strikes us as the more likely of the two.
It seems that the industry watchers at TrendForce would agree. In a report published earlier this week, they predicted that TurboQuant will spur long-context applications that increase demand for memory rather than curb it. ®
|
|
|
Starcloud raises $170 million Series A to build data centers in space | TechCrunch |
techcrunch |
30.03.2026 11:00 |
0.647
|
| Embedding sim. | 0.7595 |
| Entity overlap | 0.119 |
| Title sim. | 0.1837 |
| Time proximity | 0.6167 |
| NLP type | funding |
| NLP organization | Starcloud |
| NLP topic | ai infrastructure |
| NLP country | United States |
Open original
Starcloud’s latest funding round values the space compute company at $1.1 billion, making it one of the fastest startups to reach unicorn status after graduating from Y Combinator.
The company’s Series A, which closed 17 months after its demo day presentation, was led by Benchmark and EQT Ventures. It’s another sign of the interest in outsourcing data centers to orbit as resource and political obstacles slow their development on Earth, but the business model depends on unproven technology and significant capital expenditure.
Starcloud has now raised a total of $200 million, and launched its first satellite with an Nvidia H100 GPU in November 2025. The company will launch a more powerful version, Starcloud 2, later this year with multiple GPUs, including an Nvidia Blackwell chip and an AWS server blade, as well as a bitcoin mining computer.
The company will also begin developing a data center spacecraft designed to launch from Starship, the reusable heavy lift rocket being built by Elon Musk’s SpaceX. Starcloud 3, as the spacecraft is named, will be a 200 kilowatt, three-ton spacecraft that fits the “PEZ dispenser” system SpaceX designed to deploy its Starlink satellites from Starship.
CEO and founder Philip Johnston said he expects that will be the first orbital data center that is cost-competitive with terrestrial data centers, with costs on the order of $0.05 per kilowatt-hour of power, if commercial launch costs land around $500 per kilogram.
The challenge is that Starship isn’t flying yet; Johnston says he expects commercial access to open up in 2028 and 2029. That’s the reality facing all the big space data center projects: Powerful space computers will be cost-prohibitive until a new generation of rockets starts launching at a high operational cadence, something that might not happen until the 2030s.
“If it ends up being delayed, we’ll just carry on launching the smaller versions on Falcon 9,” Johnston said. “We’re not going to be competitive on energy costs until Starship is flying frequently.”
“There’s kind of two business models,” Johnston explains: One is selling processing power to other spacecraft on orbit; the company’s first satellite, for example, analyzes data collected by Capella Space’s radar spacecraft. Then, in the future when launch costs go down, more powerful distributed data centers could potentially pull work from their terrestrial counterparts.
That gets at how new this industry really is. When Nvidia CEO Jensen Huang unveiled the company’s Vera Rubin Space-1 chip modules at his company’s annual GPU Technology Conference last week, he didn’t note that none had been produced or shared with the company’s development partners.
In fact, advanced GPUs in orbit number in the dozens, while Nvidia is estimated to have sold nearly 4 million to terrestrial hyperscalers in 2025.
Or consider that SpaceX’s Starlink communications network, the largest satellite network in orbit with 10,000 spacecraft, generates around 200 megawatts of power, while data centers with more than 25 gigawatts of capacity are currently under construction in the U.S., according to Cushman & Wakefield.
Johnston argues that his company is well ahead of the competition, with the first terrestrial GPU deployed in orbit. It was used to train an AI model in orbit, a first, according to Starcloud, and run a version of Gemini. Beyond the performance, Johnston says Starcloud now has valuable data about what it takes to run a powerful chip in space.
“An H100 is probably not the best chip for space, to be honest, but the reason we did it is we wanted to prove that we could run state of the art terrestrial chips in space,” he told TechCrunch. That hard-won knowledge (another GPU, an Nvidia A6000, failed during launch) will influence future designs.
There is a laundry list of technical challenges to be solved, including efficient power generation and cooling the hot-running chips. Starcloud 2 will have the largest deployable radiator flown on a private satellite, and Johnston said he expects at least two additional versions of that spacecraft to head to orbit.
Then there is the challenge of synchronization. The largest data center workloads, often for training, require hundreds or thousands of GPUs to work in tandem. Doing that in space will either require fantastically large spacecraft, or powerful and reliable laser links between spacecraft flying in formation. Most companies working on this technology expect those workloads to come long after simpler inference tasks take place on orbit.
Besides Starcloud, Aetherflux, Google’s Project Suncatcher, and Aethero — which launched Nvidia’s first space-based Jetson GPU in 2025 — are all developing space data center businesses.
The elephant in the room is SpaceX itself, which has asked the U.S. government for permission to build and operate a million satellites for distributed compute in space.
Going head-to-head with SpaceX is a daunting task for any entrepreneur, but Johnston sees room for coexistence.
“They are building for a slightly different use case than us,” he told TechCrunch. “They’re mainly planning on serving Grok and Tesla workloads. It may be at some point that they offer a third-party cloud service, but what I think they are unlikely to do is what we’re doing [as] an energy and infrastructure player.”
Tim Fernholz
Tim Fernholz is a journalist who writes about technology, finance and public policy. He has closely covered the rise of the private space industry and is the author of Rocket Billionaires: Elon Musk, Jeff Bezos and the New Space Race. Formerly, he was a senior reporter at Quartz, the global business news site, for more than a decade, and began his career as a political reporter in Washington, D.C.
You can contact or verify outreach from Tim by emailing tim.fernholz@techcrunch.com or via an encrypted message to tim_fernholz.21 on Signal.
|
|
|
Mistral AI raises $830M in debt to set up a data center near Paris | TechCrunch |
techcrunch |
30.03.2026 12:49 |
0.645
|
| Embedding sim. | 0.6921 |
| Entity overlap | 0.0682 |
| Title sim. | 0.3636 |
| Time proximity | 0.9891 |
| NLP type | funding |
| NLP organization | Mistral AI |
| NLP topic | ai infrastructure |
| NLP country | France |
Open original
French lab Mistral AI has raised $830 million in debt to build a new data center near Paris that will be powered by Nvidia chips, according to reports from Reuters and CNBC.
Mistral first announced plans to build a data center last year; CEO Arthur Mensch said in February 2025 that the company would explore different financing options. It plans to complete the data center in Bruyères-le-Châtel and make it operational in the second quarter of 2026, Reuters reported on Monday.
Mistral did not immediately return a request seeking confirmation.
Last month, the company said it would invest $1.4 billion in Sweden to build out AI infrastructure, including data centers. Mistral said it aims to deploy 200 megawatts of compute capacity across Europe by 2027.
“Scaling our infrastructure in Europe is critical to empower our customers and to ensure AI innovation and autonomy remain at the heart of Europe. We will continue to invest in this area, given the surging and sustained demand from governments, enterprises, and research institutions seeking to build their own customized AI environment, rather than depend on third-party cloud providers,” Mensch said in a statement to CNBC.
Mistral has raised over €2.8 billion ($3.1 billion) in funding to date from investors including General Catalyst, ASML, a16z, Lightspeed, and DST Global, according to data from Crunchbase.
|
|
|
Codex now offers more flexible pricing for teams |
openai |
02.04.2026 10:00 |
0.642
|
| Embedding sim. | 0.7447 |
| Entity overlap | 0.2727 |
| Title sim. | 0.0946 |
| Time proximity | 0.7321 |
| NLP type | other |
| NLP organization | ChatGPT |
| NLP topic | enterprise ai |
| NLP country | |
Open original
Codex now includes pay-as-you-go pricing for ChatGPT Business and Enterprise, providing teams a more flexible option to start and scale adoption.
|
|
|
Okta’s CEO is betting big on AI agent identity |
the_verge_ai |
30.03.2026 15:15 |
0.639
|
| Embedding sim. | 0.7541 |
| Entity overlap | 0 |
| Title sim. | 0.2273 |
| Time proximity | 0.5739 |
| NLP type | other |
| NLP organization | Okta |
| NLP topic | ai security |
| NLP country | |
Open original
Today, I’m talking with Todd McKinnon, who is co-founder and CEO of Okta, a platform that lets big companies manage security and identity across all the apps and services their employees use. Think of it like login management — actually, that’s a great way to think about it because the way most people encounter Okta is that it’s the thing that makes you log in again right before joining a meeting several times a week, so then you’re late for the meeting… Can you tell we use Okta?
Anyhow, all of that is a big business — Okta has a $14 billion market cap. But big software as a service companies like Okta are under a lot of pressure in the age of AI. Why would you pay their fees when you can just vibe-code your own tools? This so-called SaaSpocalypse is a big deal, and Todd recently said he was “paranoid” about it on Okta’s most recent earnings call. So we dug into it, and how he’s putting that paranoia into practice inside Okta — what he’s changing, and what opportunities he’s going after to head off the apocalypse.
The biggest opportunity you’ll hear us talk about is some deep Decoder bait: the idea that it’s not just people whose access and security credentials need management, but also AI agents inside a corporation. This concept has really exploded with the rise of OpenClaw, which came with a ton of security challenges. Can any company keep users, platforms, and data safe if people are just going to buy a Mac Mini, hand their credentials to it, and let OpenClaw do whatever it wants with them? Is simply installing a “kill switch” at the agent level — as Todd suggests — enough?
You’ll hear Todd say that agent identity is something in between a person and a system, which is some of the richest Decoder bait possible, so we spent some time digging into that. It also seems like we are on the cusp of some of the goofiest org chart ideas in history, as people start to manage hybrid teams of people and agents, and I wanted to know how Todd was thinking about that inside of Okta itself.
Like so many of our guests lately, it’s clear that Todd’s a Decoder fan, so this one got deep, about the very nature of building software itself, and what it means to run a software company. That’s right, the Okta episode got emotional. Hang on, it might surprise you. Okay: Okta CEO Todd McKinnon. Here we go.
This interview has been lightly edited for length and clarity.
Todd McKinnon, you’re the Co-founder and CEO of Okta. Welcome to Decoder .
Thank you for having me, Nilay. It’s great to be here.
I’m excited to talk to you. I feel like a real theme of Decoder lately is just me being emotional about the nature of software in 2026. And I can’t think of anyone better to do it with than you, because when I think of emotional software development, I think of big enterprise software CEOs.
Would you like me to soothe your emotions or upset your emotions?
I’m going to start with your emotions, actually. We’re going to get right into your feelings, Todd.
Oh, yeah. All right. I’m really good at talking about my feelings to massive groups of people, so lay it on.
Well, you did. Here we go. We’re going to just jump right into it. A few weeks ago, Okta had earnings. You’re on the call. They asked you about the SaaSpocalypse, which I want to talk about in detail. But this was your response to SaaSpocalypse; this is why we’re starting with feelings. You said , “We are paranoid, and we’re making sure that we’re using all the latest technologies, LLMs, et cetera, to make sure that we have something that’s resilient and secure but has the best features and best capabilities.” This is you talking about, “Hey, agentic software development is real. The idea that our customers would build their own tools instead of paying us for these tools is real. We’re paranoid about it. We’ve got to compete with that.”
That’s a big thing to say. Talk about where you are in SaaSpocalypse because I want to start there, and then I want to zoom out to basically the nature of software in general. But that feels like a big thing for you to say; you need to be paranoid about this threat.
Let’s start with me, personality-wise, and how I operate. I’m very much challenge-driven, and I think a lot of people are in our business and just like, “What’s the next challenge?” And what I see right now in the world is a huge challenge and a huge opportunity. It’s like a huge mountain to climb. And the fundamental level is that I believe strongly that the pie for technology is expanding greatly. The pie of what we can do for people and companies with AI and the common things people talk about, agents, and… This is a massive change, massive disruption. It’s bigger than cloud computing. If you could talk about it, is it as big as the internet? It’s big.
Now, capturing that and leading a company that thrives… Okta has had a decent amount of success, $3 billion in revenue, growing over 10 percent last year, an established brand, and 20,000 customers. We’ve had some decent success. I think the opportunity going forward with all this change and all this disruption is massive. It’s huge. Technology is getting way bigger; there are all kinds of new categories that I think are emerging. For me, personally, it’s an incredible opportunity and challenge to lead the company through this. And to go from what is a mid-size, successful SaaS company to what I think could be one of the most important companies in the world — that’s a huge challenge. It’s a huge opportunity. It’s also daunting because, in some way, it’d be great if things didn’t change that much, our locked-in position was more stable, and we could plug along. But there’s a huge prize. The prize is massive, and that’s incumbent upon us to face this challenge and to go get it.
You’ve talked about this in terms of the pie. You’ve said that the total addressable market for software is growing. I have a lot of questions about Okta in that market as it’s growing. I know you have some announcements about agents, verifying agents, and having a kill switch for agents that I want to talk about. I just want to come back to SaaSpocalypse in general. I understand SaaSpocalypse for run-of-the-mill productivity tools. We use a lot of run-of-the-mill productivity tools here at The Verge ; they’re all fine. And I’m always joking that enterprise software CEOs don’t love coming on the show because…
When I grow up, I want to be run-of-the-mill.
Right. But they’re all fine. You can take one piece of project tracking software and replace it with another, and the idea that you’re going to get anything more than a 5 percent productivity improvement, I think, has always been illusory. Maybe you’ll get some better pricing. The idea that I can just vibe code a Trello and now I don’t have to pay Trello because I just have a Trello… I understand that argument. Okta, to me, has seemed much more insulated from that because you have identity, and you have to do security at a scale that most people can’t consider doing security. There are a lot of reasons why paying you to take that liability on is a good business, regardless of whether I can build it myself for cheaper.
What specifically has you paranoid about agentic software and your customers building their own tools to look like Okta? Because to me, that’s actually a little more opaque.
If you look at what these tools can do, it’s amazing. The Claude Code, Cowork, and Codex and… These are… I grew up as a software engineer, and that whole world is being revolutionized. I’ve built a company as a product developer and as an engineer. And so if you don’t question and look at how you’ve built your own company and realize that the world is changing, you’re just naive. Now, we can talk about the reasons why I think Okta is very well positioned and has attributes of the market and attributes of the product that make it very resilient and hard to replace, but you just have to look at the technology and look at what’s possible. And if you’re not circumspect about what got you here and what your moats are and what the upstart would be doing if they were trying to compete with you, I think you’re just naive.
I think it’s a healthy paranoia. When you look at the business, I think there are the features and functionality of our products. And then one thing that’s maybe misunderstood about what we do, or maybe the buyers understand it, but in general might be misunderstood, is that you can build the features and functions, but the last thing is to connect it to everything. Thousands and thousands of different applications, services, and pieces of infrastructure have to be connected to the last mile. And that always changes, so you have to keep that integrated and you have to make sure it’s always up-to-date with the latest changes of the ecosystem. And so the integration part… And then this other part is that, really, it has to work. It’s mission-critical.
Even if you’re building something that looks like Okta, getting the features to work is 10 percent of the battle. Making sure it works 100 percent of the time takes years and years and years. And there’s also a reputational thing. It’s like, “What are you going to trust?” Are you going to trust the proven solution that’s been out there for years? Are you going to trust something that your team just cooked up? Infrastructure software in general…
And then cyber software, I think, is also very well insulated from people vibe coding it themselves just because you’re talking about things that are purchased on… There’s a lot of brand that goes into it. What cyber company do you trust? What cyber company do you trust to be secure itself, and what cyber company do you trust to be up-to-date on all the latest threats? And then people who are buying cyber tools, they’re going to have to look at their bosses and their boards of directors and say, “What did you pick?” “Oh, we got breached. Well, what did you pick?” “Well, I wanted to save a little bit of money to vibe code it.” The category of security and infrastructure software, I think, is a little bit different from some of the app categories that you were talking about.
There’s a little bit of “no one ever got fired for picking IBM” in there. And then I think more cynically, there’s, “I want a vendor for this stuff that is rich enough for me to sue them if something goes wrong.” It’s in there, I hear it from the industry.
Or the more glass-half-full view would be that it can support me.
Yeah, it’s one or the other. Your job is to have the glass be half-full; I have the other job.
I’m trying to connect the dots between what sounds like a good case for being insulated from the market and what you’re describing as healthy paranoia. There’s a new generation of software tools that will help people build competitors to Okta. Whether those competitors are just the next N+1 SaaS competitor or whether it’s the internal team at a company saying, “We’ll build our own identity solutions,” what’s the mechanism that is leading you to say, “We have to be vigilant”? Will the new generation of SaaS companies just be cheaper? They’ll have fewer people, and they’ll build something comparable to Okta that is just vastly cheaper per seat? Is it that the companies will realize, “Oh, we can just build all these connectors, and Claude Code is going to traverse our intranet and log people in manually”? And maybe that’ll be more costly in tokens, but the front end will be cheaper.
If you have the insulation, what is the mechanism that might be a threat to Okta?
I compartmentalize it into two different areas. The first, and probably the most important, is that the job as CEO is to figure out a strategy, which means which markets you’re going to be in and how you’re going to win in them. And for us, there’s a big new emerging market: AI agents need to log into stuff, and you need a system to keep track of them, define their roles, define their permissions, and govern what they can connect to and what they can do. That’s a big new market, so getting the company oriented on that massive new market is one bucket, which is markets.
The second bucket is how we execute to capture that market. The main theme there sounds basic, but I think basics are important: it’s very clear that, especially in software development and innovation, the technical shift is very significant. The number one thing an organization has to do is turn the dial on how much change it will absorb. In normal operating mode, let’s say you want 20 percent change and 80 percent stays the same; you need to turn that dial up now. You need to change more, whether that’s your team structure, processes, or the technology you’re using. What I tell the team is that it’s got to be at least 60/40, if not more. And then with that, you give them the freedom to experiment with new technology and learn from what’s happening out there.
By the way, I think one of the most important things is that while you have a healthy appreciation for the change and its impact, you can fall victim to believing what you see online or what you hear, because everyone is trying to sell something. Everyone is trying to make their company sound cool and make it sound like they’re embracing the change. When you hear companies, especially big-company CEOs, say, “Oh, AI is writing 90 percent of our code right now,” they’re trying to sell something, whether it’s their own substance as a leader or their own organization’s ability to innovate. You’ve got to take that with a grain of salt and say, “Hey, that’s the art of the possible, but as we change, what are we embracing? What’s working for us? What’s not?” But it all comes back to giving the teams freedom to change. And change is hard. It sounds trite, but as a leader you really have to force it sometimes, with top-down mandates. I like to be bottom-up and empower people. But sometimes, to get change to happen, you have to push it.
Tell me about the change. It sounds very specific that you think the change here is that there’s going to be a universe of agents doing work inside of companies, and they need to be permissioned and controlled, and Okta should focus on that. And you’re not so worried about, “Hey, a bunch of people are going to vibe code their own tools, or a bunch of cheaper competitors are going to come up and disrupt us because they vibe coded a competitor to Okta.” It seems like you’re bracketing that and saying, “That’s not a big problem for Okta right now.”
I think we have the opportunity to win this battle, to be the identity layer for AI agents, and if we win that, that could easily be the biggest category in cyber. Cyber is about 280-ish billion dollars a year, and identity management is, depending on whose number you believe, roughly 10 percent of that. This new agent layer could be the biggest category in cyber by far. Winning that is job number one for our company.
Tell me your calibration on how much of the identity piece of your business it’s acceptable to lose to vibe coding, or whatever SaaSpocalypse people imagine, in order to win the bigger market in agent control. Because right now, the argument is: why would anyone keep paying you monthly or yearly for X number of seats when they can pay a lower fee for some solution that someone has built more cheaply? And once that’s built, it’s done, and you don’t have to pay annually. Why would anyone keep paying you for that if you think the market is bigger for agents?
They’re not mutually exclusive. The attributes we talked about, reliability, trust, integration, capabilities, and whether the vendor you’re going to trust has enough money to support you, are foundational in both of these markets: people identity for customers, partners, and employees, and this new identity type for agents. But I think what’s happening in the world right now is interesting. I’d say organizations are universally aware of the potential of agents, of the agentic enterprise, which is essentially that they want to make things more automated, and they want to enhance their workforce with digital employees or add new digital employees. They’re all clearly aware of this, but they’re getting a very mixed set of signals and a very messy story about how to do it.
There’s a combination of the big platforms, Amazon, Microsoft, and Google, that are going to sell me agents. It’s not even actually clear what an agent is. Salesforce has Agentforce, ServiceNow has agents, every SaaS company is building agents, and buyers are trying to sort through it all. But what they see is a tremendous opportunity to automate things, to basically take the labor budget and divert it into the technology budget, and to make their companies grow faster and be more efficient. And now what they’re looking for is, “Okay, what are the foundational building blocks to wire that all together and make it work? What are the rails?” That’s where the big opportunity is: to take the first steps on this, which could be the biggest category of cyber.
When you look at things like OpenClaw, which obviously had a huge moment, and everyone is buying Mac Minis so they can air-gap OpenClaw from their production machine, and then they’re just giving OpenClaw all of their logins and passwords on the Mac Mini. I look at that, and I’m like, “You’ve accomplished nothing.” Right? You’ve given it all the access over here, and maybe it just doesn’t have your file system with your photos on it, but it still has all the access to the tools. But that’s where the excitement is, right? It’s living on the bleeding edge of danger, and saying the agent running on this machine can run overnight and invent its own tools and figure out solutions to problems.
When you are looking at putting rails on that, it feels like you’re actually going to foreclose some opportunities, because we don’t yet really know how the agents are going to work. How did you evaluate what was going on with OpenClaw and the way people were giving it permissions as that economy developed? I don’t want to call it an economy. How did that culture organically develop, and how is it informing your thinking about building for agents at Okta now?
The first thing is that it’s the ChatGPT moment for agents, the way ChatGPT was the Netscape moment for AI. It’s very significant. And the biggest significance, I think, is that it opened everyone’s eyes to the art of the possible. At my son’s soccer game, the parents were talking about OpenClaw. And these aren’t tech people; they’re just talking about how they’re going to automate all their tasks. These people are using it in their personal lives, and they’re consumers, they’re IT buyers, they work at companies. It’s a really eye-opening and definitional thing about what an agent can do and what it can be.
As you mentioned, the rails are needed, and this is a tension. When you get something like an OpenClaw and you try to experiment with it and play around with it, you say, “Oh, it’s really not that interesting unless it has my data, unless it’s connected to everything.” And this is exactly what every enterprise is struggling with. It’s like, “Hey, this stuff really needs to have my data, my 50 years of sales inventory, my customer data, and my marketing data. Once it’s all combined, these agents and this agentic layer can do interesting things.”
The rails we’re putting in place start with something that sounds basic: just giving enterprises a list of their agents. It sounds simple, but they need a list of the agents they have, and then a system of record and a list of the agents they could use. What is Salesforce doing? What is ServiceNow doing? What is Claude doing? What agents do they have? And then, okay, what are they connected to? And making sure that we control and secure what the agents are connected to, because, again, the tension is between more and more data and more and more connections.
This is, by the way, why companies like Palantir, Snowflake, and Databricks are doing so well, because what they allow companies to do is, instead of having to actually connect their agentic enterprise to all these separate systems, they pool it into one data warehouse. That’s one model; you can pool it all into one data warehouse and run the agents on that. But I think the longer-term, more scalable model is that you actually have the right permissions and the right access tokens for the agents to access the data directly.
When you go back to the example of OpenClaw, it’s a mindset. Everyone knows what these things can do now, and you have to facilitate access; you have to facilitate making sure that these connections are made in a secure way, in a way they can be understood and monitored. And when things go too far, you can pull them back. And as you experiment in the lab, you can say, “These are the connections we need. We should add more here. We should change this. We should filter this permission.” That’s what companies have to do, and those are the rails we’re trying to put in place.
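Boiled down, the “rails” described here amount to a system of record: a list of agents, the systems each one may touch, and a way to pull access back when things go too far. Here is a minimal sketch of that idea in Python; every name in it is invented for illustration, and this is not Okta’s actual product or API.

```python
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    """One entry in the system of record for agents (illustrative)."""
    agent_id: str
    owner: str                                      # human or team accountable for it
    connections: set = field(default_factory=set)   # systems it is allowed to reach
    enabled: bool = True

class AgentRegistry:
    """Sketch of the 'rails': register agents, grant/revoke connections,
    check access, and disable an agent outright when needed."""

    def __init__(self) -> None:
        self._agents: dict[str, AgentRecord] = {}

    def register(self, agent_id: str, owner: str) -> AgentRecord:
        rec = AgentRecord(agent_id, owner)
        self._agents[agent_id] = rec
        return rec

    def grant(self, agent_id: str, system: str) -> None:
        self._agents[agent_id].connections.add(system)

    def revoke(self, agent_id: str, system: str) -> None:
        self._agents[agent_id].connections.discard(system)

    def is_allowed(self, agent_id: str, system: str) -> bool:
        rec = self._agents.get(agent_id)
        return bool(rec and rec.enabled and system in rec.connections)

    def disable(self, agent_id: str) -> None:
        # "When things go too far, you can pull them back."
        self._agents[agent_id].enabled = False
```

The useful property of even a toy registry like this is that every access check funnels through one place, so connections can be monitored, narrowed, or revoked without touching the agents themselves.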
When I said this was going to be an emotional conversation on software development, the nature of our relationship to databases is at the very heart of that existential crisis that I feel every week on this show. Let me just get your answer to this directly. It sounds like you’re saying SaaSpocalypse might be real, but it’s not real for Okta in the way that most people think SaaSpocalypse is real.
I think what people miss is that the pie is getting much, much larger. A few things are true. Everything is getting bigger. If you look at the amount spent on software, adding up infrastructure, SaaS, and the hyperscalers’ software, it’s roughly $1.2 trillion. If you look at the people side, the IT services market, it’s about $1.8 trillion. The markets are getting bigger, we’re going to be spending more of that money on software, and the pie is growing. That’s one thing that’s true.
The second thing that’s true is that every piece of technology in the stack, whether it’s SaaS apps, devices, OSs, or infrastructure, is going to get agentic features. They’re all going to do more things on their own, they’re going to be able to talk to each other more, and they’re going to optimize for agentic use.
And I think the last thing is that there is a new layer, the digital worker layer. I’m sure some of the existing companies are going to make the leap, and there will be real digital workers coming from Microsoft, Salesforce, and Amazon. But I think it’s more likely that they’ll come from companies that weren’t born in the legacy way of building an app. When you grew up building an app in a certain functional silo, it’s hard to build a digital worker, because digital workers need to go across different things; that’s why they’re called workers and not apps. So it’s really hard for companies that have focused on collaboration, HR, or one silo to say, “Hey, now my digital worker really can span all these silos.” Because if you look inside those companies, the whole org structure and the politics of the company are built around someone owning one silo, so it’s very hard to break through and go broad.
Anyway, I think everything is getting bigger, a lot of the apps will have agentic features, and there’s a new layer of digital workers. Now, back to your question: what’s going on with the SaaSpocalypse? The reality is there will be some losers, some companies will be disrupted, and there’ll be new people taking over categories that are now… But that’s back to challenges and making it fun. That’s what fires me up, and I think it fires up a lot of other people, too.
You have brilliantly opened the door to the Decoder questions by talking about org charts. I actually think we’re on the cusp of some of the weirdest org charts we’ve ever seen, but tell me about Okta.
Talking about change, and changing more: one of the hardest things about this whole thing, for everyone, is that a lot of what worked in the past, how you got promoted, and what you built your career on is being invalidated. We learned for 30 years, “Oh, this is how org charts work.” A lot of that is probably different now, so it’s hard for people to adjust.
Tell me about Okta. What was your org chart in the past? You founded the company; I’m sure you’ve gone through many iterations of it. Where are you at now? And as you talk about changing the balance of change to the company, how are you changing your org chart?
I think the guiding principle is to give great people an area where they can be great. It’s really a people-driven org chart: reward people, promote people, bring in new people, and give them an area that can excite and motivate them. It’s people-centric. The second principle is that, where possible, you try to cluster things to minimize communication paths and let people be more autonomous in small teams. I’ve found that’s pretty hard. Unless you have very distinct, separate business units, almost separate companies inside your company, it’s pretty hard to cut down on the lines of communication. You can do it, but there have got to be lines of communication somewhere, and no matter how you slice the org, you’re just moving around where people have to cross org boundaries. But you do try to take that into consideration.
And then beyond that, a lot of the things people try to do with org charts, whether it’s getting people aligned on goals or building a culture that ships things quickly, are really not org chart things; they’re management things, leadership things. Instead of moving the org around all the time, your time would be better spent making sure you have the right management team and the right leadership team to instill those cultural elements. Rather than taking your people team and telling them to move stuff around to create a more nimble culture, you probably should just get the right managers and instill those values that way.
This is my joke on Decoder: if you tell me the structure of your company, I can tell you 80 percent of your problems because the tensions just exist in certain structures in predictable ways. And it’s that last 20 percent, which is priorities, leadership, and management. It sounds like you’re pretty functionally structured, but how is Okta actually structured? Are you structured by business line? Do you just have a crack AI team that’s off in the corner? How does this all work?
On the go-to-market side, it’s functional. On the G&A side, it’s functional. On the R&D side, it’s by platform. We have two platforms, the Okta platform and the Auth0 platform, and R&D is organized by platform.
The other question I ask everybody who comes on Decoder is about decisions. Again, it’s always great to have a founder because your frameworks change as you come up with a company. How do you make decisions? What’s your framework, and how has that changed over time?
We’re doing an introspection here. I love it.
I told you it would be emotional.
Yeah, you did.
This is Decoder. Decoder is just therapy for me personally. At this point, you can tell what my problems are by the questions I ask.
You’re like casting them out amongst the guests. It’s interesting. I’d worked at Salesforce, I had a decent-sized team there, and I felt like I was very decisive. I was like, “We’ve got to do something, here are the options, decide.” And then I started Okta, and I found something interesting: my decision-making process slowed down. When I thought about why, I realized that at Salesforce, my boss was always a safety net, ultimately. If I were going to make a bad decision, there was theoretically a boss to stop me. But when I started Okta and the company started getting successful, my decision was the decision, and I had better think about it and get it right. And so it slowed down.
And then the company got bigger, and we got into this phase where we went public and got close to a billion dollars of revenue. Then I felt like maybe I needed more input, and I really needed to get expert advice on a lot of things. And what I realized over those years is that my instincts were still pretty good, and I probably should trust my instincts more. And so I think that’s the mode I’ve been in for the last three years. Yeah, the company is bigger than it’s ever been. I’m managing a company that’s bigger than I’ve ever managed by definition, but I think I’ve been leaning more into my instincts.
To put more detail on that, I think two things are very important. One is that you have to decide which decisions to make. That’s really important. There are a bunch of decisions I shouldn’t be involved in and shouldn’t be making. But the inverse is super important: the ones I am making, I’d better focus on, concentrate on, and really get right. And for me, doing that effectively means having a detailed grasp of what’s going on, being in the details. At this scale, it’s hard to know every little thing, but you can dive into areas and gather enough detail throughout the year so that when it comes to making the big decisions you’ve narrowed down and focused on, you can use those details, use your judgment, and trust your instinct to make good, high-quality decisions. It’s the most important thing I do: deciding which decisions to make and getting a high success rate on them.
Put this into practice for me. The big decision we’ve been talking about is whether Okta is going to chase the idea of being the framework for agents in the workforce. That’s a huge market. It is so big that maybe you’re not as worried about SaaSpocalypse as some of the other enterprise CEOs that I talk to, because the market is going to grow so big and we’re going to force-change the company from the top down to make sure that the rate of change is higher and we’re all focused on this opportunity. How did you make that decision? Did you stare at the ocean for a while, and it came to you in a lightning bolt? What was the process there?
I think the high-order bit there is recognizing a world where everything in the stack is going to change. And I think it’s similar to when I started Okta. You never want to exactly follow the past, because history doesn’t repeat, it rhymes. But I remember in 2009, I was looking at the world and saying, “Hey, there’s going to be a cloud version of everything in the stack, and what are the big unique opportunities there?” And what’s happening with agentic, call it agentic, is that everything is going to be revisited in this agentic world, whether or not current solutions are going to have agentic capabilities… It’s crazy. Like AWS. AWS is the infrastructure business, the most unassailable business. That market, with all the changes with agentic and people building agents and running models, is up for grabs, which is crazy.
With all this change, you look at what’s going to be required, and you see that the demand for these connections between all these agents and wherever they’re running is going to be massive, because there’s going to be this onrush of agentic capabilities. There’s going to be new stuff built, there are going to be native vendors that come out of nowhere and take market share, and there are going to be new markets. So it’s a macro thing, but then it’s, “All right, what do you know about the details of your company, Todd? What are you guys good at? You’re good at building something that scales, something that’s reliable, something that connects to a lot of different systems. How can you position yourselves in that new market?” Those are the big essential things; that’s the bet we’re making.
Take me inside the moment, though, when you’re realizing this happens. Did you write an email? Did you open a Google Doc? Did you just dictate to ChatGPT and say, “Fire off an email from me, agent.” How did that actually work at the company?
Last year, I was in the process of meeting all of our 100 largest customers in person. And the purpose of the meetings was that I wanted to tell them about our vision of this unified identity platform, where we’re the only ones in the industry that have all these capabilities across customer identity, governance, and privilege. And at the same time, the teams were working on agent identity. And in these meetings, I would pitch what I was talking about, and then there’d be interest in, “Oh, we should look at this. We didn’t know how far along you were.” And then I started throwing in this agentic stuff at the end of the meeting. And whenever I would get to that, the people in the meeting would just stop, and they’d be like, “Wait, talk about that some more.”
And that kept happening, until we were 25, 30, 40 meetings in, and I flipped it around. We would start with the agents and the new identity type, what customers were thinking about doing with agents, how they were seeing the potential of the digital worker, and all the confusion, and we wouldn’t get to the other stuff. I remember our big conference in the fall was the last vestige of the old pitch, followed by the agents. And after that conference, I just said, “Listen, we’ve got to flip this around. People want to hear about the agents, that’s the direction they’re going, and that’s what we need to pivot to and totally focus on.”
All right. Let me ask you my crash-out questions about all of this. Here’s my first one, and you’re a great person to ask because you build a lot of software. You’ve built a company around building very bespoke, very complicated software, and you’re trying to sell a lot of software to people who, as you said, would like to replace labor with technology. There’s a lot there, and I’m looking at the state of the art in AI right now, I see some cool stuff happening, and I find myself constantly wondering: can the LLM technology we have today, which is the foundation of all of these AI systems, bear the weight of our expectations? Can it actually, on any reasonable timeline, do all of the things that people think it can do?
Because I can see it doing some things, and then I see it just hit walls over and over again. And I say, “Well, if it’s brittle, people are not going to adopt it because that brittleness is exactly where you want a human being to just be available to overcome whatever boundary the AI is going to find for itself.” And I can give you examples, but I’m curious if you see that broadly and if you think the technology can actually develop to the point where the market becomes as big as what you’re describing.
Absolutely, the technology can develop. I think there’s a lot of wild extrapolations going on right now, but I think that even if you don’t meet the wild extrapolations people are talking about, the market is still massive. And I think it’s going to take a lot of innovation, good product work, good engineering work, and good process work to make sure that we can achieve these benefits even though it’s not some wild extrapolation of some magic LLM that can do everything in the world.
I see one example. Every software developer I know, especially the senior ones, who are like, “I’m now just describing software.” I’m just writing-
Yeah, that’s a great example. That’s a great example. Now, I believe that is very real and very powerful. But I also believe that there are going to be more software engineers in five years than there are now. And the reason I believe that is not that I think those people are wrong, but, first of all, there’s just way more software that we need to build that can now be built. And two, software engineers are going to be figuring out how to make it work at scale, how to make sure that systems can be maintained, how to make sure we understand what was actually built and can modify it for the next way…
No one has ever maintained an agentically developed system for five years. No one has ever figured out how to make it scale. No one has ever figured out… That’s where all the work is. And when you combine that with the idea that we’re going to build 10 times more software, that adds up to more people being required to do it. I think both can be true.
Where are those people going to learn how to do it? You’ve already described how the traditional career path and the traditional org chart are breaking down. I think Meta announced that one manager will now oversee 50 ICs. When I say we’re on the cusp of some wild org charts, that’s what I mean: some very strange corporate structures are going to blossom here. If the problem is that no one has ever maintained an agentic system for five years and we need more developers to do it, where are all those developers going to learn the skills to evaluate the code that agents are writing and deploying, and to say, “Okay, you got it wrong. Here’s how you need to maintain it”?
I think it’s maybe not what everyone says, because people like to extrapolate and say everything in the world is changing, the education system is going to change, everything is going to change. I think people will largely learn it where they always have, like in college. We’ll still teach computer science; it’ll just be different. Fifty years ago, we didn’t teach modern compilers; we taught machine code and assembly. Now we’ll teach how to coordinate agents and how to architect systems. You’ll probably still take some Java development classes, the way I took machine code classes in college to understand how it really works under the covers, but you’ll have to learn the new way. It’s modernization; you’ll have to take on new challenges. And I think it’ll be better, because we’re going to learn how to build stuff at scale, not just in terms of the amount of load it can handle, but how to build a large, complex system at scale. Learning that in college, learning that on the job, and people who are early in their careers leveling up.
There’s also this narrative out there that “Oh, we don’t need any entry-level developers anymore.” I’m very, very… That’s a bad mindset to have because, first of all, those are the people who are probably most open to doing things differently; they’re the least set in their ways. I think entry-level folks will learn how to use these tools and command these workflows to do things at scale in a way that people who learned 10, 15 years ago didn’t.
When I think about the value of agents going out in the world, as you’ve described, they need access to a lot of data. The notion is that my company has a bunch of disparate databases and that I should hire an agent to go look at all those databases, put them together, and use the software. The thing that gets me about that every time is the idea that they’re going to build software, because I’m not sure the agents are building software for anything but other agents to use. At some point, that software just gets very specialized and very narrow, and it’s access to the databases that becomes the most valuable thing.
One of our own designers here at The Verge heard I was talking to you and said to me right before I came over, “All software development in 2026 is just calibrating the interface between your brain and a database.” And right now, all AI development is like, “Would you like to just chat with this database?” The answer in the enterprise appears to be yes: “Let me just talk to my analytics database directly like a person, and it will give me some insights.” The answer in consumer maybe is no; Google Photos just walked back its AI search because it turns out people prefer the regular search. I don’t know which one is going to win out over time, or where habits will change across work and personal life, but the notion stands that the database is the important thing and that’s where the value is, because anybody can ask an agent to make up a bespoke piece of software to do some business function.
Doesn’t it seem likely that the database vendors will just raise their prices, increase the barriers to access, or find other ways to extract more value from having that data? Because that’s what all the agents really need access to.
Well, I think there’s data, and then there’s intelligence. And I think a lot of the intelligence has been codified in the application. The raw database is not that helpful. When you say you want to talk to the database, what you’re really saying is you want some kind of analysis or intelligence done by something, you don’t want to have the ones and zeros and gigabytes of data coming at you. You’re really talking about intelligence.
And that’s the big debate about SaaSpocalypse: who’s going to do that intelligence? Is it the app vendors we have now? I mentioned the data warehouse companies, Databricks, Snowflake, and Palantir; essentially, they’re selling some kind of intelligence, because the valuable part of their business is not the ones and zeros. The question is, who’s going to do the intelligence? I think the application companies are going to add some to their capabilities, and there are going to be new ones, including ones where that intelligence actually becomes work, not app work, but work people would have done.
Again, when I say I’m having an existential crisis: as a tech journalist, I have understood software in one way for my entire career. It’s been a pretty good career, because the software industry and the tech industry have grown so fast in the 15 years since we started The Verge. But every conversation I’ve had on Decoder over the past few months is with the CEO of some Web 2.0 company that put a beautiful mobile app interface on top of a database; that thing felt like the application, and they built huge businesses on top of it. You can describe this in all kinds of ways. We just had the CEO of Zillow on. Zillow is just a beautiful interface to a database, and that’s a really good business for them. I’m asking: if you have agents and you say, “Go find me a house and order me a sandwich,” are you going to end up in a place where the agent might just want to use Zillow, or might cut Zillow out and go directly to the underlying database?
Or Zillow might build the killer agent.
Or Zillow might build the agent. And I’m just not sure how any of that plays out because what you’re really doing is unbundling the data and the intelligence that acts upon the data, and the interface to that data, into three very different things. And everybody still wants to make money and not go out of business. You’re sitting right at the center of it, you’re providing access to everyone. How do you see that playing out right now?
Well, I think the connections are very important. A different way to frame what you’re saying is that there’s an unbundling into a data layer, an intelligence layer, and a front-end layer, but what’s also happening is that it’s all getting more connected. We think of an app, a database, and a user interface as one thing. But as that unbundling happens, all the apps that you thought were in various silos are connecting to each other, because there are agents on top of them connecting all those silos. The apps themselves are becoming more agentic, and for Okta as a company, this is why I’m so excited about agentic identity and these guardrails we’ve talked about.
It’s also why this needs to be standardized in the industry. We have pretty good standards now for how single sign-on works, for how that interaction flows between you, your browser, your phone, and the applications, but there are no good standards for how agents connect to all the other systems where they need to get their data. So, there’s some standardization that’s required here, too. But zooming out, it’s like, “Isn’t it exciting?” It’s such a challenge. It’d be much easier if things had just stayed the same, and we could keep to our own little lanes, and our success would be more assured.
I agree it’s exciting, especially because I think we’re going to see a wave of new companies and new ways of thinking. And certainly we’ll see new ways of computing, which is why The Verge exists. We were built around the concept that mobile phones would be important, which, when we launched the site, was not obvious. People were like, “What are you talking about?” It’s hard to even say now, but that was a real thing we said that got question marks around it.
What I would temper that with: when I have CEOs on the show, they say, “Companies are interested in replacing their labor budgets with technology budgets.” That is a pretty huge threat. When we talk about how much work will be automated by running agents and doing intelligence, one, I wonder who will be spending all that money if no one is making any of that money. And then, very importantly, and this comes back to my question about whether LLMs can do it, I wonder if any new ideas will be generated in that process at all if we’re just going to automate our way into something that seems pretty boring. We’re just going to run a bunch of business logic, and no one at the bottom who is actually operating that business logic will think, “Oh, I could do this 10 times cheaper if I start my own company,” and go start a new company. There’s something about all of that, and I hear this from our audience, that explains why AI polls as badly as it does, even though the opportunities look exciting.
Well, there’ll be a wave of people building agentic systems to do the jobs people do now, or to help people do those jobs, and then there’ll be another wave of things automating processes that weren’t possible before. We’re still in the early parts of that second phase, where we’re thinking, “Hey, we could build this new set of digital workers, and we’re going to get productivity.” We really haven’t gotten to the point where we question what the process in all these workflows should be if it could just be agentic from the start.
Okta has announced a blueprint for the agentic enterprise; it’s basically got three big pillars. One is how to onboard agents as an identity, which I’m very curious about, and how you think about the difference between an agent identity and an actual person. Two, standardized connection points, which you’ve talked about a little bit. And then lastly, and this one is great, a kill switch in case your agents go rogue.
Talk to me about the first one. You want to create a new identity for agents in the workforce on your network. What does that look like? How is it defined differently from an employee or a person?
Well, agents are a new identity type. They have some attributes of a human identity and some attributes of just a system; they’re basically a hybrid of both. So from a definition perspective, it’s pretty simple. I think where it gets interesting is that it becomes a map that centralizes the list of agents from all your vendors. It can represent agents from all the big platforms. It gives you this central way to keep track of it all. And that’s what companies are struggling with: they hear all the announcements, and they’re very excited about this. They just need a place. “Hey, bring it in centrally and let me see what I have.” Some of these agents are one-to-one with people. Some of them are a set of multiple agents that work with one person. Some of them are totally headless, just doing their own automated thing, or they need a human in the loop. And you can start to organize things that way.
But it’s all framed in this concept of mapping across different silos. You have agents you’ve built yourself, you have platforms you’re using like Amazon, Microsoft, or Google. You have big apps you’re using, like Salesforce and ServiceNow. It lets you centralize all that in a way that doesn’t lock you into one of those silos. And then, as you said, it can help you say, “All right, all these things unequivocally need to connect to more things. And I can control where they connect to, when they connect to that data warehouse, what permissions they have in that data warehouse, and then across all the different various technologies.” Then, as you said, stuff is going to go wrong, and there’s going to be issues, threats, and prompt injection. And when that happens, it gives you the ability to essentially pull the plug, take the connections away in terms of like, “Oh, this agent is doing something we didn’t expect. Now, what we can do is we can pull away its connections.”
How do you detect whether it’s doing something you didn’t expect?
We don’t have a magic solution to that because it depends on the point of the agent, and that’s dependent on the person who wrote the agent and the system it came from. But we’re working on standards for people to raise that issue, from a technical sense, like raise an alert and have the other elements of the system respond to that.
Is the kill switch just we’re pulling your access, you’re fired, get your stuff, and go?
It’s pulling the access to everything the agent can access, not access to the agent.
Right. It’s just saying we revoked all your passwords.
Shut it down. Yeah, exactly.
You’re out of the system now.
It’s almost like you would take a machine off the network.
When you say that the agent identity is somewhere between a person and a system, go into that in more detail. What specifically do you mean?
When you think about having a system that controls what something has access to, a lot of it is very similar to a person: just as you would give a person access to applications, and inside those applications you would say, “Here’s their role, here’s their group, here’s their profile,” that’s a lot of how these agents are being built and modeled. The reason it’s not like a person is that you have an on-behalf-of relationship between the people and the agents: sometimes you want to take the identity of the person and pass it to the agent and have it use that, and sometimes you want the agent to have its own identity, with the systems it talks to doing their permissions based on what the agent is, and then it goes back to the person as a human in the loop.
There are different patterns, so if you actually look at the physical directory of agents, some of the elements are very much like a person. Others exist because these agents can act on behalf of people, or they can be connecting to other agents, and those are more like systems than people.
When you look at how the agents operate, you can go look at the chain of thought in any one of these systems; a lot of times, they’re just talking to themselves in weird ways. You’re provisioning identity, and obviously Okta doesn’t have to think about identity in the most deeply philosophical ways, but Anthropic is very happy to hint that Claude is alive. When you think, “Okay, I’m a provider of identity to these systems that are a hybrid between people and something else,” does it ever occur to you that they might be reasoning in a way that is more human, or that you need to address that somehow in the architecture of how you give them permissions?
We’re pretty pragmatic about it. We know that the behavior of these systems is non-deterministic, and it’s all about getting the balance right between giving an agent flexibility in what data, systems, and operations it can access, and having the ability to rein it in when it goes too far. Ultimately, that’s the right way to balance the effectiveness of these systems against the risk. There’s no free lunch: you have to give an agent the data if you want it to be effective, and if you have zero tolerance for non-deterministic behavior, you can’t give it the data and you can’t give it the permissions. That’s the balance we’re helping customers strike.
How do you think about this? Okta sits in the middle. You were talking about Salesforce, which has its own agents; there are other vendors that have their own agents. They are not going to want one another’s agents working across their databases. This comes back to what I think is the central challenge here, and the reason why something like OpenClaw was able to become so powerful so quickly: it had nothing to do with any of those companies or platforms. It was just clicking around the browser as though it were an actual person.
It was like a cannon shot out of nowhere. Yeah. Yeah.
Right. And it was because there was no security built into it. And instead of acting on behalf of a person, it just represented itself as a person, and it was off to the races. And Salesforce can’t keep an actual human user from using a different system or orchestrating in their own head, right?
Well, when you build the agents inside the corporate network, you can absolutely do those things, and Salesforce can absolutely write a terms of service that says, “We don’t want the agent from your rival vendor using our system as well.” Are those just politics? Is that negotiation? How is that going to work?
I think there’s only one answer: customers. Customers will have the leverage eventually. And if the customers in a market mechanism don’t have leverage, the government will step in and do antitrust. Do you know why we have a software industry? Because customers finally got fed up with IBM and said, “You have to sell software, operating systems, and applications independent from the hardware.” This was 50, 60, 70 years ago. IBM was like, “There is no software, there are no applications, there’s this IBM box, and you get it, and we are technology.” Customers wanted a choice, and finally the government stepped in and said, “You’ve got to split it up. You’ve got to have operating systems, you’ve got to have hardware, you’ve got to have software.”
And so I think a similar thing, it’s, yeah, of course… Every big vendor that’s trying to protect their entrenched things, whether it’s Microsoft with their new bundle where they’re trying to lock everyone in, they’re going to say, “It all has to be on our thing, and you can’t use other agents against our agents because our agents are better because they have our data and our workflow.” And ultimately, it’s going to be customers that demand change, and if there’s so much monopolistic lock-in, then we have to rely on regulators to come in and fix it.
Well, I do think this is history that you’ve just made. You’re the first CEO of a multi-billion-dollar enterprise software company to advocate for vigorous antitrust enforcement at Decoder, so I’m just going to hold that close to my heart. I do think-
If the market doesn’t work, customers can’t force the choice.
I do think the pre-Reagan antitrust environment that led to IBM being unbundled is very different from today, but we will set that aside.
But I did impress you with my historical reference.
It was very good. Again, the reason I didn’t ask my question correctly is that I’m very surprised you went to antitrust. That doesn’t usually happen on the show. Isn’t there going to be just some weird pricing war in the middle of all that, where Microsoft says, “Sure, let your other vendor’s agent into 365. We’re just going to charge you a massive access fee to do it”? And…
Yeah, I think that’s very likely. Yeah.
Do you see that playing out now, or do you just see it on the horizon?
Not yet. It’s still very early. What is happening now is that people are just getting familiar with, call it, the siloed agents. They’re just getting familiar with the agents in Microsoft or the agents in Salesforce. We’re not really at the phase yet of multi-silo agents, agents that can go from stovepipe to stovepipe. In some cases there are, but that era is still ahead of us. And I think as you get deeper into that era, some of these issues will become more significant.
And again, just to bring this back to OpenClaw, which I think most of the audience is probably most familiar with, that is the promise of that system. That’s why it lit everyone’s brains up because it was running from system to system, doing some logic, and coming up with some outcomes. Again, the problems that-
The thing about that, and a lot of these trends and ideas, is to remember that no one cares about the infrastructure. Well, that’s obviously a dramatic statement; I’ll explain what I mean. People care about the app in the sense that they care about what it can do. The reason OpenClaw was such a lightning-in-a-bottle moment is that people saw what was possible; they saw what it could do. Now, the fact that it had to do that by connecting to all these systems, and it required access, and there were security issues, that’s infrastructure. Once people’s mindset gets set on what’s possible, it’s up to the industry to figure out how it all works under the covers. But people care about what’s possible in the apps. And I think you’re going to see it ripple through. As I said, I thought it was the ChatGPT of agents, and it’s very exciting.
You’re saying now is the time to build the guardrails up to make sure these actually work.
Exactly.
Can I ask you about the flip side of that? The promise of agents broadly, AI maybe broadly, is that we will remove these intermediaries. The thing I keep saying is that your computer will just go access the databases all on its own, and you don’t need these app intermediaries or whatever, and we’re going to reshape the app economy.
Then I look at how there’s a bunch of scammers online who are just setting up fake hotel service numbers, calling grandparents, stealing bookings with AI receptionists by just doing SEO hustles, and collecting pennies. And Okta has a role to play there, too, by saying, “Okay, this is fraud, this is a scam. You shouldn’t hand over your identity here.”
I’m not sure anyone is paying attention to that, but I see it ballooning every day, just AI-powered scams, frauds, and identity theft. The idea that someone is going to call me and verify me by voice is under threat by AI in very specific ways. How do you see the flip side here of making sure that the core business that Okta is in, which is making sure it’s a real person doing the thing they’re supposed to do at the right time, isn’t just totally upended by the amount of AI-powered fraud that’s occurring?
Forty percent of our business is authenticating and validating customers logging into customer websites and mobile apps, and this area is changing a lot with AI as well. What you’re seeing is that offline identities, driver’s licenses, passports, are rapidly digitizing. I think it’s coming at a great time, too, because it gives us something to offer people who really want to do a better job differentiating between agents, OpenClaw, bots that log into their sites, and real people. As offline identities digitize, people have mobile driver’s licenses, and the smartphone wallets are getting pretty capable now; you can do fancy things. Just like you do Apple Pay, you can do biometric authentication on your mobile driver’s license, and that becomes a very powerful thing to present to a website to actually prove you’re a person, in a way that’s better than was possible before.
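A toy version of that presentation flow might look like the following. A real mobile driver's license uses issuer public-key signatures under the ISO/IEC 18013-5 mDL standard; the shared-secret HMAC below is only a stand-in so the example stays self-contained, and every name in it is invented.

```python
import hashlib
import hmac
import json

# Toy credential presentation: an issuer signs a personhood claim, a website
# verifies it. Real mDL deployments use public-key signatures (ISO/IEC
# 18013-5); this HMAC with a hypothetical issuer secret is purely illustrative.

ISSUER_KEY = b"state-dmv-demo-key"  # hypothetical issuer secret

def issue_credential(holder: str) -> dict:
    claims = {"holder": holder, "is_person": True}
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def verify_presentation(cred: dict) -> bool:
    payload = json.dumps(cred["claims"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cred["sig"]) and cred["claims"]["is_person"]

cred = issue_credential("alice")  # a real person presents a valid credential
bot = {"claims": {"holder": "scraper-7", "is_person": True}, "sig": cred["sig"]}
```

A bot replaying someone else's signature over its own claims fails verification, which is the property a site needs to tell agents and bots apart from people.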
It’s a big deal. People need to really know in certain use cases when it’s an agent, when it’s a bot. It’s like this bot problem is not new; it’s an old problem on Twitter/X, and Elon Musk is on trial for talking about bots and how many bots there were. And now I think with AI, it’s becoming supercharged. I think with what we have with these national IDs, passports, and mobile driver’s licenses being digitized, we might have a shot at actually bringing some sanity to that world.
There are some real debates there about privacy, about surveillance, about-
Yeah. What does it mean to actually digitize identity from a credentials perspective?
Yeah. Are you guys in that mix? Is that something Okta is actively thinking about, or are you waiting for that to sort itself out politically?
Well, governments are deciding, and governments are deciding that they want to digitize, they want to issue these passports and these national IDs. In Europe, there are certain standards across the EU. In the United States, it’s very much at the state level. Our customers are really excited about it, and we’re giving them all the capabilities to take advantage of this stuff. Without being too prescriptive about how they should do it, we’re just trying to equip them to meet all the regulatory requirements and accept all the identities and digital formats that their users and citizens want. So it’s a big part of our future, and we’re working hard on it.
Right next to that is a big fight over age verification in the United States on the app stores and who gets to use what apps. Discord just had a big controversy because they went to an outside vendor. People had a lot of feelings about that outside vendor, and Discord rolled that back. Are you seeing any of that controversy come your way around age verification?
We work with the vendors that are trying to log people in, and they want the best tools and technologies to do age verification. We’re going to make sure we equip them with that.
Interestingly, it’s often not a technical issue. It’s which ID system do you trust, and is there an ID system for someone who’s 12, 13, 14 years old? One of the challenges is that this has been out of scope for a lot of the driver’s license-based or passport-based national ID discussions. But I think that’ll be a use case covered by governments fairly quickly.
Do you think it’s possible to do age verification and still protect people’s privacy?
I do. Yeah. Yeah.
Go ahead. How do you strike that balance?
There are technical solutions. There are also process and regulatory parts of it. I think ultimately the most privacy-preserving thing is no technology at all, so there’s going to be a trade-off. If you’re trying to automate something, to bring technology to something, there are going to be risks around centralization and privacy controls, but I do think it’s possible to get the balance right.
It seems like that’s just the other front; the computers are going to get way more capable on their own, and then we are very interested in limiting what people can do with computers in very specific ways. And it does seem like you sit in the middle of it. Todd, we’re going to have to have you back. I feel like there’s yet more emotional crash out for me to have with you.
This is fun. This is super fun.
Tell people quickly what’s next for Okta, what they should be looking for.
I think they should be thinking about how they build the secure agentic enterprise, and how they can use the blueprint we’re proposing to the entire industry, and how to make that possible. And we’re excited to work with everyone in the industry, and particularly the tools, technologies, and products we’re going to be building to make sure that reality comes to fruition.
Amazing. Well, like I said, we’re going to have to have you back to see how all this is going because it feels like it’s going to change really fast. Thank you so much for being on Decoder .
Thanks for having me.
Questions or comments about this episode? Hit us up at decoder@theverge.com. We really do read every email!
|
|
|
[Translation] Terrafab, Starship, IPO: Three Musk Promises That Raise Questions
habr_ai |
31.03.2026 16:55 |
0.638
|
| Embedding sim. | 0.7526 |
| Entity overlap | 0.0256 |
| Title sim. | 0.0567 |
| Time proximity | 0.8219 |
| NLP type | other |
| NLP organization | |
| NLP topic | ai infrastructure |
| NLP country | |
Open original
Initially, Musk’s idea of an orbital data center seemed ambitious but technically dubious. Now that details have emerged about how he plans to realize this grand vision, the project looks even less feasible than it did before. Either this is the result of excessive optimism, or something more complicated is going on.
Let’s start from the beginning. Contrary to Musk’s claims, orbital AI data centers are not cheaper than terrestrial ones. As I wrote earlier, launching AI data centers into space costs roughly nine times more than operating them on Earth, so even during an energy crisis, orbital data centers are significantly more expensive.
On top of that, it would take tens of trillions of dollars to build and deploy in space the 100 GW of solar panels Musk promised, and they would have to be fully replaced every five years or so, as the satellites they are attached to deorbit.
And no, building these satellites on the Moon, as Musk proposed, doesn’t solve any of these problems; in fact, it only makes them worse.
Nevertheless, Musk still wants to deploy an orbital constellation of a million satellites carrying AI data centers!
Read more
|
|
|
Sixteen new START.nano companies are developing hard-tech solutions with the support of MIT.nano |
mit_news_ai |
07.04.2026 20:40 |
0.636
|
| Embedding sim. | 0.7326 |
| Entity overlap | 0 |
| Title sim. | 0.0746 |
| Time proximity | 0.9706 |
| NLP type | other |
| NLP organization | mit.nano |
| NLP topic | quantum computing |
| NLP country | |
Open original
MIT.nano has announced that 16 startups became active participants in its START.nano program in 2025, more than doubling the number of new companies from the previous year. Aimed at speeding the transition of hard-tech innovation to market, START.nano supports new ventures through the discounted use of MIT.nano shared facilities and a guided access to the MIT innovation ecosystem. The newly engaged startups are developing solutions for some of the world’s greatest challenges in health, climate, energy, semiconductors, novel materials, and quantum computing.
“The unique resources of MIT.nano enable not just the foundational research of academia, but the translation of that research into commercial innovations through startups,” says START.nano Program Manager Joyce Wu SM ’00, PhD ’07. “The START.nano accelerator supports early-stage companies from MIT and beyond with the tools and network they need for success.”
Launched in 2021, START.nano aims to increase the survival rate of hard-tech startups by easing their journey from the lab to the real world. In addition to receiving access to MIT.nano’s laboratories, program participants are invited to present at startup exhibits at MIT conferences and in exclusive events, including the newly launched PITCH.nano competition.
“For an early-stage startup working at the frontier of superconductor discovery, the combination of infrastructure and community has been irreplaceable,” says Jason Gibson, CEO and co-founder of Quantum Formatics. “START.nano isn’t just a resource,” adds Cynthia Liao MBA ’24, CEO and co-founder of Vertical Semiconductor. “It’s a strategic advantage that accelerates our roadmap, allowing us to iterate quickly to meet customer needs and strengthen our competitive edge.”
Although an MIT affiliation is not required, five of the 16 companies in the new cohort are led by MIT alumni, and an additional three have MIT affiliation. In total, 49 percent of the startups in START.nano are founded by MIT graduates.
Here are the intended impacts of the 16 new START.nano companies:
Acorn Genetics is developing a "smartphone of sequencing," launching the power of genetic analysis out of slow, centralized labs and into the hands of consumers for fast, portable, and affordable sequencing.
Addis Energy leverages oil, gas, and geothermal drilling technologies to unlock the chemical potential of iron-rich rocks. By injecting engineered fluids, they harness the earth’s natural energy to produce ammonia that is both abundant and cost-effective.
Augmend Health uses virtual reality and AI to deliver clinical data intelligence services for specialty care that turns incomplete documentation into revenue, compliance, and better treatment decisions.
Brightlight Photonics is building high-performance laser infrastructure at chip scale, integrating Titanium:Sapphire gain to deliver broadband, high-power, low-noise optical sources for advanced photonic systems.
Cahira Technologies is creating the new paradigm of brain-computer symbiosis for treating intractable diseases and human augmentation through autonomous, nonsurgical neural implants.
Copernic Catalysts is leveraging computational modeling to develop and commercialize transformational catalysts for low-cost and sustainable production of bulk chemicals and e-fuels.
Daqus Energy is unlocking high-energy lithium-ion batteries using critical metal-free organic cathodes.
Electrified Thermal Solutions is reinventing the firebrick to electrify industrial heat.
Guardion is making analytical instruments, chemical detectors, and radiation detectors more sensitive, portable, and easier to scale with nanomaterial-based ion detectors.
Mantel Capture is designing carbon capture materials to operate at the high temperatures found inside boilers, kilns, and furnaces — enabling highly efficient carbon capture that has not been possible until now.
nOhm Devices is developing highly-efficient cryogenic electronics for quantum computers and sensors.
Quantum Formatics is speeding discovery of the world’s next superconductors using proprietary AI.
Qunett is building the foundational hardware stack for deployable quantum networks to power the next era of global connectivity.
Rheyo is developing new ways to make dental care more effective, efficient, and easy through advanced materials and technology.
Vertical Semiconductor is commercializing high-voltage, high-density, high-efficiency vertical GaN (gallium nitride) to power the next era of compute.
VioNano Innovations is developing specialty material solutions that reduce variability and improve precision in semiconductor manufacturing, allowing chipmakers to build even smaller, faster, and more cost-effective chips.
START.nano now comprises over 32 companies and 11 graduates — ventures that have moved beyond the prototyping stages, and some into commercialization. See the full list here.
|
|
|
OpenAI Full Fan Mode Contest: Terms & Conditions |
openai |
09.04.2026 00:00 |
0.635
|
| Embedding sim. | 0.7192 |
| Entity overlap | 0.2222 |
| Title sim. | 0.0735 |
| Time proximity | 0.9405 |
| NLP type | other |
| NLP organization | OpenAI |
| NLP topic | generative ai |
| NLP country | |
Open original
Explore the official terms and conditions for the OpenAI Full Fan Mode Contest, including eligibility, entry steps, judging criteria, and prize details. Learn how to participate, submit your entry on Instagram, and win IPL match tickets.
|
|
|
AI Models Map the Colorado River’s Hard Choices |
ieee_spectrum_ai |
08.04.2026 14:00 |
0.631
|
| Embedding sim. | 0.7329 |
| Entity overlap | 0.0208 |
| Title sim. | 0.1 |
| Time proximity | 0.8571 |
| NLP type | other |
| NLP organization | U.S. Bureau of Reclamation |
| NLP topic | deep learning |
| NLP country | United States |
Open original
The Colorado River begins as snow. Every spring, the mountain snowpack of the Rockies melts into streams that feed into reservoirs that supply 40 million people across seven U.S. states. The system has worked, more or less, for a century. That century is over.
By some measures, 2026 is shaping up to be the worst year the river has seen since records began. Flows are down 20 percent from 2000 levels. Lake Powell, the reservoir straddling Utah and Arizona, may drop below the threshold for generating hydropower before the year is out. The negotiations between the seven states over how to share what’s left have collapsed twice, and the U.S. federal government is threatening to impose its own plan.
While the states argue and the river shrinks, a growing set of machine learning tools is being deployed across the basin. Federal water managers are running millions of simulations to stress-test reservoir strategies against different possible futures. Researchers are forecasting streamflow months out using satellite data and deep learning. These technologies don’t promise to resolve the crisis, but they’re making the trade-offs visible. They’re showing, more precisely than ever before, what each decision will cost.
Seeing Further Into the River’s Future
Nobody manages more of the Colorado River’s daily operations than the U.S. Bureau of Reclamation. If the federal government follows through on its threat to impose a water-sharing plan, it will be Reclamation doing the imposing, and making decisions about how much water flows from Lake Powell and Lake Mead, the two largest reservoirs in the country.
The agency is not new to sophisticated modeling. For years, Reclamation’s researchers have combined paleoclimate reconstructions, global circulation models, and scenario planning to predict the river’s future. Machine learning tools are adding to that toolkit, says Chris Frans, Reclamation’s water-availability research coordinator, and they are already informing real operational decisions.
The clearest gains are in streamflow forecasting. Machine learning techniques—using data from satellites and weather stations well outside the basin—now outperform traditional methods across a range of conditions. Forecasts update every hour. In some areas, managers are getting five to seven days of advance warning on flood events, compared with three in the past, which gives them time to reduce the water in reservoirs before high inflows arrive.
The scale of scenario modeling has also expanded dramatically. A decade ago, running 100,000 individual simulations was a landmark study. Now, says Alan Butler, who manages Reclamation’s research and modeling group for the lower Colorado Basin, millions of simulations feed the analytical tools used in the current guidelines. Those simulations map out how different operating strategies perform across widely varying futures—making the trade-offs between them harder to ignore.
Dividing a Shrinking River
Knowing how much water is coming is one problem. Deciding who gets it is another. At the center of that process is the Colorado River Simulation System (CRSS), which models how water moves through the basin’s reservoirs, canals, and pipelines under more than a century of legal and regulatory constraints. This Reclamation model is an imperfect representation, but it has been the foundation of river negotiations for decades.
A tool called RiverWare, first developed in the early 1990s at the University of Colorado Boulder, lets states, cities, and tribes run their own scenarios through CRSS. Before RiverWare, these groups didn’t have confidence in Reclamation’s numbers, says Edith Zagona, a University of Colorado Boulder professor who directs the Center for Advanced Decision Support for Water and Environmental Systems, which built the tool. “There was just this huge lack of trust.” The solution was letting stakeholders inspect the assumptions built into the RiverWare model—how much water was available, how it could be used, and under what rules.
Getting stakeholders to trust the model turned out to be the easier problem. The harder one is what to do when the model itself can’t predict a single probable future. That question drove Zagona toward a framework called decision-making under deep uncertainty, which trades prediction for stress-testing policies against thousands of possible futures.
The system Zagona’s group developed with Reclamation and the consulting firm Virga Labs puts the framework into practice as a web-based tool, running CRSS across more than 8,000 possible future water-supply scenarios to show how different management strategies hold up against the full range of what climate change might bring. At its center is an evolutionary algorithm called Borg, which generates and iteratively refines those strategies, searching for plans that perform well across many scenarios. The result is a set of trade-offs, not a single answer.
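The search loop such an algorithm runs can be illustrated with a generic evolutionary sketch. This is not Borg — the real algorithm is a multi-objective method with adaptive operators — and the demand figure, mutation scale, and penalized score below are made up; the sketch only shows how a population of candidate policies is mutated and re-ranked against many scenarios until robust ones survive.

```python
import random

def shortfall(release, inflows, start=12.0, capacity=26.0):
    """Total unmet demand (illustrative units) for one inflow scenario."""
    storage, unmet = start, 0.0
    for inflow in inflows:
        storage = min(capacity, storage + inflow)
        delivered = min(release, storage)
        unmet += release - delivered
        storage -= delivered
    return unmet

def evolve(scenarios, demand=9.0, generations=40, pop_size=16, seed=1):
    """(mu + lambda)-style search for a release level that balances two goals
    across all scenarios: delivering water and avoiding shortfall."""
    rng = random.Random(seed)

    def score(release):
        worst = max(shortfall(release, s) for s in scenarios)
        # Penalize both under-delivery and the worst-case shortfall.
        return (demand - release) ** 2 + worst

    pop = [rng.uniform(4.0, 12.0) for _ in range(pop_size)]
    for _ in range(generations):
        children = [max(0.0, p + rng.gauss(0, 0.5)) for p in pop]
        pop = sorted(pop + children, key=score)[:pop_size]
    return pop[0]

rng = random.Random(0)
scenarios = [[max(0.0, rng.gauss(10.0, 3.0)) for _ in range(30)] for _ in range(50)]
best = evolve(scenarios)
print(f"robust release: {best:.2f}")
```

Collapsing everything into one score is the big simplification here; Borg instead keeps a whole front of non-dominated plans, which is why negotiators get a set of trade-offs rather than a single answer.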
Borg-RiverWare has already shaped the ongoing negotiations over the river’s next operating rules, generating the scenarios and data that Reclamation used in its modeling tools. Those tools give stakeholders a common analytical foundation for negotiations. Now Zagona’s center is pushing the approach further. A system in development would let negotiating parties test competing proposals on the fly, showing how one side’s policy choices would ripple through the system and identifying areas of potential compromise during the negotiation itself.
New Tools for Forecasting the Colorado
Reclamation and Zagona’s center aren’t the only ones trying to see further into the river’s future. At Metropolitan State University of Denver, a team led by Mohammad Valipour has been building a forecasting system that uses deep learning to issue drought warnings across seven rivers in Colorado, from seven days to six months out. In a region where ground gauges are sparse and mountains make installation difficult, the team found that NASA satellite data outperformed in-field measurements. The goal, Valipour says, is a statewide drought alarm system that gives farmers and water managers more time to respond.
At Utah State University, Soukaina Filali Boubrahimi is attacking a different problem: how conditions at one point in the river ripple downstream weeks later. Using a graph neural network that treats each monitoring station as a node, her team built a map of the river’s interdependencies across one of the most contested water systems in the world. She says the approach could extend to other overtaxed basins.
“If you can figure out the Colorado River,” she says, “anyone else dealing with a stressed river system is going to be interested in what you learned.”
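The core operation of such a graph model can be sketched without any ML library: treat stations as nodes, downstream links as edges, and repeatedly blend each node's state with its upstream neighbors'. A trained graph neural network learns the blending weights and stacks many such rounds; in this sketch the weights are fixed and the four stations are hypothetical.

```python
def message_pass(features, edges, self_w=0.6, nbr_w=0.4):
    """One aggregation round: blend each station's feature with the mean
    of its upstream neighbors' features."""
    incoming = {node: [] for node in features}
    for upstream, downstream in edges:
        incoming[downstream].append(features[upstream])
    out = {}
    for node, value in features.items():
        nbrs = incoming[node]
        nbr_mean = sum(nbrs) / len(nbrs) if nbrs else value
        out[node] = self_w * value + nbr_w * nbr_mean
    return out

# Hypothetical four-station stretch: a flow anomaly at the headwater
# propagates one station further downstream with each round.
flows = {"headwater": 1.0, "canyon": 0.0, "dam": 0.0, "delta": 0.0}
edges = [("headwater", "canyon"), ("canyon", "dam"), ("dam", "delta")]
for _ in range(3):
    flows = message_pass(flows, edges)
print({k: round(v, 3) for k, v in flows.items()})
```

After three rounds the anomaly has reached the third station downstream but not yet decayed to zero at the last — which is exactly the "conditions here ripple downstream weeks later" structure the model is built to capture.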
Snowpack in the upper Colorado River basin is far below normal. As of late March 2026, measurements across 130 sites were about 35 percent of the median, with projections showing continued shortfalls. USDA Natural Resources Conservation Service (NRCS)
What the Models Can’t See
Across the basin, researchers and water managers are running into the same wall. The models learn from historical data, but that data describes a river that no longer exists. Valipour found that feeding his models only the last decade outperformed using longer records. Filali Boubrahimi’s model struggles most in drought conditions, precisely when predictions matter most, because recent prolonged droughts don’t resemble the historical training data. One workaround is to train models on data from basins that have already experienced what the Colorado hasn’t yet.
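Valipour's finding about short training windows has a simple intuition, which a toy example makes concrete. The series below is synthetic — stable flows followed by a steady decline, a crude stand-in for a drying river — and the "model" is just the training-window mean; the point is only that on a nonstationary series, a century of history can be worse than a decade.

```python
def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical annual flows: stable for 90 years, then a steady decline.
series = [15.0] * 90 + [15.0 - 0.4 * t for t in range(1, 11)]
history, holdout = series[:95], series[95:]

full_pred = mean(history)          # "predict the mean", trained on all 95 years
recent_pred = mean(history[-10:])  # same predictor, last decade only

full_err = mean([abs(full_pred - y) for y in holdout])
recent_err = mean([abs(recent_pred - y) for y in holdout])
print(f"full-record error {full_err:.2f}, recent-window error {recent_err:.2f}")
```

The full record anchors the forecast to a river that no longer exists; the recent window tracks the decline. The cost, of course, is that a short window has far less data to learn from, which is what motivates borrowing training data from basins that have already dried.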
Even so, better forecasts do not resolve the central problem. While the tools can show you what a drier future looks like across a thousand possible scenarios, they can’t tell you who should bear the cost of it. The cuts coming to the basin are going to be enormous, says Brad Udall, a water and climate research scientist at Colorado State University’s Colorado Water Center, and they will fall mostly on agriculture. They may fundamentally reshape communities that have built their economies around water for generations. “AI has no business being in the realm of replacing human values and human judgments,” he says.
The tools, by most measures, are doing exactly what they were built to do: The negotiating parties understand what is coming, and they are not disputing the projections. Zagona, who has worked on the Colorado River for 45 years, sees reasons for optimism. “The tools are bringing people to the table,” she says. “They’re at the table arguing. But at least they’re at the table.”
|
|
|
OpenAI announces plans to shut down its Sora video generator |
arstechnica_ai |
24.03.2026 21:19 |
0.63
|
| Embedding sim. | 0.7446 |
| Entity overlap | 0.1111 |
| Title sim. | 0.0625 |
| Time proximity | 0.7302 |
| NLP type | product_launch |
| NLP organization | OpenAI |
| NLP topic | generative ai |
| NLP country | |
Open original
We hardly knew ye
OpenAI announces plans to shut down its Sora video generator
Move comes amid a reported plan to refocus on business and productivity use cases.
Kyle Orland
–
Mar 24, 2026 5:19 pm
|
We'll use any excuse to reuse this image of a cool dog riding a skateboard, from a video generated by Sora during its brief public access leak in November 2024.
Credit:
Sora
OpenAI is preparing to shut down Sora, the video-generation app that drew widespread attention when it launched in late 2024.
OpenAI announced the move in a social media post Tuesday just after a Wall Street Journal story broke the news. The company said it will have more to share soon on “timelines for the app and API and details on preserving your work.”
“To everyone who created with Sora, shared it, and built community around it: thank you,” OpenAI wrote. “What you made with Sora mattered, and we know this news is disappointing.”
The announcement comes days after leaked news of an OpenAI all-hands meeting in which company executives reportedly said they were refocusing on business and productivity applications rather than being “distracted by side quests,” as OpenAI head of applications Fidji Simo reportedly put it.
The move also comes just months after Disney invested $1 billion in OpenAI as part of a deal that would “bring beloved characters from across Disney’s brands to Sora.” It’s unclear how that investment and partnership will continue following Sora’s shutdown.
OpenAI was well ahead of the curve when it first previewed Sora’s photorealistic video generation in February 2024, wowing industry observers with a level of fidelity that was unheard of for the much more limited text-to-video models of the time. Following Sora’s public launch that December, OpenAI continued updating Sora to support new video styles, more consistent worlds, voice synthesis and lip-syncing, and the opt-in ability to put your actual face (or even a dead celebrity’s face) in a Sora-generated video.
Competitors have rushed into the AI video space in the time since Sora’s debut, though. ByteDance’s SeeDance 2.0 in particular has drawn significant attention in recent months for viral videos of complex, Hollywood-style scenes, complete with intricate cuts and camera angles. And Google’s impressive Veo video-generation tools have formed the basis of its Genie world models, which allow for some level of real-time interactivity with generated video content.
Kyle Orland
Senior Gaming Editor
Kyle Orland has been the Senior Gaming Editor at Ars Technica since 2012, writing primarily about the business, tech, and culture behind video games. He has journalism and computer science degrees from University of Maryland. He once wrote a whole book about Minesweeper .
|
|
|
Local 200B models no longer look like fantasy: what Bonsai and TurboQuant change
habr_ai |
02.04.2026 22:41 |
0.63
|
| Embedding sim. | 0.7098 |
| Entity overlap | 0.3333 |
| Title sim. | 0.1017 |
| Time proximity | 0.8547 |
| NLP type | other |
| NLP organization | |
| NLP topic | large language models |
| NLP country | |
Open original
Recent AI news hints at an important shift: running very large models locally no longer looks like pure fantasy. In this article I break down two technologies, Bonsai and TurboQuant, which attack the two main constraints on inference: weight size and KV-cache volume. Then I estimate what would happen if they could one day be combined and scaled to models in the 235B class.
Read more
|
|
|
U.S. News Unveils 2026 Best Graduate Schools |
prnewswire |
07.04.2026 04:01 |
0.627
|
| Embedding sim. | 0.7597 |
| Entity overlap | 0 |
| Title sim. | 0.0459 |
| Time proximity | 0.6488 |
| NLP type | other |
| NLP organization | U.S. News & World Report |
| NLP topic | educational technology |
| NLP country | United States |
Open original
U.S. News Unveils 2026 Best Graduate Schools
News provided by
U.S. News & World Report, L.P.
Apr 07, 2026, 00:01 ET
The new edition features updates to Sciences, Fine Arts and specialty Business rankings.
WASHINGTON , April 7, 2026 /PRNewswire/ -- U.S. News & World Report, the global authority in rankings and consumer advice, today announced the 2026 Best Graduate Schools rankings.
The rankings are a resource for students pursuing postgraduate education, offering evaluations of programs in fields like law , business , medicine , engineering , education and nursing .
While all disciplines return with the same ranking factors and weights as the prior edition, this year brings a few key updates, all aimed at helping prospective graduate students make informed decisions.
In this edition:
Expanded program coverage and data:
Utilizing an enhanced data collection framework, the Business rankings now feature over six times as many schools compared to previous editions in specialty fields such as marketing, finance and management.
Computer science program profiles on USNews.com feature expanded data on admissions, costs and program offerings.
Comprehensive rankings refreshes: This year's edition includes fully updated rankings for all Health disciplines (excluding physician assistant and social work), the first full refresh for Sciences doctoral programs since 2022, and the return of Master's in Fine Arts rankings for the first time since 2020.
Because each program is different, the rankings methodologies vary by discipline and graduate degree level.
"We know a graduate degree is a major commitment. That is why we are dedicated to methodologies that thoroughly examine a wide range of factors, from research excellence to career success," said LaMont Jones, Ed.D., managing editor of Education at U.S. News. "These rankings are a powerful tool for prospective students, offering clarity and confidence as they approach their most critical educational choice."
Best Business Schools: MBA (Full-Time)
1. Stanford University 2. University of Pennsylvania (Wharton) 3. University of Chicago (Booth)
Best Law Schools
1. Stanford University 2. University of Chicago (tie) 2. Yale University (tie)
Best Education Schools
1. University of Wisconsin – Madison 2. Northwestern University (tie) 2. University of Florida (tie) 2. University of Michigan – Ann Arbor (tie)
Best Engineering Schools
1. Massachusetts Institute of Technology 2. Stanford University 3. University of California, Berkeley
Best Nursing Schools: Master's Programs
1. Emory University 2. Johns Hopkins University 3. Duke University (tie) 3. Ohio State University (tie)
Best Nursing Schools: DNP Programs
1. Johns Hopkins University 2. Emory University 3. Rush University
Best Fine Arts Schools: Master's (MFA)
1. Yale University 2. Carnegie Mellon University (tie) 2. Rhode Island School of Design (tie) 2. University of California – Los Angeles (tie) 2. Virginia Commonwealth University (tie)
Best Medical Schools: Research
Schools placing in the top tier include the following: Baylor College of Medicine, Case Western Reserve University, Emory University, Mayo Clinic School of Medicine (Alix), Ohio State University, University of California – Los Angeles (Geffen), University of California – San Diego, University of California – San Francisco, University of Colorado, University of Florida, University of Pittsburgh, University of Rochester, University of South Florida (Morsani), University of Texas Southwestern Medical Center, Vanderbilt University, Yale University.
Best Medical Schools: Primary Care
Schools placing in the top tier include the following: Dartmouth College (Geisel), East Carolina University (Brody), Saint Louis University, University of Arkansas for Medical Sciences, University of California – Davis, University of California – San Diego, University of California – San Francisco, University of Hawaii – Manoa (Burns), University of Kansas Medical Center, University of Minnesota, University of Nebraska Medical Center, University of New Mexico, University of North Carolina – Chapel Hill, University of Wisconsin – Madison, Western University of Health Sciences, William Carey University College of Osteopathic Medicine.
U.S. News' education portfolio of resources includes the Scholarship Finder tool which provides potential and current graduate students with access to financial aid options and scholarships.
For more information, visit Best Graduate Schools and use #BestGradSchools on Facebook, X (formerly Twitter), TikTok and Instagram.
About U.S. News & World Report U.S. News & World Report is the global leader for journalism that empowers consumers, citizens, business leaders and policy officials to make confident decisions in all aspects of their lives and communities. A multifaceted media company, U.S. News provides unbiased rankings, independent reporting and analysis, and consumer advice to millions of people on USNews.com each month. A pillar in Washington for more than 90 years, U.S. News is the trusted home for in-depth and exclusive insights on education, health, politics, the economy, personal finance, travel, automobiles, real estate, careers and consumer products and services.
SOURCE U.S. News & World Report, L.P.
|
|
|
Making AI limits nearly infinite: a smart router that cuts token costs severalfold and makes them almost free
habr_ai |
03.04.2026 11:17 |
0.626
|
| Embedding sim. | 0.7135 |
| Entity overlap | 0.25 |
| Title sim. | 0.037 |
| Time proximity | 0.925 |
| NLP type | other |
| NLP organization | |
| NLP topic | large language models |
| NLP country | |
Open original
$47 in a week on LLM APIs, even though half the requests were trivial. I set up ClawRouter, an open-source router that analyzes each prompt on 15 parameters and sends it to the cheapest suitable model. The following week I spent $1.80. I explain how it works, what I liked, what I didn't, and what the alternatives are.
Read more
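The teaser above describes routing each prompt to the cheapest capable model. A minimal sketch of the idea: score the prompt on a few cheap heuristics and pick a model tier. The signals, thresholds, and tier names are hypothetical illustrations, not ClawRouter's actual 15-parameter logic.

```python
def route(prompt: str) -> str:
    """Pick a model tier from cheap textual signals in the prompt."""
    words = prompt.split()
    signals = {
        "long": len(words) > 200,
        "code": any(tok in prompt for tok in ("def ", "class ", "{", "```")),
        "reasoning": any(w in prompt.lower() for w in ("prove", "derive", "why")),
    }
    score = sum(signals.values())
    if score == 0:
        return "cheap-small"   # trivial, FAQ-style lookups
    if score == 1:
        return "mid-tier"
    return "frontier"          # multi-signal prompts get the big model

print(route("What time is it in Tokyo?"))         # cheap-small
print(route("Explain why this def foo(): fails")) # frontier
```

The economics follow directly: if most traffic scores zero and lands on the cheap tier, average cost per request collapses even though hard prompts still reach the expensive model.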
|
|
|
RayNeo Leads Global AR Market with One-Third Share |
prnewswire |
01.04.2026 07:10 |
0.623
|
| Embedding sim. | 0.7207 |
| Entity overlap | 0.0444 |
| Title sim. | 0.172 |
| Time proximity | 0.7491 |
| NLP type | other |
| NLP organization | RayNeo |
| NLP topic | human-computer interaction |
| NLP country | China |
Open original
RayNeo Leads Global AR Market with One-Third Share
News provided by
RayNeo
Apr 01, 2026, 03:10 ET
LOS ANGELES , April 1, 2026 /PRNewswire/ -- Following the release of several authoritative 2025 annual AR market reports, one brand has consistently emerged as the industry leader: RayNeo. Despite varying statistical methodologies across firms, the conclusion remains unanimous—RayNeo has secured the No.1 position in both global and Chinese markets. In just four years since its inception, the AR pioneer has achieved a "no-suspense" lead, dominating key regions with record-breaking growth.
Dominating Global and North American Markets
Continue Reading
RayNeo: Global No.1 (PRNewsfoto/RayNeo)
According to the Counterpoint Research 2025 Global AR Smart Glasses Brand Shipment Report , RayNeo commanded a 27% share of global shipments in 2025, ranking first worldwide. The brand saw explosive growth in Q4 2025 across China, North America, and Europe.
Data from IDC further confirms RayNeo's global leadership in Q4 2025, highlighting a "phenomenal breakthrough" in the North American market, where RayNeo's shipments surged by 456.5% year-over-year. Consequently, IDC noted in its report that the smart glasses sector has officially entered an era led by Chinese brands.
Full-Stack Innovation and Global Ecosystem
RayNeo's ascent is fueled by its deep commitment to R&D in near-eye display, spatial computing, AI large models, and human-computer interaction. By achieving full-link self-research and mass production of core optical solutions, RayNeo has built a formidable technical moat.
X Series: All-scenario AI+AR glasses.
Air Series: The "super blockbuster" focused on immersive viewing.
V Series: Specialized AI filming glasses.
RayNeo's services now span over 30 countries and regions, with a strong retail presence in global mainstream channels such as Amazon and Best Buy. RayNeo's ecosystem has drawn wide praise, signaling the brand's success in moving AR from "tech novelty" to "daily essential."
Strategic Financing and Ecosystem Expansion
RayNeo's market leadership is supported by robust capital backing and a rapidly expanding global partner network. In Q1 2026, the company successfully completed a financing round exceeding $140 million (approx. 1 billion RMB), led by an investment group including CITIC Goldstone and strategic funds from industry giants China Mobile and China Unicom.
Alongside this financial growth, RayNeo is deepening its technical integration with world-class telecommunications partners, including China Mobile, and China Unicom. These collaborations focus on building a 5G+AR+AI ecosystem centered on eSIM technology, advanced AI models, and cloud services. Additionally, RayNeo continues to co-create diverse application scenarios with global tech leaders such as Google, Applied Materials, SeeYA Technology, Alibaba Cloud, Ant Group, and Tencent, further accelerating the maturity of the global AR industry.
About RayNeo
RayNeo is the global leader in consumer Augmented Reality (AR) glasses, dedicated to transforming everyday life for one billion people. As the Official Worldwide Olympic Partner in the AR glasses category, the company represents the forefront of immersive technology. Its product portfolio features the AI-enhanced, full-color display X Series and the portable, large-screen Air Series, designed for versatility and high-quality viewing. According to Counterpoint Research, RayNeo dominated the global AR glasses market in Q3 2025, capturing a 24% market share and securing the top position worldwide.
Contact PR Manager: Sophie Email: [email protected]
SOURCE RayNeo
|
|
|
Oracle cuts jobs across sales, engineering, security |
the_register_ai |
31.03.2026 17:42 |
0.622
|
| Embedding sim. | 0.7851 |
| Entity overlap | 0.129 |
| Title sim. | 0.1277 |
| Time proximity | 0.1433 |
| NLP type | other |
| NLP organization | Oracle |
| NLP topic | ai infrastructure |
| NLP country | United States |
Open original
Big Red declines comment as reports point to layoffs in the thousands
O'Ryan Johnson
Tue 31 Mar 2026 //
17:42 UTC
Oracle laid off thousands of employees on Tuesday as it ramps spending on AI infrastructure projects internally and with major technology partners.
The layoffs were carried out via email, according to copies of the message viewed by Business Insider . The email told affected workers they would be terminated immediately and to provide a personal email for follow-up.
Oracle declined to comment to The Register .
The cuts echo a TD Cowen forecast earlier this year, when the investment bank questioned how Oracle would finance its expanding AI datacenter buildout and suggested headcount reductions could reach 20,000 to 30,000. It is not clear how many employees were notified on Tuesday, but one screenshot purporting to show the number of internal Slack users indicated a drop of 10,000 overnight.
Big Red has also partnered with OpenAI and SoftBank on Stargate, a massive effort to build datacenters around the country to help power generative AI models, starting with one in Abilene, Texas. The headline numbers were ridiculously large: OpenAI said the venture intended to invest $500 billion, and even if the actual figure falls well shy of that, it is still a massive promise.
In a September filing [ PDF ] with the SEC, Oracle said it was planning its largest restructuring yet in its current fiscal year, which started in June, with an expected cost of $1.6 billion. During its most recent earnings call on March 10, Oracle said it expected to spend $50 billion on capital expenditures during fiscal 2026, according to Douglas Kehring, executive vice president and principal financial officer.
Oracle has previously said it reserves most of its spend for “revenue generating equipment” that builds out datacenter capacity, which returns margins of 30 to 40 percent.
Oracle employs about 162,000 people, with 58,000 of those in the US and approximately 104,000 internationally. If the rumored cuts of 30,000 are correct, they would amount to about 18 percent of the company’s workforce.
According to posts from Oracle workers on LinkedIn, the cuts were spread through multiple departments around the country, with employees in Kansas, Tennessee, and Texas taking to social media to say they were among those chopped.
“I’m incredibly proud of what I was able to build over the past 4 years, from intern to full-time, and grateful for the experience, mentors, and teammates along the way,” wrote a software engineer from Texas.
She said she helped build and launch FreeSQL.com, Oracle’s next-generation SQL learning platform, from the ground up and played a key role in the LiveSQL-to-FreeSQL rewrite, “improving onboarding and developer experience for thousands of users.”
In another post, a 20-year Oracle veteran in the security group said he had no bitterness and saw the cut coming.
“Not unexpected as I was able to use my AI coding skills to take over a lot of my daily tasks,” he wrote, adding a laughing emoji to the line. “While this isn’t how I imagined this chapter ending, I’m incredibly grateful for the experiences, the work, and most importantly, the people. I’ve had the opportunity to collaborate with talented teams, build lasting relationships, and be part of work I’m truly proud of.” ®
|
|
|
Larx Expands Global Footprint with Launch of LARX AI LTD., Appoints Rory Horgan as Director UK & EMEA |
prnewswire |
02.04.2026 07:00 |
0.622
|
| Embedding sim. | 0.7131 |
| Entity overlap | 0.0313 |
| Title sim. | 0.1417 |
| Time proximity | 0.8581 |
| NLP type | leadership_change |
| NLP organization | Larx Inc. |
| NLP topic | enterprise ai |
| NLP country | United Kingdom |
Open original
Larx Expands Global Footprint with Launch of LARX AI LTD., Appoints Rory Horgan as Director UK & EMEA
News provided by
Larx, Inc.
Apr 02, 2026, 03:00 ET
New UK subsidiary strengthens international presence and accelerates delivery of decision intelligence capabilities across UK, Europe, the Middle East, and Africa
ATLANTA and LONDON , April 2, 2026 /PRNewswire/ -- Larx, Inc., the AI-native decision intelligence platform for visual and multi-source data, today announced the launch of its UK-based subsidiary, LARX AI LTD. , marking a major milestone in Larx's growing international footprint and commitment to supporting allied defense, intelligence, and commercial partners across Europe, the Middle East, and Africa (EMEA). The new entity enables localized operations, deeper integration with UK and NATO partners, and alignment with regional mission priorities.
To lead this expansion, Larx has appointed Rory Horgan as Director of UK & EMEA. Based in the United Kingdom, Horgan will oversee regional strategy, partnerships, and deployment of Larx's platform across government and commercial sectors.
Horgan brings a unique blend of operational and strategic experience to the role, having served in the UK military before transitioning into national security roles. Most recently, he spent the past two years focused on human intelligence (HUMINT) and defense-related initiatives, where he supported mission-critical efforts at the intersection of intelligence, security, and emerging technology.
"Expanding into the UK is a natural next step for Larx as we continue to scale globally alongside our partners and customers," said Tad Mielnicki , CEO of Larx. "The traction and warm reception we've received in the UK and NATO shows that there is global demand for decision intelligence support. Rory's background in military operations, HUMINT, and the UK defense ecosystem makes him uniquely suited to lead this next phase of growth."
"Having operated in environments where timely, accurate intelligence is the difference between success and failure, I've seen firsthand the challenges of fragmented data and cognitive overload," said Horgan. "Larx changes that equation. I'm excited to lead our expansion across EMEA and work with partners to deliver decision advantage at speed whether in defense, security, or commercial operations."
Larx's platform unifies visual, geospatial, and multi-modal data into a single operational layer, enabling users to rapidly synthesize information, explore possibilities, and act with confidence. With the launch of LARX AI LTD., the company is positioned to further support allied missions and expand into key international markets.
About Larx Larx is the decision intelligence platform for a world defined by overwhelming data and accelerating timelines. It unifies information across the intelligence lifecycle — including imagery, video, geospatial and sensor data, text, open-source material, operational reporting, and other structured and unstructured inputs — into a single operational environment designed for mission-driven decision-making. By fusing visual, geospatial, and multi-source intelligence, Larx enables organizations to move faster, understand more, and act with precision. Learn more at larx.io.
SOURCE Larx, Inc.
21 %
more press release views with
Request a Demo
×
Modal title
|
|
|
Popular AI gateway startup LiteLLM ditches controversial startup Delve | TechCrunch |
techcrunch |
30.03.2026 23:08 |
0.62
|
| Embedding sim. | 0.6946 |
| Entity overlap | 0 |
| Title sim. | 0.2 |
| Time proximity | 0.9397 |
| NLP type | other |
| NLP organization | litellm |
| NLP topic | ai security |
| NLP country | united states |
Open original
LiteLLM, makers of a popular AI gateway used by millions of developers, has publicly announced that it is ditching compliance startup Delve and will redo its security certifications with another company and auditor. The announcement comes after LiteLLM’s open source version fell victim to some horrific credential-stealing malware last week.
Prior to the incident, LiteLLM had obtained two security compliance certifications by hiring AI compliance startup Delve. Such certifications are intended to verify that a company has procedures in place to minimize potential incidents.
Delve has been accused of misleading its customers about their true compliance by allegedly generating fake data and using auditors that rubber-stamped their reports. Delve’s founder has denied those allegations and offered free re-tests and audits to all of its customers. That denial encouraged the anonymous Delve whistleblower to double down, including releasing alleged receipts over the weekend.
On Monday, LiteLLM CTO Ishaan Jaffer posted on X that his company will be using Delve competitor Vanta to re-certify and will find its own, independent third-party auditor to verify its compliance controls. After such a harsh week, LiteLLM is voting with its feet.
|
|
|
OpenAI's Fidji Simo Is Taking Medical Leave Amid an Executive Shake-Up |
wired |
03.04.2026 19:38 |
0.62
|
| Embedding sim. | 0.695 |
| Entity overlap | 0.3 |
| Title sim. | 0.1264 |
| Time proximity | 0.8563 |
| NLP type | leadership_change |
| NLP organization | OpenAI |
| NLP topic | leadership change |
| NLP country | |
Open original
Zoë Schiffer
Business
Apr 3, 2026 3:38 PM
OpenAI’s Fidji Simo Is Taking Medical Leave Amid an Executive Shake-Up
The company is undergoing major leadership restructuring as its CEO of AGI deployment goes on leave for “several weeks.”
Photograph: JOEL SAGET/Getty Images
OpenAI announced a major reorganization on Friday as the company’s CEO of AGI deployment, Fidji Simo , takes medical leave to focus on her health. OpenAI president Greg Brockman will handle the product teams in Simo’s absence. Simo’s previous title was CEO of applications.
Brad Lightcap, the chief operating officer and one of CEO Sam Altman’s top deputies, is transitioning to a “special projects” role. Kate Rouch, the chief marketing officer, is taking a leave of absence to focus on her health. Rouch has been undergoing treatment for breast cancer. When she returns, it will be in “a different, more narrowly scoped role,” according to a note Simo shared with OpenAI staff that was viewed by WIRED.
“As I shared when I joined, I had a relapse of my neuroimmune condition a few weeks before starting the job,” Simo said in the note, which was sent in OpenAI’s “core” Slack channel. “It’s been a bit of a rollercoaster since, and the last month has been particularly rough health-wise. For my entire time here, I’ve postponed medical tests and new therapies to stay completely focused on the job and not miss a single day of work. I took time off for the first time two weeks before the break for some medical tests, and it’s now clear that I’ve pushed a little too far and I really need to try new interventions to stabilize my health.”
Simo is expected to take “several weeks” of leave, according to her internal post.
In his new role, Lightcap will be in charge of the company’s forward-deployed engineers, who embed within enterprise organizations and help integrate OpenAI’s technology, among other duties.
OpenAI will begin searching for a new CMO, Simo said. The company is also looking for a chief communications officer to replace Hannah Wong, who left her position in January . Chris Lehane has taken over as the leader of the communications team in the interim.
“We have a strong leadership team focused on our biggest priorities: advancing frontier research, growing our global user base of nearly 1 billion users, and powering enterprise use cases,” said an OpenAI spokesperson in a statement. “We're well-positioned to keep executing with continuity and momentum.”
Simo joined OpenAI in August 2025, where she took over many of the company’s consumer-facing products, including ChatGPT, Codex, and the social-video app Sora. She recently shuttered the Sora app and told staff that the company needed to cut side projects and refocus around its core products.
The decision comes as OpenAI eyes an IPO as soon as this year. The company recently raised $122 billion in the largest funding round the tech industry has ever seen, which valued the company at $852 billion.
|