Quantum Computing will Augment Artificial Intelligence
Status: closed
Event type: other
Topic: ai infrastructure
Organization: Anthropic
Country: United States
Articles: 5
Unique sources: 3
Importance / Momentum: 1.63 / 0
Period: 23.02.2026 10:32 — 25.02.2026 19:40
Created: 06.04.2026 06:20:00
Articles in cluster: 5
Title | Source | Publication date | Score
Quantum Computing will Augment Artificial Intelligence | ai_supremacy | 23.02.2026 10:32 | 1
Embedding sim.: 1
Entity overlap: 1
Title sim.: 1
Time proximity: 1
NLP type: other
NLP organization: Anthropic
NLP topic: quantum computing
NLP country: United States


Prospectus

Quantum Computing will Augment Artificial Intelligence

New computing paradigms are likely to boost AI. Quantum computing could have an important next five years. I expect it to become much more relevant sometime in the 2028 to 2035 window.

Michael Spencer, Feb 23, 2026

Image: MIT Sloan

Good Morning,

Recently, alongside Anthropic's pushback against the U.S. DoD ("Department of War"), a group of quantum scientists independently published a manifesto rejecting the use of quantum research for military purposes and is seeking signatures from researchers around the world. Read the manifesto.

You want to talk about emerging technology? The worlds of national defense, generative AI, geopolitics, robotics innovation (including things like space-tech and drone swarms) and quantum computing are quickly converging.

Quantum computing is still in its nascent beginnings. But it's slowly becoming a field worth following, with potential major intersections with AI, hybrid quantum chips, and very specialized accelerators for narrow tasks at scientific frontiers like chemistry, new materials, battery tech, cybersecurity and national defense capabilities, among others.

The Three Core Pillars of Quantum

The three rapidly emerging pillars of quantum technology, as I see them, are quantum computing, quantum communication, and quantum sensing. Together they could generate up to $97 billion in revenue worldwide by 2035, and around $200 billion by 2040.

Quantum Computing
Quantum Communication
Quantum Sensing

I've always considered quantum computing a wild card in how AI Supremacy plays out. I've been covering quantum computing startups, fundraising and the industry for years on my Quantum Foundry newsletter. When we finally have real machines with millions of qubits (far from today's reality), it will vastly open up new possibilities for computing. It's going to be a journey to get there.

Quantum computing will lend aspects of parallelism that could radically augment AI. Even for quantum machine learning at the intersection of LLMs, thanks to superposition a quantum computer can, in principle, evaluate millions of model parameters simultaneously rather than sequentially. Companies like Nvidia have released NVQLink (a so-called "Rosetta Stone" of hybrid computing), which allows GPUs and quantum processors to communicate with microsecond latency. This hybrid hardware will evolve considerably in the next decade.

A Period of R&D Could Lead to a Quantum Acceleration Period

Classical AI struggles with high-dimensional data, and our future AI systems will have access to new emerging computing architectures and paradigms. QML, or quantum machine learning, is a new frontier where hard engineering and hybrid utility are evolving steadily in the 2020s.

Just as frontier labs and AI startups use AI to design their own chips, I predict we will one day have the same for quantum computers, new kinds of hybrid chips and entirely new computing paradigms. This is not well understood today by those in AI, venture capitalists or even national tech policy think tanks. The way quantum and AI evolve together will enable an altered future that is highly uncertain. Quantum computing, even in its infancy, is already impacting finance, cybersecurity, advanced materials, battery tech, biotechnology, drug development, and logistics and supply chains, among many others.
Qubit Modalities and Quantum Hybrid-Chips

Qubit types are still evolving rapidly, although a real breakthrough in quantum remains elusive. Some say that neutral atoms (using laser-cooled atoms like Rubidium or Cesium) are arguably the most promising for massive scaling in the next 2-3 years.

Meanwhile, watch the R&D Nvidia is doing with TSMC, which owns the fab infrastructure behind this attempted quantum bridge: TSMC is producing the specialized Nvidia Quantum-X and Spectrum-X photonics switches on its leading-edge nodes (3nm and 2nm) for advanced quantum nodes. The era of quantum chips is here.

More Quantum Companies Going to the Public Market

With yet another quantum startup going public this February in the SPAC of Infleqtion (ticker INFQ), investor awareness of quantum computing will only continue to grow. Quantum, like AI (Anthropic vs. Department of War), is thus becoming a national defense technology of considerable importance: in cybersecurity, communications and sensing, with ramifications in space technology as well, and in predicting certain outcomes in high-powered simulations.

I asked Brian Lenahan, author of the Quantum's Business newsletter, for his take on where the quantum computing industry is at, so our AI readers can catch up and be current. His contribution appears below.

Just as national defense spending increases for space-tech, some of that might also boost quantum startups, because things like cybersecurity and quantum sensing have mission-critical implications.

The most serious quantum startup I'm waiting on to go public is Quantinuum. On January 14, 2026, its majority owner, Honeywell, announced that Quantinuum had filed a confidential draft registration statement (Form S-1) with the SEC for an initial public offering (IPO).

In quantum, Europe is not seen as a laggard, as it is in scaling major tech companies and AI, but as a pivotal player. It makes the whole space rather interesting to watch.

Quantum Companies and Qubit Modalities to Watch 2026-2030

The venture capital scene behind quantum is fairly interesting, as are the areas where China is competitive or even ahead. There have certainly been some moonshots among quantum startups, but the future is still fairly uncertain. Government funding of the quantum industry for national defense is also a huge driver, as is the impact of huge corporate sponsors.

Most Well Funded Quantum Companies 2026

Some of the leaders in terms of funds raised in recent years are:

PsiQuantum
SandboxAQ
Quantinuum
IQM Quantum Computers
Xanadu
QuEra Computing
Multiverse Computing
Classiq Technologies
Alice & Bob
Pasqal

Xanadu Going Public

On January 28th, 2026, Canada-based Xanadu also took steps to begin the process of going public. When Quantinuum and Xanadu go public, they, together with IonQ, will represent the most promising first three quantum companies on public markets. This will also mark a period where Big Tech allocates more capex to quantum and makes more strategic acquisitions.

Quantum Modalities

A brief overview: Microsoft has been working hard on a moonshot modality, attempting to build topological qubits using Majorana quasiparticles.

Best Funded Quantum Startups

Not all qubit modalities have a valid or scalable future. Right now the neutral atom and trapped ion approaches seem dominant.
But that could change.

The State of Quantum: Brian Lenahan

Brian is the founder and chair of the Quantum Strategy Institute (QSI), a collaboration of quantum experts and enthusiasts from around the globe enabling business to understand the technology, its potential and its practical applications. He's a quantum author, strategic analyst and expert I've been following for years. He's a consultant, mentor, think tank leader and workshop facilitator, as well as a 3x Amazon bestselling author, public speaker and business leader in the space. He has a quantum strategy for business course here. One of his most reviewed books was called Quantum Boost (2021). If you are curious about the history of quantum computing, read this one (2023).

Where are we today in 2026 in Quantum?

By Brian Lenahan of the Quantum Business newsletter.

In late 2024, I sat down with the quantum leadership team at Microsoft's Seattle campus along with a group of industry analysts and participated in a compelling focus group followed by a lab tour. The previous year, I had walked through the D-Wave quantum lab in Burnaby, BC to see a quantum computer, or "fridge," up close. In Boston, I toured the QuEra lab, which includes a Lego version of their quantum computer. I've watched the evolution of quantum computing with a mix of cautious optimism and relentless scrutiny, as a former bank executive who focuses squarely on results. So, I have been somewhat amazed and thrilled about 2025.

2025 stands out as the year the field decisively shifted from "promising lab demos" to "credible paths toward practical utility." The United Nations designated 2025 the International Year of Quantum Science and Technology, and the recognition amplified global attention and inspired billions of dollars in investment and research breakthroughs.

What is Quantum Computing?

If you're an AI enthusiast, think of quantum computing as the next leap beyond classical GPUs and TPUs (Tensor Processing Units). Today's AI models crunch massive data with billions of parameters using classical bits (0 or 1). Quantum computers (QCs) use units in the form of atoms or particles called qubits (or quantum bits), which can be 0, 1, or both at once (thanks to the crazy world of physics called superposition)—like exploring many possibilities simultaneously (think running through a maze in every direction at once, rather than one path at a time, to solve the maze faster). QCs also exploit entanglement, where qubits link such that changing one instantly affects another, no matter the distance. This enables exponential speedups for certain problems that classical computers (even the biggest supercomputers) struggle with, like simulating large molecules for new drugs or optimizing complex AI training for financial portfolios or large cities.

Classical computers will never be replaced by quantum computers, because they're used for different purposes. Think of it this way: today's computers are good at analysing large amounts of data with few parameters, in functions such as accounting, operations, and basic drug design and testing, whereas quantum computers are best with smaller datasets but a large number of parameters (or complexity). So only the hardest problems would leverage a quantum computer.
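To make the superposition and entanglement ideas above concrete, here is a minimal NumPy sketch of the underlying state-vector math (textbook formalism only; no vendor hardware or library is assumed):

```python
import numpy as np

# One qubit is a length-2 complex vector; n qubits need 2**n amplitudes,
# doubling with every qubit added.
zero = np.array([1, 0], dtype=complex)                        # |0>
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard gate

plus = H @ zero                      # equal superposition of |0> and |1>
print(np.abs(plus) ** 2)             # [0.5 0.5]: both outcomes carried at once

# Entangle two qubits into a Bell state: apply CNOT to (H|0>) tensor |0>
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
bell = CNOT @ np.kron(plus, zero)
print(np.abs(bell) ** 2)             # [0.5 0 0 0.5]: only |00> and |11> survive,
                                     # so measuring one qubit fixes the other
```

The doubling in that first comment is the whole story of scale: simulating even 50 qubits classically means storing 2^50 amplitudes, which is why real hardware is the only path to large qubit counts.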
What's New?

The Evolution of Sensing

As an industry writer, observer and conference presenter, I have the good fortune of connecting with many of the leaders of quantum companies, and their persistence (and now access to greater funding through both private and public capital) has translated into significant advances. And before you mention the oft-heard predictions of quantum technologies being years or decades away, I point you to one pillar, quantum sensing, in navigation especially, which is already in market today, replacing jammable GPS tech, as an example of quantum's commercial progress.

If you've been on a commercial flight where GPS signals were jammed—leaving the aircraft reliant on less precise backup systems—you're likely feeling uneasy about aviation safety in an era of increasing electronic interference. But imagine your plane is equipped with Q-CTRL's Ironstone Opal quantum navigation system. Your confidence would quickly return, because this advanced quantum sensor technology provides positioning that is completely passive, undetectable, and inherently immune to jamming or spoofing. Unlike traditional GPS, which depends on vulnerable satellite signals, Ironstone Opal uses ultrasensitive quantum sensors—enhanced by proprietary AI-powered software—to map subtle variations in Earth's magnetic field (or gravity in related implementations). This geophysical approach delivers GPS-like accuracy without any external signals that adversaries can disrupt. Real-world flight trials have demonstrated it outperforming high-end conventional inertial navigation systems by up to 94x (or more in some cases), ensuring reliable, secure navigation even in fully GPS-denied environments. In short: when GPS fails, quantum-assured navigation doesn't just keep you on course—it restores peace of mind.

Quantum and AI

Quantum technologies and AI do indeed have direct relationships. Quantum speedup could dramatically reduce the time and energy consumed in training massive LLMs or foundation models (e.g., hours instead of weeks), while many AI tasks (hyperparameter tuning, combinatorial problems in logistics and scheduling, portfolio optimization) map to NP-hard problems where quantum algorithms like QAOA (the Quantum Approximate Optimization Algorithm) or quantum annealing show promise. In reverse, AI and machine learning are proving essential for building and operating quantum systems, which are noisy and hard to control. This includes error correction and mitigation, where neural networks decode error syndromes and reinforcement learning discovers better codes. The combination can also optimize pulse sequences to minimize noise (e.g., Google's and others' work on AI-driven quantum control), and AI can aid in discovering new materials or architectures for qubits. Finally, AI can help design better quantum circuits and algorithms.

The Evolution of Computing

The ultimate quantum computer is one with the accuracy of today's classical systems (99.99999999999 percent or better) that can nevertheless handle much more complex problems (such as the aforementioned optimization of traffic in a large city, or a financial trading portfolio) with substantially more variables or parameters. It's true that the best today's QCs can manage is 99.99 percent, yet that level is vastly improved from just 24 months ago. And while collectively we are not yet at large-scale, everyday-useful quantum computers, 2025 delivered verifiable milestones on some of the priority challenges: fixing errors, scaling hardware, and showing real advantages over classical systems.
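As a concrete footnote to the optimization mapping mentioned above: annealers and QAOA both consume problems in QUBO form (quadratic unconstrained binary optimization), minimizing x^T Q x over binary vectors x. Here is a purely classical, brute-force sketch of that form, with an invented 3-asset toy matrix standing in for a real portfolio problem:

```python
import itertools
import numpy as np

# Toy QUBO: minimize x^T Q x over binary x. Diagonal terms reward picking an
# asset; off-diagonal terms penalize picking correlated pairs. The matrix
# values are made up purely for illustration.
Q = np.array([[-3.0,  2.0,  0.5],
              [ 2.0, -2.0,  1.0],
              [ 0.5,  1.0, -1.0]])

best = min((np.array(x) for x in itertools.product([0, 1], repeat=3)),
           key=lambda x: x @ Q @ x)
print(best, best @ Q @ best)   # exhaustive search over all 2**n candidates

# Brute force is fine for n = 3 but hopeless at scale; annealers and QAOA
# attack exactly this energy landscape by other means.
```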
The Big Fix: Error Correction Becomes Real (The Key Breakthrough)

Quantum bits are fragile—tiny disturbances like heat, vibrations, or cosmic rays quickly cause errors (a phenomenon referred to as "decoherence"). This has been the biggest industry roadblock for years. In 2025, the field crossed a major threshold, with quantum error correction (QEC) moving from theory to hardware reality.

Google's Willow chip (105 qubits, superconducting type) achieved the "below-threshold" milestone. By grouping many physical qubits into one reliable "logical" qubit and using clever codes (like surface codes), errors dropped exponentially as more qubits were added. Willow ran a benchmark task in about 5 minutes that would take the world's fastest classical supercomputer 10^25 years—far longer than the universe's age. More excitingly, Google demonstrated Quantum Echoes, the first verifiable quantum advantage on a real, useful algorithm (an out-of-time-order correlator), running approximately 13,000 times faster than classical methods. This wasn't just "faster for fun"—it ties to problems in physics, finance, and potentially AI pattern recognition.

IBM advanced with processors like Quantum Loon (testing fault-tolerant parts) and Nighthawk (high connectivity for complex circuits). Their roadmap targets Quantum Starling by 2029, where 200 logical qubits are expected to be running 100 million error-corrected operations. Microsoft, the same organization I visited, pushed topological qubits (the Majorana 1 chip) for built-in error resistance, and novel 4D codes reduced errors dramatically in simulations. To boot, research exploded, with 120+ peer-reviewed QEC papers in the first 10 months of 2025 (up from 36 in 2024). Error correction shifted from "maybe someday" to "now an engineering challenge," meaning bigger systems get more reliable, not less.

Hardware Progress: Many Approaches Racing Forward

Akin to Beta versus VHS in the early era of videotape, or Android versus iOS today, many "modalities" coexist, and no single "best" way to build qubits has emerged. Superconducting, trapped-ion, neutral-atom, photonic and annealing types of quantum computers exist simultaneously. Superconducting computers from companies like Google, IBM, and Rigetti operate like tiny loops cooled to near absolute zero (colder than outer space); Willow and IBM's Heron showed high fidelity (99.99%+ accurate operations), though nowhere near classical fidelity of up to 15 nines. Trapped-ion and neutral-atom computers from vendors like Quantinuum, IonQ, and Atom Computing hold ions or atoms with lasers for high accuracy; Quantinuum's Helios (launched late 2025) claimed the most accurate commercial system, enabling generative quantum AI, and IonQ hit advantages in drug discovery and chemistry simulations. Photonic computers from companies like PsiQuantum use light particles to perform computations. Annealing computers, predominantly from D-Wave (my other visit mentioned above), specialize in optimization problems.
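The "below-threshold" idea from the error-correction discussion above can be illustrated with the simplest possible code. Willow uses surface codes, but a classical repetition code with majority-vote decoding shows the same qualitative behavior: adding redundancy suppresses logical errors only when the physical error rate is low enough. A small Monte Carlo sketch (a toy stand-in, not Google's QEC stack):

```python
import random

def logical_error_rate(p, n, trials=100_000):
    """Encode one logical bit as n noisy copies, flip each copy with
    probability p, decode by majority vote, and count logical failures."""
    fails = sum(sum(random.random() < p for _ in range(n)) > n // 2
                for _ in range(trials))
    return fails / trials

for p in (0.05, 0.45):   # well below vs. barely below the 0.5 threshold
    print(p, [round(logical_error_rate(p, n), 5) for n in (3, 7, 15)])

# Below threshold (p = 0.05), growing the code collapses the logical error
# rate toward zero; near threshold (p = 0.45), adding qubits barely helps.
```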
Early Wins: Quantum Advantage in Real Applications

2025 saw credible "quantum advantage"—quantum outperforming classical on narrow but useful tasks—such as IonQ/Ansys achieving 12 percent better medical device simulations, Google achieving a 13,000x speedup on a verifiable algorithm, and Quantinuum and others achieving better accuracy in chemistry, materials, and AI-related tasks. These target drug discovery (simulating molecules exactly), climate modeling, finance optimization, and even enhancing AI (e.g., better randomness for secure models, or quantum-inspired training). Quantum sensing (as mentioned above) also progressed, from the MRIs of decades ago to today's ultra-precise, non-jammable magnetometers for navigation.

Investment, Ecosystem, and Reality Check

Funding surged in 2025, with $3.77B in the first nine months (nearly 3x that of 2024), including PsiQuantum's $1 billion and $800 million to Honeywell-owned Quantinuum. Governments poured in billions, with Japan leading the way in 2025. Cloud platforms (IBM, AWS Ocelot, Azure) made experimentation easy. Certain challenges do remain, including significant talent shortages (not just PhDs), high energy and cooling needs, and full fault tolerance for broad use still 5-10+ years away. But 2025 proved scaling is feasible, and physics isn't the blocker anymore.

Scale & Democratizing Quantum Access
AI Is Acing Math Exams Faster Than Scientists Write Them | ieee_spectrum_ai | 25.02.2026 16:00 | 0.736
Embedding sim.: 0.8667
Entity overlap: 0.0345
Title sim.: 0.2353
Time proximity: 0.6964
NLP type: other
NLP organization: Epoch AI
NLP topic: mathematical reasoning
NLP country:


Mathematics is often regarded as the ideal domain for measuring AI progress effectively. Math's step-by-step logic is easy to track, and its definitive, automatically verifiable answers remove any human or subjective factors. But AI systems are improving at such a pace that math benchmarks are struggling to keep up.

Back in November 2024, the nonprofit research organization Epoch AI quietly released FrontierMath, a standardized, rigorous benchmark designed to measure the mathematical reasoning capabilities of the latest AI tools. "It's a bunch of really hard math problems," explains Greg Burnham, a senior researcher at Epoch AI. "Originally, it was 300 problems that we now call tiers 1–3, but having seen AI capabilities really speed up, there was a feeling that we had to run to stay ahead, so now there's a special challenge set of extra carefully constructed problems that we call tier 4." To a rough approximation, tiers 1–4 run from advanced undergraduate through early postdoc-level mathematics.

When FrontierMath was introduced, state-of-the-art AI models were unable to solve more than 2 percent of its problems. Fast forward to today: the best publicly available AI models, such as GPT-5.2 and Claude Opus 4.6, are solving over 40 percent of FrontierMath's 300 tier 1–3 problems, and over 30 percent of the 50 tier 4 problems.

AI takes on Ph.D.-level mathematics

And this dizzying pace of advancement shows no signs of abating. Just recently, for example, Google DeepMind announced that Aletheia, an experimental AI system derived from Gemini Deep Think, achieved publishable Ph.D.-level research results. Though mathematically obscure—it calculated certain structure constants in arithmetic geometry called eigenweights—the result is significant in terms of AI development. "They're claiming it was essentially autonomous, meaning a human wasn't guiding the work, and it's publishable," Burnham says. "It's definitely at the lower end of the spectrum of work that would get a mathematician excited, but it's new—it's something we truly haven't really seen before."

To place this achievement in context, every FrontierMath problem has a known answer that a human has derived. Though a human could probably have achieved Aletheia's result "if they sat down and steeled themselves for a week," says Burnham, no human had ever done so.

Aletheia's results and other recent achievements by AI mathematicians point to new, tougher benchmarks being needed to understand AI capabilities—and fast, because existing ones will soon become irrelevant. "There are easier math benchmarks that are already obsolete, several generations of them," says Burnham. "FrontierMath will probably saturate [Ed. note: this means that state-of-the-art AI models score 100 percent] within the next two years—could be faster."

The First Proof challenge

To begin to address this problem, on 6 February a group of 11 highly distinguished mathematicians proposed the First Proof challenge, a set of 10 extremely difficult math questions that arose naturally in the authors' research, whose proofs are roughly five pages or less and had not been shared with anyone. The First Proof challenge was a preliminary effort to assess the capabilities of AI systems in solving research-level math questions on their own. Generating serious buzz in the math community, professional and amateur mathematicians, and teams including OpenAI, all stepped up to the challenge.
But by the time the authors posted the proofs on 14 February, no one had submitted correct solutions to all 10 problems. In fact, far from it. The authors themselves had solved only two of the 10 problems using Gemini 3.0 Deep Think and ChatGPT 5.2 Pro, and most outside submissions fared little better, apart from OpenAI and a small Aletheia team at Google DeepMind. With "limited human supervision," OpenAI's most advanced internal AI system solved five of the 10 problems, with Aletheia achieving similar outcomes—results met with a spectrum of emotions by different members of the mathematics community, from awe to disappointment. The team behind First Proof plans an even tougher second round on 14 March.

A new frontier for AI

"I think First Proof is terrific: It's as close as you could realistically get to putting an AI system in the shoes of a mathematician," says Burnham. Though he admires how First Proof tests AI's mathematical utility for a wide range of mathematics and mathematicians, Epoch AI has its own new approach to testing: FrontierMath: Open Problems. Uniquely, the pilot benchmark consists of 16 open problems (with more to follow) from research mathematics that professional mathematicians have tried and failed to solve. Since Open Problems' release on 27 January, none have been solved by an AI.

"With Open Problems, we've tried to make it more challenging," says Burnham. "The baseline on its own would be publishable, at least in a specialty journal." What's more, each question is designed so that it can be automatically graded. "This is a bit counterintuitive," Burnham adds. "No one knows the answers, but we have a computer program that will be able to judge whether the answer is right or not."

Burnham sees First Proof and Open Problems as complementary. "I would say understanding AI capabilities is a more-the-merrier situation," he adds. "AI has gotten to the point where it's—in some ways—better than most Ph.D. students, so we need to pose problems where the answer would be at least moderately interesting to some human mathematicians, not because AI was doing it but because it's mathematics that human mathematicians care about."
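Epoch AI has not published its grading harness, but the "judge an unknown answer" idea Burnham describes has a familiar shape in mathematics: ask for a witness whose defining property is cheap to check, even when no correct witness is known in advance. A hypothetical toy grader in that spirit (the problem and numbers here are invented purely for illustration):

```python
# Hypothetical sketch: the grader stores no answer key, only a cheap
# verification of a defining property. Toy problem (invented): find a
# nontrivial square root of 1 modulo the composite m = 101 * 103 = 10403.
M = 101 * 103

def grade(submission: int) -> bool:
    """Accept any x with x*x = 1 (mod M) and x not in {1, M-1}.
    Verifying a submission is instant even though the grader was
    never told a correct answer."""
    x = submission % M
    return x not in (1, M - 1) and (x * x) % M == 1

print(grade(1))     # False: the trivial root doesn't count
print(grade(102))   # True: 102**2 = 10404 = M + 1
```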
AI’s Math Tricks Don’t Work for Scientific Computing | ieee_spectrum_ai | 23.02.2026 13:00 | 0.678
Embedding sim.: 0.7699
Entity overlap: 0
Title sim.: 0.1798
Time proximity: 0.9853
NLP type: other
NLP organization: Openchip
NLP topic: ai infrastructure
NLP country: Spain


AI has driven an explosion of new number formats—the ways in which numbers are represented digitally. Engineers are looking at every possible way to save computation time and energy, including shortening the number of bits used to represent data. But what works for AI doesn't necessarily work for scientific computing, be it computational physics, biology, fluid dynamics, or engineering simulations. IEEE Spectrum spoke with Laslo Hunhold, who recently joined Barcelona-based Openchip as an AI engineer, about his efforts to develop a bespoke number format for scientific computing.

LASLO HUNHOLD
Laslo Hunhold is a senior AI accelerator engineer at the Barcelona-based startup Openchip. He recently completed a Ph.D. in computer science at the University of Cologne, in Germany.

What makes number formats interesting to you?

Laslo Hunhold: I don't know another example of a field that so few are interested in but has such a high impact. If you make a number format that's 10 percent more [energy] efficient, it can translate to all applications being 10 percent more efficient, and you can save a lot of energy.

Why are there so many new number formats?

Hunhold: For decades, computer users had it really easy. They could just buy new systems every few years, and they would get performance benefits for free. But this hasn't been the case for the last 10 years. In computers, you have a certain number of bits used to represent a single number, and for years the default was 64 bits. And for AI, companies noticed that they don't need 64 bits for each number. So they had a strong incentive to go down to 16, 8, or even 2 bits [to save energy]. The problem is, the dominating standard for representing numbers in 64 bits is not well designed for lower bit counts. So the AI field came up with new formats that are more tailored toward AI.

Why does AI need different number formats than scientific computing?

Hunhold: Scientific computing needs high dynamic range: You need very large numbers, or very small numbers, and very high accuracy in both cases. The 64-bit standard has an excessive dynamic range, and it is many more bits than you need most of the time. It's different with AI. The numbers usually follow a specific distribution, and you don't need as much accuracy.

What makes a number format "good"?

Hunhold: You have infinite numbers but only finite bit representations. So you need to decide how you assign numbers. The most important part is to represent numbers that you're actually going to use. Because if you represent a number that you don't use, you've wasted a representation. The simplest thing to look at is the dynamic range. The next is distribution: How do you assign your bits to certain values? Do you have a uniform distribution, or something else? There are infinite possibilities.

What motivated you to introduce the takum number format?

Hunhold: Takums are based on posits. In posits, the numbers that get used more frequently can be represented with more density. But posits don't work for scientific computing, and this is a huge issue. They have a high density for [numbers close to one], which is great for AI, but the density falls off sharply once you look at larger or smaller values. People have been proposing dozens of number formats in the last few years, but takums are the only number format that's actually tailored for scientific computing.
I found the dynamic range of values used in scientific computations, looking across all the fields, and designed takums such that when you take away bits, you don't reduce that dynamic range.

This article appears in the March 2026 print issue as "Laslo Hunhold."
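Takums and posits aren't available in standard numerical libraries, but the range-versus-bits tradeoff Hunhold describes is easy to see with the IEEE formats NumPy does ship:

```python
import numpy as np

# Fewer bits shrink both the representable range and the precision. This is
# the tradeoff that low-bit AI formats accept and scientific codes cannot.
for dtype in (np.float64, np.float32, np.float16):
    info = np.finfo(dtype)
    print(f"{info.bits:2d} bits: max ~{info.max:.3g}, "
          f"smallest normal ~{info.smallest_normal:.3g}, eps {info.eps:.3g}")

# 1e5 exceeds float16's maximum of 65504, so the value simply overflows
# (NumPy may emit a RuntimeWarning and produce inf).
print(np.float16(1e5))
```

Takums aim at exactly this failure mode: by construction, shaving bits off a takum narrows precision but preserves the dynamic range scientific workloads depend on.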
AI to help researchers see the bigger picture in cell biology | mit_news_ai | 25.02.2026 10:00 | 0.646
Embedding sim.: 0.7735
Entity overlap: 0
Title sim.: 0.0577
Time proximity: 0.7321
NLP type: scientific_publication
NLP organization: Broad Institute of MIT and Harvard
NLP topic: machine learning
NLP country: United States


Studying gene expression in a cancer patient's cells can help clinical biologists understand the cancer's origin and predict the success of different treatments. But cells are complex and contain many layers, so how the biologist conducts measurements affects which data they can obtain. For instance, measuring proteins in a cell could yield different information about the effects of cancer than measuring gene expression or cell morphology. Where in the cell the information comes from matters.

But to capture complete information about the state of the cell, scientists often must conduct many measurements using different techniques and analyze them one at a time. Machine-learning methods can speed up the process, but existing methods lump all the information from each measurement modality together, making it difficult to figure out which data came from which part of the cell.

To overcome this problem, researchers at the Broad Institute of MIT and Harvard and ETH Zurich/Paul Scherrer Institute (PSI) developed an artificial intelligence-driven framework that learns which information about a cell's state is shared across different measurement modalities and which information is unique to a particular measurement type. By pinpointing which information came from which cell parts, the approach provides a more holistic view of the cell's state, making it easier for a biologist to see the complete picture of cellular interactions. This could help scientists understand disease mechanisms and track the progression of cancer, neurodegenerative disorders such as Alzheimer's, and metabolic diseases like diabetes.

"When we study cells, one measurement is often not sufficient, so scientists develop new technologies to measure different aspects of cells. While we have many ways of looking at a cell, at the end of the day we only have one underlying cell state. By putting the information from all these measurement modalities together in a smarter way, we could have a fuller picture of the state of the cell," says lead author Xinyi Zhang SM '22, PhD '25, a former graduate student in the MIT Department of Electrical Engineering and Computer Science (EECS) and an affiliate of the Eric and Wendy Schmidt Center at the Broad Institute of MIT and Harvard, who is now a group leader at AITHYRA in Vienna, Austria.

Zhang is joined on a paper about the work by G.V. Shivashankar, a professor in the Department of Health Sciences and Technology at ETH Zurich and head of the Laboratory of Multiscale Bioimaging at PSI; and senior author Caroline Uhler, a professor in EECS and the Institute for Data, Systems, and Society (IDSS) at MIT, member of MIT's Laboratory for Information and Decision Systems (LIDS), and director of the Eric and Wendy Schmidt Center at the Broad Institute. The research appears today in Nature Computational Science.

Manipulating multiple measurements

There are many tools scientists can use to capture information about a cell's state. For instance, they can measure RNA to see if the cell is growing, or they can measure chromatin morphology to see if the cell is dealing with external physical or chemical signals.

"When scientists perform multimodal analysis, they gather information using multiple measurement modalities and integrate it to better understand the underlying state of the cell. Some information is captured by one modality only, while other information is shared across modalities.
To fully understand what is happening inside the cell, it is important to know where the information came from," says Shivashankar.

Often, the only way for scientists to sort this out is to conduct multiple individual experiments and compare the results. This slow and cumbersome process limits the amount of information they can gather. In the new work, the researchers built a machine-learning framework that specifically understands which information overlaps between different modalities, and which information is unique to a particular modality but not captured by others. "As a user, you can simply input your cell data and it automatically tells you which data are shared and which data are modality-specific," Zhang says.

To build this framework, the researchers rethought the typical way machine-learning models are designed to capture and interpret multimodal cellular measurements. Usually these methods, known as autoencoders, have one model for each measurement modality, and each model encodes a separate representation for the data captured by that modality. The representation is a compressed version of the input data that discards any irrelevant details. The MIT method has a shared representation space where data that overlap between multiple modalities are encoded, as well as separate spaces where unique data from each modality are encoded. In essence, one can think of it like a Venn diagram of cellular data. The researchers also used a special, two-step training procedure that helps their model handle the complexity involved in deciding which data are shared across multiple data modalities. After training, the model can identify which data are shared and which are unique when fed cell data it has never seen before.

Distinguishing data

In tests on synthetic datasets, the framework correctly captured known shared and modality-specific information. When they applied their method to real-world single-cell datasets, it comprehensively and automatically distinguished between gene activity captured jointly by two measurement modalities, such as transcriptomics and chromatin accessibility, while also correctly identifying which information came from only one of those modalities. In addition, the researchers used their method to identify which measurement modality captured a certain protein marker that indicates DNA damage in cancer patients. Knowing where this information came from would help a clinical scientist determine which technique they should use to measure that marker.

"There are too many modalities in a cell and we can't possibly measure them all, so we need a prediction tool. But then the question is: Which modalities should we measure and which modalities should we predict? Our method can answer that question," Uhler says.

In the future, the researchers want to enable the model to provide more interpretable information about the state of the cell. They also want to conduct additional experiments to ensure it correctly disentangles cellular information, and to apply the model to a wider range of clinical questions. "It is not sufficient to just integrate the information from all these modalities," Uhler says. "We can learn a lot about the state of a cell if we carefully compare the different modalities to understand how different components of cells regulate each other."

This research is funded, in part, by the Eric and Wendy Schmidt Center at the Broad Institute, the Swiss National Science Foundation, the U.S. National Institutes of Health, the U.S.
Office of Naval Research, AstraZeneca, the MIT-IBM Watson AI Lab, the MIT J-Clinic for Machine Learning and Health, and a Simons Investigator Award.
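The shared-plus-private decomposition described above can be sketched in a few lines. This is a generic PyTorch illustration of the "Venn diagram" idea, with toy dimensions assumed for two hypothetical modalities; it is not the paper's actual architecture, losses, or two-step training procedure:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedPrivateAE(nn.Module):
    """Toy multimodal autoencoder: each modality gets its own private latent,
    plus a shared latent meant to hold information present in both."""
    def __init__(self, dim_a, dim_b, d_shared=8, d_private=4):
        super().__init__()
        self.enc_a = nn.Linear(dim_a, d_shared + d_private)
        self.enc_b = nn.Linear(dim_b, d_shared + d_private)
        self.dec_a = nn.Linear(d_shared + d_private, dim_a)
        self.dec_b = nn.Linear(d_shared + d_private, dim_b)
        self.d_shared = d_shared

    def forward(self, xa, xb):
        sa, pa = self.enc_a(xa).split([self.d_shared, 4], dim=1)
        sb, pb = self.enc_b(xb).split([self.d_shared, 4], dim=1)
        # Each modality is reconstructed from its private code plus the shared code
        rec_a = self.dec_a(torch.cat([sa, pa], dim=1))
        rec_b = self.dec_b(torch.cat([sb, pb], dim=1))
        # The alignment term pushes overlapping information into the shared space,
        # leaving modality-specific signal in the private codes
        return (F.mse_loss(rec_a, xa) + F.mse_loss(rec_b, xb)
                + F.mse_loss(sa, sb))

model = SharedPrivateAE(dim_a=100, dim_b=50)   # e.g., RNA counts and morphology features
loss = model(torch.randn(32, 100), torch.randn(32, 50))
loss.backward()
```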
Mixing generative AI with physics to create personal items that work in the real world | mit_news_ai | 25.02.2026 19:40 | 0.632
Embedding sim.: 0.7481
Entity overlap: 0.0333
Title sim.: 0.1239
Time proximity: 0.6746
NLP type: scientific_publication
NLP organization: MIT
NLP topic: generative ai
NLP country: United States


Have you ever had an idea for something that looked cool, but wouldn't work well in practice? When it comes to designing things like decor and personal accessories, generative artificial intelligence (genAI) models can relate. They can produce creative and elaborate 3D designs, but when you try to fabricate such blueprints into real-world objects, they usually don't sustain everyday use.

The underlying problem is that genAI models often lack an understanding of physics. While tools like Microsoft's TRELLIS system can create a 3D model from a text prompt or image, its design for a chair, for example, may be unstable, or have disconnected parts. The model doesn't fully understand what your intended object is designed to do, so even if your seat can be 3D printed, it would likely fall apart under the force of someone sitting down.

In an attempt to make these designs work in the real world, researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) are giving generative AI models a reality check. Their "PhysiOpt" system augments these tools with physics simulations, making blueprints for personal items such as cups, keyholders, and bookends work as intended when they're 3D printed. It rapidly tests if the structure of your 3D model is viable, gently modifying smaller shapes while ensuring the overall appearance and function of the design is preserved.

You can simply type what you want to create and what it'll be used for into PhysiOpt, or upload an image to the system's user interface, and in roughly half a minute, you'll get a realistic 3D object to fabricate. For example, CSAIL researchers prompted it to generate a "flamingo-shaped glass for drinking," which they 3D printed into a drinking glass with a handle and base resembling the tropical bird's leg. As the design was generated, PhysiOpt made tiny refinements to ensure the design was structurally sound.

"PhysiOpt combines GenAI and physically-based shape optimization, helping virtually anyone generate the designs they want for unique accessories and decorations," says MIT electrical engineering and computer science (EECS) PhD student and CSAIL researcher Xiao Sean Zhan SM '25, who is a co-lead author on a paper presenting the work. "It's an automatic system that allows you to make the shape physically manufacturable, given some constraints. PhysiOpt can iterate on its creations as often as you'd like, without any extra training."

This approach enables you to create a "smart design," where the AI generator crafts your item based on users' specifications, while considering functionality. You can plug in your favorite 3D generative AI model, and after typing out what you want to generate, you specify how much force or weight the object should handle. It's a neat way to simulate real-world use, such as predicting whether a hook will be strong enough to hold up your coat. Users also specify what materials they'll fabricate the item with (such as plastics or wood), and how it's supported — for instance, a cup stands on the ground, whereas a bookend leans against a collection of books.

Given the specifics, PhysiOpt begins to iteratively optimize the object. Under the hood, it runs a physics simulation called a "finite element analysis" to stress test the design. This comprehensive scan provides a heat map over your 3D model, which indicates where your blueprint isn't well-supported.
If you were generating, say, a birdhouse, you may find that the support beams under the house were colored bright red, meaning the house will crumble if it's not reinforced.

PhysiOpt can create even bolder pieces. Researchers saw this versatility firsthand when they fabricated a steampunk (a style that blends Victorian and futuristic aesthetics) keyholder featuring intricate, robotic-looking hooks, and a "giraffe table" with a flat back that you can place items on. But how did it know what "steampunk" is, or even how such a unique piece of furniture should look? Remarkably, the answer isn't extensive training — at least, not from the researchers. Instead, PhysiOpt uses a pre-trained model that's already seen thousands of shapes and objects.

"Existing systems often need lots of additional training to have a semantic understanding of what you want to see," adds co-lead author Clément Jambon, who is also an MIT EECS PhD student and CSAIL researcher. "But we use a model with that feel for what you want to create already baked in, so PhysiOpt is training-free."

By working with a pre-trained model, PhysiOpt can use "shape priors," or knowledge of how shapes should look based on earlier training, to generate what users want to see. It's sort of like an artist recreating the style of a famous painter. Their expertise is rooted in closely studying a variety of artistic approaches, so they'll likely be able to mirror that particular aesthetic. Likewise, a pre-trained model's familiarity with shapes helps it generate 3D models.

CSAIL researchers observed that PhysiOpt's visual know-how helped it create 3D models more efficiently than "DiffIPC," a comparable method that simulates and optimizes shapes. When both approaches were tasked with generating 3D designs for items like chairs, CSAIL's system was nearly 10 times faster per iteration, while creating more realistic objects.

PhysiOpt presents a potential bridge between ideas and real-world personal items. What you may think is a great idea for a coffee mug, for instance, could soon make the jump from your computer screen to your desk. And while PhysiOpt already does the stress-testing for designers, it may soon be able to predict constraints such as loads and boundaries, instead of users needing to provide those details. This more autonomous, common-sense approach could be made possible by incorporating vision language models, which combine an understanding of human language with computer vision. What's more, Zhan and Jambon intend to remove the artifacts, or random fragments that occasionally appear in PhysiOpt's 3D models, by making the system even more physics-aware. The MIT scientists are also considering how they can model more complex constraints for various fabrication techniques, such as minimizing overhanging components for 3D printing.

Zhan and Jambon wrote their paper with MIT-IBM Watson AI Lab Principal Research Scientist Kenney Ng '89, SM '90, PhD '00 and two CSAIL colleagues: undergraduate researcher Evan Thompson and Assistant Professor Mina Konaković Luković, who is a principal investigator at the lab. The researchers' work was supported, in part, by the MIT-IBM Watson AI Laboratory and the Wistron Corp. They presented it in December at the Association for Computing Machinery's SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia.
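The simulate-then-repair loop described above can be caricatured in a few lines. The sketch below swaps real finite element analysis for an invented one-dimensional stress proxy (a cantilever whose per-segment stress falls with thickness squared), but it shows the same pattern PhysiOpt's outer loop follows: scan for overloaded regions, reinforce only those, and repeat until the design passes. It is a conceptual toy, not PhysiOpt's algorithm:

```python
import numpy as np

def stress(thickness, load=50.0):
    """Toy stand-in for FEA: bending stress grows with the lever arm toward
    the wall and falls with segment thickness squared."""
    lever = np.arange(len(thickness), 0, -1)
    return load * lever / thickness**2

thickness = np.full(10, 1.0)    # initial uniform design
limit = 120.0                   # assumed material strength budget
for step in range(100):
    s = stress(thickness)
    weak = s > limit            # the "bright red" regions of the heat map
    if not weak.any():
        break                   # design passes the stress test
    thickness[weak] *= 1.05     # reinforce only where needed, preserving the rest

print(step, np.round(thickness, 2))   # segments near the wall end up thickest
```

The design choice mirrored here is the key one in the article: the repair step is local and gentle, so the object's overall look survives while its weak spots are quietly beefed up.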