Build with Lyria 3, our newest music generation model
Source: google | 25.03.2026 16:00 | Score: 1

| Embedding sim. | 1 |
| Entity overlap | 1 |
| Title sim. | 1 |
| Time proximity | 1 |
| NLP type | product_launch |
| NLP organization | Google DeepMind |
| NLP topic | generative ai |
| NLP country | |
Build with Lyria 3, our newest music generation model
Mar 25, 2026
Our newest music generation model is now available to developers in public preview.
Alisa Fortin
Product Manager, Google DeepMind
Guillaume Vernade
Gemini Developer Advocate, Google DeepMind
General summary
Lyria 3 music generation models are here for developers through the Gemini API and Google AI Studio. You can choose between Lyria 3 Pro for full songs or Lyria 3 Clip for shorter clips, and control tempo, lyrics, and even use images to influence the music. Start experimenting in AI Studio today and check out the documentation and cookbook to start coding.
Summaries were generated by Google AI. Generative AI is experimental.
Bullet points
"Build with Lyria 3" introduces Google's new music generation models for developers.
Lyria 3 Pro creates full songs, while Lyria 3 Clip makes shorter, high-quality clips.
Control tempo, lyrics, and mood using text prompts or even images as input.
Use Google AI Studio to experiment with Lyria 3 and build custom music apps.
Lyria 3 adds a digital watermark to tracks, ensuring transparency and trust.
Lyria 3 and Lyria 3 Pro, our music generation models, are rolling out now to developers in public preview through the Gemini API and a new audio experience in Google AI Studio.
Lyria 3 is designed to combine deep musical awareness with structural coherence. This allows developers to build apps that offer high-fidelity compositions, including vocals, verses and choruses, that maintain musical consistency from the first note to the last.
Studio quality and speed
Developers can now choose between two distinct model variants designed to meet specific production and latency requirements:
Lyria 3 Pro (lyria-3-pro-preview): Our premier model for full-length song generation creates tracks up to approximately three minutes long. These tracks have professional-grade structural awareness, making it the standard for studio-quality, premium output.
Lyria 3 Clip (lyria-3-clip-preview): Optimized for speed and high-volume requests, this variant generates high-quality 30-second clips. It is the ideal choice for rapid prototyping, background loops and social media assets.
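As a rough sketch of how an app might route between the two variants, consider the helper below. The model IDs come from the article; the decision rule itself is illustrative and not part of the Gemini API.

```python
# Hypothetical sketch: choosing a Lyria 3 variant by use case.
# Model IDs are from the announcement; the routing logic is an assumption.

def pick_lyria_model(need_full_song: bool, latency_sensitive: bool) -> str:
    """Return the preview model ID that best fits the request profile."""
    if need_full_song and not latency_sensitive:
        return "lyria-3-pro-preview"   # ~3-minute, studio-quality tracks
    return "lyria-3-clip-preview"      # fast, high-volume 30-second clips

print(pick_lyria_model(need_full_song=True, latency_sensitive=False))
# Background loops or social assets: speed matters more than length.
print(pick_lyria_model(need_full_song=False, latency_sensitive=True))
```

In practice the same split applies: premium, structured output goes to Pro; rapid prototyping and high-volume requests go to Clip.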
Both models support realistic vocals that convey expressive nuance, plus improved clarity for more natural sounds. Developers can also explore global languages and genres. Generate vocals in different languages, and create music spanning genres from pop to funk to Motown.
Precision control and multimodal input
Lyria 3 introduces granular controls that allow you to direct the model with precision through natural language prompts:
Tempo conditioning: Set a specific tempo (e.g., fast or slow) with high accuracy to ensure the music fits your application’s rhythm.
Time-aligned lyrics: You can outline the progression of a song in your prompt and control when lyrics start and end within a track.
Multimodal image-to-music input: Beyond text, Lyria 3 supports multimodal inputs. You can provide an image to influence the mood, style and atmosphere of the audio.
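A hedged sketch of how these controls might be bundled into one request. The field names (`tempo`, `lyrics`, `image`) and the request shape are illustrative assumptions, not the actual Gemini API schema; consult the Music Generation Guide for the real one.

```python
import base64

def build_lyria_request(prompt, tempo=None, lyric_sections=None, image_bytes=None):
    """Assemble an illustrative generation request.

    All field names here are assumptions for the sketch, not the real schema.
    """
    request = {"model": "lyria-3-pro-preview", "prompt": prompt}
    if tempo:
        request["tempo"] = tempo            # tempo conditioning, e.g. "fast"
    if lyric_sections:
        request["lyrics"] = lyric_sections  # time-aligned lyric spans
    if image_bytes:
        # image-to-music: encode the image for transport
        request["image"] = base64.b64encode(image_bytes).decode("ascii")
    return request

req = build_lyria_request(
    "Uplifting synth-pop with a bright chorus",
    tempo="fast",
    lyric_sections=[{"start_s": 15, "end_s": 45, "text": "First verse"}],
)
print(sorted(req))   # ['lyrics', 'model', 'prompt', 'tempo']
```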
Lyria 3 in action
To show how you could incorporate this model into an application, we built some examples in Google AI Studio:
Background music for videos: This demo app allows users to upload a video that is analyzed by Gemini 3 Flash to generate a descriptive prompt for a custom soundtrack. Lyria then uses this prompt to compose a matching instrumental that serves as synchronized background music for the video.
Alarm clock: This demo app wakes you up each morning with a new song that covers relevant information like the weather, your location, the time and date, and events on your calendar.
Try Lyria 3 in Google AI Studio
To help you start experimenting immediately, we are also launching a new music generation experience in AI Studio. Using a paid API key, this dedicated workspace provides a first-class environment to create with Lyria 3 and explore its advanced features like image-to-music.
Inside the playground, you can explore two powerful creation modes for music:
Text mode: Describe the music you want to hear using natural language, including parameters like Tempo or Key.
Composer mode: Build your song section by section, from intro to verses to bridges and more. This mode gives you granular control to set timing, intensity and descriptions for each part individually.
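The section-by-section idea behind Composer mode can be sketched as plain data: each section carries timing, intensity and a description. The dict layout below is an illustrative assumption, not AI Studio's actual format.

```python
# Illustrative section plan of the kind Composer mode lets you express.
# Field names and values are invented for the sketch.

song_plan = [
    {"section": "intro",  "start_s": 0,  "intensity": "low",
     "description": "sparse piano over vinyl crackle"},
    {"section": "verse",  "start_s": 12, "intensity": "medium",
     "description": "add bass and brushed drums"},
    {"section": "chorus", "start_s": 40, "intensity": "high",
     "description": "full band, layered vocal harmonies"},
    {"section": "bridge", "start_s": 70, "intensity": "medium",
     "description": "strip back to keys and voice"},
]

# Sections should be ordered in time so the track has a coherent arc.
starts = [s["start_s"] for s in song_plan]
assert starts == sorted(starts)
print([s["section"] for s in song_plan])   # ['intro', 'verse', 'chorus', 'bridge']
```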
Start composing today
Lyria 3 Clip and Lyria 3 Pro are now available in public preview for developers globally.
We have been developing our music generation tools in close partnership with industry experts to ensure AI serves as an additive force for human creativity. Additionally, every track generated by Lyria 3 includes a SynthID digital watermark. This technology maintains transparency and trust by allowing anyone to identify and verify audio generated by Google AI, even after the audio has been modified.
Try it in Google AI Studio: Use the model selection dropdown to select Lyria 3 (30s) or Lyria 3 Pro (Full Song) and start experimenting in the playground.
Explore the documentation: Visit the Music Generation Guide for prompt guides, API references and code snippets to jumpstart your integration.
Start coding with the cookbook: Check the cookbook guide to get started with the API.
Try the demo applications: Lyria Studio, Lyria Rhythm, Alarm Clock, Background Music for Videos.
POSTED IN: Developer tools, AI
Google AI Releases Veo 3.1 Lite: Giving Developers Low Cost High Speed Video Generation via The Gemini API
Source: marktechpost | 01.04.2026 06:22 | Score: 0.81

| Embedding sim. | 0.911 |
| Entity overlap | 0.4444 |
| Title sim. | 0.2578 |
| Time proximity | 0.9144 |
| NLP type | product_launch |
| NLP organization | Google |
| NLP topic | generative ai |
| NLP country | |
Google has announced the release of Veo 3.1 Lite, a new model tier within its generative video portfolio designed to address the primary bottleneck for production-scale deployments: pricing. While the generative video space has seen rapid progress in visual fidelity, the cost per second of generated content has remained high, often prohibitive for developers building high-volume applications.
Veo 3.1 Lite is now available via the Gemini API and Google AI Studio for users in the paid tier. By offering the same generation speed as the existing Veo 3.1 Fast model at approximately half the cost, Google is positioning this model as the standard for developers focused on programmatic video generation and iterative prototyping.
https://blog.google/innovation-and-ai/technology/ai/veo-3-1-lite/
Technical Architecture: The Diffusion Transformer (DiT)
The most significant aspect of the Veo 3.1 family is its underlying Diffusion Transformer (DiT) architecture. Traditional generative video models often relied on U-Net-based diffusion, which can struggle with high-dimensional data and long-range temporal dependencies.
Veo 3.1 Lite utilizes a transformer-based backbone that operates on spatio-temporal patches . In this architecture, video frames are not processed as static 2D images but as a continuous sequence of tokens in a latent space. By applying self-attention across these patches, the model maintains better temporal consistency . This ensures that objects, lighting, and textures remain coherent across the duration of the clip, reducing the artifacts commonly seen in earlier models.
The model performs its computation in a compressed latent space rather than pixel space. This allows the model to handle the high computational demands of video generation while maintaining a lower memory footprint. For developers, this translates to a model that can generate high-definition content without the exponential increase in compute time that usually accompanies resolution scaling.
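To make the patch idea concrete, here is a small NumPy sketch of cutting a latent video tensor into non-overlapping spatio-temporal patches, one token per patch, the way a DiT backbone tokenizes its input. The tensor and patch sizes are invented for illustration; they are not Veo's actual dimensions.

```python
import numpy as np

# Illustrative patchification: a latent video of shape (T, H, W, C) is cut
# into non-overlapping spatio-temporal patches, each flattened into one
# token. All sizes below are made up for the example.
T, H, W, C = 8, 32, 32, 4          # latent frames, height, width, channels
pt, ph, pw = 2, 8, 8               # patch size along time, height, width

latent = np.random.randn(T, H, W, C)
tokens = (latent
          .reshape(T // pt, pt, H // ph, ph, W // pw, pw, C)
          .transpose(0, 2, 4, 1, 3, 5, 6)      # group the patch axes together
          .reshape(-1, pt * ph * pw * C))      # one row per token

print(tokens.shape)   # (64, 512): 4*4*4 patches, each 2*8*8*4 values
```

Self-attention then runs over these 64 tokens jointly across space and time, which is what lets the model keep objects and lighting coherent from frame to frame.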
Performance and Output Specifications
Veo 3.1 Lite provides specific parameters for resolution and duration, allowing developers to integrate it into structured workflows. Unlike the flagship Veo 3.1 model, which supports 4K resolution, the Lite version is optimized for high-definition (HD) outputs.
Supported Resolutions: 720p and 1080p.
Aspect Ratios: Native support for both landscape (16:9) and portrait (9:16) orientations.
Clip Durations: Developers can specify generation lengths of 4, 6, or 8 seconds.
Prompt Adherence: The model is optimized for ‘Cinematic Control,’ recognizing technical directives such as ‘pan,’ ’tilt,’ and specific lighting instructions.
The ‘Lite’ tag does not refer to a reduction in generation speed compared to the ‘Fast’ tier. Instead, it refers to an optimized parameter set that allows Google to offer the model at a significantly lower price point while maintaining the same low-latency performance characteristics as Veo 3.1 Fast.
The Pricing Shift: Democratizing Video Inference
The core value proposition of Veo 3.1 Lite is its cost structure. In the current market, high-quality video inference often costs several dollars per minute of footage, making it difficult to justify for applications like dynamic ad generation or social media automation.
Veo 3.1 Lite pricing is structured as follows:
720p: $0.05 per second.
1080p: $0.08 per second.
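At these rates, per-clip cost is easy to compute from the supported durations. The helper below is illustrative, not part of any Google SDK; the numbers are the ones quoted above.

```python
# Quick cost check using the per-second rates and clip durations quoted above.
PRICE_PER_SECOND = {"720p": 0.05, "1080p": 0.08}   # USD, per the article
VALID_DURATIONS = {4, 6, 8}                         # seconds, per the spec

def clip_cost(duration_s, resolution):
    """Return the USD cost of one Veo 3.1 Lite clip."""
    if duration_s not in VALID_DURATIONS:
        raise ValueError(f"unsupported duration: {duration_s}s")
    return round(duration_s * PRICE_PER_SECOND[resolution], 2)

print(clip_cost(8, "1080p"))   # 0.64 -> an 8-second 1080p clip costs $0.64
print(clip_cost(4, "720p"))    # 0.2  -> a 4-second 720p clip costs $0.20
```

So even the longest, highest-resolution clip stays well under a dollar, which is the point of the tier.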
Deployment via Gemini API and AI Studio
Access is handled through the Gemini API, which allows video generation to be integrated into existing Python or Node.js applications using standard REST or gRPC calls.
One critical technical feature for enterprise developers is the inclusion of SynthID . Developed by Google DeepMind, SynthID is a tool for watermarking and identifying AI-generated content. It embeds a digital watermark directly into the pixels of the video that is imperceptible to the human eye but detectable by specialized software. This is a mandatory component for developers concerned with safety, compliance, and distinguishing synthetic media from captured footage.
Key Takeaways
Half the Cost, Same Speed: Offers the same low-latency performance as the ‘Fast’ tier at less than 50% of the price ($0.05/sec for 720p).
Scalable HD Output: Supports 720p and 1080p resolutions in 4, 6, or 8-second clips with native 16:9 and 9:16 aspect ratios.
Architecture: Built on a Diffusion Transformer (DiT) using spatio-temporal patches for superior motion and physical consistency.
Developer Ready: Available now via the Gemini API (paid tier) and Google AI Studio, featuring built-in SynthID digital watermarking.
Lyria 3 Pro: Create longer tracks in more Google products
Source: google | 25.03.2026 16:00 | Score: 0.759

| Embedding sim. | 0.8672 |
| Entity overlap | 0.2963 |
| Title sim. | 0.117 |
| Time proximity | 1 |
| NLP type | product_launch |
| NLP organization | Google DeepMind |
| NLP topic | generative ai |
| NLP country | |
Lyria 3 Pro: Create longer tracks in more Google products
Mar 25, 2026
Introducing Lyria 3 Pro, which unlocks longer tracks with structural awareness. We’re also bringing Lyria to more Google products and surfaces.
Myriam Hamed Torres
Senior Product Manager, Google DeepMind
General summary
Google is releasing Lyria 3 Pro, an advanced music generation model, across more Google products. You can now create longer tracks up to 3 minutes and customize elements like verses and choruses. Lyria 3 Pro is available in Vertex AI, Google AI Studio, the Gemini API, Google Vids, the Gemini app, and ProducerAI, so you can scale music production and experiment with different styles.
Bullet points
"Lyria 3 Pro" lets you create longer, more customized music tracks in more Google products.
Lyria 3 Pro creates songs up to 3 minutes long, with intros, verses, choruses, and bridges.
You can now access Lyria 3 Pro in Vertex AI, Google AI Studio, Gemini, and Google Vids.
Google is partnering with musicians to responsibly develop AI music tools like Lyria 3 Pro.
Lyria 3 Pro outputs are watermarked and designed to avoid mimicking existing artists.
Basic explainer
Google made a new AI model called Lyria 3 Pro that makes music. It can now create longer songs up to 3 minutes and understand song structure better. People can use it in different Google apps like Vids and Gemini to make custom music. Google is working with musicians to make sure the AI helps them be creative.
Last month, we introduced Lyria 3, featuring custom music generation designed to spark creative expression. Now, we’re bringing our most advanced music generation model to more Google products, and introducing Lyria 3 Pro. This advanced version allows the creation of tracks up to 3 minutes long, with customization and creative control. Lyria 3 Pro better understands musical composition, so you can now prompt for specific elements like intros, verses, choruses and bridges. It’s great for experimenting with different styles or generating songs with complex transitions.
Audio: Lyria 3 Pro Musical Overview
Providing new places to generate music
High-quality music generation should be accessible wherever creativity happens. Whether you are an app developer, a business or music professional, or a creator, these integrations allow you to use Lyria’s advanced musical awareness to scale your production.
Vertex AI: Lyria 3 Pro is now in public preview on Vertex AI for businesses that require on-demand audio at scale. It gives organizations the ability to scale high-fidelity production, from rapidly generating bespoke soundtracks for gaming to integrating into creative tools, music and video platforms.
Google AI Studio and the Gemini API: For developers building the next generation of creative tools, Lyria 3 provides improved musical awareness and structural coherence to offer creative flexibility. Lyria 3 Pro is now available alongside Lyria RealTime in AI Studio.
Google Vids: Vids is an AI-powered video creation app that anyone can use. With Lyria 3 and Lyria 3 Pro in Vids, you can add custom music that matches your style for everything from creative projects to marketing videos. This is rolling out to Google Workspace customers and Google AI Pro & Ultra subscribers starting this week.
Gemini app: Longer generations with Lyria 3 Pro are now available in the Gemini app, starting with paid subscribers. Lyria 3 Pro’s enhanced customization offers more space to experiment and play with longer tracks. So now, you can add more details to bring your full vision to life, or create personalized tracks for vlogs, podcasts or tutorial videos.
ProducerAI: We recently introduced ProducerAI, a collaborative music creation tool built by musicians looking for new ways to enhance their creative process. With Lyria 3 Pro, ProducerAI offers an agentic experience designed to help artists, producers and songwriters at every level iterate on complete songs. It’s available globally to free and paid subscribers.
Partnering with creatives
We have been developing our music generation tools responsibly and in close partnership with the industry to ensure AI serves as a tool for creative expression.
Through our Music AI Sandbox, we provide musicians, producers and songwriters with a suite of experimental tools designed to expand their creative horizons. The insights from this collaboration helped shape the development of Lyria 3.
We’re inviting artists to integrate AI into their workflows to make sure our technology helps the people who use it. Grammy-winning producer Yung Spielburg used Lyria in his composition and production process for the score of the Google DeepMind short film “Dear Upstairs Neighbors.” And we’re also collaborating with DJ and producer François K, who used Lyria in an iterative process to create a soon-to-be-released song.
Responsibility was foundational, and remains integral, in the design and training of Lyria 3, which uses materials that YouTube and Google have the right to use under our terms of service, partner agreements and applicable law. To protect original expression, Lyria 3 and Gemini do not mimic artists; if a prompt names a creator, the model takes that as broad inspiration. Additionally, we employ filters to check outputs against existing content, and users must adhere to the Terms of Service and Gen AI prohibited use policies, which prohibit violating others' intellectual property and privacy rights. All Lyria 3 and Lyria 3 Pro outputs are embedded with SynthID, our imperceptible watermark for identifying Google AI-generated content.
Lyria 3 Pro is rolling out to professionals, developers, organizations and everyday creators to help craft high quality music generations.
POSTED IN: AI, Gemini App, Developer tools, Google Labs
Google now lets you direct avatars through prompts in its Vids app
Source: techcrunch | 02.04.2026 16:00 | Score: 0.751

| Embedding sim. | 0.8412 |
| Entity overlap | 0.4 |
| Title sim. | 0.1714 |
| Time proximity | 1 |
| NLP type | product_launch |
| NLP organization | Google |
| NLP topic | generative ai |
| NLP country | United States |
Google on Thursday added new features to its video editor app Vids, including directing and customizing avatars through text prompts, Veo 3.1 support, the ability to export videos to YouTube, and recording with a Chrome extension.
Users will be able to use natural language prompts to direct avatars to “act” in a scene. This can include the avatar interacting with a product, a prop, or a piece of equipment. The company said that despite the dynamic nature of the output, Vids maintains character consistency.
Google said that based on the theme of the video, users can customize characters by tweaking appearance, changing apparel, and creating new backgrounds through prompts.
Last month, Google added its Lyria 3 and Lyria 3 Pro music creation models to Vids to let users add sound effects or music to their clips. With this rollout, Google is bringing the Veo 3.1 video-generation model, which can create eight-second clips within the video editing tool. The company is giving out 10 free generations per month to all users. The company said Google AI Ultra and Workspace AI Ultra accounts can generate up to 1,000 Veo videos per month.
What’s more, Google is adding the ability to export finished videos directly to YouTube, saving the hassle of downloading and uploading them to the channel. All the exported videos are by default private, so you can review the video before making it public.
Image Credits: Google
The company is also adding a new screen-recording Chrome extension to the video suite, allowing users to capture the screen with audio or video.
Google has constantly added features to Vids since first unveiling the product in 2024 to cater to enterprise content creation. Last year, the company brought AI avatars to Vids and expanded access to consumers. In February, the company added 2D and 3D cartoon-style avatars and added support for seven new voice-over languages, including French, German, Italian, Korean, Portuguese, Spanish, and Japanese.
Google Vids faces competition from the likes of Synthesia, HeyGen, D-ID, and Lemon Slice.
Topics: AI, Apps, Google, Google Vids, Video Editing
Ivan Mehta
You can now transfer your chats and personal information from other chatbots directly into Gemini
Source: techcrunch | 26.03.2026 23:47 | Score: 0.722

| Embedding sim. | 0.8111 |
| Entity overlap | 0.3529 |
| Title sim. | 0.1377 |
| Time proximity | 0.9878 |
| NLP type | product_launch |
| NLP organization | Google |
| NLP topic | conversational ai |
| NLP country | United States |
When it comes to AI chatbots, there’s currently a war on for consumer attention. All the big chatbot providers are looking to increase their user count and, in a minor coup for itself, Google just made it significantly easier for users of those other chatbots to defect to Gemini.
On Thursday, the company announced what it calls “switching tools,” new widgets that are designed to allow users to transfer “memories” (basically chunks of personal information) and even entire chat histories from other chatbots directly into Gemini. Users can easily share “key preferences, relationships, and personal context” in this way, the company says.
The idea is to make it significantly easier to adopt Google’s AI assistant, as users won’t have to spend large amounts of time re-training Gemini on who they are and what they want.
The memory feature works like this: Gemini will suggest a prompt that the user can enter into their current chatbot, which will then generate a response that can be copied and pasted back into Gemini. In this fashion, Gemini coaches the user on what kinds of information it would be helpful to know about them, while also helping facilitate the transmission of that information back into its own archive.
Image Credits: Gemini
“Once you import these memories, Gemini will understand the same key facts you’ve shared with other apps, like your interests, your sibling’s name, or where you grew up,” the company says. “Instead of starting over from scratch, you can quickly get Gemini up to speed on what matters most to you.”
When it comes to importing chat histories, Google says that all you need to do is upload them in a zip file. It’s relatively easy to export chat logs as zips from most chatbots, including ChatGPT and Claude. This allows users to “seamlessly pick up right where you left off,” the company says. Google says users also have the ability to search through those old chats.
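As a rough illustration of what such an import involves, the sketch below pulls conversation titles out of an export zip. The file name `conversations.json` matches the layout ChatGPT exports commonly use, but treat it as an assumption here; other chatbots may structure their zips differently.

```python
import io
import json
import zipfile

# Minimal sketch of inspecting a chatbot export zip before importing it.
# "conversations.json" is an assumed file name; check your actual export.

def list_conversations(zip_bytes):
    """Return the conversation titles found in an export zip."""
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        data = json.loads(zf.read("conversations.json"))
    return [conv.get("title", "(untitled)") for conv in data]

# Build a tiny fake export in memory to demonstrate the flow.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("conversations.json",
                json.dumps([{"title": "Trip planning"}, {"title": "Recipes"}]))
print(list_conversations(buf.getvalue()))   # ['Trip planning', 'Recipes']
```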
ChatGPT remains the big kahuna in the consumer chatbot market, with OpenAI announcing last month that it has reached 900 million weekly active users . Gemini — despite Google’s vast distribution advantages, including its default placement across Android devices and the Chrome browser — has lagged in consumer mindshare. Last month, it shared its own numbers during Alphabet’s fourth-quarter earnings call, saying Gemini had surpassed 750 million monthly active users . This move is clearly aimed at helping Google catch up.
Topics: AI, artificial intelligence, gemini, Google
Lucas Ropek
Senior Writer, TechCrunch
Create, edit and share videos at no cost in Google Vids
Source: google | 02.04.2026 16:00 | Score: 0.69

| Embedding sim. | 0.7962 |
| Entity overlap | 0.1905 |
| Title sim. | 0.1587 |
| Time proximity | 0.7999 |
| NLP type | product_launch |
| NLP organization | Google |
| NLP topic | generative ai |
| NLP country | |
Create, edit and share videos at no cost in Google Vids
Apr 02, 2026
New AI capabilities are coming to Google Vids, powered by Lyria 3 and Veo 3.1, including high-quality video generation at no cost, custom music creation and more.
David Nachum
Group Product Manager, Google Vids
General summary
Google Vids now lets anyone with a Google account generate high-quality video clips using Veo 3.1, with 10 free generations monthly. Google AI Pro and Ultra subscribers gain access to custom music generation via Lyria 3 and Lyria 3 Pro, plus customizable AI avatars for engaging content. You can also use the new Chrome extension for easy screen recording and directly publish videos to YouTube.
Bullet points
Google Vids helps you create and share videos easily, now with advanced features!
Anyone with a Google account can generate high-quality video clips using Veo 3.1.
Google AI Pro and Ultra users can create custom music and direct AI avatars.
A Chrome extension lets you record your screen, and you can publish directly to YouTube.
Google AI Ultra and Workspace AI Ultra accounts can generate up to 1,000 Veo videos monthly.
Google Vids is an intuitive, easy-to-use video editing suite. It helps you turn your ideas into polished stories — whether you're creating a quick video tutorial, recapping a weekend getaway or putting together a birthday montage for a friend. This week, we're adding more advanced capabilities like high-quality video generation for all users, custom music generation and AI avatars to help you create more in Vids. Keep reading to learn about these capabilities and how you can access them.
Generate high-quality videos at no cost
As of this week, anyone with a Google account can generate video clips at no cost using our latest video generation model, Veo 3.1. Now you can easily bring your stories to life in a high-quality video clip from just a simple prompt or photo. It’s perfect for creating an animated neighborhood party flyer, mocking up a quick promo for your side-hustle or sending a fun greeting card. All personal accounts now get 10 video generations every month at no cost — and you can always upgrade if you need more.
Score your videos with custom music
Give your videos energy with a custom soundtrack tailored perfectly to your video’s vibe, powered by our Lyria 3 and Lyria 3 Pro models. Google AI Pro and Ultra subscribers can now generate everything from short 30-second clips to three-minute tracks. Whether you need a catchy, lighthearted tune for a birthday shoutout or an uplifting score for a family vacation reel, you can easily compose an original track that hits all the right notes.
Tell your story with customizable and directable AI avatars
AI avatars give your video a consistent face and voice across every frame, making it easy to create engaging content without the hassle of multiple takes. Powered by Veo 3.1, Google AI Pro and Ultra subscribers now have complete directorial control over how their characters look and act:
Direct your avatars: Move beyond static talking heads. Place avatars into specific scenes and have them interact directly with uploaded objects, like a product or a prop, set against custom backdrops. It's an easy way to create engaging tutorials and standout social content, or bring your passion project to life.
Customize their look: Tweak the fine details of your avatar's appearance, swap outfits and change backgrounds to match the exact mood of your video, all while keeping their voice and identity consistent. Whether you're dressing up an avatar to narrate your latest travel vlog, designing a virtual host for a school project or just creating something fun to share with friends, your avatar can always look the part.
Capture and share your creations with ease
We also want to make every step of your video journey effortless, from the moment inspiration strikes to the final upload. To help you work faster and share more easily, we’re introducing new tools that connect Vids right to your browser and your favorite platforms, available to everyone, at no cost:
Record in your browser with our Chrome extension: With our new Google Vids Screen Recorder Chrome extension, you can quickly record your screen and yourself from anywhere on the web. It brings the Vids recording studio features you already know with you as you browse. No need to navigate to Vids first — just click and start creating.
Publish finished videos straight to YouTube: Skip the hassle of downloading files and push your creations directly to YouTube right from Vids. The streamlined export process makes publishing completely effortless, and all exports default to Private, giving you the chance to review your videos before sharing them with the world.
And, as of this week, Google AI Ultra and Workspace AI Ultra accounts can now generate up to 1,000 Veo videos per month.
We can't wait to see what you create. Get started at vids.new today.
POSTED IN:
Google Workspace
AI
Google One
|
Gemini 3.1 Flash Live: Making audio AI more natural and reliable | google | 26.03.2026 15:21 | 0.674
| Embedding sim. | 0.7886 |
| Entity overlap | 0.0968 |
| Title sim. | 0.0762 |
| Time proximity | 0.861 |
| NLP type | product_launch |
| NLP organization | Google |
| NLP topic | generative ai |
| NLP country | |
Open original
Gemini 3.1 Flash Live: Making audio AI more natural and reliable
Mar 26, 2026
Our latest voice model has improved precision and lower latency, making voice interactions more fluid and natural.
Valeria Wu
Product Manager
Yifan Ding
Software Engineer on behalf of the Gemini team
General summary
Gemini 3.1 Flash Live is Google's highest-quality audio model, designed for natural and reliable real-time dialogue. Developers can access it through the Gemini Live API in Google AI Studio, while enterprises can use it for customer experience. Everyone can experience it via Search Live and Gemini Live, which now supports over 200 countries.
Bullet points
"Gemini 3.1 Flash Live" is here, making AI audio sound more natural and reliable.
This new audio model is faster and better at understanding tone for natural conversations.
Developers can use it to build voice agents that handle complex tasks more reliably.
Gemini Live and Search Live now offer more helpful responses in many languages.
All audio from 3.1 Flash Live is watermarked to help prevent the spread of misinformation.
Today, we’re advancing Gemini’s real-time dialogue capabilities with Gemini 3.1 Flash Live, our highest-quality audio and voice model yet. It delivers the speed and natural rhythm needed for the next generation of voice-first AI, offering a more intuitive experience for developers, enterprises and everyday users.
3.1 Flash Live is available across Google products:
For developers in preview via the Gemini Live API in Google AI Studio
For enterprises in Gemini Enterprise for Customer Experience
For everyone via Search Live and Gemini Live
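For developers, the Live API exchanges raw audio over a streaming connection. The announcement itself gives no integration details, so the sketch below is a hedged illustration only: it prepares microphone samples for such a stream, assuming 16-bit little-endian PCM (a common wire format for real-time speech APIs). The function name and format choice are assumptions, not documented specifics of 3.1 Flash Live.

```python
import struct

def floats_to_pcm16(samples):
    """Pack float samples in [-1.0, 1.0] into 16-bit little-endian PCM bytes.

    Real-time voice APIs typically accept raw PCM chunks; this helper
    clamps out-of-range values and scales them to the int16 range.
    """
    out = bytearray()
    for s in samples:
        s = max(-1.0, min(1.0, s))                 # clamp to valid range
        out += struct.pack("<h", int(s * 32767))   # 16-bit little-endian
    return bytes(out)
```

Silence packs to zero bytes and full-scale samples pack to the int16 extremes; chunks like these would then be sent over the session's audio channel.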
For developers: Robust reasoning and task execution
We’ve improved 3.1 Flash Live’s overall quality, making it more reliable for developers and enterprises to build voice-first agents that can complete complex tasks at scale. On ComplexFuncBench Audio, a benchmark that captures multi-step function calling under various constraints, it leads with a score of 90.8%, improving on our previous model.
On Scale AI’s Audio MultiChallenge, Gemini 3.1 Flash Live leads with a score of 36.1% with “thinking” on. The benchmark specifically tests complex instruction following and long-horizon reasoning amidst the interruptions and hesitations typical of real-world audio.
3.1 Flash Live also has improved tonal understanding to deliver more natural dialogue. In Gemini Enterprise for Customer Experience, it’s even more effective at recognizing acoustic nuances like pitch and pace than 2.5 Flash Native Audio. It’s also better at dynamically adjusting its response to users' expressions of frustration or confusion.
3.1 Flash Live lets you build voice-ready agents that handle complex tasks in noisy environments.
Illustrative demonstration built with Gemini 3.1 Pro, powered by Gemini 3.1 Flash Live.
3.1 Flash Live lets you use your voice to vibe code and quickly iterate.
Illustrative demonstration built with Gemini 3.1 Pro, powered by Gemini 3.1 Flash Live.
Companies like Verizon, LiveKit and The Home Depot have given positive feedback on 3.1 Flash Live in their workflows, highlighting its improved, natural conversation.
For everyone: More natural and intuitive interactions
In Gemini Live and Search Live, the 3.1 Flash Live model delivers more helpful and natural responses, whether you’re asking quick daily questions or engaging in more complex conversations.
With the 3.1 Flash Live model under the hood, Gemini Live delivers faster responses than the previous model and can follow the thread of your conversation for twice as long, keeping your train of thought intact during longer brainstorms.
3.1 Flash Live makes Gemini Live faster and more helpful
3.1 Flash Live is also inherently multilingual, which enables this week’s global expansion of Search Live. With this launch, people in more than 200 countries and territories can now have real-time, multimodal conversations with Search in their preferred language.
Get real-time troubleshooting help using 3.1 Flash Live in Search Live
Try Gemini 3.1 Flash Live
All audio generated by 3.1 Flash Live is watermarked with SynthID. This imperceptible watermark is interwoven directly into the audio output, allowing the reliable detection of AI-generated content to help prevent misinformation. For more information on our approach to safety and responsibility, see the model card.
Experience the naturalness and reliability of 3.1 Flash Live, starting today. We look forward to seeing how you interact and build with it.
POSTED IN:
Gemini models
AI
|
Google is making it easier to import another AI’s memory into Gemini | the_verge_ai | 26.03.2026 21:44 | 0.666
| Embedding sim. | 0.7438 |
| Entity overlap | 0.1538 |
| Title sim. | 0.1961 |
| Time proximity | 0.9619 |
| NLP type | product_launch |
| NLP organization | Gemini |
| NLP topic | generative ai |
| NLP country | |
Open original
After Anthropic updated its tool for copying another AI's memory into Claude earlier this month, Google Gemini is rolling out new "Import Memory" and "Import Chat History" features on desktop that help users quickly copy over everything their current AI already knows about them. To use the "Import Memory" tool, users copy and paste a suggested prompt from Gemini into their previous AI, then paste that AI's output back into Gemini, which should get Gemini caught up on their preferences.
The "Import Chat History" feature has users request an export of all of their chats from their previous AI, which they upload to Gemini in th …
Read the full story at The Verge.
|
Build with Veo 3.1 Lite, our most cost-effective video generation model | google | 31.03.2026 16:00 | 0.665
| Embedding sim. | 0.7763 |
| Entity overlap | 0.3684 |
| Title sim. | 0.4444 |
| Time proximity | 0.1429 |
| NLP type | product_launch |
| NLP organization | Google DeepMind |
| NLP topic | video generation |
| NLP country | |
Open original
Build with Veo 3.1 Lite, our most cost-effective video generation model
Mar 31, 2026
Our most cost-effective video generation model is now available to developers in the Gemini API.
Alisa Fortin
Product Manager, Google DeepMind
Guillaume Vernade
Gemini Developer Advocate, Google DeepMind
We are introducing Veo 3.1 Lite, Google’s most cost-effective video model. It empowers developers to build high-volume video applications at less than 50% of the cost of Veo 3.1 Fast, with the same speed. This rounds out the Veo 3.1 model family, giving developers flexibility based on their needs.
On April 7, we’ll also be reducing the price of Veo 3.1 Fast, allowing even more developers to integrate video generation into their products.
Efficiency for builders
Veo 3.1 Lite balances practical utility with professional capabilities, supporting Text-to-Video and Image-to-Video. It offers flexible framing in landscape (16:9) and portrait (9:16) aspect ratios, at 720p or 1080p resolution.
A demo app showcasing how Veo 3.1 Lite can be used to iterate as you craft the perfect video.
Developers can also customize duration at 4s, 6s or 8s, with cost adjusting accordingly.
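Since cost scales with clip length and Lite is stated to come in below half the cost of Fast, the pricing model can be sketched as a simple rate-times-duration calculation. The per-second rates below are placeholders invented purely for illustration, not Google's actual pricing; only the duration options and the Lite-versus-Fast relationship come from the announcement.

```python
# Placeholder per-second rates for illustration only -- not real pricing.
# The one constraint taken from the announcement: Lite < 50% of Fast.
RATES_PER_SECOND = {
    "veo-3.1-fast": 0.10,
    "veo-3.1-lite": 0.04,
}

SUPPORTED_DURATIONS = (4, 6, 8)  # seconds, per the announcement

def estimate_cost(model: str, seconds: int) -> float:
    """Estimate the cost of one generation; cost scales with duration."""
    if seconds not in SUPPORTED_DURATIONS:
        raise ValueError(f"duration must be one of {SUPPORTED_DURATIONS}")
    return RATES_PER_SECOND[model] * seconds
```

With these placeholder rates, an 8-second Lite clip would cost 0.32 units versus 0.80 for Fast; the point is only that model tier and duration multiply together.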
Our commitment to making video generation more available to developers doesn't stop with the release of Veo 3.1 Lite. Stay tuned for more updates soon!
Get started
Rolling out today, you can access the model via the paid tier of the Gemini API and in Google AI Studio.
Check out the developer documentation for all specifications for the new Veo 3.1 Lite and the updated pricing for Veo 3.1 Fast.
POSTED IN:
Developer tools
AI
|
Veo 3.1 for free: 10 video generations a month for any account. Breaking down what you can actually get | habr_ai | 05.04.2026 11:04 | 0.631
| Embedding sim. | 0.7801 |
| Entity overlap | 0.1765 |
| Title sim. | 0.0437 |
| Time proximity | 0.4007 |
| NLP type | product_launch |
| NLP organization | Google |
| NLP topic | generative ai |
| NLP country | |
Open original
Google has made Veo 3.1 free: 10 video generations per month for any Google account. No card, no subscription. I tested it: 720p, up to 8 seconds, with native audio. The physics is convincing, and light and shadows are on point. OpenAI didn't wind down Sora for nothing. I break down what you actually get, where the limits are, and share prompting tips so you don't waste generations.
Read more
|