Turn your selfie into an action star with this new AI image-to-video feature
AI video generator Runway’s Gen-3 quickly turns a still image into a film
Artificial intelligence-powered video maker Runway has added the promised image-to-video feature to its Gen-3 model, released a few weeks ago, and it may be as impressive as advertised. The feature addresses the biggest limitations it had in the Gen-2 model released early last year: the upgraded tool is miles better at character consistency and hyperrealism, making it a more powerful option for creators looking to produce high-quality video content.
Runway’s Gen-3 model is still in alpha testing and only available to subscribers, who pay $12 per month per editor for the most basic package. The model had already attracted plenty of interest when it launched with only text-to-video capabilities. But no matter how good a text-to-video engine is, it has inherent limits, especially in keeping characters looking the same across multiple prompts and appearing grounded in the real world. Without that visual continuity, it’s hard to build any kind of narrative. In earlier versions of Runway, users often struggled to keep characters and settings consistent across scenes when relying solely on text prompts.
Offering reliable consistency in character and environmental design is no small thing, and using an initial image as a reference point helps maintain coherence across different shots. In Gen-3, Runway’s AI can take that starting image and create a 10-second video guided by additional motion or text prompts in the platform. You can see how it works in the video below.
Stills to Films
Runway’s image-to-video feature doesn’t just keep people and backgrounds consistent when seen from a distance. Gen-3 also incorporates Runway’s lip-sync feature, so a speaking character’s mouth moves in a way that matches the words they are saying. A user can tell the AI model what they want their character to say, and the mouth movement will be animated to match. Combining synchronized dialogue with realistic character movement will interest a lot of marketing and advertising teams looking for new and, ideally, cheaper ways to produce videos.
Runway isn’t done adding to the Gen-3 platform, either. The next step is bringing the same enhancements to the video-to-video option. The idea is to keep the same motion but in a different style. A human running down a street becomes an animated anthropomorphic fox dashing through a forest, for instance. Runway will also bring its control features to Gen-3, such as Motion Brush, Advanced Camera Controls, and Director Mode.
AI video tools are still in the early stages of development, with most models excelling in short-form content creation but struggling with longer narratives. That puts Runway and its new features in a strong position from a market standpoint, but it is far from alone. Midjourney, Ideogram, Leonardo (now owned by Canva), and others are all racing to make the definitive AI video generator. Of course, they’re all keeping a wary watch on OpenAI and its Sora video generator. OpenAI has some advantages in name recognition, among other benefits. In fact, Toys"R"Us has already made a short film commercial using Sora and premiered it at the Cannes Lions Festival. Still, the film about AI video generators is only in its first act, and the triumphant winner cheering in slow-motion at the end is far from inevitable.