How AI can turn your home video into a Hollywood blockbuster
No mocap, no problem
Want to star in an animated film as an anthropomorphic animal version of yourself? Runway’s AI video creation platform has a new tool for exactly that. The new Act-One feature may make motion-capture suits and manual computer animation unnecessary for matching an animated character to a live-action performance.
Act-One streamlines what is usually a long process for facial animation. All you need is a video camera pointed at an actor, close enough to capture their face as they perform.
The AI fueling Act-One reworks the facial movements and expressions from the inputted video to fit an animated character. Runway claims even the most nuanced emotions come through via micro-expressions, eye-lines, and other facets of the performance. Act-One can even produce multi-character dialogue scenes, which Runway suggests are difficult for most generative AI video models.
To produce one, a single actor performs multiple roles, and the AI maps each performance onto a different character in the same scene so that they appear to be talking to each other.
That’s a far cry from the laborious requirements of traditional animation, and it makes animation far more accessible to creators with limited budgets or technical experience. It won’t always match the skills of talented animation teams with big movie budgets, but the relatively low barrier to entry gives amateurs and those with limited resources a chance to experiment with character designs that still portray realistic emotion, all without breaking the bank or missing deadlines. You can see some demonstrations below.
Animated Runway
Act-One is, in some ways, an enhancement for Runway’s video-to-video feature within its Gen-3 Alpha model. But while that tool uses a video and a text prompt to adjust the setting, performers, or other elements, Act-One skips straight to mapping human expressions onto animated characters. It also fits with how Runway has been pushing out more features and options for its platform, such as the Gen-3 Alpha Turbo version of its model, which sacrifices some functionality for speed.
As with its other AI video tools, Runway places some restrictions on Act-One to prevent people from misusing it or breaking its terms and conditions. You can’t make content featuring public figures, for instance, and the platform employs techniques to ensure anyone whose voice is used in the final video has given their permission. The model is continuously monitored to spot any attempts to break those or other rules.
“We’re excited to see what forms of creative storytelling Act-One brings to animation and character performance. Act-One is another step forward in our goal to bringing previously sophisticated techniques to a broader range of creators and artists,” Runway wrote in its announcement. “We look forward to seeing how artists and storytellers will use Act-One to bring their visions to life in new and exciting ways.”
Act-One may be fairly unusual among AI video generators, though Adobe Firefly and Meta’s MovieGen have some similar efforts in their portfolios. Runway’s Act-One seems to be much easier to use than Firefly’s equivalent and more widely available than the restricted MovieGen model.
Still, there’s ever more AI video competition as OpenAI’s Sora model starts to spread, and Stability AI, Pika, Luma Labs’ Dream Machine, and others push out a steady stream of features for AI video production. If you want to try Act-One, Runway’s paid plans start at $12 a month.
Eric Hal Schwartz is a freelance writer for TechRadar with more than 15 years of experience covering the intersection of the world and technology. For the last five years, he served as head writer for Voicebot.ai and was on the leading edge of reporting on generative AI and large language models. He’s since become an expert on the products of generative AI models, such as OpenAI’s ChatGPT, Anthropic’s Claude, Google Gemini, and every other synthetic media tool. His experience runs the gamut of media, including print, digital, broadcast, and live events. Now, he’s continuing to tell the stories people want and need to hear about the rapidly evolving AI space and its impact on their lives. Eric is based in New York City.