Oct. 8 (UPI) — Meta, the parent company of Facebook, has unveiled a new product that will allow users to create videos generated by artificial intelligence.
Make-A-Video, which is not yet available for public use, will allow users to create videos from a text prompt, the latest addition to the recent push for AI-generated art.
The videos are no longer than five seconds and contain no audio but mark a significant leap in AI-generated art from still images to video clips.
The tool was built by a team of machine learning engineers at Meta, who published a paper on their research findings on arXiv, the preprint repository hosted by Cornell University.
The company has also published videos made with the tool and the text prompts used to create them.
“This is pretty amazing progress. It’s much harder to generate video than photos because beyond correctly generating each pixel, the system also has to predict how they’ll change over time,” Mark Zuckerberg, the co-founder of Facebook and chief executive of Meta, said in a statement.
“Make-A-Video solves this by adding a layer of unsupervised learning that enables the system to understand motion in the physical world and apply it to traditional text-to-image generation. We plan to share this as a demo in the future. In the meantime, enjoy the videos.”
The success of Meta’s AI model is likely to spur increased investment in AI-generated video at other companies and institutions.
Last month, an artist based in New York City was granted the first known registered copyright for artwork made using latent diffusion artificial intelligence.
Kris Kashtanova received a copyright for a graphic novel titled Zarya of the Dawn made using the commercial AI art generator Midjourney, according to a statement posted to their Instagram account. The copyright was verified by UPI through public records.
Though AI-generated art has likely been registered with the U.S. Copyright Office in the past, Kashtanova’s claim marks the first known to have been registered that used models powered by latent diffusion.
Unlike previous models, Meta’s model trains on unlabeled video footage, without human supervision, in addition to paired images and captions.
To generate the videos, it uses the existing technique of diffusion: starting from visual static and denoising it step by step until the image described in the prompt emerges.
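The denoising loop described above can be sketched in miniature. This is a toy illustration of the diffusion idea, not Meta's actual system: the noise schedule, step count, and the `predict_noise` stand-in (which here nudges the sample toward a fixed target image rather than using a trained neural network) are all hypothetical choices for demonstration.

```python
import numpy as np

# Toy sketch of diffusion-based generation: begin with pure Gaussian
# noise ("visual static") and repeatedly denoise it. In a real system,
# predict_noise would be a trained, text-conditioned neural network;
# here it is a stand-in that steers the sample toward a fixed target.

rng = np.random.default_rng(0)
T = 50                                 # number of denoising steps (assumed)
betas = np.linspace(1e-4, 0.05, T)     # hypothetical noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

target = np.full((8, 8), 0.5)          # stand-in for the "prompted" image

def predict_noise(x, t):
    # Oracle denoiser for illustration: recover the noise that would
    # have produced the current sample x from the target at step t.
    return (x - np.sqrt(alpha_bars[t]) * target) / np.sqrt(1.0 - alpha_bars[t])

x = rng.standard_normal((8, 8))        # start from visual static
for t in reversed(range(T)):
    eps = predict_noise(x, t)
    # DDPM-style reverse update (mean step; sampling noise omitted for clarity)
    x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])

print(np.abs(x - target).max())        # residual error after denoising
```

With the oracle noise predictor, the final step recovers the target exactly (up to floating-point error); the point of the sketch is only the shape of the loop, static in, progressively cleaner image out.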
“Make-A-Video research builds on the recent progress made in text-to-image generation technology built to enable text-to-video generation,” Meta said on the Make-A-Video website.
“The system uses images with descriptions to learn what the world looks like and how it is often described. It also uses unlabeled videos to learn how the world moves. With this data, Make-A-Video lets you bring your imagination to life by generating whimsical, one-of-a-kind videos with just a few words or lines of text.”