Published on November 16, 2023, 10:52 pm

Fully AI-generated movies and TV shows could become a reality in our lifetimes, according to industry experts such as “Avengers” director Joe Russo. Recent advancements in artificial intelligence (AI), including OpenAI’s text-to-speech engine, point to a future where AI-generated content is prevalent. Meta, a tech giant, has taken this concept further with the introduction of Emu Video.

Emu Video is an evolution of Meta’s image generation tool called Emu. This new tool can generate four-second animated clips by analyzing captions, images, or photos paired with descriptions. To enhance these clips, Meta also introduced Emu Edit, an artificial intelligence model that allows users to make modifications using natural language instructions. For example, users can request the same clip in slow motion, and the AI will generate a new video reflecting the requested changes.

While video generation technology is not entirely new – with both Meta and Google having experimented with it – Emu Video stands out for its high fidelity. The 512×512 resolution and 16-frames-per-second clips produced by Emu Video are incredibly realistic and often difficult to distinguish from real footage.

However, AI-generated video still has limitations. Emu Video appears most successful when animating simple scenes that do not aim for photorealism but instead resemble styles like cubism, anime, paper-cut craft, or steampunk. Even in its best work, telltale AI quirks can be observed, such as strange physics or odd appendages. Additionally, the absence of strong action verbs in the showcased prompts suggests that Emu Video may struggle to render dynamic movement.

Nevertheless, the basic b-roll produced by Emu Video already possesses qualities that would fit seamlessly into movies or TV shows today. This development raises ethical concerns about the future livelihoods of animators and artists who currently create similar scenes manually. While Meta states that their generative AI tools should augment rather than replace human artists, the impact on the creative industry remains uncertain.

An example of AI’s potential impact comes from Netflix, which used AI-generated background images in a short animated film to address anime’s labor shortage. However, this practice overlooks the low pay and difficult working conditions that often drive artists away from the industry. Similarly, controversy arose when AI was employed in the credit sequence for Marvel’s “Secret Invasion.” While the series director argued that using AI aligned with the show’s themes, many artists and fans strongly disagreed.

Beyond animation, there is a concern that actors could also face repercussions as AI continues to improve. The use of AI to create digital likenesses of performers was a central issue in recent negotiations involving SAG-AFTRA, the union representing actors. While studios ultimately agreed to compensate actors for their AI-generated likenesses, future improvements in the technology may prompt them to revisit those terms.

Furthermore, these generative AI tools are often trained on content created by artists without their knowledge or compensation. Meta’s whitepaper accompanying Emu Video’s release mentions training on a dataset of 34 million video-text pairs but lacks transparency regarding copyrights or licensing agreements with creators.

While attempts have been made to establish industry-wide standards that allow artists to “opt out” of training or receive fair payment for their contributions to AI-generated works, progress has been slow. The rapid advancement of technology means ethical considerations risk lagging behind.

The advent of Emu Video represents an exciting leap forward in generative AI technology. However, questions about its impact on various creative industries and ethical concerns surrounding intellectual property rights remain critical topics for discussion going forward. As always, it is essential to strike a balance between innovative advancements and preserving the livelihoods of human creators.
