A NEW DIRECTOR IN FILMMAKING

Go to any conference about post-production or VFX nowadays and you’re bound to find more than one session about Artificial Intelligence (AI) and how it will revolutionize the filmmaking process.

Let’s step back and start with a definition of what AI is, because today, at least, it’s far from being Skynet from The Terminator. The pioneers in this field, Marvin Minsky of MIT and John McCarthy of Stanford, described it in the fifties as any task performed by a program or a machine that, if a human carried out the same activity, we would say required intelligence to accomplish. By intelligence we mean behaviors like planning, reasoning, problem-solving and perception. The next level would be behaviors like social intelligence and creativity, but AI systems are not there yet (thankfully…).

Outside of entertainment, AI is already being used for visual applications like helping radiologists look for tumors in X-rays and interpreting video feeds from drones inspecting infrastructure for faults. Behind the recent resurgence of interest in AI are breakthroughs in the connected and complementary field of machine learning. According to Carnegie Mellon University, the field of machine learning seeks to answer the question, “How can we build computer systems that automatically improve with experience, and what are the fundamental laws that govern all learning processes?”
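A toy example makes “improve with experience” concrete. The Python sketch below, invented purely for illustration, is an estimator whose guesses at a value (say, the average shot length in a sequence) get better with every example it observes.

class OnlineMeanPredictor:
    """Predicts a value by averaging every example seen so far."""

    def __init__(self):
        self.count = 0
        self.mean = 0.0

    def predict(self):
        return self.mean

    def learn(self, observation):
        # Incremental mean update: each new example refines the estimate.
        self.count += 1
        self.mean += (observation - self.mean) / self.count


predictor = OnlineMeanPredictor()
for shot_length in [4.2, 3.8, 4.0, 4.1, 3.9]:  # shot lengths in seconds
    error = abs(predictor.predict() - shot_length)
    predictor.learn(shot_length)
    print(f"error before seeing this example: {error:.2f}")

Real machine learning systems replace the running average with far richer models, but the pattern of predict, observe, update is the same.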

Experiments in using AI for entertainment applications began a few years ago. In 2016, 20th Century Fox partnered with IBM to create a trailer for the sci-fi suspense movie Morgan using its Watson AI system. To start, Watson had to be “taught” how to cut a trailer for that specific genre by being fed one hundred horror movie trailers segmented into moments and scenes. Watson then analyzed the footage to understand the types of shots that fit the structure of a suspense movie trailer before it was fed the entire cut of the movie. Once this was done, it picked out six minutes of material appropriate for the trailer. At that point a human editor still needed to cut the actual trailer from this material, but the process was substantially shortened. In other words, AI made trailer cutting more efficient but couldn’t complete the job without human intervention.
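To give a feel for that final selection step, here is a hypothetical Python sketch: given scenes a model has already scored for suspensefulness, greedily collect the highest-scoring ones until roughly six minutes of material is gathered. The scene data, scores and greedy strategy are all invented for illustration; IBM has not published how Watson actually made its picks.

def select_trailer_material(scenes, budget_seconds=360):
    """scenes: list of (scene_id, duration_seconds, suspense_score)."""
    picked, total = [], 0.0
    # Greedily take the most suspenseful scenes that still fit the budget.
    for scene_id, duration, score in sorted(scenes, key=lambda s: s[2], reverse=True):
        if total + duration <= budget_seconds:
            picked.append(scene_id)
            total += duration
    # Return the picks in story order so a human editor can cut from them.
    return sorted(picked)


scenes = [("sc01", 22.0, 0.91), ("sc02", 35.5, 0.42), ("sc03", 18.0, 0.88)]
print(select_trailer_material(scenes, budget_seconds=45))  # -> ['sc01', 'sc03']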

Adobe is one company leading the way in machine learning and artificial intelligence applications for the media and entertainment industry. Adobe Sensei is an AI and machine learning framework that works behind the scenes of Adobe’s Creative Cloud applications to speed up creative workflows by automating some of the more mundane tasks. According to Mala Sharma, VP and general manager of Creative Cloud, 74 percent of creative professionals spend over 50 percent of their time on repetitive, non-creative tasks. For example, the new Color Match tool for Adobe Premiere Pro, introduced at NAB 2018, uses Adobe Sensei to automatically apply the color grade of one shot to another, and it includes face detection so Premiere can intelligently adjust skin tones. Another example is Auto Ducking, which automatically turns down the music tracks when dialog or sound effects are present, generating keyframes so the automatic ducking can be overridden if desired.
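The ducking behavior is easy to picture in code. Below is a simplified Python sketch of the general idea, not Adobe’s implementation: wherever dialog is detected, emit volume keyframes that dip the music bed and restore it afterward. The dialog regions, fade length and gain values are assumptions for illustration.

def ducking_keyframes(dialog_regions, fade=0.25, music_db=0.0, ducked_db=-12.0):
    """dialog_regions: list of (start_sec, end_sec) where dialog was detected.
    Returns (time_sec, gain_db) keyframes for the music track."""
    keyframes = []
    for start, end in dialog_regions:
        keyframes += [
            (start - fade, music_db),  # start fading down just before the dialog
            (start, ducked_db),        # music held low while dialog plays
            (end, ducked_db),
            (end + fade, music_db),    # fade back up once the dialog ends
        ]
    return keyframes


# Two stretches of dialog in a scene:
for t, db in ducking_keyframes([(5.0, 12.0), (30.0, 41.5)]):
    print(f"{t:6.2f}s -> {db:+.1f} dB")

Because the result is ordinary keyframes rather than a baked-in effect, an editor can still grab and adjust any of them by hand, which is exactly the override the tool promises.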

Visual effects is another area where AI and machine learning are set to have a significant impact, from completely automating the rotoscoping process to animating complex CG characters in real time. Doug Roble, senior director of software research and development at Digital Domain, used machine learning to create Digital Doug, in the simplest terms a digital character based on Roble himself. The first steps in creating Digital Doug were fairly traditional for high-end VFX today: high-resolution scanning and motion capture were used to create very high-resolution images and meshes of Doug in a wide range of poses. This data was fed into machine learning algorithms and used to train the system to compute the right combination of high-resolution textures and mesh for any live-action input, yielding a real-time, high-resolution, markerless, minimally rigged, single-camera facial animation system. Once the training period was complete, whatever Doug does, Digital Doug, or another digital character, does too. In other words, a director will be able to see final-quality digital characters interacting with actors in a scene.
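In spirit, the training stage resembles an ordinary supervised-learning loop. The PyTorch-style sketch below is a drastic simplification with invented sizes and random stand-in data: it learns a mapping from a single-camera frame to mesh vertex positions, which is the general shape of the problem, not Digital Domain’s actual system.

import torch
import torch.nn as nn

N_PIXELS = 64 * 64    # tiny stand-in for a single-camera frame
N_VERTICES = 5000     # stand-in for a high-resolution face mesh

# Toy regressor: flattened frame in, mesh vertex positions out.
model = nn.Sequential(
    nn.Linear(N_PIXELS, 256),
    nn.ReLU(),
    nn.Linear(256, N_VERTICES * 3),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Random stand-ins for the real training pairs: captured frames and the
# matching high-resolution scans of the actor's face.
frames = torch.randn(32, N_PIXELS)
meshes = torch.randn(32, N_VERTICES * 3)

for step in range(100):  # the "training period"
    pred = model(frames)
    loss = loss_fn(pred, meshes)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# After training, whatever the camera sees, the digital face follows.
with torch.no_grad():
    live_frame = torch.randn(1, N_PIXELS)
    live_mesh = model(live_frame).reshape(1, N_VERTICES, 3)

The real system maps to far richer outputs, textures as well as geometry, and runs fast enough to drive a character live on set.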

This future is now closer than ever: AI and machine learning will disrupt visual effects, editing, sound and color grading. They have the potential to streamline workflows by removing some of the more tedious, repetitive tasks, allowing creative people to create, and maybe even shortening their workdays.

PIX is a collaboration software system for teams working in visual effects and post-production. Interested in learning more about PIX’s online collaboration system? Call us at +1 415 357 9720 or email sales@pixsystem.com to set up a demo!

