What makes a movie?

The use of visual effects in film is vast and complex. Audiences often lump all of it under CGI, but that just isn’t the case. VFX is the umbrella term for all the digital work that unfolds in post-production. It includes CGI, chroma keying (green screens), motion capture and much more. These concepts are not mutually exclusive, however, and they share many overlapping traits.


The World of CGI

CGI, or computer-generated imagery, is imagery created using computer graphics software. The end products are still or moving pictures which are used pretty much everywhere. Today, CGI has permeated our lives so thoroughly that it is difficult to avoid. The television, film and video game industries rely heavily on CGI’s capabilities to execute their goals. Think of the latest blockbuster film or trending online game – it was likely made possible by CGI. Currently, entertainment and advertising are two of the main users of this technology, but many more are on the rise. Companies use CGI to depict things that are either impossible, like filming the Martian landscape for 2015’s The Martian, or too impractical (e.g. unsafe, expensive, etc.) to capture in person. Either way, CGI provides a cost-effective way to expand our horizons.

CGI in film dates back to the late 50s, with Alfred Hitchcock’s Vertigo (1958) renowned as one of the first movies to incorporate a computer-assisted effects shot. Fast-forward through the 60s, which saw many notable developments in technology, and we reach the glorious age of VFX in film – the late 70s and 80s. Along the way, the first 3D computer animation arrived with 1972’s aptly-named “A Computer Animated Hand” by Edwin Catmull and Fred Parke, who used software to painstakingly build a hand in motion. Then, a couple of Star Wars films skyrocketed interest in VFX, aided by 1982’s Tron. The growing intrigue translated into more and more advancements in technology and software. This snowball effect led to the first fully computer-animated feature film in 1995: Pixar’s Toy Story. This film was a landmark in digital moviemaking and helped to shape the industry as we know it today.

Modern CGI consists of many fields and departments working together towards one goal. The art department turns the ideas of a director or a script into visual concepts like storyboards and landscape art. Next, the asset department fabricates the digital objects – like a bus that will crash or a robotic limb on the antagonist – and the animated characters. Whether working with objects or characters, the result is often a compilation of effort from modeling artists, texture developers and riggers (who rig animated characters for their range of motion). Animators then turn these static assets into film by determining their movement. The introduction of space and time makes animation a key part of filmmaking. In modern cinema, motion trackers allow 3D movement in reality to be applied to animation (more in What is Motion Capture?).

There are also the contributions of simulation artists, who use procedural, dynamic and particle systems to create complex effects like the flow of water or smoke in the air. Much of this process is done using software and algorithms. Additionally, there are lighting artists, who aid the rendering of CGI, and matte painters, who create beautiful stills that are often used as green screen backgrounds. Lastly, there are the compositors, who are tasked with combining layers of film and CGI into one seamless shot. Compositing incorporates many concepts, most notably chroma keying (more in How do Green Screens Work?) and rotoscoping (manually tracing objects to move or layer them in a shot). And under it all, there are the researchers and developers who create the algorithms and software used throughout the industry.

A summary of the VFX pipeline:
  1. Art/Concept Work
  2. Asset Design
  3. Animation
  4. Motion Tracking
  5. Simulation
  6. Lighting
  7. Matte Painting
  8. Compositing
  9. Rotoscoping
  10. Research and Development
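To make the compositing stage of the pipeline above a little more concrete, here is a minimal sketch of the standard “over” operator that compositors rely on when layering elements. This is an illustrative toy using NumPy, not any studio’s actual tooling; the function and array names are mine.

```python
import numpy as np

def over(fg, alpha, bg):
    """Layer a foreground over a background with the 'over' operator:
    out = fg * alpha + bg * (1 - alpha), applied per pixel."""
    a = alpha[..., None]          # broadcast alpha across the RGB channels
    return fg * a + bg * (1.0 - a)

# Tiny 1x2-pixel example: an opaque red layer, then a 50% transparent one,
# composited over a solid blue background.
fg = np.array([[[1.0, 0.0, 0.0], [1.0, 0.0, 0.0]]])
bg = np.array([[[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]]])
alpha = np.array([[1.0, 0.5]])   # per-pixel coverage of the foreground
out = over(fg, alpha, bg)
print(out)  # first pixel pure red, second an even red/blue mix
```

Real compositing packages stack dozens of such layers (keyed plates, rotoscoped mattes, CG renders) with exactly this kind of per-pixel blend at the core.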

Altogether, these fields make up the world of modern film CGI. It takes many people and many hours to create VFX – often far more than we, the audience, realize. Movies like 2012’s The Avengers and 2009’s Avatar are the culmination of many decades of technological advancement. Now that CGI can arguably create shots so real that we can’t tell the difference, it’s tough to speculate about the future. Maybe we’ll continue to develop the technology and create cinematic masterpieces that wouldn’t otherwise be possible. Or, maybe, the misuse of CGI will create deepfakes and fuel conspiracies that lead to the technology being outlawed. It will be interesting to see how it all plays out.

Check out this post on CGI from Andrew McDonald, a former CG Supervisor at the renowned Industrial Light & Magic: What is CGI (Computer-Generated Imagery) & how does it work?



How do Green Screens Work?

The green screen is a staple of modern cinematography – you simply can’t miss it. In fact, its bright, atypical colour is exactly what makes it useful to VFX artists. Specifically, computers analyze a shot to remove or make transparent this particular colour, which is later layered with something else through compositing. Using this green screen technique is also called chroma keying. It’s most often used when locations are too dangerous or too remote to film in, or simply don’t exist at all. Imagined settings often take shape as matte paintings, which are still images that a compositor will place in the green space. If the camera is moving in 3D space, however, the background image has to be tracked (i.e. made to follow the same motion, angle and position relative to the camera’s movement). Other uses for chroma keying have been gaining traction, such as prop removal and even disappearing limbs.

In Focus film school defines the computer’s chroma keying process below:
  1. The new background is composited (i.e. two images or video streams are layered together) into the shot.
  2. The chroma key singles out the selected colour (usually the green) and digitally removes it by rendering it transparent. This lets the other image show through.
  3. When used with more sophisticated 3D techniques, this process can add any new element (smoke, fire, rain, etc.) to complex moving shots.
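The keying step in that process can be sketched in a few lines. The snippet below is a deliberately naive toy keyer, assuming NumPy arrays of RGB values in [0, 1]; the function name and threshold are my own, and production keyers compute soft (partial-transparency) mattes rather than the hard on/off matte used here.

```python
import numpy as np

def chroma_key(frame, background, threshold=0.3):
    """Naive chroma key: flag a pixel as 'screen' when its green channel
    exceeds both red and blue by `threshold`, then swap in the new
    background at those pixels (a hard, binary matte)."""
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    screen = (g - np.maximum(r, b)) > threshold   # boolean matte
    out = frame.copy()
    out[screen] = background[screen]              # composite the new plate
    return out

# 1x2-pixel frame: a green-screen pixel next to a skin-tone pixel.
frame = np.array([[[0.1, 0.9, 0.1], [0.8, 0.6, 0.5]]])
sky = np.array([[[0.2, 0.4, 1.0], [0.2, 0.4, 1.0]]])
keyed = chroma_key(frame, sky)  # green pixel becomes sky; skin tone survives
```

Note how the “don’t wear green” rule falls straight out of the math: any costume pixel where green dominates would be keyed away along with the screen.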

We’ve all seen green screens at work; take the daily weather segment, where the background is swapped out for a map displaying information. Chroma keying is all around us, we just don’t notice it. There are a couple of “rules” when using this helpful tool. First and foremost: don’t wear green. This shade of green is chosen because of its uniqueness and its absence from most of our wardrobes, nature, props, etc. It also has very little trace in skin tones, making people appear as opaque as possible. If green isn’t practical for your use, try switching it out for the second-in-line colour: chroma key blue. Blue’s advantage over green lies in colour spill – because the green screen is so bright, it can reflect a soft green hue onto the subject, and blue does this less. Blue is also better for nighttime shots. Regardless, chroma keying is a practical and widely used technique in the entertainment industry, and thus a fundamental part of VFX. Whether it’s a sixth-grade project or one of Marvel’s many fantasy backdrops, green screens take everything up a level.
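One common fix for the colour spill mentioned above can also be sketched simply: clamp the green channel so it never exceeds the brighter of red and blue. This is one well-known spill-suppression heuristic, shown here as a NumPy toy with my own naming, not a description of any particular keyer’s algorithm.

```python
import numpy as np

def suppress_spill(frame):
    """Simple green-spill suppression: cap each pixel's green channel at
    max(red, blue). Pixels with a green cast get dulled; neutral and
    warm-toned pixels are left untouched."""
    out = frame.copy()
    limit = np.maximum(out[..., 0], out[..., 2])      # max of R and B
    out[..., 1] = np.minimum(out[..., 1], limit)      # clamp G
    return out

# A skin-tone pixel with green reflected onto it, and a neutral grey pixel.
frame = np.array([[[0.5, 0.8, 0.4], [0.3, 0.3, 0.3]]])
cleaned = suppress_spill(frame)  # spill pixel's green drops to 0.5; grey unchanged
```

Because blue screens reflect less of this cast in the first place, they need less of this correction – which is exactly the advantage described above.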



What is Motion Capture?

One of the more recent developments in VFX is motion capture. This technique combines real-world motion with computer-animation software, making it a very useful tool in filmmaking, game development and even physical wellness (e.g. analyzing gait for physical therapy). For VFX, a ‘mocap’ suit captures data from the actor, which is then applied to a CG character, such as Gollum from The Lord of the Rings. In what is widely regarded as one of the most famous mocap performances, Andy Serkis transforms into the small, unsettling creature via this technology.

There are several different types of motion capture. One is called optical mocap, and it comes in two forms. Passive optical mocap uses reflective markers – like the famous reflective ping-pong balls that jump to mind – to reflect light. Multiple specialized cameras detect the reflections, which are translated into computer animation. Active optical mocap, instead of using reflective markers, uses light-emitting markers such as LEDs that the cameras pick up. This is useful when conditions make seeing passive markers difficult, such as dim lighting or the outdoors. There is also markerless technology, which uses computer algorithms rather than physical markers to calculate motion; however, it is more prone to error. Lastly, there are inertial mocap suits that don’t require cameras at all, because the sensors within the suit analyze motion themselves. Inertial sensors like gyroscopes, magnetometers and accelerometers record motion in IMUs (inertial measurement units). All types but markerless motion capture use specialized suits – often called mocap suits – to read the actor’s movements. Altogether, these techniques make up the popular motion capture method that is revolutionizing filmmaking as we know it.
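To give a feel for how the inertial approach works, here is a bare-bones sketch of dead-reckoning orientation from gyroscope samples. Real IMU pipelines fuse gyroscope, accelerometer and magnetometer data (typically with quaternions and a filter) to correct the drift this naive per-axis integration accumulates; the function name and sample rate below are illustrative only.

```python
import numpy as np

def integrate_gyro(gyro_samples, dt):
    """Estimate orientation by Euler-integrating angular velocity:
    for each sample (rad/s about x, y, z), accumulate angle += omega * dt.
    Returns the running orientation estimate after every sample."""
    angles = np.zeros(3)
    history = []
    for omega in gyro_samples:
        angles = angles + np.asarray(omega) * dt
        history.append(angles.copy())
    return np.array(history)

# One second of a constant 90 deg/s roll, sampled at 100 Hz.
samples = [(np.pi / 2, 0.0, 0.0)] * 100
orientation = integrate_gyro(samples, dt=0.01)
final_deg = np.degrees(orientation[-1])  # roll ends near 90 degrees
```

This also shows why camera-based systems remain popular: a gyroscope only ever measures *change*, so any small bias compounds over time, whereas optical markers give an absolute position in every frame.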

What does motion capture allow us to do? Specifically, it facilitates the animation of 3D characters in all types of media. Instead of animating a character’s movement entirely in a computer program, VFX artists get a head start by having the movements already generated. Moreover, it has been at the root of advancements in organic, natural motion. Take Marvel’s Avengers: Endgame (2019) and the introduction of the ‘Smart Hulk.’ Here, Mark Ruffalo is suited up in both a mocap body suit and facial markers, and his acting is applied to the 3D-rendered Hulk character to create a seamless, realistic performance. Motion capture was thriving even back in 2015 with the release of the video game Until Dawn. This interactive horror game uses motion capture to place eight actors into a choice-based storyline. Rendering the actors entirely through motion capture, Until Dawn uses life-like movements and facial expressions to connect with audiences and enhance the fear factor. Check out this behind-the-scenes documentary on Until Dawn: