
'Boom Boom Sabotage': Extreme CG Skateboarding


Skateboard superstar Tony Hawk, shown here during MoCap filming, is ready to join the ranks of athletes who gain movie stardom. Tony Hawk in Boom Boom Sabotage is slated for a fall DVD release. All images © Mainframe Ent.

Just how many American athletes have made it to the big screen with a major part in a feature film? Probably fewer than a dozen: one remembers O.J. Simpson, Howie Long, Kareem Abdul-Jabbar, Michael Jordan, Shaquille O'Neal, Dennis Rodman... Now, skateboard superstar Tony Hawk is about to join this very select club. Tony Hawk in Boom Boom Sabotage is a 70-minute CG feature film developed at Mainframe Ent., a leading provider of direct-to-DVD animated features. It is slated for DVD release in mid-September and will air on Cartoon Network in mid-November. Directed by Johnny Darrell, Andrew Duncan and Logan McPherson, the movie will be distributed by FUNimation Ent. in the U.S. and by Alliance Atlantis in Canada.

Besides having to deal with a tight budget and a demanding deadline, the major challenge on Tony Hawk in Boom Boom Sabotage was finding a way to translate the extreme motions of top-notch skaters into dynamic CG animation. "The greatest challenge actually was creating a pipeline that could fulfill our creative goals in terms of the skateboarding sequences," recalls production designer and CG supervisor Casey Kwan. "We wanted to get away from the storyboarded feel of most animated movies and inject a more live-action feel with regard to the editorial. This meant that we needed to create a pipeline that would allow us to previsualize our sequences in 3D at an early rough stage with our motion capture, so that our directors and director of photography could establish multiple camera angles for our editor to work with."

The Art of Capturing World-Class Skaters

Mainframe has always strongly relied on motion capture to create CG animation: up to 80% of each project is animated via this technique. When the Tony Hawk project got the green light, it was considered a perfect opportunity to bring motion capture to a new level of realism. "Our set-up for MoCap was a state-of-the-art Vicon MX-40 system with 16 cameras," adds producer and co-writer Ben Burden Smith. "The capture volume was 20' x 40' for the Vancouver skateboard shoot, and we also used a customized set-up for a separate shoot with Tony Hawk that fit more with the size of his own ramp. Our dramatic shoot was very straightforward in our capture volume, with takes being as long as three minutes. Each actor had 42 markers on them, and we also placed four markers on each skateboard wheel. This way, we were able to have rotational data on the boards and could calculate where the board should be based on the feet data from the skater. We ended up with thousands of skate tricks to choose from, which is a really great liberty to have as a filmmaker when you're making a skateboard movie."
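As a rough illustration of the kind of solve Burden Smith describes, here is a minimal sketch, not Mainframe's actual tooling, of deriving a board's position and orientation once the four wheel positions have been reconstructed from their markers. The function name, coordinate conventions and sample values are invented for the example.

```python
import numpy as np

def board_transform(front_left, front_right, back_left, back_right):
    """Estimate a skateboard's position and orientation from the
    reconstructed centers of its four wheels (3D points)."""
    fl, fr, bl, br = (np.asarray(p, dtype=float)
                      for p in (front_left, front_right, back_left, back_right))
    center = (fl + fr + bl + br) / 4.0              # board origin: wheel centroid
    forward = (fl + fr) / 2.0 - (bl + br) / 2.0     # nose direction
    forward /= np.linalg.norm(forward)
    right = (fr + br) / 2.0 - (fl + bl) / 2.0       # rider's right side
    up = np.cross(right, forward)                   # deck normal
    up /= np.linalg.norm(up)
    right = np.cross(forward, up)                   # re-orthogonalized right axis
    return center, np.column_stack([forward, right, up])

# Made-up wheel centers for a board lying flat, nose toward +X.
center, axes = board_transform([1.0, 0.0, 0.05], [1.0, -0.2, 0.05],
                               [0.0, 0.0, 0.05], [0.0, -0.2, 0.05])
print(center)   # [ 0.5  -0.1   0.05]
```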

During the motion capture process, the key tool was Autodesk's MotionBuilder, a piece of software added to Mainframe's pipeline specifically for this project. "The core of the pipeline is XSI-based, which we used all the way down the line from design to modeling, animation, lighting, vfx, rendering and compositing," Kwan explains. "In the end, it was the most economical solution, especially since Mainframe was already XSI-based. The choice to go MotionBuilder was a no-brainer considering the volume of MoCap and the 3D approach we wanted to take with our pre-production. MotionBuilder allowed us to build long skate lines, set multiple cameras and evaluate character performance on set during MoCap shoots. This last detail was huge, as we could review our MoCap actors' performance right after a take on the actual 3D model. Usually, you wait days to see it, only to realize that the performance didn't suit the character design or that the blocking didn't match the 3D set. For a project of this scale, having the ability to visualize this stuff on set was an amazing luxury. In the end, MotionBuilder allowed us to achieve the language of film within a 3D animated context."


The skateboard MoCap was shot with one skater at a time under the technical supervision of motion capture technical lead Adam Hansen. For the dramatic shoot, Mainframe was able to motion capture up to five actors at a time. MotionBuilder was then utilized to blend motions together. "We could go through our database of tricks and say: 'OK, let's start with a few pushes, then go to a backside nose blunt slide on the small ledge, then blend a carve to the stairs and have him do a kickflip down the stairs,'" Burden Smith comments. "MotionBuilder is so fast, you could get that into editorial and easily say: 'Let's get another trick in that skate line.' You then do a quick blend, finesse your camera movement, and all this without breaking the bank. So, it was a great winning combination of creative flexibility as well as budget flexibility, which is at times not a reality on a low-budget project."
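At its simplest, the blending Burden Smith describes is a crossfade between the tail of one captured clip and the head of the next. The sketch below shows that bare-bones idea on flat channel arrays; production tools such as MotionBuilder also align root motion and interpolate rotations properly, which this toy version ignores, and the names and clip sizes are invented.

```python
import numpy as np

def crossfade(clip_a, clip_b, overlap):
    """Blend the tail of clip_a into the head of clip_b over `overlap` frames.
    Clips are arrays of shape (frames, channels) holding joint values."""
    w = np.linspace(0.0, 1.0, overlap)[:, None]          # 0 -> 1 blend ramp
    blended = (1.0 - w) * clip_a[-overlap:] + w * clip_b[:overlap]
    return np.concatenate([clip_a[:-overlap], blended, clip_b[overlap:]])

# Two toy "tricks" of 100 frames x 30 channels, joined with a 12-frame blend.
push = np.random.rand(100, 30)
kickflip = np.random.rand(100, 30)
skate_line = crossfade(push, kickflip, overlap=12)
print(skate_line.shape)   # (188, 30)
```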

Adjusting the Design Process

The motion capture was used to animate CG characters that were built in a way that was new for Mainframe. Normally, the art department would design a character on paper and then create a schematic turn-around of the design for the modeling team to work from. Sometimes, a maquette would be sculpted if the budget allowed for it, but for the most part, CG artists had to try to build the model to match the drawing exactly. Although this is a standard approach, the team always ended up disagreeing over whether the 3D model and the design matched. On the Tony Hawk project, Mainframe decided to avoid those conflicts by getting the designs into 3D as quickly as possible. This allowed the team to comment on a 3D model rather than a representation of a design on paper.

"Our approach was to get the 2D design approved in terms of its overall proportions, and then we would build a 3D proxy of it," Kwan observes. "This enabled our art director, Gil Rimmer, to see his designs in 3D at a very early stage. As a designer, you begin to see new problems with your work once in 3D. You also resolve things that you were worried about, such as functionality, but couldn't explore in 2D. When time allowed, we even rigged the proxies and applied MoCap to the characters, which gave us an opportunity to see how they moved and performed. This was extremely handy when it came to the more exaggerated character proportions, such as our lead villain, Grimley."

Mainframe avoided the standard approach of presenting the characters on paper or as a maquette. Instead, the design team got to the 3D design stage very quickly. 

"Moreover, as supervising modeler Francois Pelletier and his team were busy building proxies and resolving functional issues with the design, our art department was busy refining elements such as color, costume design and facial details. It was definitely a more organic way of working. There were some adjustments that both the designers and modelers had to make in terms of workflow, but the results were much faster turnarounds, better communication and a stronger team. Overall, I think this process raised the quality of our characters." Combining motion capture with some keyframe elements, the animation was overseen by supervising animator Chris Buckley, with lead animators Mike White, Larry Anderson and Francis Cardeno.

The environments went through the same creative process as the characters: first, 2D drawings by environmental concept artist Derek Toye, and then 3D proxies. Once the designs were nailed down, the proxies were imported into MotionBuilder for the team to establish the blocking for each scene as the first pass animatic was being built. The idea was to allow the director and the previs unit to work in a low-resolution 3D environment to establish camera movements and rough out a visual structure for the show. Working with the 3D scenes created by the previs team, the modelers could then get a sense of the coverage for each location. With that in mind, they would be able to focus their attention on the details that mattered most, augmenting the resolution and refining only what the camera would see.
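A hedged sketch of the coverage idea described above: once previs cameras exist, proxy set pieces can be flagged by whether any shot camera actually sees them, so modeling and texturing effort goes only where the lens points. The cone-of-view test, names and sample data are illustrative assumptions, not the studio's pipeline.

```python
import math
import numpy as np

def visible_in_shot(point, cam_pos, cam_dir, fov_degrees):
    """Rough test of whether a proxy's center falls inside a camera's cone of
    view (ignores near/far clipping and aspect ratio for simplicity)."""
    to_obj = np.asarray(point, float) - np.asarray(cam_pos, float)
    to_obj /= np.linalg.norm(to_obj)
    cam_dir = np.asarray(cam_dir, float)
    cam_dir /= np.linalg.norm(cam_dir)
    angle = math.degrees(math.acos(np.clip(np.dot(to_obj, cam_dir), -1.0, 1.0)))
    return angle <= fov_degrees / 2.0

# Flag which set pieces any previs camera sees, so detailing effort
# goes only where the coverage demands it.
cameras = [((0, 2, 10), (0, 0, -1), 50)]                 # (position, direction, fov)
props = {"half_pipe": (0, 0, 0), "back_fence": (40, 0, 5)}
needed = {name for name, p in props.items()
          if any(visible_in_shot(p, *cam) for cam in cameras)}
print(needed)   # {'half_pipe'}
```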

"This approach proved to be a huge cost saver, especially when it came to texturing," Kwan relates. "In retrospect, I think our greatest problem was getting our artists on every level to buy into the process as opposed to focusing on the final solution. The strength of a 3D animatic is also its fault: people start to assume that the animatic is true to the final show minus fancy rendering. In reality, there is still a mountain of details for the director to define."

Adding New Tools to the Pipeline

For facial animation, the modeling department developed a new approach that would allow them to stay within XSI, which had not been the case on previous projects. "On the Tony Hawk project, we needed the freedom to make changes on the fly," Burden Smith observes. "What we did was use the existing tools within XSI to build an in-scene facial interface. Since the new approach was all within XSI, changes could be instantly updated by the modeler directly. He could see how his shapes were interacting within the UI, thus allowing him to see the direct results on the user end. I think this played a huge part in forcing the modeler to think further through the pipeline and evaluate how his work would be used. This type of thinking is integral to creating an efficient pipeline; it's not all about hardware and software."
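The in-scene facial interface itself is proprietary to Mainframe's XSI setup, but the math underlying shape interfaces of this kind is typically a weighted sum of sculpted offsets layered on a neutral head. A minimal sketch of that idea, with invented names and toy data:

```python
import numpy as np

def mix_shapes(neutral, shape_deltas, weights):
    """Combine facial blend shapes: start from the neutral head and add each
    sculpted offset scaled by its slider value."""
    result = neutral.astype(float).copy()
    for name, delta in shape_deltas.items():
        result += weights.get(name, 0.0) * delta
    return result

# Tiny 3-vertex "head" with two sculpted shapes and current slider settings.
neutral = np.zeros((3, 3))
shapes = {"smile":   np.array([[0, 0.1, 0], [0, 0.0, 0], [0, 0.1, 0]]),
          "brow_up": np.array([[0, 0.0, 0.2], [0, 0.0, 0.2], [0, 0.0, 0]])}
sliders = {"smile": 0.8, "brow_up": 0.3}
print(mix_shapes(neutral, shapes, sliders))
```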


The facial tool was used to push the characters toward a more theatrical performance. This was especially true for the main villain, an over-dramatic circus ringleader whose emotional flair was realized through manic facial expressions. To give a more live-action feel to the shots, the team also developed numerous cinematography rigs that allowed the virtual camera to mimic Steadicam shots, hand-held shots and skateboard cinematography.
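A hand-held style camera rig of the kind mentioned above usually comes down to layering smoothed noise onto the virtual camera's transform. The sketch below is one generic way to generate such offsets; the function name, parameters and the idea of filtering random impulses are assumptions for illustration, not the production rigs themselves.

```python
import random

def handheld_offsets(num_frames, amplitude=0.5, smoothing=0.9, seed=7):
    """Per-frame (pan, tilt, roll) offsets in degrees approximating hand-held
    drift: random impulses pushed through a simple low-pass filter so the
    shake wanders smoothly instead of buzzing frame to frame."""
    random.seed(seed)
    state = [0.0, 0.0, 0.0]
    offsets = []
    for _ in range(num_frames):
        for axis in range(3):
            impulse = random.uniform(-amplitude, amplitude)
            state[axis] = smoothing * state[axis] + (1.0 - smoothing) * impulse
        offsets.append(tuple(state))
    return offsets

# Layer these on top of the animated camera, e.g. add each offset to the
# camera rotation per frame.
for frame, (pan, tilt, roll) in enumerate(handheld_offsets(5)):
    print(frame, round(pan, 3), round(tilt, 3), round(roll, 3))
```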

The show was finished with the YUV model of Quantel's eQ, a high-end compositing and finishing system. "We had a lot of flexibility with the eQ system to tweak our rendered layers and really give the movie a stylized look after we had rendered it," Burden Smith says. "We worked at a resolution of 720 x 576. Animation director Logan McPherson was trained on the eQ, and he gave the show its final once-over, adding in more saturation, playing with different kinds of film grain for certain sequences and really pushing our overall aesthetic for the show."
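The eQ finishing pass itself is an artist-driven, proprietary workflow, but the operations Burden Smith mentions, a saturation boost and film grain on SD frames, can be approximated in a few lines. A rough sketch under those assumptions (Rec.601 luminance weights, invented parameter values):

```python
import numpy as np

def finish_frame(rgb, saturation=1.2, grain_amount=0.03, seed=0):
    """Grade one RGB frame (floats in 0..1): push colors away from the
    Rec.601 luminance to boost saturation, then add monochrome grain."""
    luma = rgb @ np.array([0.299, 0.587, 0.114])           # (H, W) luminance
    graded = luma[..., None] + saturation * (rgb - luma[..., None])
    grain = np.random.default_rng(seed).normal(0.0, grain_amount, luma.shape)
    return np.clip(graded + grain[..., None], 0.0, 1.0)

# A placeholder frame at the production's 720 x 576 working resolution.
frame = np.random.rand(576, 720, 3)
out = finish_frame(frame, saturation=1.35, grain_amount=0.05)
print(out.shape)   # (576, 720, 3)
```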

One traditional aspect of CG character animation that the team never bothered to deal with was cloth simulation. "We just got rid of them," Kwan smiles. "Seriously, though, we had to keep things simple and pick our battles. Cloth and wind-blown hair were done by hand with a simple rig. We kept this stuff to an absolute minimum." Adds Burden Smith, tongue-in-cheek: "I also had a strict regime of not wearing any cloth during the production to support Casey's decision on limited cloth simulation, which made for an overly awkward work environment. But the breeze was nice!"

Alain Bielik is the founder and editor of renowned effects magazine S.F.X, published in France since 1991. He also contributes to various French publications and occasionally to Cinéfex. Last year, he organized a major special effects exhibition at the Musée International de la Miniature in Lyon, France.
