[ANIMATION] Jerome Chen (Sr. SFX Sup.) Explains Polar Express' Visuals

Posted by madchinaman · Message 1 of 1 · Dec 8, 2004
      A connect-the-dots picture
      Tiny paste-and-place sensors helped bring the vivid animation
      in "The Polar Express" to life.
      By Susan King, Times Staff Writer
      http://www.calendarlive.com/movies/boxoffice/cl-ca-how21nov21,2,3115927.story?coll=cl-home-utility


      The $170-million animated children's film "The Polar Express," based
      on Chris Van Allsburg's bestselling book, features a new character
      animation technique called "performance capture" that allowed Tom
      Hanks to play five roles, including an 8-year-old boy, a balding
      train conductor and a burly Santa Claus.

      Like any number of grand experiments, performance capture made its
      debut to decidedly mixed reviews from critics and audiences. Before
      the film opened on Nov. 10, senior visual-effects supervisor Jerome
      Chen of Sony Pictures Imageworks explained how he and Ken Ralston
      and their team created the unique character animation in the film,
      directed by Robert Zemeckis.

      What makes it different: "Movies like 'Shrek' and 'The Incredibles'
      are done by using key frame animation. A character animator actually
      animates the characters. You start with these maquettes
      [sculptures], and you scan them into the computer.

      "We kind of did the same thing, but we started with real people."

      Focus on the face: "Motion capture has probably been around for a
      decade and a half. It had its roots with the body. It was very easy
      to do body capture. What was unique in this case is that we captured
      the body and the face at the same time. The feeling was if the
      characters had to look like people, it would be best to get real
      actors to play them.

      "Our system was designed to have three or four actors together,
      acting with each other. They have to wear these skintight [suits]
      with reflective markers on them — they are strategically placed.
      Each little marker is at a specific point of the body — the elbow,
      the middle of the forearm…. The body had about 48 markers. It is on
      the face that we needed even more detail. So the face had 150
      markers. The ones on the face are like 4 millimeters [about 1/6 of
      an inch] wide. They are wrapped in this really reflective material,
      sort of what you see on street signs.

      "I think we went through 50,000 of these little markers."
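      The marker counts Chen describes can be pictured as a simple data
      layout: each captured frame is just a set of labeled 3-D positions,
      one per reflective dot. A minimal sketch in Python (the names and
      record shape are illustrative, not the production format):

```python
# One captured frame of performance-capture data: labeled 3-D positions
# for every reflective marker on an actor. Names are made up for illustration.
from dataclasses import dataclass

@dataclass
class MarkerFrame:
    time: float  # seconds into the take
    positions: dict  # marker name -> (x, y, z) in meters

# The article describes roughly 48 body markers (elbow, mid-forearm, ...)
# and about 150 smaller 4 mm markers on the face.
BODY_MARKERS = [f"body_{i:02d}" for i in range(48)]
FACE_MARKERS = [f"face_{i:03d}" for i in range(150)]

frame = MarkerFrame(
    time=0.0,
    positions={name: (0.0, 0.0, 0.0) for name in BODY_MARKERS + FACE_MARKERS},
)
print(len(frame.positions))  # 198 markers tracked per frame, per actor
```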

      70 cameras, one "eye": "Basically, the actor acts in this little
      area — we had to create a zone that was 10 feet by 10 feet — and the
      area is surrounded by more than 70 special motion-capture cameras.
      They are like recorders; they don't take pictures per se. They are
      looking at the markers. These cameras are all connected to these
      computers that run this special software.

      "The cameras all work together like a compound eye. Every camera can
      only see a small part of the actor, but it is sending whatever it
      sees to this software running on the master computer, and the
      software looks at what all the 70 cameras are seeing, so you
      actually get a 3-D image of the movement of the dots.
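      The "compound eye" step, where the master software merges what the
      70 cameras see into a 3-D position for each dot, is in essence
      multi-view triangulation. A minimal sketch with NumPy using standard
      linear least squares (the toy cameras are invented; the article does
      not describe Imageworks' actual solver):

```python
# Linear triangulation: each camera sees only a 2-D projection of a marker,
# but stacking the constraints from several cameras pins down its 3-D position.
import numpy as np

def triangulate(projections, observations):
    """projections: list of 3x4 camera matrices; observations: list of (u, v)."""
    rows = []
    for P, (u, v) in zip(projections, observations):
        rows.append(u * P[2] - P[0])  # each view contributes two linear
        rows.append(v * P[2] - P[1])  # constraints on the homogeneous point
    _, _, vt = np.linalg.svd(np.array(rows))
    X = vt[-1]                        # null-space vector = the 3-D point
    return X[:3] / X[3]               # back to Euclidean coordinates

# Two toy cameras looking down the z-axis from different x offsets.
P1 = np.hstack([np.eye(3), [[0.0], [0.0], [0.0]]])
P2 = np.hstack([np.eye(3), [[-1.0], [0.0], [0.0]]])

point = np.array([0.5, 0.2, 4.0])     # the marker's true 3-D position
obs = []
for P in (P1, P2):
    x, y, z = P @ np.append(point, 1.0)
    obs.append((x / z, y / z))        # what each camera actually records

print(triangulate([P1, P2], obs))     # recovers ~[0.5, 0.2, 4.0]
```

      With real hardware each camera sees only some of the markers at any
      instant, so the solver works with whichever subset of views currently
      has the dot in sight.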

      "There is a virtual skeleton and virtual muscle system [in the
      computer]. So if a dot moves on [Tom Hanks'] face an inch up when he
      raises his eyebrows, that distance will tell the muscle system to
      move. So the actors' movements drive the virtual version of it. The
      only difference is that the [virtual model] doesn't have to look
      like the actor or be the same size as the actor."
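      Chen's eyebrow example amounts to mapping a marker's travel from its
      rest position to a muscle activation on the virtual character, which
      is why the character need not match the actor's size. A hypothetical
      sketch (the function name, travel threshold, and units are
      assumptions, not the production rig):

```python
# Retargeting sketch: a tracked dot's displacement from its rest position
# drives a normalized muscle activation (0 = relaxed, 1 = fully raised)
# on the virtual character. Threshold and names are illustrative.
def brow_raise_weight(rest_y, current_y, full_raise=0.025):
    """Map vertical brow-marker travel (meters) to a 0..1 muscle activation."""
    lift = current_y - rest_y
    return max(0.0, min(1.0, lift / full_raise))

# The marker rises 1.25 cm, so the virtual brow muscle fires at half
# strength, regardless of how large the character's brow actually is.
print(brow_raise_weight(rest_y=0.0, current_y=0.0125))  # 0.5
```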

      Editing the performance: "Bob looks at the video reference of the
      performance capture and picks the pieces of the performance he
      likes. It is like the first stage of editing. Then he gives that
      videotape [of what he likes] and we find the appropriate little
      snippet of motion capture and we stick it on the right character. We
      have to piece them together and put them in the right [virtual set]."
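      That select-and-attach step can be pictured as a lookup from the
      director's chosen take and time range to the stored capture data,
      which is then bound to a character in the virtual set. A toy sketch
      (take names, times, and bindings are invented):

```python
# The director's selects are (take, start, end) ranges; each one is matched
# to the captured motion data and assigned to a character. All values here
# are stand-ins for illustration, not real production data.
captured_takes = {
    "take_12": "motion data for take 12 ...",
    "take_47": "motion data for take 47 ...",
}

selects = [
    {"take": "take_12", "start": 3.0, "end": 8.5, "character": "Conductor"},
    {"take": "take_47", "start": 0.0, "end": 4.2, "character": "Hero Boy"},
]

for s in selects:
    clip = captured_takes[s["take"]]  # find the matching snippet of capture
    print(f'{s["character"]}: {s["take"]} [{s["start"]}s-{s["end"]}s]')
```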

      Pulling together the movie: "This is the part where you actually
      figure out the point of view of the cameras. After we create in the
      computer points of view and sequences of scenes, we give that back
      to Bob, and then he edits it. The second part of his editing is
      really looking at what the movie looks like, because it now has a
      point of view.

      "Then at Imageworks we find, then fine-tune, the performance,
      meaning sometimes we don't get fingers [in motion capture], so an
      animator has to just work on fingers. We will work on the eyeballs.
      We have to look at what the eyelids are doing in the video
      references. Once we have the movement, we have to do clothes
      simulation.

      "At the same time, depending on what the shots are about, there's
      effects animation — snow has to be put in, and steam from the train —
      and then we talk about how we are going to light this scene with
      the lighting scheme. We are making it as pretty as we can imagine
      and then looking at it from shot to shot making sure everything
      feels right. When we are far enough along that you can tell how [the
      movie] looks, we go back to Bob and show it to him and see what he
      thinks. And he loves it!"