Before we start, here’s Kung Fu vs. Zombies for your viewing pleasure.

The premise behind our short Kung Fu vs. Zombies was simple – pay homage to the iconic alley fight scene from Big Trouble In Little China.

Inspiration for Kung Fu vs. Zombies
John Carpenter’s Big Trouble In Little China (1986) – Chang Sing vs. Wing Kong

(If you haven’t seen Big Trouble In Little China yet, please do. You’ll thank me later. It’s on Amazon for a dollar as I write this.)

When you watch the clip above (there's no spoiler in it, don't worry), you'll see about 20 different fights happening at once. The fights we focus on are in the foreground, but there's a lot happening in the background that fills the scene out beautifully. We're not supposed to focus on the background fights… but I can't help myself. They're hilarious.

Check out the fight in the background
Choreographer: “Alright everyone, even though you’ll be in the background, I want your fight scene to be intense and fast-paced.”
Stuntman with the hatchet: “Whatever, man, nobody’s gonna be watching us.”

Still, anyone who saw this scene ages ago was blown away by the amount of action happening on the screen. We wanted to make that. This would require building a set, hiring 50 martial artists, and shooting for 3-5 days with a crew of 30. Total budget: $100,000 USD on the low end.

Or, we could motion capture the entire thing with TWO martial artists and a crew of THREE. We could get an awesome location on the Unity asset store for $50, buy a couple character assets, and shoot the entire thing in two days in a motion capture volume. Our version would have a twist: the kung fu heroes fight against a zombie invasion in Chinatown. We could even bring in a dragon and a giant zombie that we could climb on. And to make it a true homage, we'd include a Jack Burton walk-on at the end.

When we started motion capturing Kung Fu vs. Zombies (KvZ) we didn’t even have a location in mind. We just knew it had to be a Chinatown alley. I’d deal with that later. I had Dennis for a day, so we just started shooting.

We started with the lineup, an iconic Western-style standoff that kicks off the scene. We did 16 or so takes for both sides, including the run-up. We only had enough run distance to get a total of 2 steady run cycles, but that's enough once you bring it into post.

Kung Fu vs. Zombies mocap shoot

We then shot a ton of fight scenes. Dennis and I have worked together for over 15 years. Ever since Contour, we’ve figured out how to move together so easily that we can complete each other’s (physical) sentences. Choreography isn’t even really a process. We just move around and it becomes fight choreography. Sometimes I’d be the zombie, other times he’d be the zombie.

Eric and Dennis motion capturing the action in Kung Fu vs. Zombies

The goal, again, was to shoot everything the way John Carpenter did. I'd have every fight scene happening at once in the same location and just shoot vignettes, so the entire location would be constantly chock-full of action. All the background action is just a repetition of what you see in the vignettes, but the audience is engaged, so they're only watching the foreground stuff.

Some of these fight scenes were shorter than others. This would become a problem later on.

We then shot the giant zombie scene. This is the tentpole of the whole piece, the “Bad Guys Close In” segment to use Save the Cat terminology. Dennis acted out some basic navigation. We simulated the giant zombie eating one of the characters, which was just me sitting on a barstool, and also did some ladder climbs so we could have some characters climb the giant zombie leg.

Lastly we shot our Jack Burton footage outside. The Xsens system works off a router, as there are no cameras. I went out of range but that didn’t matter, since the suit buffers all the capture and loads it onto the computer once you go back within range.

Post-Production

For post, I decided on Unity because it allowed me to quickly drop the motion capture files in, put them on the free 3D models, and not really have to mess with anything. Plus I had already tinkered a fair amount in Cinemachine in our previous 3Viz video.

When I lined everyone up in Unity, I looped the run-up animations at the crossing of the feet so they would blend relatively smoothly. You have to tinker with the motion settings so the characters don't veer off at an angle, but once that was done, I could have them run forever.

A short run cycle for each character that’s repeated so our characters can cover any space we put them in
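For the curious, here's roughly what that looping looks like scripted against Unity's Timeline API instead of dragged together by hand. This is a minimal sketch, not my exact setup: the clip, character, and timing values are placeholders, and the root-motion offsets still need matching per clip.

```csharp
using UnityEngine;
using UnityEngine.Playables;
using UnityEngine.Timeline;

// Rough sketch of the run-loop trick using Unity's Timeline API. The clip,
// character, and timing values are placeholders, not the project's real assets.
public static class RunLoopBuilder
{
    public static void BuildLoop(PlayableDirector director, TimelineAsset timeline,
                                 Animator runner, AnimationClip runCycle,
                                 double footCrossTime, int copies, double overlap = 0.1)
    {
        var track = timeline.CreateTrack<AnimationTrack>(null, runner.name + " Run");
        double loopLength = runCycle.length - footCrossTime;

        for (int i = 0; i < copies; i++)
        {
            TimelineClip clip = track.CreateClip(runCycle);
            clip.clipIn = footCrossTime;              // start each copy at the foot-cross pose
            clip.start = i * (loopLength - overlap);  // overlapping clips cross-fade automatically
            clip.duration = loopLength;
        }

        // Bind the track to this character's Animator. The root-motion offsets
        // still need matching per clip so the runner doesn't veer off at an angle.
        director.SetGenericBinding(track, runner);
    }
}
```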

I used the original Big Trouble in Little China scene as reference and shot it essentially the same way using Cinemachine. All the shots were done manually with keyframes.

I went with these low-polygon heroes at first because they were free, plus I thought it was a cool throwback. But when we released our first and second dev diaries, people commented on the low-quality assets compared to the more realistic zombie assets. In the end we purchased some higher-quality character packs from the Unity asset store for $12.

Another asset that we needed before making much progress was the location itself. I bought this Chinatown scene for $25 from the asset store and spent many hours tweaking the lighting and environment to get it to look cinematic.

Then I had to fit everyone into the alley. This was redundant work because I had already positioned everybody on the blank white background. Had I started with this set, I would have been done much faster.

Yet more lighting tweaks. You really have to put an arbitrary limit on things like lighting, or else you’ll tweak it forever.

Note: I added a 24-frame pre-roll before my animations start. This allows the lighting to “kick in”. Otherwise, the first second of animation would have various lights clicking on for whatever reason. There might be a toggle for this somewhere in Unity, but I didn’t find it.
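The same pre-roll can also be scripted instead of dragging every clip by hand. A minimal sketch, assuming a 24 fps timeline (the frame count and rate are assumptions, not numbers from Unity):

```csharp
using UnityEngine.Timeline;

// One-off helper: push every clip in the timeline back by a fixed pre-roll so
// the lighting settles before the first real frame.
public static class PreRoll
{
    public static void Add(TimelineAsset timeline, int frames = 24, double fps = 24.0)
    {
        double offset = frames / fps;
        foreach (TrackAsset track in timeline.GetOutputTracks())
            foreach (TimelineClip clip in track.GetClips())
                clip.start += offset;   // everything now begins one second later
    }
}
```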

Special thanks to Adam Myhill for the cinematic tips like adding fog and some Cinemachine filters.

What I noticed right away was the sheer amount of swaying and unnecessary movement from our heroes. The Wing Kong and the Chang Sing were almost perfectly still, but our guys were shuffling around like we all had to pee. And I had deliberately tried to reduce this movement during motion capture, because mocap actors tend to add way too much unnecessary movement (neck kinks, shoulder twists, lots of nervous footsteps in place, random weight shifts, none of which is natural). Still, old habits die hard. I slowed a bunch of these animations down to 0.5x speed to resolve this.

After ensuring that no characters were clipping through one another or through the surrounding scenery, that the lighting looked good enough, and that the seams in the ground weren’t too obvious (which could be resolved by raising the range and decreasing the intensity of the corresponding light source, a task that became increasingly complex as more lights were added), I shot the scene with Cinemachine and released it as dev diary episode 2.

Since I was dealing with so many characters, each shot required lots of cheating and shifting of characters and it took many hours more than necessary to shoot this part. If I had to do it again, I could bang this scene out in an hour or two. With an actual Vcam (like Dragonfly, which we used in Cabin Fever) it might take 20 minutes.

Every camera angle was exported using the Unity Recorder package, which you can add to your project for free (make sure you select “Show Preview Packages” in your package manager window). I only exported MP4 files, which are relatively low quality in Unity Recorder, even with the quality set to “high”. Adding an image-sequence recorder would have concurrently exported a PNG sequence, which could then have been stitched into a beautiful 422 ProRes file in Media Encoder, something I didn’t do until the next scene.

Movie export settings in Unity Recorder
Image sequence export settings in Unity Recorder
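For reference, the same PNG-sequence setup can be driven from an editor script through the Recorder API. A sketch assuming Recorder 2.x; the output path, resolution, and frame range are placeholders:

```csharp
using UnityEditor.Recorder;
using UnityEditor.Recorder.Input;
using UnityEngine;

// Hypothetical editor-side setup: record the Game view as a PNG sequence,
// ready to be stitched into ProRes in Media Encoder afterward.
public static class PngSequenceExport
{
    public static void Record(int startFrame, int endFrame)
    {
        var controllerSettings = ScriptableObject.CreateInstance<RecorderControllerSettings>();
        var controller = new RecorderController(controllerSettings);

        var image = ScriptableObject.CreateInstance<ImageRecorderSettings>();
        image.name = "PNG Sequence";
        image.Enabled = true;
        image.OutputFormat = ImageRecorderSettings.ImageRecorderOutputFormat.PNG;
        image.imageInputSettings = new GameViewInputSettings { OutputWidth = 1920, OutputHeight = 1080 };
        image.OutputFile = "Recordings/KvZ_" + DefaultWildcard.Frame;   // KvZ_0001.png, ...

        controllerSettings.AddRecorderSettings(image);
        controllerSettings.SetRecordModeToFrameInterval(startFrame, endFrame);
        controllerSettings.FrameRate = 24;

        controller.PrepareRecording();
        controller.StartRecording();
    }
}
```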

Another mistake I made in this scene was failing to make a proper hierarchy for my vcam assets. All the Cinemachine vcams were in the root hierarchy, which was messy, but I was more concerned with just pumping shots out and getting the thing done. I assumed I could rearrange these after shooting everything, so once I was done I stuck all the vcam shots into an empty “Virtual Cameras” object. But suddenly Cinemachine didn’t know where these vcams were located anymore, even when I re-linked them in the animation window. I would have to move them BACK OUT to the root hierarchy for the Cinemachine angles to function properly again. Still, not wanting to hamper my progress, I just tucked them away somewhere and moved on.

We felt pretty good about our dev diary. Then someone mentioned that the low-polygon heroes looked cheap, especially compared to their sleek, sinewy zombie opponents. So I paid $12 for two packs: PBR Fighters and this Ninja Pack. I spent a day or two re-fitting all the characters with their new, high-poly skins. For $12 I went from Virtua Fighter 1 graphics to PS3-PS4 quality.

Swapping in high-poly characters for the older low-poly characters

Kung Fu Fighting

With the lineup done and shot, it was time to move on to the fighting. My plan was to animate each fight scene as a single animation strip, layer the strips on top of one another, and place them throughout the scene. The question then became: how do I create a new timeline with all this stuff already in the scene?

I probably went about it all wrong, but it still worked. I basically duplicated all my characters and made one folder for “Fight Scenes”, where each Kung Fu vs. Zombie fight would happen, along with their associated weapons and props and effects, and one folder for “Vcams” with their camera angles.

Since each character had a parent object that could be moved about freely, I could mess with distancing whenever necessary. So when Dennis throws a kick at my stomach, we could do it from 2 feet away, and I’d put a keyframe there to move the characters close together. When I grab him immediately after, I keyframe that spot and reposition the characters. It works flawlessly, save for a bit of foot sliding (which you could tweak in MotionBuilder), but the camera rarely looks at the feet. This is the benefit of having control of your camera and style. You don’t have to worry about making everything look perfect: just shoot the stuff that looks good and cheat the rest.

Fudging character distance in the Unity timeline
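Under the hood, that cheat is nothing more than an empty parent per character. A sketch of the setup (names hypothetical): the mocap keeps playing on the character while the wrapper gets keyframed in the Timeline.

```csharp
using UnityEngine;

// The distance-cheat rig: each mocapped fighter lives under an empty parent,
// and that parent is what gets keyframed to pull fighters together for contact.
public static class FightRig
{
    public static Transform WrapInRoot(GameObject character)
    {
        var root = new GameObject(character.name + "_Root").transform;
        root.position = character.transform.position;
        character.transform.SetParent(root, worldPositionStays: true);
        return root;   // keyframe this; the mocap keeps playing underneath
    }
}
```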

Lots of the fights had swords. We should have done prop capture, which would have made for less work and more realistic-looking weapon animation. Instead I had to hand-key all the weapons, though parenting a weapon to the character’s hand makes this relatively painless.
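Parenting a prop in code looks something like this. A minimal sketch assuming a humanoid rig; once the sword is a child of the hand bone it inherits all the mocap motion for free, and only the grip offsets are left to hand-key.

```csharp
using UnityEngine;

// Minimal prop-parenting sketch, assuming a Humanoid-rigged character.
public class WeaponAttacher : MonoBehaviour
{
    public Animator character;   // the mocapped fighter
    public Transform sword;      // the prop to attach

    void Start()
    {
        // Humanoid rigs expose their bones directly through the Animator.
        Transform hand = character.GetBoneTransform(HumanBodyBones.RightHand);
        sword.SetParent(hand, worldPositionStays: false);
        sword.localPosition = Vector3.zero;          // then nudge/keyframe the grip per shot
        sword.localRotation = Quaternion.identity;
    }
}
```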

Some fight scenes were shorter than others. If a background fight suddenly ended, its characters would vanish. I’d catch this most of the time, but sometimes there’d be a distant fight with a disappearing kung fu fighter that I didn’t catch until much later in production.

To resolve this, I would go into the vcam animation track group and add an “Activation Track” for the background fight in question and make sure that it was deactivated on a camera cut. Some fights mysteriously disappear between shots, but at least it’s not mid-take, and nobody will notice this stuff.

Activating and deactivating characters per virtual camera in Unity’s Cinemachine
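Scripted, that Activation Track fix looks roughly like this; the fight-group object and shot times are placeholders:

```csharp
using UnityEngine;
using UnityEngine.Playables;
using UnityEngine.Timeline;

// Sketch of the Activation Track fix: the background fight's parent object is
// only active while its shot is on screen, so it never vanishes mid-take.
public static class FightActivation
{
    public static void LimitToShot(PlayableDirector director, TimelineAsset timeline,
                                   GameObject fightGroup, double shotStart, double shotEnd)
    {
        var track = timeline.CreateTrack<ActivationTrack>(null, fightGroup.name + " Active");
        TimelineClip clip = track.CreateDefaultClip();   // the "Active" span
        clip.start = shotStart;
        clip.duration = shotEnd - shotStart;

        track.postPlaybackState = ActivationTrack.PostPlaybackState.Inactive;
        director.SetGenericBinding(track, fightGroup);   // toggles the whole group on/off
    }
}
```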

Vcamming all the fights was fun. Shooting and editing is where you can make action really sing. We had all these shots and edits basically planned in our heads as we were motion capturing them, so this part went smoothly.

We don’t shoot coverage-style, which would mean shooting the entire fight from multiple angles and editing it together later. That shooting-editing style tends to come off as unintentional; we wanted every shot to have a definite meaning, like a proper kung fu film. We pop off particular shots for particular action, even if that means doing one move per camera setup. This was the Sammo Hung style, which negates the need to repeat action over and over and risk injury. Sammo’s impact-heavy style subjected stuntmen to wear and tear, so he’d pop off a couple takes of only that action, call it good, and never do that action again. I call this performer-economy, since it doesn’t exhaust the performers.

Performer-economy shooting style, with zero coverage
From Sammo Hung’s Pantyhose Hero

In 3D, we’re tempted to over-shoot because it’s free to keep popping off angles. There’s also the perverse incentive of repeating choreography in different angles and attempting to sell it as different action, but we believe in disciplined shooting and maintained this same performer-economy style. In keeping with a rhythm Dennis and I have developed over the years, we would typically do 3-5 camera setups per action vignette, for a total of about 120 vcam shots for the mass-fight scene.

The entire short has the same motion blur. I shoot my live-action fights with a 1/50 or 1/60 shutter, or 1/120 if there are weapons involved (though for this short I never changed it). Strobe-y fight scenes shot with a 1/500 shutter are visually strange. The trend took off after Gladiator, where the fast shutter was used to good effect with the weapons and production value, but for some reason cinematographers and directors then decided every fist fight needed to look like Gladiator too. The naked eye has a natural motion blur that registers movement to the brain. If your fight has a strobe-like quality, the brain might register the images, but without motion blur the viewer needs more processing power to string those images together and process the fight scene.

Strobe-y fight scene from Fast & Furious 7 (2015)

Another issue with a strobe-like effect in a fight scene is that contact lines get compromised. A punch across the face shot at 1/50 shutter creates a pleasant motion blur, which allows the performer 2-3 frames of leeway to react in time. But at 1/250 or 1/500 shutter, the punch will be on one side of the face, then the other side of the face, with no blur. The audience will wonder why the impact is off, and it’s because there was never any contact.

So, I applied the Cinemachine motion blur filter to the global camera profile so it would never change.
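For anyone matching a blur filter to a live-action shutter, the conversion is simple: shutter angle = 360 x frame rate x exposure time. A quick helper to show the numbers:

```csharp
// Shutter angle in degrees = 360 * frameRate / shutterDenominator,
// where shutterDenominator is the x in "1/x" (e.g. 50 for a 1/50 shutter).
public static class ShutterMath
{
    public static float ShutterAngle(float frameRate, float shutterDenominator)
    {
        return 360f * frameRate / shutterDenominator;
    }
}
// ShutterAngle(24, 50)  -> 172.8 degrees (filmic blur, what this short uses)
// ShutterAngle(24, 500) -> 17.28 degrees (the Gladiator strobe look)
```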

I had the option of vcamming everything with the Dragonfly vcam, but unlike in Unreal’s Sequencer, there was no obvious way to edit these vcam shots in Cinemachine. Dragonfly works much better in Unreal. There’s also the Expozure vcam system, which is super high-grade, but we weren’t ready to transition by the time we started working with that one. We’ll use Expozure for the next short.

I should have put the vcams in the same folder as the fights themselves, because then if I moved a fight, the cameras would move with it. Instead, whenever I moved a fight, I’d have to move the cameras independently as well, which messed up all my shots.

Benefit of having vcam shots in the same folder as the corresponding action

Some ideas had to be thrown out, such as some bone breaks, which would have required some MotionBuilder work, as well as dismemberment, which would mean editing the 3D model in Blender or some other 3D tool to show the cross-section of the removed limb. The choreography never really called for this, and it was too technical for me. Maybe next time.

Next time, we’ll do finger capture. It would have been a huge help to the animations; as it is, we only did minimal finger adjustments for the “Hung Gar hands” portions, another nod to Big Trouble In Little China.

Paying homage to the Chang Sing hand signal (“Hung Gar hand”)
Big Trouble in Little China 1986

After an internal viewing, the Sumo stuff stole the show, so we added some shots for him on a pickup day. We also added a comedy bit with the red ninja’s shuriken (Dennis’s idea) and the rhinoceros smashing a zombie into a sign. This was accomplished with a series of 3 animations blended together.

Zombie Giant

In theory, adding a giant to your scene is pretty simple. Just mocap your actor, compensate for his size (maybe 10 meters translates to 50 meters in the 3D scene), and slow him down a bit. We fit the giant zombie into the scene without much of an issue, but I realized that he’d collide with all the Chinese lanterns I’d hung in the scene. I tried to animate them so they’d fall down, but it looked crummy. So, like any good filmmaker, I just cut away and added off-camera effects.
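The basic giant setup fits in a few lines. A sketch, with one assumption flagged: the square-root slowdown is a common animator's rule of thumb for large creatures (bigger swings read slower), not the exact factor we used, since "slow him down a bit" was eyeballed.

```csharp
using UnityEngine;

// Giant-zombie setup: scale the root, then slow the animation so the size reads.
public class GiantZombie : MonoBehaviour
{
    public float scale = 5f;   // e.g. 10 m of capture space reading as 50 m on screen

    void Start()
    {
        transform.localScale = Vector3.one * scale;
        // Rule-of-thumb slowdown, not the short's exact value: ~0.45x at 5x size.
        GetComponent<Animator>().speed = 1f / Mathf.Sqrt(scale);
    }
}
```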

We barely had enough of a “scared run” cycle, and only 4 of them at that. We should have captured 16 of them and tripled their length. So whatever you see is all there was. You’ll see some characters begin to stop, but I tried to cut away to avoid showing this.

Climbing up the giant zombie legs was relatively simple. I parented the ladder climb animations to the zombie leg and compensated for the pant leg depth. In the end it looks okay. It would have been even better to animate the heroes’ legs dangling more and swaying with the motion of the walk cycle, but for a cheap edit job this worked pretty well.

Parenting a character animation to another character’s leg, with keyframe editing on the transform properties

We planned for a dragon to enter the scene and didn’t think much more of it. It turned out to be pretty simple, as the dragon cost us $15 on the Unity store. There was a built-in effect for its fire, but I couldn’t get it to work, so I had to build my own with Unity’s particle system. I also parented a similar fire effect to the giant zombie’s head for when he goes down to the ground.

Using Unity’s particle emitter to create a dragon flame

After spending a bit of time learning particle systems, I figured I’d try my hand at making blood effects too. This turned out to be a huge ordeal. I tried purchasing two blood asset packages, neither of which worked. There was no clear-cut tutorial on making a blood particle system. So I tweaked and tweaked, probably for a total of 10-16 hours, until something looked acceptable. Even then, the blood has no collision properties and falls through the floor, so this would have to be (again) hidden with camera and editing.

Using Unity’s particle system to create blood effects
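For what it's worth, here's the shape of a blood burst in code, with the particle Collision module switched on, which is Unity's built-in answer to the falling-through-the-floor problem. Every value here is a placeholder rather than my final settings.

```csharp
using UnityEngine;

// Sketch of a blood burst: a cone of dark red particles with heavy gravity
// and world collision so the splatter settles on the ground.
[RequireComponent(typeof(ParticleSystem))]
public class BloodBurst : MonoBehaviour
{
    void Start()
    {
        var ps = GetComponent<ParticleSystem>();

        var main = ps.main;
        main.startSpeed = new ParticleSystem.MinMaxCurve(2f, 5f);
        main.startSize = new ParticleSystem.MinMaxCurve(0.02f, 0.08f);
        main.startLifetime = 1.5f;
        main.gravityModifier = 2f;                    // heavy, splattery arcs
        main.startColor = new Color(0.35f, 0f, 0f);   // dark arterial red

        var emission = ps.emission;
        emission.SetBursts(new[] { new ParticleSystem.Burst(0f, 40f) });

        var shape = ps.shape;
        shape.shapeType = ParticleSystemShapeType.Cone;
        shape.angle = 25f;

        var collision = ps.collision;                 // keeps blood on the floor
        collision.enabled = true;
        collision.type = ParticleSystemCollisionType.World;
        collision.dampen = 0.8f;                      // kill most bounce on impact
    }
}
```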

Still, once the blood was made, it was easy to replicate everywhere. I could easily make splatters for the Sumo attacks, and if you parent a blood particle system to a character, it follows them around. These effects really add to the scene, and I’m happy I invested the hours.

For the final Jack Burton cameo, I didn’t consider the fact that we’d actually have to find a character that resembled Jack from Big Trouble In Little China. It turns out there’s nothing out there, nothing even close. And creating the iconic Jack Burton tank top was way beyond my pay grade, so I used Adobe Fuse to build a Jack wearing the simpler cream-colored poncho he wears at the beginning of the film.

Jack Burton created in Adobe Fuse and the $5 truck from the Unity store
Jack Burton, late to the party as usual

Getting your Fuse character into Unity isn’t a simple task. First I exported it to Mixamo, which generates the rig. But that skeleton isn’t prepared to run the Xsens mocap animations we had, so our MotionBuilder tech Mike Foster rigged it up in MoBu and made it ready to go. Even then, importing the character into Unity results, for some reason, in the textures being set to transparent. So I had to extract the textures and reapply them to all the body elements. Then we had Jack.
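If you hit the same transparent-texture problem, the fix is to reassign the extracted textures and flip each material back to Opaque. Scripted against the Standard shader, the Opaque flip is the stock keyword shuffle below; this is a general-purpose sketch, not something specific to our project.

```csharp
using UnityEngine;

// Force every material on a character back to Opaque rendering mode,
// assuming the Standard shader (which Mixamo/Fuse imports typically use).
public static class FuseMaterialFix
{
    public static void MakeOpaque(GameObject character)
    {
        foreach (var r in character.GetComponentsInChildren<Renderer>())
            foreach (var mat in r.materials)
            {
                mat.SetOverrideTag("RenderType", "");
                mat.SetInt("_SrcBlend", (int)UnityEngine.Rendering.BlendMode.One);
                mat.SetInt("_DstBlend", (int)UnityEngine.Rendering.BlendMode.Zero);
                mat.SetInt("_ZWrite", 1);
                mat.DisableKeyword("_ALPHABLEND_ON");
                mat.DisableKeyword("_ALPHAPREMULTIPLY_ON");
                mat.renderQueue = -1;   // back to the shader's default queue
            }
    }
}
```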

I spent about 2 days on sound design. I have a very fast process for sound, developed while doing previz on Heart of a Champion, that lets me bang out sound designs quickly. It involves a lot of hotkeys and organization techniques; I’ll write a separate post about it someday. I also nested each character’s sound-effect group, created sequences from those, and dropped them in as background sound whenever I wanted to fill in some ambient fight noise.

Sound design and mix done in Adobe Premiere

Mark R. Johnson handled all the titles. We went with the Carpenter style for both the intro and final credits using the Albertus font. And JP Franco created our thumbnail.

Final Takeaways

I learned about 6,482 things doing Kung Fu vs. Zombies, but here they are narrowed down to a top-10 list:

  1. You can make crowds really quickly using motion capture.
  2. Unity assets are cheap and they have everything you could ever want on the asset store.
  3. Cinemachine looks great but takes time compared to the Dragonfly iPad-based vcam. (Unreal’s Sequencer arguably takes as much time as Cinemachine.)
  4. Making a single blood particle system is hard, but once you invest in making it look right, you might as well use it everywhere.
  5. If Unity had nested timelines like Sequencer, it would be a far more competitive filmmaking tool.
  6. Organize your vcams carefully when using Cinemachine. If they’re intended to move around with the action, parent them under the same object. Do not start editing your vcams until your hierarchy is set!
  7. Adobe Fuse is a powerful tool for making quick character models, but you’ll need to tweak it in MotionBuilder before it’s ready for an Xsens mocap animation.
  8. When exporting using Unity Recorder, if you’re exporting a clip that starts 2 minutes into the sequence, Recorder renders everything, rather than just skipping to that 2-minute mark. Recorder is solid, except for this one issue.
  9. All the cool people have seen Big Trouble In Little China. Have you?

    and finally…
  10. Learning that we could make a full-blown action movie using nothing but some motion capture suits and a Unity scene changed how I see filmmaking. This kind of movie would have been impossible 10 years ago, but as storytellers we have all the tools we could ever want to make whatever we want. I look forward to seeing how Unity, Unreal, and Maya get utilized by the indie filmmaking world, because today, there’s no longer a barrier to entry to telling a story. Just learn the tool and start making stuff.

Many thanks to the people who have kept watching our projects over the years. We believe the action of Kung Fu vs. Zombies and the ease of creating it is a sign of things to come.

by Eric Jacobus
I spent years doing traditional pre-vis (or “previz”) for action scenes in films and shows like Altered Carbon, Black Panther, and A Good Day to Die Hard. I took this overseas for Heart of a Champion and Man Who Feels No Pain. Previz is a video blueprint for a movie.

One of the first pre-vizzes was done for The Karate Kid, essentially a walkthrough of the entire movie. You could see it on YouTube. [EDIT: looks like the video is gone.] Pre-viz became more advanced with digital filmmaking, which Yuen Wo Ping employed in The Matrix’s previz, and with 3D tools in the sequel. Serenity’s previz, done by 87Eleven, employed sound effects, props, and crowds of stuntmen wrecking on concrete and wooden stairs. This was probably when the previz market exploded. Every indie action filmmaker had learned camera and editing skills over the previous 10 years. They scored big Hollywood jobs, but those filmmaking skills sat dormant. Now they could be employed to full effect to sell a coherent action vision. (The industry term, in the action world, is “stunt-viz”.)

High quality stunt-viz became its own selling point. It became common to work stunt-viz into the budget. The market began to demand that stunt-viz include, besides choreography blocking, all camera angles, editing, sound design, visual effects, music, color correction, and if possible wirework.

The limitations of live-action stunt-viz meant constant re-shoots, repeated falls and reactions for the stunt performers (unwelcome wear and tear), and many late-night re-edits. The result was the equivalent of a short action film created over the course of a week or two, which the production could use to demonstrate its high-quality action team.

When it came time to shoot, it was anyone’s guess whether they would actually use the stunt-viz. Most of the choreography would inevitably be thrown out due to time or performance constraints. They threw out the entire rooftop stunt-viz for A Good Day to Die Hard, but I never found out why. They might use some camera angles from the stunt-viz, but the DP will have his own vision. (Forget even asking him to come to the stunt-viz session. He won’t.) And if they do use the stunt-viz edit, then you’ve found a unicorn, or the production just wants everyone to be happy so they can traffic heroin on the side.

At any rate, whether or not any of the stunt-viz was used in the project might not matter. The team still got a long gig out of it, and the stunt coordinator got a high quality demo reel with the stunt-viz.

Since then, previz has become an entire market. Halon, Third Floor, and every stuntman on earth have the means to create high-quality previz. Some of them are ridiculous in their production value and have so much gloss they could almost be short films. Almost…

What exactly can you do with stunt-viz, or live-action previz in general? You can polish it up, add VFX, and try to make it into a short film, but at best it’s a bunch of stuntmen in workout pants doing choreography in a gym. Live-action pre-viz just doesn’t carry very far beyond:

  1. Reference for the production
  2. A demo reel to pitch for the next production
  3. Fun behind-the-scenes material that you hope will get a couple thousand views on YouTube

The Process Is the Problem

Live-action pre-viz is a process problem. Stunt-viz loses value because film productions are linear by nature. They weren’t always that way. Chaplin and Keaton films were, in a way, non-linear. They were like live performances where everything happens at once, only with a camera. Set construction was Keaton’s specialty, and his gags hinged on this process. Chaplin would rearrange entire scenes to get a gag right. Jackie Chan, with the same live-performance background as these Vaudeville performers, used the same process to make his great works. These productions were vertically integrated. The auteur‘s vision had perfect continuity because he exercised control over the elements of production. And that’s how you made good comedy. The auteur was a busy guy because he had to ensure every department carried his vision to completion. Or he just did it himself.

Jackie Chan editing Project A in 1983

The studio system commodified film by making the process linear. Set construction was graciously taken off Keaton’s back so he could focus on things the studio deemed more important, like learning his dialog lines to take advantage of the new sound capabilities of film. This was the death of Keaton and the rise of screwball stars who could say funny things in funny ways. Fortunately, Chaplin had a good voice… and his own studio. The filmmaking process would become more like an assembly line.

The linear filmmaking process.

The linear studio system is what we have today. It’s Netflix, Universal, WB, and Disney. Production departments have relative autonomy over their processes. There’s some oversight, but generally these teams are free to do what they need to do, and they do it with caution. The camera team will overshoot (just to cover themselves), and the editors will edit the mountain of footage. Where a single cut might work perfectly in a scene, the editor might use ten, because ten angles were shot, and you don’t want to throw stuff away. The camera operator doesn’t edit the film, and the choreographer doesn’t shoot the action. The common result is the “Blockbuster style”: lots of camera angles, lots of editing, lots of money. Bollywood and Chinese blockbusters are the same. The linear process is the antithesis of the Chaplin, Keaton, and Jackie Chan genius.

You got a job at Marie Callender’s because they tasted your grandma’s signature apple pie recipe. Now you work the line building pies. There are fifteen stations of the Marie Callender’s apple pie. Your job is to cut the apples. The guy down the line puts marshmallows in the apple pie, because they sold 80 million applemallow pies in China last year. You wish you could make your grandma’s apple pie, but hey, it’s a job.

The Non-Linear 3Viz Process

I did some motion capture for God of War and some other games. One day I walked past a sound booth on the way to the mocap stage, where a sound designer was working on the sound for that day’s mocap shoot. This broke my linear filmmaking brain. How can you predict the sound design for something you haven’t even shot yet?

Game design, and 3D filmmaking in general, is not a linear process. It’s a spiral. I snapped this photo of the game design process during a Unity presentation:

This isn’t a revolutionary way of thought. This is how great action and comedy were made almost a century ago. Sometimes, great ideas are very old. The process also applies to virtual production, which is where 3D engines and filmmaking cross paths. In a virtual production, you can motion capture animal movements and stream them live onto an LED wall or into a green screen, composited on the fly and tracked with the camera movement. Once-disparate processes of filmmaking suddenly collide into the same moment. The auteur’s vision can be executed at every second, but only if he can grasp the tech.

That’s the moment I took a right turn from the traditional, live-action world and began learning Unity, Unreal, MotionBuilder, and the Xsens system. We created action scenes using these tools, pitching them as high-end 3D pre-viz, which I dubbed 3Viz. With 3Viz, we could shoot and edit the pre-viz, or we could ship it to the production and let them do it. Reshoots and re-edits in 3D were as simple as moving some camera icons around, altering the timeline a little, and re-exporting. A reshoot might take a couple hours for a single person. The alternative was the live model, which meant getting our 15 stuntmen together again at the gym and re-shooting and re-editing everything.

The goal of 3Viz is to get the director’s action vision solidified before post-production, before cameras roll. The director might want to change the environment to accommodate the action. He might want a character to be 30 feet taller. Or move the sun 90 degrees west. All of this is 3D modeling 101 and requires a few clicks. Finishing the 3Viz mocap, shoot, and edit requires a team of 4-8 people, who can work remotely from cruise ships or hot springs at the same time and see live updates. The film is pre-finished this way.

The 3Viz is sent to all relevant departments. The art department replicates the textures when painting the set, carpentry builds the set the director devised for the action, wardrobe looks at the asset costumes, and the camera team has a very defined shot list. Any shots they can’t accomplish practically have already been sent to VFX, who are using the camera animations created in the 3Viz to build those shots. VFX also has the motion capture files, character assets, and anything else needed for creating VFX shots in post. Publicity is using the assets to create posters and social media posts to promote the film. The sound department is designing a soundscape based on the 3Viz edit. The composer is already writing to the 3Viz, and his music can be played on set like Morricone’s.

Cameras haven’t even rolled yet, but the film is almost done. During shooting, production can change the lighting on the fly to reflect the 3Viz using LED walls, and other lighting setups were pre-programmed weeks ago. Dailies are passed to the editor, who edits to the same 3Viz edit that he’s supervised for the past few months.

Post-production? What post-production? Clean it up, take a week-long vacation, and release the movie a month after shooting completes. The result is an action vision that is exactly the way the director planned it.

Cabin Fever is a demonstration of the 3Viz process, which allows continuity of vision throughout the entire production. 3Viz can be a pre-viz and just stay that way. Only shoot what you need. Mocap for a day, edit for 2 days, and it’s done. Iterate away. But all the assets acquired during pre-viz can be kicked up to production and worked into the final product.

In the case of Cabin Fever, the 3Viz IS the final product. With some facial capture, finger capture, asset creation, additional lighting and all that, we could have made it look as good as a Pixar film, but the action and the comedy, the bread and butter of the project, remain the same through all this. We hope you enjoy the short, but even more we hope you like the process. We cover that at the 5-minute mark.

Email me at eric@superalloyinteractive.com if you’d like more info, or if you wanna give it a shot.

Credits:
WRITTEN & DIRECTED BY ERIC JACOBUS
PRODUCED BY ZAC SWARTOUT
ERIC JACOBUS AS “THE MAN”
DENNIS RUEL AS ZOMBIE, ROBOT, AND “BERNIE”
SET TECH/BEHIND-THE-SCENES/TITLES MARK R. JOHNSON
MOCAP SUPERVISOR/UNREAL TECH/MOTION EDITOR MIKE FOSTER
MOCAP TECHNICIAN CORDERO ROCHE
LIVE VCAM OP JEREMY LE
PA’s – CHRIS CORTEZ & DANIEL SHEPHERD

MUSIC: “HOME ON THE RANGE” BY CHRISTIAN LABRECQUE
“DAWN OF THE DEAD” THEME BY GOBLIN
“A HAPPY DAY” BY Z80
“GODZILLA THEME” BY ARTIFICIAL FEAR
“TERMINATOR THEME (COVER)” BY NEON FRONTIER

POST-PRODUCTION BY ERIC JACOBUS
BUILT IN UNREAL 4.23
SHOT USING GLASSBOX DRAGONFLY
SOUND DESIGNED IN ADOBE
MOTION EDITED IN AUTODESK MOTIONBUILDER
SPECIAL THANKS TO TJ GALDA AND ALINA KLINAEVA

CREATED WITH 3VIZ
COPYRIGHT SUPERALLOY INTERACTIVE 2020 ALL RIGHTS RESERVED

Eric Jacobus (God of War, Mafia III) is taking what he’s learned from his work as a motion capture performer in triple-A videogame titles and applying it to the indie film and game development world. Only days after utilizing motion capture to embody an Omen of Sorrow character for a live stunt show at Santiago’s FestiGame, Jacobus used the same Xsens marker-less motion capture system to record himself kicking as a robot in a 3D environment. Normally a fight scene requires at least two performers, but Jacobus took it upon himself to record both sides of the fight scene. He then built a 3D environment within Unity and played the animations against one another, simulating a fight scene between two actors. A small behind-the-scenes look at Jacobus’s process can be seen at the end of the video.

Jacobus says he’s just scratching the surface of what’s possible now that he’s able to easily execute action choreography within a motion capture system.

Nerdmacia recently interviewed Eric Jacobus at FestiGame in Santiago, Chile about his work as a stuntman in God of War. Below is a translation of the original interview.

At Festigame Coca Cola 2018, we were able to briefly interview Eric Jacobus, the stunt double for Kratos in the new God of War game. We asked him some questions about his work and what it means to be a stunt double, and this is what he answered.

Nerdmacia: Being a stunt double must be a very fun job. How did you become one?

Eric: Oof, it’s a long story, but a very nostalgic one for me. I grew up in the era when action movies were at their peak (the mid-’80s and part of the ’90s), and I have fond memories of seeing very good shows on MTV and other interesting movies. I remember starting to imitate everything they did, the flips and everything else. That was my side of it; meanwhile, in Hong Kong, they were making action movies on low budgets, and I had some friends into the same thing, so we finally decided to get together and start making independent films in our backyard. Over time we got the attention of several people for what we did, and then in the 2000s YouTube was born, a platform that lets you upload content for free. So I made my channel and started uploading the content we made, and that’s when the bomb went off. People from all over the world started watching us, and not just people, but important companies too. The work started to go viral and get more attention, and the next thing I knew, I was working as a full-time stunt double. Eventually I came to Sony to work on the new God of War.

Nerdmacia: Could you tell us about your experience working on a video game?

Eric: It’s very different working on a video game than on a movie. I thought I was going to do more than I actually did, but it turns out that in a video game you’re just doubling for a 3D model; you’re not really in it yourself. Kratos is also played by two different people: Christopher Judge lends his voice and his movements in general, and then I’m the stunt double. So yes, it’s very different. This game required, to give you an example, about 8 hours of physical work every day, which is very exhausting, whereas on a movie it’s much easier to split up the time, and it’s a lighter process in the long run.

Nerdmacia: Working in video games and traveling through fantastic worlds must be a truly unique experience. What was the first video game you worked on, and what was it like to see yourself in a world that doesn’t exist?

Eric: The first video game I worked on was Mafia III from 2K; fortunately I had a friend who was involved in the production, and I lived a few blocks away from him at the time. He contacted me mainly because they needed a guy who could take all kinds of beatings, haha: stabbings, kicks, shootings. So my first job was motion capture, and I remember spending days and days and days on it, having to recreate falls and all the kinds of stunts you very rarely see in movies, and not just for one character but for several at once. In the game you see children falling, old people falling, women falling, so like I said, I had to take all kinds of beatings, haha. It was very funny because I ended up being everyone’s laughingstock, but in the end that’s what led me to success. The truth is ironic.

Nerdmacia: How long does it take you to shoot a scene, on average?

Eric: Again, it’s a totally different process in video games and movies. In a movie, a fight scene takes many minutes to shoot, and if it doesn’t go well you do it again, and so on until someone says “I liked it,” and then it’s cut and that’s what stays. In a game like Mafia III or God of War there are several cameras, each with a different angle, so it’s not even minutes. You simply do one stunt and they say “cut,” then you do another, and “cut” again. In the end all those divided shots are joined together and the whole sequence appears. In movies it takes longer, but because the work isn’t divided up, you get to enjoy the process much more. But that’s what being a double is about: putting all your energy into what you do, whether it’s a lot or a little.

Nerdmacia: In an earlier interview you mentioned that you had a very serious experience with one of your knees. What happened, and what health measures are taken to make sure nothing happens to the production?

Eric: Oh yes, my knee. Right in the middle of recording God of War I landed badly on one of the many stunts and began to have a very sharp pain in one of my knees. I did the work, but obviously a lot of people in production noticed I was limping, so one of them, Carlos, came up to me and said, “Hey… how’s your knee?” Of course nobody wanted to stop production, but Carlos told me that, if necessary, we would stop if my pain continued. The pain did continue, but Carlos was very kind to me, almost an angel… he gave me some medicine and advice so that it wouldn’t happen again, and in fact he helped me a lot in those scenes so that I didn’t have to suffer so much. In the end the pain healed thanks to his advice, and I haven’t had a bad experience like that again. But of course, at the time it was very scary.

 

In light of his performing stunts for Kratos in Sony’s 2018 hit God of War, Eric Jacobus was invited to participate at FestiGame in Santiago, Chile in early August. Jacobus was first asked to give a stunt demonstration and motivational talk, but then he saw the numbers: over 40,000 video game fans from all over Latin America would attend FestiGame, and he had a stage to work with. So he quickly brought together some teams to do something that’s never been done before: a motion capture stunt show starring a live video game character.

Jacobus knew that the right tool for the job would be Xsens MVN, a marker-less motion capture system that runs on inertial sensors and can be used anywhere, without the optical cameras required by a Vicon or OptiTrack system. He originally saw the Xsens suit at E3 in 2017, and now he knew how to apply it. All he needed was a character to embody in the live show, and he discovered that the upcoming Chilean fighting game Omen of Sorrow would be at FestiGame. The show coordinator contacted AOne Games, and they agreed to let Jacobus use their Dr. Hyde character.

Chris Adamsen of Xsens rigged the Dr. Hyde character in Unity and, using an Xsens plugin, streamed Jacobus’s movements from the suit directly into Unity to drive the Dr. Hyde model. The result was a stunt demonstration in which Jacobus brought a video game character to life in front of a live audience. (Video shot by Zac Swartout)

Jacobus plans to bring live motion capture stunt shows to other venues and hopes to portray other video game characters in the near future. If this would be a good fit for your show or your video game character, perhaps you can make it happen together.

Eric Jacobus is reachable at theericjacobus@gmail.com.

Eric Jacobus is back with another entry in his Tekken In Real Life series, this time with Akuma from Tekken 7. Jacobus recently took a short break from producing the series while moving his studio again, but now that he’s settled in, the raging demon can be unleashed.

Jacobus writes in the video’s description:

Akuma’s moves are the result of doing Karate in sandals for a thousand years. His stance is totally grounded, as if his toes are gripping the ground. Everything is heavy-handed and Karate-based, except for some of his impossible airborne attacks. Nonetheless, if you find yourself in a street fight and you’re wearing flip flops, don’t try to fight like Akuma.

I had the insane privilege of working on Santa Monica Studio’s monumental Playstation 4 game God of War doing motion capture for Kratos, and the team put together this interview about how we approached Kratos’ combat in the game.

Eric Jacobus as motion capture stuntman for Kratos in God of War

[T]hey didn’t exactly hire a UFC fighter to do the motion capture for Kratos. Instead, they turned to a YouTuber who had been, for fun, making videos where he recreates moves from fighting video games – Eric Jacobus.

Bruno Velasquez, the game’s principal animator, had seen Jacobus on YouTube years back, saw him recreating moves from Street Fighter and Tekken, and said, “That guy needs to be our Kratos. Like he’s Kratos. Look at his moves. Look at how he’s flying and doing Superman punches!”

So they pretty much just sent the guy a message on YouTube …

While the actor Chris Judge plays the voice of Kratos and does all the cinematics, it’s Jacobus’ moves you’ll see doing the occasional chokehold and unleashing a fury of fists on one of the game’s unlucky foes.

Rappler (March 19, 2018)

Eric Jacobus motion capture audition video for Kratos, God of War

I was working on the Tekken In Real Life series when Santa Monica Studio, the team behind God of War, called and asked me to audition for the Kratos role. I proposed making a move list for them, and after tinkering some more in my garage I made a 6-minute reel for the character, like the IRL videos. They called me down to the studio and I started work soon after.

And thank God, because Santa Monica Studio saved this dad and his family when we were at a real low point. As a father on the brink of failure, I channeled that frustration and swung that ax for 8 hours a day as hard as I could, dropped on my neck as many times as they wanted, and climbed and kicked and punched non-stop, and I’d have done it 8 hours more. I got to work with top-level video game directors like Mehdi Yssef, Bruno Velasquez, Dori Arazi, James Che, and Tomek Baginski, and it was a joy working alongside Jade Quon, Chris J. Alex, Thekla Hutyrova, TJ Storm, and Kelli Barksdale.

Eric Jacobus Kratos – God of War Mocap with Chris J. Alex and TJ Storm

Game creators, filmmakers, and stunt coordinators are always scouring the internet for inspiration, and that’s how they found me. If you have a skill, then hone it, film it, and put it online. And do it nonstop. Treat it like a second job. I did at least ten of these Tekken IRL videos before they called me for God of War. Work hard, and you might be doing stunts for a project like this too.

Eric Jacobus motion capture performer for Kratos in God of War

Thank you to the team at Santa Monica Studio and all the people behind Sony Playstation for this great opportunity, and Katsuhiro Harada and Bandai Namco of Tekken for helping this garage man with a GoPro chase his dreams.