About The Film

The Plot
Li tells the story of a future in which Earth's remaining population lives contentedly within the ergonomic, medicated, and peaceful confines of Link. Li, the titular protagonist, faithfully serves his duty to the city as a Programmer during the day, but come night his sleep is tormented by nightmares of a half-forgotten past. When a mysterious woman is brought into the Programming Bay for mental reconstruction, Li is struck by a strange sense of familiarity and begins a tumultuous journey of discovery that will challenge Link itself.

Making the Film
Live-action production was spearheaded by Digital Hydra using a stereoscopic beam-splitter rig and dual RED One cameras at DePaul University's East Jackson soundstage, as well as various locations around the Chicago area. The film runs approximately 25 minutes and is intended as the concept pitch for an eventual feature film.

During production I served as a Visual FX Supervisor, and now, during post-production, I am responsible for developing the stereoscopic visual effects workflow.


Credits
Screenplay: Ross Heran
Director: Ross Heran
Producers: Patrick Wimp, Hamzah Jamjoom, Jacquelyn Chenger

VFX Supervisors: Hamzah Jamjoom, Tim Little

Friday, September 10, 2010

Stereo Matchmove with Keying & Compositing

This shot will eventually be one of the central "glamor" shots of the film, featuring a beautiful vista of the city and a setting sun spread out before Li. The city will be created using matte paintings projected onto proxy geometry within The Foundry's Nuke, allowing for flexible art direction without the expensive render times of Maya. The matchmove was created in Boujou from the right-eye camera, and a stereo rig was then parented to this single camera.
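The idea of deriving both eyes from one tracked camera can be sketched as a small calculation. This is a hedged illustration, not the rig actually used on the film: it assumes a parallel (off-axis) stereo rig, where each eye is offset half the interaxial distance along the tracked camera's local X axis and convergence is handled with a horizontal film-back shift rather than toe-in. All numeric values are illustrative.

```python
# Sketch: per-eye offsets for a parallel stereo rig parented to a tracked camera.
# Assumption: off-axis (film-back shift) convergence, not toe-in.

def stereo_eye_offsets(interaxial, convergence_dist, focal_length, filmback_width):
    """Return (x_offset, film_shift) for one eye of a parallel stereo rig.

    x_offset   -- translation along the rig's local X axis (scene units)
    film_shift -- horizontal film-back shift as a fraction of film width,
                  chosen so both frustums converge at convergence_dist
    """
    x_offset = interaxial / 2.0
    # Shift the film back so this eye's frustum centers on the convergence plane:
    # shift/focal_length == x_offset/convergence_dist, normalized by film width.
    film_shift = (x_offset * focal_length) / (convergence_dist * filmback_width)
    return x_offset, film_shift

# Example: 6.5 cm interaxial, converging 5 m away, 35 mm lens, 36 mm film back.
x, shift = stereo_eye_offsets(0.065, 5.0, 0.035, 0.036)
# Right eye sits at +x with -shift; left eye mirrors it at -x with +shift.
```

In Maya terms, each value would drive a child transform and the camera's film offset attribute under the tracked camera, so the stereo pair inherits the solved move for free.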

Wednesday, September 1, 2010

Stereo Matchmove Test

Although I am quite comfortable with matchmoving and compositing a moving camera, this is my first time trying to blend stereoscopic CG renders with stereoscopic footage. Unfortunately, I am not yet able to reliably build Maya scenes from recorded measurements of the set, so the interaxial offset (IO) and convergence plane of my stereo camera are aligned by eye, based on anaglyph renders of the raw footage. This works well enough for a calm camera move like this one, but I will definitely need to engineer a better system for more complex shots.
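A quick red/cyan anaglyph check like the one described above amounts to taking the red channel from the left eye and the green/blue channels from the right eye. A minimal NumPy sketch (the frame shapes and values here are synthetic, purely for illustration):

```python
import numpy as np

def make_anaglyph(left_rgb, right_rgb):
    """Red/cyan anaglyph: left_rgb, right_rgb are (H, W, 3) floats in [0, 1]."""
    out = right_rgb.copy()
    out[..., 0] = left_rgb[..., 0]   # red channel comes from the left eye
    return out

# Tiny synthetic example: a flat 2x2 "frame" per eye.
left = np.full((2, 2, 3), 0.8)
right = np.full((2, 2, 3), 0.2)
ana = make_anaglyph(left, right)
# ana carries the left eye's red (0.8) over the right eye's green/blue (0.2).
```

Viewed through red/cyan glasses, horizontal disparity between the two layers reads as depth, which makes misaligned IO or convergence obvious at a glance.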

(Sorry about how short the video is. I am still proving out my post workflow, and rendering two 2240x960 images for every frame becomes expensive very quickly.)
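To put that cost in rough numbers, a back-of-the-envelope sketch (assuming uncompressed 16-bit RGB frames at 24 fps; the bit depth and frame rate are my assumptions, not stated above):

```python
# Back-of-the-envelope storage cost of stereo renders at 2240x960.
width, height, eyes = 2240, 960, 2
pixels_per_frame = width * height * eyes           # both eyes combined
bytes_per_frame = pixels_per_frame * 3 * 2         # RGB at 16 bits per channel
fps, seconds = 24, 10
total_mb = bytes_per_frame * fps * seconds / 1e6   # ~6.2 GB for a 10 s clip
```

Render time scales the same way: every shot is effectively rendered twice, once per eye.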