Feature Turnover Guide – VFX

Managing VFX is a daunting subject and a big task, even for smaller films. It’s so complicated that films that can afford it will hire a separate VFX Editor just to keep track of the film’s VFX and to create temp comps as placeholders until the VFX come in. I’ve been putting off writing this article for years now because the thought of trying to encapsulate it all in a generically useful way was a bit overwhelming, but here goes…

The Job of A VFX Editor

On bigger films, there is at least one VFX Editor, and often there are two or three. On smaller films, the kind with one editor and one assistant, that assistant editor handles all the usual AE duties plus all the VFX Editor tasks. If you’ve never done it before, it’s a steep learning curve. VFX Editors are responsible for:

  • Creating temp comps for the editor to use while cutting, before any VFX vendors start working
  • Tracking every shot in the timeline that needs any kind of VFX, and giving it a shot ID
  • Tracking changes to the cut that affect what VFX are needed. This includes knowing when a shot is cut out, when a new shot is added, when a shot is slipped far enough that your handles no longer cover it, or when sync within a shot is changed in a way that affects the sound or VFX work that needs to be done.
  • Creating “count sheets” (sometimes called lineup sheets) that detail each shot, what work is needed, what elements the VFX vendor needs to complete the shot, and the timing of each element in relation to the overall shot.
  • Creating EDLs or Pull Lists for a Post facility to scan/render DPX frames of your plates and elements to hand over to the VFX vendor. If you don’t have a Post facility, you might be in charge of rendering out DPX frames yourself using the raw camera footage and software like DaVinci Resolve.
  • Receiving versions of all the shots from your VFX vendor, cutting them into the timeline, reviewing each version with the editor and director, then taking the notes from those review sessions and communicating them back to the vendor.
  • As shots are finaled, ensuring that each finished shot is delivered to the DI facility in the appropriate format, and that the right version of the shot was delivered.
  • Constantly checking and re-checking all of the above, because things always fall through the cracks.

The VFX Database

One essential tool that VFX Editors need in order to do their work is an effective way to manage all of this information. This usually comes in the form of a VFX Database, and most VFX Editors bring their own to each new job they start. Most of the time the database is made with FileMaker Pro, but some VFX Editors have custom solutions, and if your needs are minimal you can get away with using a simple spreadsheet. There is no standard VFX database out there, though I’ve seen at least one that you can pay for if you don’t want to get into designing a FileMaker database on your own. Many VFX Editors are protective of their databases, which is understandable given the hours of customization they’ve put into creating them, so if you need a database and find someone with a good one who’s willing to share, consider yourself lucky.
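If your needs really are simple enough for a spreadsheet, the same idea can be sketched in a few lines of code. This is a hypothetical, minimal schema (the field names are my own, not a standard); a real FileMaker database formalizes far more than this, but the record-keeping principle is the same:

```python
import csv

# Minimal, hypothetical shot-tracking schema -- a real VFX database
# tracks many more fields than this.
FIELDS = ["shot_id", "description", "status", "vendor",
          "turnover_date", "final_version"]

def save_shots(shots, path="vfx_shots.csv"):
    """Write the shot list out as a plain CSV spreadsheet."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        writer.writerows(shots)

def load_shots(path="vfx_shots.csv"):
    """Read the shot list back as a list of dicts."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

shots = [
    {"shot_id": "TE0010", "description": "Key greenscreen, add explosion",
     "status": "In Progress", "vendor": "VendorA",
     "turnover_date": "2018-03-01", "final_version": ""},
]
save_shots(shots)
assert load_shots()[0]["shot_id"] == "TE0010"
```

However you store it, the point is that the list is yours and you update it yourself.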

Marty Kloner’s VFX Database for Star Trek Into Darkness

If you do borrow someone else’s database, one thing to consider is whether you like their workflow and are willing to emulate it. One of the reasons that these databases are so customized is that everyone has their own ideas on how to do each part. Are you someone who likes to enter all the shot information manually, or do you create subclips in a bin and then export a tab-delimited file to import into your database? Do you need a thumbnail for all your elements in addition to the shot itself, and if so do you need just one thumbnail or a heads & tails set to confirm start and end frames? And how do you name your shots? With a two letter sequence prefix and a padded four digit number, or do you break it up by scene first?

The answers to all of those questions help determine the needs of your database, so if you inherit someone else’s be prepared to do what they do, because if you want to go a different way you’ll find yourself very frustrated.

Also, it’s important to note that many VFX vendors will now give you access to their internal tracking systems. This is great, and can be a useful way to communicate, but never rely on the vendor’s database in lieu of your own. That’s a guaranteed way to have things fall through the cracks. You must always keep your own list of shots and what their statuses are.

The Basics

So assuming you’ve got an idea of how you’re going to track your shots, let’s go over the details of what information you need to include.

VFX Shots

A record of every shot is the basis for everything you’re going to do from here on out. The most important information to track is:

  • the Shot ID, which you are responsible for creating. A common naming system is to come up with a 2-letter abbreviation for each VFX sequence in the film and then, starting from 0010, name your shots in increments of 10. So if your sequence is called “Things Explode”, you would start out with shots IDed as TE0010, TE0020, TE0030, etc. This pattern is easy to communicate verbally, easy to type (no need to hit shift for an underscore separator, e.g.), and the gaps let you insert a new shot after an existing one while maintaining a rough chronological order. It is also not dependent on the scene number where a VFX shot is located, which some people like to include as part of the Shot ID, but which I think is an irrelevant piece of information for VFX purposes.
  • the duration of the shot. This can be either the duration in cut or the total duration turned over for work, or both. Whatever’s more useful for you.
  • the shot’s handles. Handles are extra frames you’re asking the vendor to work on beyond what’s currently in the cut. It’s common to receive a shot back and want to add a few frames to the head or tail. If you only turned over the footage that was in your cut at the time, you wouldn’t be able to trim the shot. But if you have 8-frame handles, for example, that’s 16 extra frames you’ll get back that you can use in the cut if you need to.
  • the description of the shot. This is where you tell your VFX vendor what exactly you want them to do (and hope that they read it). Even if it’s really obvious. Do they need to key out the greenscreen and add laser beams coming from a cat’s eyes from frames 39-47? If so, write it down in the description.
  • the status of the shot. Keep your own list of what shots are In Progress, On Hold, Omitted, Final, and CBB (meaning “could be better”). Don’t rely on your vendor’s list, but do crosscheck your list with your vendor’s at regular intervals to be sure you’re on the same page with what work is left to do.
  • the vendor. You might have more than one vendor working for you. Make sure you track which shots go to which vendor.
  • the turnover date. It’s useful to know what date you turned a shot over to be worked on. If you name your turnover batches, note that down too.
  • the final version and date. When you’re nearing the end of your film, you will want to check that the vendor delivered the right version of each shot. Keeping a record in your database of what version was finaled and when will allow you to make sure you’ve got the right files in your DI. If you find that your vendor has delivered a newer version of a shot than what you noted down, be sure to ask them about it. It might just be a tech fix (something small they noticed and fixed without needing client review), but best to be sure.
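To make the TE0010-style numbering concrete, here’s a small sketch. Only the prefix-plus-padded-number pattern and the increments of 10 come from the scheme described above; the function names are my own:

```python
def shot_id(sequence, number):
    """Build a shot ID like TE0010: 2-letter prefix + 4-digit padded number."""
    return f"{sequence}{number:04d}"

def next_id(sequence, count, increment=10, start=10):
    """ID for the (count+1)-th shot in a sequence, numbering in tens."""
    return shot_id(sequence, start + count * increment)

def insert_between(a, b):
    """Pick an ID roughly halfway between two existing IDs, so a shot
    added to the cut keeps rough chronological order (e.g. TE0015).
    Assumes a 2-letter prefix."""
    prefix, lo, hi = a[:2], int(a[2:]), int(b[2:])
    if hi - lo < 2:
        raise ValueError("no room between IDs -- renumber or use a suffix")
    return shot_id(prefix, (lo + hi) // 2)

assert shot_id("TE", 10) == "TE0010"
assert next_id("TE", 2) == "TE0030"
assert insert_between("TE0010", "TE0020") == "TE0015"
```

Numbering in tens is exactly why the insert-between trick works: you always have nine free slots between adjacent shots.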

Screenshot of Marty Kloner’s VFX database, showing a shot list for Star Trek Into Darkness

Elements

Every VFX Shot requires at least one Element. An element is a piece of footage required to complete the shot. If your VFX needs are not complicated, many of your shots will have only one element. For example, if you’re removing a scar from an actor’s face, you only need to hand over the shot that’s in the cut. If you’ve got complicated shots, then you might have a background plate and multiple foreground elements. For example, a screen replacement is a 2-element shot. You have the shot in the cut that has a TV in it, and you have the content you want to be inserted into the TV. Both of those elements would need to be handed over to your VFX vendor, along with information on how the TV content should be lined up with the background plate.

Important information to track for Elements is:

  • the element name and version. This can be as simple as taking your shot ID and adding a suffix to it. So if the shot is TE0010, your element might be called TE0010_bg1_v1. And another could be TE0010_fg_smoke1_v1. Have a conversation with your vendor to determine if they have a particular preference for element naming. The version number is useful in case you extend a shot beyond its handles. Then you would have to deliver a new element at the extended length, and you would increment your element version to v2.
  • the tape and timecode of the element. This should be pretty obvious. You and your vendor both need to know which parts of each element you’re actually using. You need this so you can generate DPX files, and they need it so they can line up the elements correctly. If you have a post facility making DPX files for you, you might not get a chance to check that the DPX elements are right before they go off to the vendor, but if you have a record of what material was supposed to be turned over, you can start to troubleshoot.
  • the handles you’re including. Element handles often mirror the shot’s overall handles, but sometimes you’ll need to customize them.
  • turnover dates and scan orders. It is common, especially if you have to scan film or go through a post house for DPX files, to turn over the elements for a lot of shots at once in one lump EDL or Pull List. It helps to keep track of when these batches were sent and what the name of each batch was.
  • speed information. If there are any speed effects on your elements, note that in the element description. If it’s a fancier timewarp effect, you might also include a screenshot of the graph and note which frames have keyframes and what their speed % is.
  • lens and camera data. You should be prepared to locate lens and focal distance data for a particular piece of footage if requested. This can usually be found on the original camera reports, or sometimes in the notes of an on-set VFX supervisor if there was one.
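The naming and handle arithmetic above is simple enough to sketch. This assumes the TE0010_bg1_v1-style names from the text; `scan_range` is a hypothetical helper showing how 8-frame handles add 16 frames to a pull:

```python
def element_name(shot_id, kind, version=1):
    """Hypothetical element naming: TE0010 + suffix -> TE0010_bg1_v1.
    Confirm the scheme with your vendor before committing to it."""
    return f"{shot_id}_{kind}_v{version}"

def scan_range(cut_in, cut_out, handles=8):
    """Frame range to pull, padding the cut range by `handles` per side."""
    return cut_in - handles, cut_out + handles

assert element_name("TE0010", "bg1") == "TE0010_bg1_v1"
assert element_name("TE0010", "fg_smoke1") == "TE0010_fg_smoke1_v1"

start, end = scan_range(1000, 1047, handles=8)
assert (start, end) == (992, 1055)
# total extra frames beyond the cut: 8 at the head + 8 at the tail
assert (1000 - start) + (end - 1047) == 16
```

If a shot is extended past its handles, you re-pull at the new range and bump the element version to v2, as described above.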

Received Versions

Keep track of every Quicktime you get back from your vendors, the date you received it, and any notes from the director or editor on fixes that need to be made. As mentioned above, also note when a version becomes final.

Final John Wick: Chapter 2 timeline with dailies on V1 and final VFX on V2. VFX Editor: Kim Huston

What Your Editor Needs From You

Every film is a fight against entropy, but there are steps you can take to make life easier for yourself and the Editor.

  • You need a fast way to navigate to every shot in your timeline, so use timeline clip notes (as of Media Composer 8.8) or put locators in the center of every VFX shot on the timeline. Preferably, put the locator on the plate/dailies. Put the Shot’s ID as the locator text. When I cut, I like to keep my dailies on V1. When I get a version of the shot back from the vendor, that goes on V2. Adjust as necessary if you need to use multiple tracks for a temp comp. When I get a new version of the shot, unless there’s a compelling reason to stack them, I’ll overwrite the old version on V2 with the new one. So in this way your locator always stays in the timeline even as you get newer and newer versions of shots above it.
  • Check the cut every so often for changes, and do it more frequently the closer you get to the end of your schedule. Make sure every clip note or locator is still there. Since you can’t rely on the editor to always tell you when things have changed, reviewing the cut yourself will help you find shots that may have been cut or trimmed without your knowledge.
  • If a shot has been extended beyond the frames that were initially turned over to VFX, confirm with the editor before proceeding. If it’s only a frame or two beyond the handles, the editor might opt to cut those two frames in order to stay within the boundaries of the shot. If it’s been extended more than that, you’ll need to revise your element’s timing and resubmit a new version of it to the vendor.
  • Give clip colors to your shots. I like having one color for versions of shots in-progress, and another color for versions I’ve finaled. This makes it easy to see at a glance if there are any missing shots that have not yet been finaled.
  • Always check with your editor how they want to handle putting new versions of shots in the timeline. Do they want to cut them in themselves? Do they want you to cut them in on a new track and then leave it to them to drag down to a lower track, or do they want you to just cut it in normally and tell them which shots to look at? Any way is fine, as long as you are able to show or tell them what’s changed and needs to be reviewed. Never make a change to an editor’s timeline without their knowledge.

Turnovers

A turnover is the name for the package of information that you generate and give to your VFX vendors and DI facility so that the VFX vendors can get to work.

In the most basic format, a turnover involves:

  • Generating count sheets and reference Quicktimes to give to your vendors
  • Generating a pull list that you give to the facility managing your raw footage so that they can render your elements into DPX files to be delivered to the vendor
  • Determining how to get those DPX files to the vendor. Bigger Post facilities will have their own file transfer software (Aspera, GlobalData, etc.), but on an indie level you may need to provide a solution like MASV Rush.
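Real pull lists follow strict formats like CMX3600, and your NLE or database generates them for you; this simplified sketch only illustrates the timecode math of padding each event by its handles, not a spec-compliant EDL:

```python
def tc_to_frames(tc, fps=24):
    """Convert HH:MM:SS:FF timecode to an absolute frame count."""
    h, m, s, f = (int(x) for x in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * fps + f

def frames_to_tc(frames, fps=24):
    """Convert an absolute frame count back to HH:MM:SS:FF."""
    f = frames % fps
    s = frames // fps
    return f"{s // 3600:02d}:{s % 3600 // 60:02d}:{s % 60:02d}:{f:02d}"

def pull_event(num, tape, src_in, src_out, handles=8, fps=24):
    """One simplified pull-list event, padded by handles on both ends.
    Real EDL events carry record TC and stricter formatting -- this is
    only a sketch of the source-side arithmetic."""
    start = tc_to_frames(src_in, fps) - handles
    end = tc_to_frames(src_out, fps) + handles
    return f"{num:03d}  {tape}  V  C  {frames_to_tc(start, fps)} {frames_to_tc(end, fps)}"

print(pull_event(1, "A032C004", "11:22:33:10", "11:22:35:02"))
```

Whatever generates your lists, spot-check a few events by hand: an off-by-one in handles or a drop-frame/non-drop mixup will quietly corrupt every pull in the batch.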

What Your VFX Vendor Needs From You

  • The cut. Every shot needs to be viewed in the context of its surrounding shots, and it will help your VFX vendor tremendously to have a copy of the scene where the shots they’re working on will go. With it, they can check their own work and timing before wasting your time with a version that may look good in isolation but have an obvious problem in context. So when you’re first turning over shots for a scene, send them a Quicktime with the shot names burned in (along with your usual Property Of… security titles). I’ve written up a workflow for quickly creating these burn-ins using the locators on the timeline and the Avid SubCap tool. Check with your editor and your studio or post supervisor for any security requirements specific to your show before sending a cut sequence out.
  • Quicktime reference files. In addition to giving your vendor the full scene, you should send a reference for each individual VFX shot, including handles. If you’ve done a temp version of a shot, send that to your vendor as well. And some vendors will also ask for Quicktimes of every element you’re sending them, at their full scan length.
  • Count Sheets (example count sheet for a timewarp from Hellboy 2). These PDFs (or occasionally CSVs) tell the vendor about every shot you’re requesting from them, what materials they will need to complete the shot, and where to find them. They detail any bit of information relevant to the artists working on the shot, such as speed effects, resizes, extensions, and elements that will come from secondary vendors.
  • Dailies LUTs may be requested so that the vendor can send Quicktime versions to you for approval that match the dailies color you’ve been editing with.
  • Communication. You should be in constant contact with your vendor about the status of all your shots, what versions of shots you should expect to receive week after week, and what notes you have to relay back to them so they can move on to the next version.
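As a rough illustration of the locator-to-burn-in idea mentioned above, here’s a sketch that turns (timecode, shot ID) pairs, as you might pull from an Avid locator export, into text in the SubCap caption-file layout. The exact columns of a locator export and the details SubCap accepts vary by Media Composer version, so treat this as a starting point, not a drop-in tool:

```python
def tc_add(tc, frames, fps=24):
    """Add a frame count to an HH:MM:SS:FF timecode."""
    h, m, s, f = (int(x) for x in tc.split(":"))
    total = ((h * 60 + m) * 60 + s) * fps + f + frames
    ff = total % fps
    ss = total // fps
    return f"{ss // 3600:02d}:{ss % 3600 // 60:02d}:{ss % 60:02d}:{ff:02d}"

def locators_to_subcap(markers, hold=48, fps=24):
    """markers: list of (timecode, shot_id) taken from a locator export.
    Emits text in the SubCap layout: start/end TC pair, then the caption.
    Each shot ID is held on screen for `hold` frames (2 seconds at 24 fps)."""
    lines = ["<begin subtitles>"]
    for tc, shot in markers:
        lines += [f"{tc} {tc_add(tc, hold, fps)}", shot, ""]
    lines.append("<end subtitles>")
    return "\n".join(lines)

print(locators_to_subcap([("01:00:10:00", "TE0010")]))
```

The appeal of this approach is that the locators are already in the timeline doing double duty as your navigation aid, so the burn-ins stay in sync with your tracking for free.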

What You Need From Your VFX Vendor

  • On a regular basis you should receive Quicktimes of each shot, in the spec and codec of your offline edit (DNxHD 115, e.g.), to put into your cut. These usually come any time there is a new version of a shot that you need to review, and should have their filename and a running frame count burned in on every frame, plus usually a 1-frame slate at the beginning with a few more details like the vendor, date, etc. This is all standard; your vendor will likely do it automatically.
  • When you’re ready to begin your DI, you should establish a workflow to get finished shots from your vendor to your DI facility. Sometimes the vendors will send them directly, and sometimes they’ll send the finished DPXs to you to check and relay to the DI.

Count sheet page from my Hellboy 2 Opticals database. VFX were handled separately by Ian Differ, but I handled the workflow for the hundreds of timewarps we used.

Finishing

When you get to the finishing/DI part of the process, things can start getting lost easily. Your DI facility is often receiving shots from multiple vendors that have to match up exactly to the filenames listed in the EDLs that you’re providing them, and with so much data coming in all the time it’s common for mistakes to be made. Catch those mistakes as early as you can, but you should also get in the habit of asking for a VFX EDL from the DI timeline whenever they provide confidence check Quicktimes to Editorial. When you receive those, go through and make sure that each VFX version listed in the DI EDL matches up to the expected final version in your editor’s timeline. Use the confidence check Quicktimes as another means of visually making sure that all the shots look right and are correctly cut in. You may be duplicating some of this error checking work with the 1st Assistant Editor, but that’s okay. In this part of the process, you cannot be too careful. Errors that go unnoticed at this point can easily make it into the final deliverable, and obviously you don’t want to catch an error when you’re delivering the final DCP.
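The EDL-versus-database cross-check described above is mechanical enough to script. This sketch assumes hypothetical clip names like TE0010_comp_v004; real vendors name deliveries differently, so adjust the parsing to the actual convention on your show:

```python
def check_di_versions(my_finals, di_clip_names):
    """Compare your record of finaled versions against clip names in
    the DI's EDL. `my_finals` maps shot ID -> final version number;
    `di_clip_names` are hypothetical names like 'TE0010_comp_v004'."""
    problems = []
    for name in di_clip_names:
        shot, version = name.split("_comp_v")  # assumed naming convention
        expected = my_finals.get(shot)
        if expected is None:
            problems.append(f"{name}: shot not in my database")
        elif int(version) != expected:
            problems.append(f"{name}: expected v{expected:03d}")
    return problems

finals = {"TE0010": 4, "TE0020": 7}
assert check_di_versions(finals, ["TE0010_comp_v004"]) == []
assert check_di_versions(finals, ["TE0020_comp_v008"]) == ["TE0020_comp_v008: expected v007"]
```

A newer-than-expected version isn’t automatically an error (it might be a tech fix, as noted earlier), but it should always trigger a question to the vendor.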

In this phase it is highly likely that a sound mix will be going on concurrently with the delivery of the last remaining VFX shots. It’s very helpful to the sound team if you keep an eye on any changes to the VFX that would affect what they’re doing. For example, if the editor slips a shot that has a muzzle flash in it, your sound team will want to know so they can adjust the sfx of the gunshot. It’s hard to keep track of everything that might affect sound, but keep it in the back of your mind as you go through your normal duties.

Conclusion

I have not gotten very specific on a step-by-step workflow in this post because it is honestly different for everyone. Create a workflow that works for you, your team, and the specifics of your project. As long as the right information is getting relayed in a timely manner to your vendor, DI facility, VFX Producer & Post Supervisor, then you’re doing fine. Good luck!

Feature Turnover Guide

I had written a whole post on turnovers, but quickly found that it went out of date. So here’s my attempt to write a more future-proof guide to getting the materials of your film turned over to all the various departments that editors and assistant editors interact with. This post primarily deals with turnovers to sound, music, and DI/online. There will be another post dealing with VFX.

For the TL;DR crowd, see my normal spec list.

What Is A Turnover?

When you hear someone referencing a turnover, it simply means the handing off of all or part of the film from Editorial to another department. Since the other departments on your project (sound, music, vfx, digital intermediate/online conform, etc.) don’t generally have access to your editing system and all of the media you’re using, you have to give them the material to work from in the appropriate format for what they need to do with it.

How Do I Know What To Turn Over?

Once you do this a few times it’ll become an easy and predictable task. The first time you do it, it may be a little intimidating, and you may have lots of questions that even the people you need to turn over to can’t answer. To help, always think of what your turnover recipients are going to do with your materials once you hand them off. What’s most useful for them? What would get in the way if it was done wrong? Putting yourself in their shoes can help you answer a lot of questions on your own.

For example, sound and music departments often ask for split audio Quicktimes. For sound, this means dialogue on the left channel and sfx/music on the right. For music, however, this means they want dialogue/sfx on the left channel and music on the right. If you pause to think about why these specs are that way, it will help inform any other questions that come up. Sound departments’ primary concerns are cleaning up your dialogue and adding a great sound effects track. So it makes sense that for their offline reference they’ll want to be able to hear clean dialogue, and mute your temp sound effects and temp music if they want to. For a composer, many of them have no interest in hearing your temp music, so they want to be able to keep your dialogue and sound effects while composing a new score in place of your music track. As I mention below, though, I now tend to make stereo WAVs of dialogue, sfx, and music all separate from the Quicktime so that I don’t have to deal with panning a stereo QT and they have even finer grain control over the audio they hear.

On the picture side, an example might be the request not to letterbox your reference Quicktime when turning over to the DI. If you think about it, the DI facility is responsible for recreating your cut using your RAW footage or scanned film frames. The more information they can get about how you cut the film together, the faster they can work and the more accurate their timeline will be. EDLs go a long way, but sometimes you just have to go frame by frame and see what’s in the cut. If you matte your reference Quicktime, they might have to eye-match to your footage, which is time-consuming and error prone. By not matting your reference Quicktime, they can see the source filename and the timecode burn-in of each frame, and cross-check that with their online edit.

Components of a Turnover

These are most of the common items you’ll need to generate for a turnover. You need a set of these for each reel unless otherwise noted.

Quicktimes

First and foremost, a turnover generally requires a visual reference of your project, and this is usually a Quicktime rendered out from your editing application. For a film project, which is what I’ll focus on here since that’s my primary area of work, this usually means one Quicktime file per reel. Before you render a Quicktime out, you should check a bunch of things:

  • Do your reels have head and tail leader?
    • Head leader is an 8-second countdown with a one frame audio pop on the 2. Tail leader has a Finish frame and pop 2 seconds in, but can be of varying length after that. I usually go with a 10 foot (160 frame) tail leader. Get in the habit of putting your leader and audio pops on every track in your sequence, not just V1 and A1 (but do lower the volume of all the pops to a comfortable level).
  • Do your reels each start on the right hour?
    • Film reels start on the hour according to their reel number, including head leader. So Reel 1 should have a Start TC of 01:00:00:00, Reel 2 at 02:00:00:00, etc. On your head leader, the hour mark would correspond with the Picture Start frame, and your first frame of content would be at 8 seconds flat.
    • While checking Start TC you should also verify that your Footage counter is zeroed out. In Avid, if you right-click on your sequence and go to Sequence Report, you can check the starting Footage (or EC, for Edge Code) count of the sequence and reset it to 0+00 if need be.
  • Does your sequence need a matte?
    • Most departments will not want to see all the burn-in information such as filename, source timecode, audio timecode, etc., so you’ll want to keep a matte handy to cover it all up to the appropriate aspect ratio of your project.
    • Some departments will want to see this information, and you should be sure not to matte your burn-ins for them. When turning over to the DI or to anyone in Marketing (trailer editors, e.g.), don’t include a matte.
  • Does your sequence need additional burn-ins such as TC or Feet & Frame counters?
    • Most everyone will want a visible timecode counter on every turnover you make. I usually put sequence TC in the upper left and F&F in the upper right corners of the letterbox, and I make the font size pretty big so it can be read at a glance from far away.
    • There is a school of thought that you can export QTs more quickly by adding these counters with other software (Compressor, e.g.) after the initial export from your NLE. I really disagree with this approach, because the burn-ins you add directly into your sequence serve as a good double-check that everything is as it’s supposed to be. All of your other deliverables (AAFs, WAVs, EDLs, etc.) will be wrong if your sequence TC is off and you didn’t notice that because you just exported your video without a burn-in and pasted one on later.
  • Have you burned-in the name and date of your sequence somewhere in the frame?
    • You’ll send lots of turnovers over the course of a film, and you need to give your turnover recipients a way to talk with you about a particular version. If you put the sequence name and date visible on screen, that gives them multiple ways to refer to a particular version or a particular date. For example, R1_v20_SOUND_140204 might be how I’d name a Reel 1 (version 20) sequence for Sound that I delivered on 2/4/14.
    • Sometimes, as with trailer editors, you’ll receive Quicktimes back that have been cut together using multiple turnovers and asked to reconstruct that edit in the Avid. In this case, knowing exactly which turnover sequence each shot is referring to and what timecode a shot was pulled from will help you to quickly do your overcut, so having a lot of information burned-in can help you later on as well.
  • What kind of security markings do you need to add?
    • I usually keep a bin of title templates handy that add things like Property Of Production Company, the date, and the name or initials of the person receiving the file. These burn-ins go in frame, not in the letterbox, so that people can’t crop them out as easily. I put security titles on my topmost video track so it doesn’t get buried by anything. If you have subtitles in your film, make sure you’re not covering them up.
    • Some studios have a spec sheet for how they like their security markings, and others just want something but aren’t specific about what or where in the frame they want it. It’s all pretty standard, but I like to have my text be partially transparent so as to be very visible but not super annoying to watch.
  • Does your Quicktime need embedded audio?
    • I personally prefer to export audio separately and leave my Quicktimes mute. Most of the time this is ok, but sometimes you’ll need to embed audio. If you need to embed split audio (Left channel: dialogue, Right channel: music/sfx, e.g.), I usually recommend against doing this in Avid since it involves re-panning your sequence. Avid doesn’t do well with re-panning, especially if you’ve used AudioSuite effects in your timeline. In that case I export dialogue, music, and sound effects separately as mono WAVs and then add them to the exported Quicktime using Quicktime Pro, panning them there as necessary.
  • Is there a specific codec required? If not, what is the best codec?
    • EDIT August 2018: This paragraph used to recommend codecs that are now outdated, and since it’s now much easier to transfer large files and more applications support DNxHD, I use DNxHD for everything. Probably within a couple years that will include DNxHR, which as of this writing I am only just starting to try out. H.264 is still a no-no for turnovers for a variety of reasons including that it does frame interpolation, color often gets crushed upon export from Avid, and H.264 is meant mainly as a playback codec and not as a means for actively working with video files.
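For reference, the leader and reel-start arithmetic above can be written out at 24 fps. The 2-pop lands on the “2” of the countdown, 2 seconds before first frame of content, i.e. 6 seconds into an 8-second head leader; a 10-foot tail leader at 16 frames per foot (35mm 4-perf) is 160 frames:

```python
FPS = 24              # assumes a 24 fps film project
FRAMES_PER_FOOT = 16  # 35mm 4-perf

def reel_start_tc(reel):
    """Reels start on the hour matching their reel number, leader included."""
    return f"{reel:02d}:00:00:00"

def first_frame_of_content_tc(reel):
    """Head leader is 8 seconds, so content starts at 8 seconds flat."""
    return f"{reel:02d}:00:08:00"

def head_pop_offset():
    """The 2-pop lands 2 seconds before first frame of content:
    6 seconds (144 frames) into the leader."""
    return 6 * FPS

assert reel_start_tc(2) == "02:00:00:00"
assert first_frame_of_content_tc(2) == "02:00:08:00"
assert head_pop_offset() == 144
# a 10-foot tail leader is 160 frames
assert 10 * FRAMES_PER_FOOT == 160
```

None of this is hard math, which is exactly why it’s worth memorizing: you’ll check these numbers on every single turnover.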

Here’s an example of what a fully prepped Quicktime could look like. I didn’t have much content I could post publicly, so instead you get to see me solving the Golf Ball Water Globe. Notice the tracks are split, too. The burn-in is maybe a little big.

Guide Tracks

This is self-explanatory, but it’s one of the main reasons to try to keep your audio tracks organized. When you get into post you’ll have to turn over a set of audio files (usually WAV or AIFF) that contain only dialogue, only effects, and only music. The only way to export these files is to keep the audio clips on separate tracks, so I usually do the work of splitting my tracks out in a duplicated sequence for the first turnover and then copy the split tracks back into the main reels so that my work is preserved even as the edit changes.

Each guide track should be exactly the same length as your Quicktime, and have a 2-pop and finish pop as described above so that the guide tracks can be lined up with the Quicktime in an audio editing application to assure the sound team that everything is in sync.
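One way to sanity-check guide tracks before sending them is to verify their lengths programmatically. This sketch uses Python’s wave module (which reads uncompressed WAV only) and assumes 24 fps picture; AIFF guides would need a different reader:

```python
import wave

def wav_duration_frames(path, fps=24):
    """Length of a WAV file expressed in picture frames (24 fps assumed)."""
    with wave.open(path, "rb") as w:
        seconds = w.getnframes() / w.getframerate()
    return round(seconds * fps)

def guides_match(paths, expected_frames, fps=24):
    """True if every guide track (e.g. DX, FX, MX) is exactly reel length."""
    return all(wav_duration_frames(p, fps) == expected_frames for p in paths)
```

A mismatched guide length is often the first visible symptom of a deeper problem, like a track that lost its tail pop or a sequence whose leader got trimmed.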

AAF (a.k.a. OMF)

An AAF is a file that allows you to give your timeline to other applications. For example, an AAF of your audio tracks can be imported into Pro Tools, where a sound editor can work with your clips exactly as you’ve cut them in your NLE. At the minimum, AAFs contain metadata about your timeline and the clips in it. You can also choose to include the media that those clips reference inside of the AAF, or you can have the AAF link to an external media directory. Either way, you would usually export an AAF with media handles, so that the person receiving your file has a bit of extra media on either end of each clip’s in/out points.

If you’re making an AAF for ProTools, make sure you have AAF Edit Protocol set. This will allow your AAF to exceed 2GB in file size if it needs to. I also usually set the options to Render All Effects and Include Rendered Effects. Be a little careful with this, as you might run into the same problem with AudioSuite effects that I mentioned above. It’s best to make sure all of your AudioSuite effects are rendered in your timeline before starting to make any turnovers. You can render them globally using the fast menu of the AudioSuite window. If there were unrendered effects in your timeline, check that they rendered correctly before going further.

Typical AAF Export Settings

The terms AAF and OMF are mostly interchangeable, or are at least used interchangeably by many people, including myself. This is because OMF is the original file format everyone used, but the AAF file type has since superseded it, in part because OMF files are limited to a maximum size of 2GB, which isn’t enough if you’re embedding a full reel’s worth of audio. If you’re running a newer version of Avid, you will only be able to export an AAF. FCP7 will only export OMFs natively, and AAFs with Automatic Duck. I have no idea what FCPX can do, but you probably need to buy a plug-in to get AAF functionality. In any case, the two files are almost the same thing, and a lot of people will still ask for an OMF even though they actually need an AAF.

Also, if your project is still using OMF audio media, which is different from making an OMF file, that can cause some weird problems with AAF exports, among other things. There are many reasons why you should not have any OMF audio in your project, and not a single reason why you should. Consciously using OMF audio in Media Composer is like willfully using Windows 95 when a new Mac is available. Don’t do it; go MXF/PCM all the way and you’ll save yourself a lot of headache.

Cut Lists, Change Lists, and/or EDLs

Cut lists and EDLs tell other people and other applications about the order and duration of clips in your timeline. For Change Lists specifically, even though every department we interact with relies heavily on our ability to make them, they are currently a nightmare to make in Avid Filmscribe. The program has been super buggy for years to the point where it’s almost unusable, and it can require you to spend a lot of time simplifying your sequence before Filmscribe will make a change list without crashing. I can’t accurately describe the loathing I have for Filmscribe. That said…

Cut Lists

Cut Lists are good for giving to DI companies when doing your conform. They can reference either source timecode for tape or file-based footage, or list keycode for material originating on film. Fewer and fewer companies are using Cut Lists, though, and instead opting for other methods involving EDLs.

Change Lists

As soon as you turn over your movie more than once, you’ll likely need to hand over a change list. As you would suspect, this tells the person receiving your turnover how the edit changed between the last time you gave it to them and now. For sound and music departments, this helps them conform their ProTools sessions so they can update and smooth over their sessions to match the new version of picture. For the DI, a change list helps them match their online sequence to your offline Quicktime reference, and then determine if there’s new material they need to get from you in order to recreate your cut.

Until Filmscribe gets a total rewrite, it’s best to thoroughly scrub your sequences clean of complexity before loading them into the app. By this I mean you should reduce your sequences to the fewest number of tracks possible, save one layer out of any collapsed clips before removing them, and then remove any additional effects. Since you need Filmscribe to tell you only what’s changed, be sure to reduce the complexity of your sequence the same way every time. For example, if you pull the V1 layer out of a collapsed clip the first time, make sure you do it the second time, too. Try to make the sequences as similar as possible, so that the change list only reflects actual changes instead of having a lot of events in the list that look like changes but aren’t.

Once you’ve got your sequences set, drag the appropriate sequences into the Old and New sections, set your settings and click Preview. I usually like to do a Columnar or Optical Block list with Master Durations and Ignore Color Effects set (though you should’ve removed all of these already). Under the Change List options I select only Name, and KN Start if applicable. Save it out as a .txt file and you’re good to go. If the app crashes instead of giving you a previewed file, you might still have an effect in your timeline that it’s too dumb to handle.

Change List Options

Rebalanced Change Lists are made when you’ve moved a chunk of footage from one reel to another. To make one of these, first make sure your sequences have the Reel column filled out in your bin. Then in Filmscribe, drag your two old sequences and two new sequences into the appropriate spots and make your list. If you don’t do this and make only single change lists from each version, it won’t be clear from the change lists that material was moved between reels. It will just look like you deleted a bunch of stuff from one reel and added a bunch of other stuff to another.

EDLs

EDLs are a timecode-only (no keycode) list of “events” that describe, in order, the clips in your timeline. They’re a relic from the days of tape, but they’re surprisingly useful for all sorts of other things. Most editing and finishing applications take them, and they provide a simple and standard way to define a timeline and the material in it. For example, color correction software can use it to notch a timeline, so that if you give them a Quicktime to color-correct, you can also give them an EDL to tell them where each shot starts and ends. You can also use them to get a source list of the files used in your edit, or to get information out of Avid and into another form, such as a Subcap file. I also sometimes receive them, as in the case where you’re finishing a DI and the DI company sends you an EDL of all the VFX shots they’ve cut into their timeline so you can check that all the shots are there and are the right version.
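To make that concrete, here's a minimal sketch of pulling the fields out of a single CMX3600-style EDL event line in Python. The field layout (event number, reel/tape, track, transition, then source and record in/out timecodes) is standard CMX3600, but the sample values below are made up, and real EDLs also contain comment lines like `* FROM CLIP NAME:` that this ignores:

```python
def parse_edl_event(line):
    """Split one CMX3600-style EDL event line into its standard fields."""
    fields = line.split()
    return {
        "event": fields[0],
        "reel": fields[1],        # Tape/Source name, e.g. a camera filename
        "track": fields[2],       # V, A, A2, AA, etc.
        "transition": fields[3],  # C = cut, D = dissolve
        "src_in": fields[4],
        "src_out": fields[5],
        "rec_in": fields[6],      # where the clip sits in your timeline
        "rec_out": fields[7],
    }

# Hypothetical event: a camera file cut into the first reel's timeline
event = parse_edl_event(
    "001  A001_C003_0502XY  V  C  01:23:45:12 01:23:50:00 01:00:00:00 01:00:04:12"
)
```

This is why the Tape column matters so much: whatever is in it becomes the reel field of every event, and it's the only way a conform system can trace each timeline clip back to its camera original.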

Normal options for EDLs are to include the Source Clip Name, Locators, and Effects. Make sure to use the File_16 or File_32 list format if your source is an Alexa/RED/Sony/etc. so you can fit the full filename in. For digital cameras you should always have your full filename in the Tape bin column. DaVinci Resolve, which many DITs use to transcode footage for Editorial, does not have that option turned on by default, so make sure the DIT turns it on in the Timeline Conform pane in Resolve. If you get footage without proper info in the Tape column, I usually recommend duplicating the filename from the Name column into the Camroll and Labroll columns before renaming the Avid clips into Scene/Take format. Those two columns are more easily changed than the Tape column, and either of them can be used in EDL Manager in place of Tape, so if you're careful this workaround is safe and gives you some redundancy.

Audio EDLs are useful for dialogue editors to go back and get the original audio file's iso tracks, and most will ask for a set of EDLs on your first turnover to the Sound department to help them. In this case you would turn off all the V tracks in EDL Manager, turn on all A tracks containing your dialogue, and use the Soundroll column as your source tape setting (if you have Soundroll and Sound TC filled in). If you don't have Soundroll and Sound TC info, adjust these settings to match whatever metadata you do have.

One word of advice: always check the Console if EDL Manager gives you a warning message. It looks daunting but it will tell you what clips didn’t have the metadata it was expecting, and you can use that to see if you have a problem or if you can ignore it and keep going.

My Usual Specs

Unless I hear otherwise from the departments I’m working with, here are the specs I will give them for each reel:

Sound

  • Mute Quicktime – 1080p DNxHD 36 or 115. QT is letterboxed with sequence TC and F&F burn-ins, along with sequence name, date, and security markings. Sometimes you may want to leave it unletterboxed if there is audio timecode that the sound department wants to see.
  • Stereo WAVs – one each for dialogue, music, and sfx tracks
  • AAF with 96 frame handles, AAF Edit option checked, media embedded if necessary. Render All Effects and Include Rendered Effects options checked, but no others
  • Audio EDL of each reel’s dialogue tracks (usually for the first turnover only)
  • Change List (from the last version they received to the current one. They might not be sequential versions)

Music

  • Mute Quicktime – 1080p DNxHD 36 or 115 unless otherwise requested by the composer. QT is letterboxed with TC and F&F burn-ins, along with sequence name, date, and security markings
  • Split Stereo WAVs – one each for dialogue, music, and sfx tracks
  • AAF with the same options as above, if requested.
  • Change List (from the last version they received to the current one. They might not be sequential versions)

DI

  • Mute Quicktime – 1080p DNxHD 36 or 115 (or whatever is Same as Source). QT is not letterboxed, but still has sequence TC and F&F burn-ins, along with sequence name, date, and security markings
  • EDLs of split video tracks, so that’s one EDL per camera type (35mm, RED, GoPro, etc.), one EDL with all VFX, and one EDL with any opticals (resizes, speed effects, dissolves).
  • Stereo WAV, not split out, for reference and so they have something to play during reviews
  • Avid bin containing your sequence(s), if requested
  • Change List (from the last version they received to the current one. They might not be sequential versions)
  • Pull List and Cut List (for film only)

Marketing

  • Mute Quicktime – 1080p DNxHD36 or ProRes (Gamma Correction Off), no fast start. QT is not letterboxed, but still has sequence TC burn-in with sequence name in large font, plus date and other security markings. I usually make security markings for Marketing turnovers big and annoying, since these often get forwarded on to multiple companies out of your control or contact.
  • Split Stereo WAVs – one each for dialogue, music, and sfx tracks.
    • If embedded audio is required, dialogue on Left channel, sfx and temp score on Right (or sometimes full mix on Right, if requested)

ADR

  • Mute Quicktime – 1080p DNxHD 36 (unless ADR requests otherwise). QT is letterboxed with TC and F&F burn-ins, along with sequence name, date, and security markings. For ADR, don't have a center security burn-in, since it can get in the way of an actor's ability to sync to their own mouth onscreen. If you must have a center burn-in, make it center-ish with partially transparent letter outlines only and no fill.
    • If you're only getting ADR in a small portion of a reel and are worried about security in the ADR studio (especially for remote sessions), trim the export sequence to only the necessary scene, but make sure that the timecode is still correct. In Avid, if you make a subsequence, it will keep the timecode it came from originally, which is what you want. Make one QT for each scene that you need. Alternatively, you can export one Quicktime of the whole reel but lift out the portions that are not needed so there's just black in between. This isn't totally necessary, but why have more material floating around than you have to?
  • Dialogue WAV only

MPAA

  • For delivery to the MPAA, either DVD, some tape formats, or DCP (2D or 3D) is allowed. 2D Blu-ray is ok also (but risky in my opinion), and no 3D Blu-rays are allowed at all.
    • Property of Production Company burn-in only, no bigger than 10pt font, located at the very top or bottom of frame
    • No other burn-ins allowed
    • Total Running Time printed on the label


5.1 Temp Mixing on Star Trek Into Darkness

On Star Trek Into Darkness, we wanted to try something new for our sound workflow, and that was to keep a running 5.1 mix in the editors' reels throughout the entire editorial process. During Post on Super 8, J.J. Abrams had mentioned to his Post Supervisor, Ben Rosenblatt, that he wished he could have better sound while editing, and a "temp dub as you go." He wanted to be able to screen the rough cut with no notice, allowing him to edit right up until it was time to show it, instead of having to lock a version of the cut a week in advance in order to do a temp sound edit and mix in ProTools.

When Media Composer v6 came out and we saw that we could do a surround sound mix inside MC for the first time, Ben decided to go ahead with this experiment. We started digging into exactly how MC handles 5.1 sequences and what pitfalls there might be between systems that had a 5.1 speaker setup and ones that had only stereo, and after that was all figured out we started building a 5.1 edit suite at Bad Robot. We mounted a projector, installed speakers, rigged up a screen, tuned the room to match the theater upstairs, bought two Artist Mix consoles and an Argosy console to hold everything, and started making the most complex temp soundtrack ever contained within 16 mono tracks.

Getting Started

Our first task was to figure out how Avid had designed their new 5.1 functionality, and what effects that might have on the editors. We wanted this process to be as seamless as possible for them, so if something about having a 5.1 sequence got in the way of their ability to cut, that would be a problem. Thankfully, we found that a 5.1 sequence will automatically and gracefully fold down on stereo-equipped systems, and you might not even notice that the sequence is set for 5.1. The biggest change we had to make across the entire Editorial team was that everyone had to work in Stereo or 5.1. Many editors still like to work in Direct Out, but if you are planning to mix in 5.1, Direct Out is not an option: all panning is done with clip settings and keyframes, and you need every available inch of space on the timeline. There are plenty of other reasons why Direct Out is not ideal, too.

Once that change was made and we'd double-checked that everyone's mixers were set properly, we started receiving reels from the editors as they finished a first pass with J.J. They were working chronologically, so we did as well, and the first sequence we designed and mixed was the prologue on the Red Planet.

The space jump sequence, before and after our 5.1 sound work

Crew

We had four main people on the sound design team, with three of us working entirely in Media Composer, and our responsibilities divided up similarly to a real sound department's.

  • Matt Evans went to town cleaning up and normalizing all of our production dialogue
  • Robby Stambler provided us with an awesome sound effects library, and specialized in cutting foley.
  • I handled fx editing as well as the overall mix
  • Will Files came down from Skywalker Sound to supervise the whole process, and hooked up a ProTools system to our ISIS in order to make custom Star Trek effects like ships, phasers, and transporters.

Track Layout

When Avid released Media Composer 6.5, there were a host of new audio controls that didn't exist previously. We wanted to take advantage of them, but we didn't want to upgrade the whole show all at once on the day the new version came out. This left us with a mix of versions, which is also not desirable, and meant that we had to be conscious of their differences, such as the maximum number of simultaneously playable tracks. In order to keep the editors' sequences as easy to cut with as possible, and since they wouldn't be able to play more than 16 tracks at one time anyway, we worked within the confines of 16 mono tracks for the whole movie.

The next time we do this, I think we'd stick to 16 tracks so that the sequences aren't any more of a hassle to cut with, but we'd likely use a different track layout, established from the start. It would be something like:

  • 4 mono tracks for dialogue
  • 5 mono tracks for SFX
  • 4 stereo tracks for SFX
  • 2 stereo tracks for music
  • 1 5.1 track for pre-cut SFX from ProTools

Eventually we did have to upgrade almost everyone to 6.5.2, when we started constantly running into the maximum number of clip references that a bin could contain. Our timelines were too full to fit into the old bin constraints; that limit had been set a long time ago, and I think it was simply forgotten about and never updated. When we brought it to Avid's attention, they sent us a patch to hold us over, and then released the new bin reference limit as a feature of 6.5.2.

How It Works

5.1 panning in Media Composer is done on a clip-by-clip basis. Which track a clip is cut in on doesn't matter as long as your audio output is set to 5.1, 7.1, Stereo or Mono. Direct Out is the only setting to avoid for a surround sound sequence. To set a sequence to 5.1, look for the sequence setting in the upper left hand corner of the Mix Tool.

Setting how you want to mix a sequence

Once you put a sequence into 5.1, you can just leave it there even if you then move to a stereo system. All the surround sound panning is retained even if someone switches the sequence setting back to stereo or cuts the clips into a new sequence.

The setting on the right controls the audio output from your system. We used the SMPTE channel order for everything, so my system was always set to 5.1 SMPTE (L R C LFE Ls Rs).

Setting how you want to listen to a sequence

How to Pan

A clip can be panned to a specific channel or to a mix of channels, with the exception of the subwoofer. Each clip has an LFE level you can set in the big panner window that determines how much of the low end of the clip is sent to the LFE channel, but you can't send a clip exclusively to LFE.
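For intuition about what a panner is doing under the hood, here's a sketch of a standard constant-power pan law in Python. This is a generic textbook curve for two front channels, not Avid's actual (unpublished) implementation:

```python
import math

def constant_power_pan(x):
    """Map a pan position x in [0.0, 1.0] (0 = full left, 1 = full right)
    to (left_gain, right_gain) so that total power stays constant."""
    theta = x * math.pi / 2
    return math.cos(theta), math.sin(theta)

# At center, both channels get about 0.707 (-3 dB), not 0.5,
# which is what keeps perceived loudness even across the pan.
left, right = constant_power_pan(0.5)
```

The same idea extends to a surround grid: the 2D dot position gets decomposed into left-right and front-rear components, each mapped through a curve like this to produce per-channel gains.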

Little and Big Panners. Click the dot beneath the mini panner to open the big one.

There are several ways to pan a clip, and which one I use often depends on what I need to do. The first and fastest way is to use the small 5.1 panner above each channel in the mix tool. You can drag the little yellow dot anywhere in the grid and that will set the clip pan.

The second way is to open the big panner, by clicking the yellow eye-looking icon on any of the channels. This allows for more precise panning than you can get from the mini panner.

 

Opt-click the track name to switch the active track in the big panner

If you want to keyframe your pan, you have two options. You can enable keyframing for any one of these directions: Front Left-Right, Rear Left-Right, Front-Rear, Center % (if you want a center panned clip to also be sent to L & R). If you're doing a complex pan, you will often need to add keyframes in the first three settings. Unfortunately there is no way to display all the settings at once, so this method involves a lot of clicking between settings to get a pan right.

MC lets you keyframe four properties: Front Left-Right, Rear Left-Right, Front-Rear, and Center % (if you want a center panned clip to also bleed into L & R)

The second way to animate a pan is to change your Mix Tool mode to Auto, and use either the mini or big panner to set your keyframes. You can do this live by recording automation, or you can position your playhead where you want your first keyframe, drag the yellow dot to where you want to start your pan, then move the playhead to where you want the next keyframe and drag the yellow dot to where you want the panning move to end. As soon as you move the yellow dot, a set of keyframes is made at that location in the clip representing that pan setting.

When using this method, it is often necessary to set the yellow dot to the exact opposite end of the grid from where you want it, and then set it back to the desired pan location. This ensures that a keyframe is made on all three axes. Otherwise, MC might not set a keyframe that you actually need, and your panning move won't go exactly where you intended.

For example, take a three-point move from Front Left to Front Right to Rear Right. If you only drag the yellow dot across the top of the grid, MC won't make the Front-Rear keyframe, because it hasn't perceived a change in the default Front-Rear value that would warrant making one. You end up with no keyframe holding the front position at all, and a move that goes Rear Left to Rear Right without stopping at Front Right first. The video below explains this better:
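The pitfall can also be sketched numerically if you treat each pan axis as its own keyframed curve with linear interpolation between keyframes. This is my own simplification for illustration, not MC's actual internals, but it shows how a missing keyframe on one axis sends the pan somewhere you didn't intend:

```python
def interpolate(keyframes, t):
    """Linearly interpolate a value at time t from (time, value) keyframes."""
    keyframes = sorted(keyframes)
    for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    return keyframes[-1][1]

# Intended move: Front Left -> Front Right -> Rear Right over 100 frames.
# The Left-Right axis got keyframed correctly during the drag:
left_right = [(0, -1.0), (50, 1.0), (100, 1.0)]

# Front-Rear axis (-1 = front, 1 = rear) with the middle keyframe missing,
# because MC saw no change in that value during the drag across the top:
front_rear_bad = [(0, -1.0), (100, 1.0)]
# With an explicit middle keyframe, the pan holds at the front until frame 50:
front_rear_good = [(0, -1.0), (50, -1.0), (100, 1.0)]

# At frame 50 the bad curve is already halfway to the rear...
assert interpolate(front_rear_bad, 50) == 0.0
# ...while the good curve is still fully front, as intended.
assert interpolate(front_rear_good, 50) == -1.0
```

This is exactly why the trick of dragging the dot to the opposite end and back works: it forces a value change on every axis, so a keyframe gets written on all of them.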

Splitting Up the Work

Dialogue Cleanup and Normalization

As you may have seen in my article on quick dialogue cleanup, using RTAS effects was a core component of getting our dialogue into the right space. That was far from the only modification we made to the dialogue, though. We bought iZotope RX2, which is commonly used in ProTools to clean up noisy tracks, as well as Speakerphone 2, which handled all of the different futzes that are needed for a movie where people are constantly talking on all sorts of different communication devices. Both are common ProTools plugins, and it turns out that they work almost as well in MC. iZotope in particular requires a bit of handholding in MC, but once you know its problems you can take care to avoid them.

MC's built-in 3-band EQ is also very handy for reducing boominess, increasing clarity, and quickly taking out a problem frequency. Moreover, Matt would often use keyframing to bump or dip individual syllables in order to make sure that the dialogue was as easy as possible to understand.

Matt Evans keyframed individual syllables to help both clarity and evenness of volume

AudioSuite Effects and Bin Size

One side effect of all the dialogue cleanup and futzing we had to do turned out to be a massive increase in the size of our bins. While a sequence bare of AudioSuite effects and EQs might have been 15 MB, a sequence with all those effects easily reached 100-200 MB in size. If there were multiple versions of a reel inside the bin, the file size of the bin would multiply accordingly. A bin with a full copy of the movie in it, such as the one I used to take the screenshot at the bottom of this post, could come in over 1GB easily. Because of this, and because these sequences get duplicated frequently into other bins for turnovers and outputs, the total size of the Star Trek project directory was over 100GB, and that's with some culling of old bins into another archive folder.

Sound Effects Editing

I handled most of the sound effects editing, though I was often given a head start by the 1st Assistant Editors, Julian Smirke and Rita DaSilva, who did first passes on ambiences and key sound effects for the editors during the assembly. Matt, Will, Robby and I would create a master list of effects goals we wanted to accomplish for our first pass, including what existing effects were working well and what we could improve. I would then go through and start removing one channel of most of the stereo effects that were already in the timeline, and then re-pan them so they still sounded right. This included flybys, where I would usually keyframe a pan in one channel instead of keeping two channels on the timeline. Effects that needed to be in stereo, such as ambiences, would of course get to keep both channels.

After I had removed as many redundant channels as possible and stripped out effects that we would replace, I dug into Robby's library and started filling the timeline back up with new fx. I like to do a rough mix and pan as I go, even though a more thorough mix pass still needed to be done after taking delivery of Matt's cleaned up dialogue tracks, whatever fx Will was making, and our temp score from Ramiro Belgardt and Alex Levy. Robby also tackled a lot of foley, since once you start down the road of making your temp sound good, you start to notice that you can no longer skip the things you would normally ignore for a temp, such as foley.

Over on ProTools, Will was busy doing things that were better suited to be done in a real audio editing application. I would hand him a Quicktime and linked AAF of the reel after I had cleaned it up, and with his ProTools rig on our ISIS, he was immediately able to get to work. He designed and edited many versions of the lasers and jump ship engines for the conference room attack, the sound of the Vengeance engines, a lot of new ambiences, phasers, transporters, and anything else that he wanted to tackle. When he had some effects to bring back into Media Composer, he'd put an embedded AAF on a shared ISIS folder, and I'd import it and cut it onto new tracks at the bottom of our timeline, where it would already be cut to time. I would just need to apply the proper panning and gain, and then try to find some open space on the timeline so I could keep the sequence within 16 tracks. Playing Tetris as a kid came in very handy here!

A lot of the effects and design choices that the four of us made during this process ended up in the final mix, which is a testament to the work we did as well as the iterations we were able to go through so that by the time we got to the final mix, we knew what J.J. wanted to hear.

Reel 7 with all sound effects, music, and dialogue

Music

Our music editors, Ramiro and Alex, delivered stereo bounces of the temp score they had cut in ProTools, and after I cut them in we would mix them together. We set almost all of our music at a default pan of 75% front on both sides, so that there would be just a little of the music in the surrounds at all times. Originally, this was the area where I most used the automation recording feature of MC in combination with the faders on the Artist Mix consoles. It worked alright, but after a while I found it was still faster and cleaner to use manually placed keyframes.

Temp music tracks with manually-set keyframes. Also, never use this color for audio clips.

A Note on Automation

When you record automation, you then have to go back and filter half of the keyframes out, but even after that you still can't get the moves as clean as you'd like them. If you want to go manually touch up a section, you end up having to grab dozens of keyframes, and if you're zoomed out you might not be able to see all the keyframes you need to grab. This can leave you with a sudden bump in volume if you grab a range of keyframes but miss the last one. Because of all this, I gave up on recording automation for our music mix, and our Artist Mix consoles became useful only for their dedicated Solo/Mute buttons. If Avid made a console that was only Solo/Mute buttons, I would buy it.

Mix

With all of the dialogue, effects, and music now in the timeline, I would then go through the sequence again to make the final mix. I made sure the RTAS effects were in place (they can get dropped if you make a new sequence instead of duplicating an existing one), and went through to make sure that no effects or music were drowning out the dialogue, and that the effects and music weren't competing against each other. Will would often sit in and provide direction while I was mixing, and once we were satisfied for the moment the four of us would reserve time in the Bad Robot theater to go preview it there. After doing another pass through the reel after our preview screening, we would then show it to the editor whose reel we were cutting and get notes from her.

Showtime

The whole movie as of December 2012 (5 months before release)

The first full screening was in mid-October, and it was a big deal because it was the first time anyone including J.J. had watched the movie all the way through. He gave notes on everything: story points, music, vfx, sound, you name it, and all of it needed to be fixed before a studio screening that was fast approaching. We fared pretty well, and I think that is due in part to the fact that although he had not heard more than bits and pieces of our 5.1 mix by that point, he had been hearing it in stereo while cutting with the editors. So by the time he got up to the theater, most of the audio content wasn't a surprise for him, and he was able to think about other areas to fix while watching his first surround sound Director's Cut.

Almost a Taft-Hartley

The voice of yours truly was placed in the temp track a couple of times. Replacing one of my two lines was one of J.J.'s first audio notes, and by the time the film was locked the second line was gone, too. But don't fret, if you want to hear me in action, check out Hellboy 2, where I voice the BPRD PA system!

Keeping Up With Changes

Our first pass through the whole movie took about four months. Some of that time was dependent on when the editors were ready to hand their reels off to us, but even still there was not a lot of extra time. Most reels took us a week or two to prepare, though the first few reels took longer than the later ones, and the big ship crashing sequence towards the end of the movie was initially designed and mixed over the course of a few days because that's all we had.

After the first full screening, we had the dual tasks of continually improving the soundtracks while also keeping up with changes. When the first few studio screenings came up, we brought all hands on deck again to smooth out the soundtrack where the editors had made cuts, and add new sounds to match new visual effects that had come in from ILM. Later on in Post, we were able to reduce the workload of prepping for a screening to one person in the 5.1 edit suite just patching things up over one or two very long days. Since the design work was basically done by then, checking the mix involved just watching through the reels listening for pops, missing or out of sync fx, new dialogue lines that needed an EQ or noise reduction, and sometimes mixing in a new music cue.

Handing off for the Final Mix

Keeping the soundtrack up to date inevitably fell by the wayside the more that Skywalker Sound's crew took over. Will moved into the 5.1 edit room that had been my office, configured it for ProTools, and started doing predubs. A couple other Skywalker crew members came down, set up their own ISIS, and worked out of Bad Robot until everyone moved to Fox for the final mix. At this point I was actually off the movie, having finished what I was hired to do and left for another job. When at last I heard the final mix, I was amazed at how much of our work was still in there. A lot of it had been combined with other effects to make something new, but even in places where our effects had been entirely replaced, the replacements often reflected our initial design choices.

Those of us on the Editorial side, namely everyone except Will, always had low expectations for how much of our work would survive. After all, as picture assistants we're used to nothing from our temp track making it into the final mix, and usually that's as it should be. On this movie, I'm beyond thrilled that we were able to contribute so much, and I'd like to think that the work we did provided Skywalker a useful head start.

 

Francine at the Bad Robot letterpress, late on a Saturday night

On a Personal Note

Every film is an all-consuming commitment to the project. You have very little free time, and go months without seeing friends, family, or even the person you're living with. Nevertheless, life can't stop completely, and on this film I was balancing my job on Star Trek with the job of planning my wedding. I proposed in July 2012, the day before I started work on the movie, and I got married a week after we broke for Christmas hiatus. Many sleepless nights in between were spent at Bad Robot going back and forth from mixing a reel to printing invitations, addressing envelopes, and editing my wedding slideshow video. I am very thankful for my wife's understanding and patience with my crazy schedule throughout those six months.

Mid-way through Reel 2, Noel Clarke's character drops a ring into a glass of water, causing a massive explosion. In the final mix, the sound of that ring is a processed version of my wedding ring dropped into a glass of water. It's a pivotal moment in the film, and its sound is a reminder of both my work on this film and that I got married during it. I can't think of a better way to remember this time in my life than that.

Quick and Easy Dialogue Cleanup with RTAS

On Star Trek Into Darkness I had the opportunity to break out of my usual Assistant Editor responsibilities and tackle a new experiment in temp sound editing. Will Files, Matt Evans, Robby Stambler and I formed a new mini-department within Editorial that was tasked with temping out the Editors' sequences and mixing them in 5.1. There's a lot to the process that is new and interesting, and I hope to get another post up soon that more fully fleshes it all out, but for the moment all I want to talk about is a method for basic, global dialogue cleanup that is probably old hat to some (and par for the course for professional sound mixers), but was new and amazing to me.

This tip comes courtesy of Will Files, who, on loan from Skywalker Sound, guided this whole process on Trek and helped teach me, Matt, and Robby the ropes of the sound world.

RTAS Is Your Friend

Before this show, I didn’t really know what RTAS was useful for, much less how awesome it really is. It lets you take many of the AudioSuite plugins that you would normally apply to a clip and apply them to an entire track instead, without rendering (thus the RT in Real-Time AudioSuite). Up to five RTAS plugins can be chained together per track. On dialogue tracks, chaining three RTAS plugins will make your dialogue much more intelligible and leave more room in other frequencies for your sound effects and music.

So, to get started, you have to show the expanded audio controls in your timeline, and make your track size big enough that you see the little RTAS boxes:

RTAS

You can see that I have an EQ, a Compressor, and a De-Esser, in that order, on my dialogue tracks.  Let’s go through them:

1) EQ

The EQ you add here is basically a band-pass filter with a little customization. Everything below 60 Hz is gradually stripped away, as well as everything above 12 kHz. Typical dialogue won’t produce anything in those frequencies that you want to keep, and by throwing them away you can start to address boominess, high-frequency hiss, and other technical problems with your production audio that get in the way of understanding the dialogue.

Aside from the band-pass, this EQ also cuts frequencies around 120 Hz by 2 dB and boosts frequencies around 4 kHz by 2 dB. Again, this lifts the frequencies of your dialogue that matter most for comprehension and reduces the ones that tend to get in the way, but more gently than the band-pass, since these are frequencies you do want to hear.

RTAS EQ
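To make the shape of that curve concrete, here is a small Python sketch of the target response using the corner and bell values above. The bell width (`q`) is my own guess, and this models the shape of the curve rather than implementing an actual filter:

```python
import math

def eq_curve_db(freq, q=1.0):
    """Gain of the dialogue EQ described above, in dB, at frequency `freq` (Hz).

    First-order rolloffs below 60 Hz and above 12 kHz, a 2 dB cut around
    120 Hz, and a 2 dB boost around 4 kHz.  The bell width `q` is a guess;
    this models the shape of the curve, not a real filter.
    """
    def rolloff(ratio):
        # first-order slope: -3 dB at the corner, ~6 dB/octave beyond it
        return -10 * math.log10(1 + ratio ** 2)

    def bell(center, gain_db):
        # bell-shaped boost/cut, symmetrical in log-frequency space
        octaves = math.log2(freq / center)
        return gain_db * math.exp(-(octaves / q) ** 2)

    return (rolloff(60 / freq) + rolloff(freq / 12000)
            + bell(120, -2.0) + bell(4000, +2.0))
```

Plugging in a few frequencies shows the idea: 1 kHz passes essentially untouched, while 30 Hz rumble and 20 kHz hiss are well down the slopes.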

2) Compressor

Now that you’ve removed unwanted frequencies, it’s time to even out the volume. For that you use a Compressor, which reins in how loud your dialogue can get. If the dialogue crosses the threshold we’ve set, the Compressor pulls it back in line, and the further the volume goes past the threshold, the more it is reined in. This helps make sure there are no loud surprises in your dialogue, and saves you some of the hassle of mixing loud clips down to a comfortable listening level.

In this case, we’ve modified three of the settings from their default states:

  1. Knee = 6.0 dB. This adds a little curve right at the threshold point, so that the transition from a loud input level to its compressed output level happens more smoothly. Without it, the compression would switch on at full force the moment the volume crosses the threshold. For a better explanation of this, read this article.
  2. Threshold = -20 dB. By moving this up 4 dB from the default -24 dB, we’ve allowed our audio to get a bit louder before it activates the compressor.
  3. Gain = 4 dB. This knob controls the output level of all audio passing through the Compressor, even audio below the threshold. Since compression only reduces volume and can leave your dialogue feeling too quiet, a bit of make-up gain helps keep it at a good baseline.

Compressor
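For the curious, the static curve these settings describe can be sketched in a few lines of Python. The threshold, knee, and make-up gain below are the values from this post; the 3:1 ratio is an assumption for illustration, since we left the ratio at its default:

```python
def compressor_gain(level_db, threshold=-20.0, ratio=3.0, knee=6.0, makeup=4.0):
    """Static curve of a soft-knee compressor: input level (dB) -> output level (dB).

    Threshold, knee, and make-up gain match the settings in the post; the
    3:1 ratio is an assumption (the post leaves the ratio at its default).
    """
    over = level_db - threshold
    if 2 * over < -knee:
        out = level_db                      # below the knee: untouched
    elif 2 * abs(over) <= knee:
        # inside the knee: blend smoothly into the compressed slope
        out = level_db + (1 / ratio - 1) * (over + knee / 2) ** 2 / (2 * knee)
    else:
        out = threshold + over / ratio      # above the knee: full ratio applied
    return out + makeup                     # make-up gain on everything
```

A quiet -40 dB level just gets the 4 dB of make-up gain, while a hot 0 dB level is pulled down toward the threshold before the gain is added.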

3) De-Esser

This one does exactly what its name implies, and tames the S sounds in your dialogue that can be particularly piercing to listen to. It’s essentially another compressor, but one triggered by energy in the high frequencies rather than by overall level. On this we’ve set:

  1. Frequency = 5.4 kHz. This means the De-Esser is triggered by signal content above 5.4 kHz.
  2. Range = -3.0 dB. When the De-Esser is triggered, it reduces the gain of the signal by up to 3 dB, which should take the edge off any piercing sibilance.

De-Esser

As Quick as A-B-C

For those short on time, I’ve attached an Avid bin called Dialogue RTAS Effects.avb to this article which contains these three presets. They are labeled A, B, and C and should be applied to your RTAS chain in that order.

Tip: To quickly copy RTAS effects from one track to another, hold down Option and drag the effects you want to copy from one track’s RTAS chain to another.

 

 


Thoughts from NAB 2012


So after spending a day and a half on the floor of NAB 2012 (and a fun night at Media Motion Ball!), here are some of the thoughts I had and things I’m excited about after talking to various companies on the exhibition floor.

ATTO Technology Thunderbolt-to-10Gb Ethernet (link)

I asked the ATTO guys whether anyone had used one of these to connect a laptop or iMac to a Unity, and they said that they were so new there weren’t enough units available to send out for certification to companies like Avid. In theory it should work, and Avid is at the top of the list to receive a test unit, so hopefully we’ll see some results on that either from them or someone else who just gives it a go to see what happens. These would be great in scenarios where you just quickly want to connect a temporary system to your Unity, like for giving access to a trailer editor so they can pull selects from your dailies without taking the time or system off an assistant editor.

Amazon S3 Secure File Delivery (with or without Aspera)

So I wandered into the Aspera booth since independent-level secure file delivery is something I’ve long been interested in solving in a cheap way. Aspera is not cheap, and all of my experience with it has been with big studios or facilities that can afford it, but I saw something about Aspera linked to Amazon S3 and wanted to learn more. Having “Freelance” on my NAB badge made sure that no one from Aspera was interested in talking to me, but the representative from Amazon there was very nice and I chatted him up for a minute.

The rub is this: Aspera is offering so-called “On Demand” service, whereby you use their FASP transfer protocol to get your files quickly up to S3. You then get charged by Amazon for the bandwidth of whoever downloads that file, as well as for the use of the Aspera software. I was hopeful that something called On Demand would be more affordable for indies and people who still need to send very large, secure deliveries but don’t have the money or server infrastructure to have an enterprise-level solution at their disposal. Predictably, this is not the case. In fact, I’m not even sure why they’re calling it On Demand, since they want to charge you a monthly subscription fee of $750.

The upside, though, is that with Aspera effectively ignoring me and Amazon giving me all the time I wanted, I learned a lot about S3 that I didn’t know before. Most importantly, the Amazon guy pointed me toward access control, which eventually led me to a page in the Amazon S3 docs titled Signing and Authenticating S3 REST Requests. It’s a mouthful, but down near the bottom it says you can use what they call Query String Authentication to send an expiring link to a private file on S3. With some work in PHP, one could pretty easily create an app to send links to private files that expire. Amazon doesn’t seem to offer the ability to expire a link after it’s been clicked once, or to provide logging, but for basic large-file delivery this should work well enough to start.
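To sketch what that little app would do, here is roughly the same thing in Python, using the legacy Query String Authentication scheme (Signature Version 2) that docs page describes. The bucket, key, and credentials below are placeholders, and newer S3 setups use the v4 presigned-URL mechanism instead, so treat this as an illustration of the idea rather than production code:

```python
import base64
import hmac
import time
from hashlib import sha1
from urllib.parse import quote

def expiring_s3_url(bucket, key, access_key, secret_key, expires_in=3600):
    """Build a time-limited link to a private S3 object (legacy SigV2 scheme)."""
    expires = int(time.time()) + expires_in
    # The string-to-sign for a simple GET: method, blank MD5/type headers,
    # the expiry timestamp, and the resource path.
    string_to_sign = f"GET\n\n\n{expires}\n/{bucket}/{key}"
    signature = base64.b64encode(
        hmac.new(secret_key.encode(), string_to_sign.encode(), sha1).digest()
    ).decode()
    return (f"https://{bucket}.s3.amazonaws.com/{key}"
            f"?AWSAccessKeyId={access_key}"
            f"&Expires={expires}"
            f"&Signature={quote(signature, safe='')}")
```

Anyone holding the link can fetch the file until the Expires time passes, after which S3 refuses the request.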

Wacom Intuos5 (link)

I’m a huge fan of Wacom and I insist on having an Intuos tablet at my desk wherever I edit. I was a little skeptical of the Intuos5, just because I didn’t see the need for adding touch capability when I already use the pen 100% of the time. Having played around with it, I can say it’s very nice, though I still don’t think I would use the touch features much (I’m sure I’ll eat my words later). One problem it would solve, though, is that other people who jump on my system wouldn’t have to fumble around with the pen to do a quick task. I will miss the little displays the Intuos4 had, though; the HUD that pops up on screen when you touch one of the side buttons is annoying and takes too long to appear. So I’ll definitely be keeping the Intuos4 I have at home.

DCP Creation

I spent part of my second day on the floor talking to the EasyDCP and Doremi people. I have conflicting desires when it comes to DCP generation. On the one hand, free and open-source tools already exist that let you roll your own DCP, and I oppose paying thousands for something I can do for free. On the other hand, even with the free tools it’s still a pain in the ass to do it properly, so paying for the cheaper end of DCP-creation software like EasyDCP might be worth the time it would save me in figuring out the color space, multiple-reel, and subtitling issues that are only the beginning of my problems when testing the open-source tools. Additionally, EasyDCP offers a version that allows KDM generation, and they are coming out with a KDM database app to keep track of your server certificates and issued KDMs; encryption plus KDM management is something the open-source tools haven’t gotten to yet.

On the fancier, more expensive end of things is Doremi. They sell a software package that lets you make your own DCPs (I’m not sure of the cost), in addition to hardware that can take an HD-SDI input and encode your DCP in real time. I asked the rep what it would do with a 1080p signal, and he verified that it can upres 1080p to 2K with either a flat or scope preset. This, for me, presents interesting possibilities, since on my last show we made quite a few temp DCPs of cuts played out from Avid, and to get the DCPs made we had to record our cut to HDCAM-SR, hand that off to a post house, and wait 24 hours to get a DCP back. If we could have rolled our own DCP, not only might we have saved money in the long run, but the turnaround time could have been much shorter: we would only have to watch the layback and then QC the DCP, instead of watching the layback, QC-ing the tape, running the tape across town, waiting a day, and then QC-ing the DCP. This is definitely something I want to pursue further for production companies or post houses where I anticipate regular work.

File-Based Camera Dailies Prep

At the Sony booth there was a guy demoing YoYo, which seemed very well thought out and potentially very useful. It handles all the usual backup and transcoding of the master files from the camera, in addition to allowing LUTs and basic color correction, and sound syncing including multichannel mixdowns or inclusion of only selected channels; to top it off, it will take advantage of a connected broadcast monitor if you have one. It’s definitely the most full-featured “DIT” app I’ve seen so far. The two things I wish it would do that I don’t think it does currently are maintain a database of all the clips it’s processed (useful as a codebook), and mark the audio it sends to Avid as coming from a film project (even if it isn’t) so that once you’re in Avid you can slip sync by perf. The YoYo rep did say that if you have time, the software can sync by processing the audio and finding the clap rather than by timecode alone, and that when it does it will nudge the audio as much as needed at a sub-frame level.

The Arri booth also had the Codex Vault, which I didn’t get much time to check out but could be an interesting alternative, albeit one that doesn’t seem to allow quite so much customization beyond its presets as the YoYo software does. I definitely want to check this one out more before the next show where I need to worry about this.

Streaming Dailies to iPads

I checked out the G-Technology G-Connect, and I’ve also previously looked at the Western Digital version of the same thing. The goal for something like this would be to put your dailies on one of these devices, which then act as a Wi-Fi hotspot and can stream the video stored on them to any connected iPhones or iPads. This would be useful for allowing people on set to view dailies without having to load up each iPad with a copy of the dailies, but since they’re intended more for consumer level use the encryption involved (or lack thereof) becomes a sticking point. There is password protection on the user interface, but the Wi-Fi transmission itself is unencrypted, and no studio would allow dailies to be transmitted over the air like that.

Field Recorders

I’m cutting a show shot on Alexa right now, and we initially considered using a field recorder to dual-record DNx media while the Alexa was shooting ProRes 444, but we had to abandon it because the field recorder couldn’t grab the filename the Alexa was using for the ProRes file over the SDI connection serving as its input. Since then the Alexa has gained exactly what we needed as a native option, but it nevertheless surprised me that this was a problem: what use is a field recorder for editorial if the names are different from the master files I’ll want to relink to later?

The Sound Devices PIX 240 and 260 do now grab the R3D filename off a RED camera and can name their proxy files accordingly, but don’t yet work this way with other cameras. Hopefully this becomes standard across camera brands, as it would make an editor’s or DIT’s life a bit easier.

New Cameras!

There were a few new cameras to check out at NAB this year. I played around with the Blackmagic one, and I saw the new offerings from Canon and Sony. I don’t really know enough about camera tech to comment knowledgeably, but I do like Blackmagic’s consideration of not creating another proprietary file format, and that’s about all I have to say on that.

And I Still REALLY Want One Of These

Flanders Scientific LM-2340W

 

Well, that’s it for my initial thoughts from my first ever trip to NAB! Next year I’ll have to try to get more time off!

Automate VFX Sequence Titles


This tip comes by way of George McCarthy, who was our VFX Editor extraordinaire on Mission: Impossible 4. I also created an online EDL to SubCap Converter you can use in lieu of the more manual way described below.

If you’re on a show that has to turn over a sequence to a VFX house, you’ll likely need to export a Quicktime of that sequence with titles over any shot that will be a VFX shot. This reduces confusion between Editorial and the VFX vendor, and is useful not only for labeling each shot with its shot ID, but also because it’s not always obvious which shots are supposed to have work done to them. Makeup fixes, for example, wouldn’t be immediately obvious when scrubbing through a Quicktime, but if you title the shot it’s easy for the VFX vendor to match your count sheets to a visual reference.

SubCap Example

Before Avid added Generator clips to the effect palette, you had two options for titling your sequence. One was to manually type in shot names, durations, etc., and save each title individually in a bin. The other was to attempt to use the Autotitler function in Marquee, though almost immediately after Marquee came out, Avid broke the Autotitler in a software update and left it that way for years. In either case, you’d still be left with manually cutting in titles over each shot, which is tedious and error-prone.

There are now multiple methods for doing this in a more automated fashion, including one I just learned about that uses the Timecode generator plugin over a subclip, but in this article I’ll look at using the SubCap effect and feeding it a text file converted automatically from an EDL.

Prepping an EDL

There are a couple reasons why an EDL is handy for generating a subtitle file. The first is that you can include locators in the EDL, so you can reuse the locators you’ve already created in your sequence that list each shot’s ID. The second is that by using an EDL instead of a straight locator export, you can get timecode ins and outs for the shot, so that the subtitle is the appropriate length. From this you can also calculate the duration if you’d like to include that in your title.

So the first step is to make sure your sequence is ready, you’ve run the Commit Multicam Edits command, and all of your VFX shots have a locator somewhere on them that lists the correct shot ID.  Save your bin and open the sequence in EDL Manager. You do not have to lift out non-VFX shots from your sequence, but you do need to make sure that Locators are turned on in the EDL settings, and that your EDL type is CMX3600.

Make sure Locators are enabled in the EDL settings

 

Export an EDL from the video layer where your VFX locators exist, and then either use the converter I created to do it for you, or bring that into a text editor that allows regular expression Find & Replace (such as TextMate or jEdit). This is the regular expression I use to grab all the right bits from the EDL:

\d{3}[^\n]*([0-9:]{11})\s([0-9:]{11})\s?\n(?!\d{3})(?:.*\r?\n(?!\d{3}))*?\* LOC: [\d:]{11}\s(\w+)[^\S\n]+([^\r\n]+)\r?\n?
Looks scary, I know. But that one line of gibberish looks for any series of lines in an EDL that includes an EDL event next to a Locator comment. When it finds one, it saves the timecode in and out for the sequence, as well as the color and text from the locator. With all that information saved, you could choose whether to use the color to handle one color of locator differently from another, or to calculate a duration from the timecodes. Using backreferences, you could fill in your Replace field with $1 $2\n$4\n\n, for example, and that would give you the format you need for a SubCap file. This RegEx won’t get rid of all of the non-VFX EDL events you’ll want to ignore, so you’d have to go through and manually remove those lines, or write a RegEx that negates the one above. Don’t forget to add the opening and closing tags, too. A small sample of the final product of a SubCap file looks like this:
<begin subtitles>

04:00:00:00 04:00:08:00
CS0010 (FORMERLY CS1000)

04:00:42:22 04:00:51:00
CS0020

<end subtitles>
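If you’d rather script the conversion than do the Find & Replace by hand, the same regular expression drops straight into a few lines of Python. This is a simplified stand-in for the online converter mentioned above, not its actual code, and as noted you’d still need to weed out non-VFX events:

```python
import re

# The same pattern from above: an EDL event line followed (eventually) by
# its "* LOC:" comment, capturing record in/out, locator color, and text.
EDL_LOC = re.compile(
    r"\d{3}[^\n]*([0-9:]{11})\s([0-9:]{11})\s?\n(?!\d{3})"
    r"(?:.*\r?\n(?!\d{3}))*?\* LOC: [\d:]{11}\s(\w+)[^\S\n]+([^\r\n]+)\r?\n?"
)

def edl_to_subcap(edl_text):
    """Convert matched EDL events into the SubCap subtitle format."""
    lines = ["<begin subtitles>", ""]
    for rec_in, rec_out, _color, text in EDL_LOC.findall(edl_text):
        lines += [f"{rec_in} {rec_out}", text, ""]
    lines.append("<end subtitles>")
    return "\n".join(lines)
```

As with the manual method, check the output: the pattern happily matches non-VFX events that happen to carry locators, and the unused `_color` capture is there if you want to filter on locator color.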

Importing into the SubCap effect

Once you’ve got your SubCap text file, throw a SubCap effect on an empty video layer and go to Import Caption Data to bring your titles in. Make your adjustments for appearance (make sure to check out the Global Properties pane as well), and optionally you can save a stylesheet for the future so you only have to make those adjustments once.

SubCap Effect Panel

 

Check Your Work!

This is the last step, and it’s very important. Just because the process is automated doesn’t mean there wasn’t an error, or that your source EDL was perfect. Check your sequence to make sure it has everything it’s supposed to and nothing extraneous. Even on small shows there can be a lot of hands in the locator jar, and you might find an errant locator buried in a nested clip, or a missed two-cut shot that got separated from its locator. If you need to add a title, it’s easy to do so from the SubCap effect editor.

Timeline with SubCap-Imported Titles

EDL to SubCap Converter

I've released a new version of this tool at shift-e.net/tools. Please start to use that, and let me know if you run into any problems. This site will remain online, but all future development will happen on the new site. Thanks!

This tool takes an EDL as input (paste it in), and converts it to Avid DS Subtitle format, which is one of the two formats you can supply to Avid's SubCap effect....

Keeping Mobile Avid Media Updated

On most of the films I’ve worked on, I’ve needed to keep at least two Avid systems up to date, whether because an editor wants to be able to cut from home or the Director wants the ability to cut on set. To handle this I wrote a Terminal (bash) script that searches for any new MXF or OMF media on specified volumes since the last time the script was run. On first run, the script does nothing but set the date to compare against the next time it runs, though if you need to change that, instructions are below.
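The core idea of that script can be sketched in Python: compare every media file’s modification time against a marker file left behind by the previous run. This is an illustration of the approach, not the actual bash script; the function name and marker file are made up for the example:

```python
import os
import time

def new_media(volumes, stamp):
    """Return MXF/OMF files modified since the last run, then reset the marker file."""
    # If the marker doesn't exist yet (first run), find nothing; just set the date.
    last = os.path.getmtime(stamp) if os.path.exists(stamp) else time.time()
    found = []
    for vol in volumes:
        for root, _dirs, files in os.walk(vol):
            for name in files:
                if name.lower().endswith((".mxf", ".omf")):
                    path = os.path.join(root, name)
                    if os.path.getmtime(path) > last:
                        found.append(path)
    with open(stamp, "w"):  # touch the marker so the next run only sees newer files
        pass
    return found
```

Each run hands back only the media that appeared since the previous run, which is what you’d then copy to the second Avid.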

Mission: Impossible 4 Formats and Aspect Ratios


In terms of workflow, planning for MI4 was certainly a challenge. If it were just one film format or just one release format there would be nothing special to write here, but from the outset we knew we would be shooting multiple film formats and releasing in three different aspect ratios, and keeping on top of all that takes some effort. The final film contains imagery shot on six different formats, which means a lot of different native aspect ratios and keycode/timecode systems at work. We also coerced all six formats into the constraints of just two aspect ratios, so in the end the anamorphic RED footage was slightly resized to fit into 2.35, and everything else was cropped to fit the 8-perf aspect ratio, since that format had the most footage in the IMAX parts of the movie.