Exporting with Premiere Pro 2017 and Adobe Media Encoder
What Is Compression?
When you encode a video, you're compressing it so it takes up less disk space. There are tons of different codecs that let you do this and many different file formats. For example, H.264 is a codec and MOV is a file format. H.264 can encode your video, but you've probably seen it delivered as an MOV, AVI, or MP4 file. That's because all of these formats can serve as a container for H.264 video. There's no significant difference between H.264 files in these various containers, so don't worry too much about which one to deliver. My preference is MP4, because pretty much everything can play it, but most video software can handle the other formats too.
Bit Rates
When you're encoding video, you're going to be dealing with bit rates. A bit rate is how much data is used for each second of video. Let's say you have a video that was encoded at 1000kbps. Despite what it looks like, that doesn't mean each second of video takes up 1000KB, but rather 125KB. In this case, kbps stands for kilobits per second, not kilobytes. There are eight kilobits in a kilobyte, so you can get kilobytes per second by dividing your number of kilobits by eight. If your video was exactly 94 seconds long and encoded at a bit rate of 1000kbps, it would be 11.75MB in size. This all assumes that every second of video uses exactly 1000 kilobits, which is only the case if you encode at a constant bit rate (CBR). CBR encoding is generally used for streaming media to keep the flow of data as consistent as possible, but for progressively downloaded video (what you find on YouTube, Vimeo, and most other video sharing sites) you're better off encoding at a variable bit rate (VBR). VBR encoding can work in a couple of ways, but most encoders just have you specify a single number (in kilobits per second) as the average bit rate. This means that if you specified 1000kbps, each second of the video would be encoded at around 1000kbps. Some seconds of your video will be less complex than others, so those won't require all 1000kbps. The ones that need more will take more, and on average this results in higher-quality video without affecting the file size much. There's more to it than that, but that's the basic idea.
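The arithmetic above is easy to get wrong, so here's a small sketch of it in Python (the function name is mine, just for illustration). It converts a bit rate in kilobits per second to kilobytes per second and multiplies by the duration:

```python
def video_size_mb(bitrate_kbps, duration_s):
    """Estimate file size in MB for a constant bit rate encode."""
    kilobytes_per_second = bitrate_kbps / 8  # eight kilobits in a kilobyte
    total_kb = kilobytes_per_second * duration_s
    return total_kb / 1000                   # 1000KB per MB, as in the text

# The example from the text: 94 seconds at 1000kbps
print(video_size_mb(1000, 94))  # 11.75
```

This only holds exactly for CBR; with VBR it's the average, so treat the result as an estimate.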
Key Frames
One more thing you should know about encoding is key frames. Pretty much every codec you'll use to export video for the web, DVD, Blu-ray, etc., uses key frames. Codecs that don't use key frames are generally designed for editing purposes because they're less processor-intensive and therefore make editing a bit faster. The DV codec is one such codec. Compare it to H.264 and see how much faster it is when editing. So what are key frames? Key frames are the full picture. Let's say you have 30 frames of video and the first and 30th frames are key frames. You can think of those frames like photos—all the detail of the video exists in that frame. Frames 2 through 29, however, do not have the full picture. Instead, frame 2 just contains the changes that have happened since frame 1, because frame 2 is not a key frame. How often do you need to use key frames? Less often than you'd think. Every six seconds is customary nowadays. The more key frames you have, the easier it is to scrub through the video, but the downside is that more key frames tend to result in lower-quality video. Why? Because it takes a lot of data to store a key frame, and if you only have 1000 kilobits (or whatever) for each second, the more full frames you store in that second, the less room you have to store the changes between them. If you use key frames too frequently, the encoder has to degrade the quality of both those changes and the key frames themselves to stick to the average bit rate. It may seem like more key frames would be better, but you generally just end up with a lot of lower-quality key frames and, therefore, lower-quality video.
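Encoders usually ask for the key frame interval as a number of frames rather than seconds, so you have to convert using your frame rate. As a quick sketch (the function name is mine), the customary six-second interval works out like this:

```python
def keyframe_distance(interval_seconds, fps):
    """Convert a key frame interval in seconds to a frame count."""
    return round(interval_seconds * fps)

# Six seconds at common frame rates
print(keyframe_distance(6, 29.97))  # 180
print(keyframe_distance(6, 24))     # 144
```

So for 29.97fps footage, a six-second interval means entering roughly 180 as the key frame distance in your encoder.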
How to Design an Encoding Specification
When designing a specification or a preset of your own, the first thing you need to figure out is how you're delivering your video. If you're delivering it on the web, you're not going to want to target anything slower than the slowest broadband connection, because anything below that level isn't really fast enough to handle video worth watching in the first place. (Well, unless it's a cellphone, but we've learned to be patient with those.) The slowest broadband connection you're going to find is probably the 768kbps DSL connection. That's the peak data rate, so if you're thinking you should encode your video at 768kbps, you're setting yourself up for trouble. If you're reading this, you're paying for an internet connection. Does it always perform at the peak rate? Probably not. If you want people to be able to progressively download your video in real time, you need to take the lowest target connection speed and reduce it by one third. For 768kbps, that's 512kbps, so you want to encode your video at 512kbps. This is really only a useful bit rate for standard definition video, so you shouldn't use it with anything larger than 640x360 or 640x480. Around 1000-1200kbps is a good target video bit rate for 720p files, and 1080p should be twice that, if not more. If you're simply creating a source file to upload to YouTube, Vimeo, or some other video service, those bit rates can be much higher because those sites will re-compress the file using their own standards. In that case, you should allocate bit rates closer to 3500-4000kbps for 720p and 8000-9000kbps for 1080p. Since your video is getting compressed a second time, this extra quality will make a difference in the final product people see when you upload it to a video sharing site. As far as audio goes, I like to use 192kbps for MP3 or AAC audio, although you'll need to keep it to 160kbps if you're encoding for an Apple device. Why? Good question.
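The rules of thumb above can be collected into a rough bit-rate picker. This is just a sketch of my own guidelines, not any official spec (the function name and thresholds are illustrative):

```python
def target_bitrate_kbps(height, for_upload=False):
    """Rough video bit rate suggestion by frame height.

    for_upload=True means you're making a source file for a
    sharing site that will re-compress it, so aim much higher.
    """
    if for_upload:
        return 8000 if height >= 1080 else 3500
    if height >= 1080:
        return 2400  # at least double the 720p rate
    if height >= 720:
        return 1200
    # SD for real-time progressive download: a 768kbps DSL line
    # reduced by one third leaves 512kbps of usable headroom.
    return 512

print(target_bitrate_kbps(480))         # 512
print(target_bitrate_kbps(720))         # 1200
print(target_bitrate_kbps(1080, True))  # 8000
```

Treat the numbers as starting points; complex, high-motion footage will want more.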
All of the suggestions mentioned in the last paragraph are based on the H.264 codec, but they should work well with pretty much any modern codec. The better the codec, the lower the bit rate can be. That means that 512kbps will look a lot better in H.264 than it will if you use a codec that's not as good. Try encoding a file at 512kbps using both H.264 and the standard MPEG4 codec. You'll see a difference, although it may be a little subtle.
The Most Important Thing You Should Know
Don't mess anything up!
Yeah, in a perfect world, right? The thing is, if you shoot terrible quality video your encode will look even worse. Sometimes you will shoot terrible video and you won't even realize how bad it is until you encode it and it looks like someone blurred out all the detail.
A lot of us think our poorly-lit video looks pretty good at 1080p, but that's because you're getting quite a bit of detail at that resolution. That video is also at a really high bit rate, so detail is retained. The more you compress your video the more detail is thrown out, so if your video isn't well shot or lit, you're going to lose a lot more detail when encoding than you would if you just shot it properly in the first place.
While it's easier said than done, do the best you can to get the highest quality video you can before you bring it into post production. We can do some amazing things in post, but nothing can miraculously turn your poor-quality footage into a work of art. In fact, there's almost nothing you can do to save bad footage, regardless of how good you are. You just can't find detail that isn't there.
So before you sit down to create your amazing film, how-to video, or whatever, take the time to shoot it well with good light, or you will be kicking yourself when everything is over. It sucks to put a ton of work into something and then find out it looks terrible when you put it online or on a DVD, so do everything you can to make it look good so you don't have to try to salvage it in post.