My buddy Geoff Morrison (internationally best-selling author of Undersea, which yours truly edited) has a new article up at CNET exploring the differences, and the similarities, between 1080i and 1080p that's sure to spark some debate. That's what the article is about on the highest level, anyway. What's cool to me is that, in explaining not only why they're the same resolution but also why, in some real-world situations, 1080i and 720p are better at certain tasks than 1080p, Geoff has cooked up some really succinct explanations of things like upconversion, deinterlacing, 3:2 pulldown, the difference between frames and fields, and why most video games aren't the resolution they claim to be.
Here’s one of my favorite bits:
ABC and Fox very consciously made the choice to go with 720p over 1080i. As we said earlier, this largely wasn’t based on some limitation of the technology or being cheap. It’s that 1080i is worse with fast motion than 720p.
At 60 frames per second (720p), the camera is getting a full snapshot of what it sees every 60th of a second. With 1080i, on the other hand, it’s getting half a snapshot every 60th of a second (1,920×540 every 60th). With most things, this isn’t a big deal. Your TV combines the two fields. You see frames. Everything is happy in TV land.
But let’s say there’s a sportsball guy running across your screen from right to left. The camera captures a field of him, then a 60th of a second later, it captures another field of him. Uh-oh, he wasn’t nice enough to stand still while this happened. So now field “A” has him in one place (represented by half the image’s pixels) and field “B” has him slightly to the left (represented by half the image’s pixels). If the TV were to combine these two fields as-is, the result would look like someone dragged a comb across him. Conveniently, this artifact is called combing.
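If you want to see the combing Geoff describes for yourself, here's a minimal sketch in Python. It uses a tiny toy frame (a hypothetical 16×8 grid standing in for 1920×1080) and a 3-pixel "sportsball guy" block; the function name and sizes are mine, not anything from the article. It captures two interlaced fields a 60th of a second apart, with the subject moving left in between, then naively weaves them into one frame:

```python
# Toy illustration of interlaced capture and the combing artifact.
# Assumptions: a hypothetical 16x8 "frame" (real 1080i is 1920x1080)
# and a 3-pixel-wide block standing in for a moving subject.

WIDTH, HEIGHT = 16, 8

def capture_field(obj_x, parity):
    """Capture one interlaced field: only rows where row % 2 == parity."""
    field = {}
    for row in range(parity, HEIGHT, 2):
        # draw the moving block at horizontal position obj_x on each captured row
        field[row] = ["X" if obj_x <= col < obj_x + 3 else "." for col in range(WIDTH)]
    return field

# Field "A" at time t (even rows), field "B" a 60th of a second later
# (odd rows) -- the subject has moved to the left in between.
field_a = capture_field(obj_x=10, parity=0)
field_b = capture_field(obj_x=7, parity=1)

# Naive weave: interleave the two fields into a single frame as-is.
frame = [field_a.get(r) or field_b.get(r) for r in range(HEIGHT)]
for row in frame:
    print("".join(row))
# Even rows show the block at one position, odd rows at another:
# the jagged alternating edges are the "comb" dragged across him.
```

Running it prints alternating rows with the block offset by three columns, which is exactly the comb-teeth pattern a real deinterlacer has to detect and clean up.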
Be sure to check out the whole article, and join in on the debate.