What’s more important than technical accuracy? Making sure your test speaks for itself. When others speak for you–especially if they are non-experts–the messaging can quickly spin out of control.
I believe Zacuto, and all the people who participated in their 2012 camera shootout, had only the best of intentions. Unfortunately, they don’t know much about comparative testing. The part they didn’t think through was the political part: how does this test affect manufacturers? Cinematographers? Producers and directors?
That’s why most tests let the results speak for themselves. You present your methodology, you show the footage, and you ask if anyone has any further questions. If you do present conclusions, then each one should be backed up by visual evidence that everyone can see. And, most importantly, if the audience only ever sees the test results, and never sees the supporting documentation, they should still walk away with an accurate perception of what transpired. All the important information must be presented at once, because you only have one chance to make a first impression, and that impression will linger with the audience for a long, long time.
This test contained a number of fatal flaws, many of which I detailed here. There are two that I want to stress again before I show how the test messaging spun out of control:
(1) Allowing DPs to relight the same set, with the same actors and the same action and camera movement, to make prosumer cameras look similar to high-end cameras, was a huge mistake. Zacuto says that this was really meant to be about the separate DPs and not the cameras, but that’s clearly not the impression given when these DPs changed some of the lighting and nothing else. The camera move remained the same, the composition remained the same, the broad strokes of the lighting remained the same.
String all of these clips, shot by different cameras, together and you have a camera shootout. It’s not a “documentary” and it’s not a DP shootout–the emphasis is clearly on the cameras. If you want to do a DP shootout, then let each DP light the set however they want from scratch and give them a single camera to work with. When the only variable changed is the DP, we can examine how that one variable affects everything else.
It’s incredibly silly to say that these tests are about both the DPs and the cameras when it’s impossible to know where one leaves off and the other begins. Zacuto created a situation where a camera like the Panasonic GH2 looks really good not because it’s a phenomenal camera but because its crew knew how to work to its strengths and compensate for its weaknesses–but there’s no way to quantify what part was the camera and what part was the crew. All the casual viewer knows is that the GH2 looked damned good. Oh, and they know one other thing…
(2) Francis Ford Coppola really liked the GH2. Or at least that’s the message that’s traveling around the Internet right now.
Test results should be presented with as little comment as possible, or the comments should be backed up during the screening by drawing attention to specific aspects of the frame. A celebrity audience offering commentary on what they saw can add significantly to the viewing experience, but only if the viewers are celebrated because they are visual experts. Francis Ford Coppola is a magnificent director, but he does not have as good an eye as most of the people in that room, who have done nothing but create commercially successful and spectacular images for decades–yet he is, by far, the most famous.
Allowing a famous celebrity, who is not a DP, to participate in this presentation resulted in a quote that has spread across the Internet like wildfire–and the quote, taken out of context, doesn’t mean what many think it means. It does, however, feed into a bias–created originally by RED–that low-end tools are now good enough that anyone can execute Hollywood-level cinematography with inexpensive tools. This is true, but–sadly–the emphasis falls on the tools first, when what really matters is the craftsperson using the tools.
As of Saturday, August 18, 2012, at 1pm, here’s what a Google search turns up based on the keywords “Zacuto test Coppola”:
43rumors.com:
How much of what they liked was the DP’s choice in lighting versus the camera itself? We’ll never know.
EOSHD:
Does anyone think that the takeaway is “Get a great DP!” as opposed to “Buy a $700 camera!”?
Gizmodo:
The article also says: “It’s impressive that a consumer camera could stand up to professional cinema rigs, but there is a great degree of subjectivity at play here. The skill and decisions of each cinematographer definitely played a key role, as did the personal preferences of those voting.” But we have no way of knowing how the skills and decisions of the cinematographer affected the outcome…
Luminous Landscape (forum comment):
It’s a valid point, unfortunately…
Imaging-Resource.com:
Personal-View.com:
My emphasis. And it’s not true. The camera matters tremendously, and must be matched to your budget and project needs.
Ryan E. Walters, cinematographer (who participated in the test), at ryanewalters.com:
He goes on to say that the biggest differences between the low-end cameras and the high-end cameras could be easily seen on a big screen. I had to watch the Vimeo feeds, as I suspect most people did. The differences aren’t as obvious there, so the audience commentary makes a HUGE difference.
pmanewsline.com:
Headline: Video test: $700 Panasonic camera beats $65,000 competition
PhillipBloom.net:
He makes a good point. You could love a cheap, prosumer camera simply because its DP did a subjectively better job of lighting the scene than the DP tasked with making a higher-end camera look good. The fact that the GH2 looks so good may have nothing to do with the camera itself, but the test presentation makes it impossible to know for sure.
MarketWire:
Are they choosing cameras because they like the way the camera looks or how the DP lights? Who knows? There’s no way to tell.
This is a press release put out by Zacuto itself. It does attempt to give credit to the cinematographers for their exceptional work, but does anyone doubt that the takeaway is that a $700 camera “won”? When Zacuto itself emphasizes that the GH2 received an “overwhelming response,” do they realize what kind of message they are putting out there?
This was also picked up by Reuters.
News.Doddleme.com:
Want to guess who that person was? I’ll give you a hint: he wasn’t the most visually sophisticated person in the room, but he was–by far–the best director…
DPReview Forums:
Were they voting for the GH2 or for Colt Seaman? Most likely Mr. Seaman, but how do we know objectively?
Techcitement:
No, camera “B” was the GH2. It’s hard to know what voters were responding to when watching this footage in a small video window on a computer screen. Most probably they were impressed by the lighting, but what does that have to do with the camera?
Now, what I didn’t know when I started this article was that the website NoFilmSchool conducted an online vote, and the GH2 won. That means voters were looking at a compressed Vimeo feed, which probably affected their perceptions as much as the objective look of the camera and the subjective lighting executed by the DP. This kind of thing can’t be avoided when posting video to the web, because most people won’t be able to see the footage in a theater. The problem, though, is that the lack of side-by-side objective comparisons makes it incredibly hard to see the differences between cameras in a small video window. It would be interesting to see what differences CAN be seen under those circumstances, assuming the cameras are looking at the same set with the same lighting. Once the lighting changes, we really have no idea what we are comparing.
Nearly every article above emphasizes that the tests are subjective and a lot of what people are responding to is most likely Colt Seaman’s excellent work. That’s the problem, though: how do we know, when we view the results from each camera, whether we are responding to the camera’s look or the DP’s touch? We have no idea.
The other problem is that every article first mentions how surprised everyone was by the GH2’s overwhelmingly positive response, and only then credits Colt Seaman’s contribution. While nearly all of these articles do the right thing and try to give credit where credit is due, they all–consciously or subconsciously–mention the camera first. The DP reference is always second. This is the way people think at this moment in time. It can’t be avoided.
Camera tests are hard. Presenting their results is even harder. It helps a lot when experts show you what they see in each image, and you can follow along because the test images sit side by side, letting you note where something happens in one frame that doesn’t happen in another. When some of the contributors are experts and others are simply famous, though, you’re opening yourself up to a world of political trouble if the famous people disagree with the experts. In the media, famous people always win.
I suspect that the inclusion of Mr. Coppola had more to do with marketing than with critical visual analysis. He’s an incredible artist, but he’s not looking at those images in the same way that an ASC member is.
In the end, it could be pure luck that the GH2 got so much attention in these tests. If Mr. Seaman had instead lit for the Sony F3, would that camera be getting all the attention now? What if he had lit the iPhone segment instead? We’ll never know. What we do know is that if the scene had been lit once, with the lighting never changing between cameras, we’d have a much better idea of what the cameras can do. As it stands, we only know that some DPs do work that viewers find subjectively better, and some cameras appear subjectively better, and without isolating one variable or the other we can’t know which is making the difference.
It’s because we can’t know which camera is objectively better, or which DP is subjectively better, or see results from multiple cameras in the same frame at the same time, that these tests are largely a failure. It is possible to learn something from them, but this could have been a tremendous opportunity to see how each camera–each digital filmstock, if you will–stacked up against the others.
Q: Which film manufacturers would line up a bunch of cameras loaded with different film stocks in front of one set, let one DP per filmstock light the set to their liking, and then screen the results as a film comparison shootout?
A: None.
Art Adams is a DP–and educator–who endeavors to shoot tests whose results cannot be interpreted incorrectly. His website is at www.artadamsdp.com.
“ProVideo Coalition shut down comment registration a few months ago due to spam attacks. Commenting will be back and fully functional when the site relaunches which is happening soon. In the meantime, you can engage with the community via our Facebook or Twitter pages or send an email to info(at)provideocoalition.com to get a message posted in the comment section of this article.”
Just so you know, I monitor the PVC Facebook page as well as the Cinematography Mailing Lists (CML), so if you can’t post comments here try posting something in either of those places. I’ll most likely see your post and respond.
I understand there’s a discussion happening on DVXUser but I doubt I’ll get around to joining that this weekend. If I don’t walk away from this now I’m going to end up sleeping on the couch.
-Art