What would it mean to “distantly watch” films?
In this post I am interested in considering how something akin to what Franco Moretti has called distant reading in literary studies might function in film studies. If our current method of evidence gathering for film analysis is close reading—watching a film or a series of films minutely and picking the moments that best represent the characteristics we intend to write about—then what kind of method would we require to consider whole corpora of films simultaneously? More importantly, what sort of project would necessitate such a method?
To begin exploring how a distant reading method could exist in film studies, let me consider some existing digital tools for the analysis of films.
Digital Reading of Films
If we take the shot-by-shot analysis as the equivalent of cinematic close reading, then CineMetrics is a starting point in the execution of a film’s distant reading. Launched by film historian Yuri Tsivian with a program developed by Gunars Civjans, CineMetrics is a tool that aids in recording data about film shot lengths (and more recently, types of shots), as well as deriving statistics from this data. Since its inception in 2005, CineMetrics has been used by scholars researching the history of film style by tracking, for instance, historical changes in editing patterns or patterns across a filmmaker’s corpus. Its database has steadily grown with the help of researchers’ contributions, currently standing at over 13,000 entries. CineMetrics essentially provides a macro perspective on one of the constituent elements of a film—its shots—by presenting them as statistical data, and thereby allows for comparative analysis at a distance. In fact, Lev Manovich’s Software Studies Initiative already performed this kind of comparative analysis in 2008 with the data that was available then.
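The kind of figures CineMetrics derives can be sketched in a few lines. The cut timestamps below are invented for illustration, and the function name is my own rather than anything from the CineMetrics tool itself; the point is simply that once shot boundaries are recorded, shot-length statistics such as the average shot length (ASL) fall out directly.

```python
from statistics import mean, median

def shot_lengths(cut_times):
    """Turn a sorted list of cut timestamps (seconds from the film's
    start, including 0.0 and the final cut) into a list of shot lengths."""
    return [b - a for a, b in zip(cut_times, cut_times[1:])]

# Hypothetical cut points for a short sequence.
cuts = [0.0, 4.2, 6.9, 12.5, 13.8, 21.0]
lengths = shot_lengths(cuts)

# Average shot length (ASL) and median shot length are the two figures
# most often cited in statistical studies of film style.
asl = mean(lengths)
msl = median(lengths)
```

Aggregating these figures across many films, or across one director's career, is what turns a close, shot-by-shot record into a distant, comparative view.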
Another way of looking at a film as a whole made up of representative elements from its constituent shots is the popular movie barcode visualization. These barcodes are created by compressing each frame of the film into a bar one pixel wide, collapsing that frame’s entire color distribution into its predominant color. The individual bars are then placed side by side to provide an overview of the film’s color palette in sequential order. The Tumblr that popularized these barcodes features dozens of examples, but any film can be turned into a barcode with a simple program (still in beta) or by following these step-by-step instructions.

Finally, and decidedly different, there’s the Audio-visual Cinematic Toolbox for Interaction, Organization, and Navigation (ACTION) from the Bregman Lab at Dartmouth. Currently in its initial phase, ACTION creates automatic analysis routines to extract raw image and sound data from films, and then provides a workbench for studying this data.
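The barcode procedure can be sketched without any video machinery at all. In practice the frames would come from a decoded video stream (via ffmpeg or similar), and different generators reduce each frame differently; the toy below uses synthetic frames and a simple per-channel average as a stand-in for "predominant color," which is an assumption on my part rather than the method of any particular barcode tool.

```python
def average_color(frame):
    """Collapse one frame (a list of (R, G, B) pixel tuples) to a single
    color by averaging each channel -- the simplest stand-in for the
    'predominant color' reduction a barcode generator performs."""
    n = len(frame)
    r = sum(p[0] for p in frame) // n
    g = sum(p[1] for p in frame) // n
    b = sum(p[2] for p in frame) // n
    return (r, g, b)

def movie_barcode(frames):
    """One one-pixel-wide 'bar' per frame, kept in sequential order."""
    return [average_color(f) for f in frames]

# Two tiny synthetic "frames": one mostly red, one mostly blue.
red_frame = [(200, 10, 10), (220, 30, 10), (180, 20, 30)]
blue_frame = [(10, 10, 200), (30, 10, 220), (20, 30, 180)]
bars = movie_barcode([red_frame, blue_frame])
```

Rendering each tuple in `bars` as a one-pixel-wide colored stripe, left to right, yields exactly the barcode image: the whole film's color arc legible at a glance, with no frame viewed in full.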
This overview of digital tools for film reading is ordered from the tool that requires the most human input and guidance to the one that requires the least. CineMetrics, in its current incarnation, still depends heavily on the researcher watching a film while simultaneously recording the values needed for analysis. Movie barcodes can be produced automatically, but the inputs need considerable refining for the results to be anything other than colorful bars.