Telescoping, Part Two: How to (Not) Watch One Million Moving Images

In my last post I argued for a form of “distant reading” that could be applied to film studies, and claimed that such a practice could be just as useful as our current close reading practices. In this post I continue that line of argument by considering one way distant reading could take place in film studies: analyzing systems and patterns across sets of films.

Montage Strikes Back

The Soviet formalists, pioneers in film theory and praxis, are making a comeback. True, their influence and canonical status within the discipline mean they never really left, but there is a peculiar resonance in the fact that the first projects in the digital analysis of film hark back to those early practitioners. Everything old is new again, if you will. I am referring specifically to the Digital Formalism project, a collaborative venture between the Austrian Film Museum, the University of Vienna, and the Vienna University of Technology that, between 2007 and 2010, took the Vienna Vertov Collection and developed computational tools to aid in the analysis of Dziga Vertov’s works. The project has produced interesting discussions of issues such as the difficulties of digitizing archival films and the metadata annotation of shots of urban landscapes. Most notably for this discussion of distant watching, the project illustrates the challenges facing any computational visual analysis of film: how do we make legible for computational analysis the multiplicity of semantic, compositional, and technical aspects of a single shot? For the Digital Formalism project, the answer was to use algorithms for simpler tasks, such as detecting intertitles, and human input for more complicated ones, such as motion tracking and image composition. Of course, this solution is only feasible with a very small number of films, and even then it took years to complete. Such a process seems unthinkable with ten times the number of films, let alone a hundred times.
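
To make that division of labor concrete, here is a minimal sketch of how algorithmic intertitle detection might work. This is a generic heuristic, not the Digital Formalism team’s actual method: silent-film intertitles are typically light lettering on a near-black card, so frames whose pixels cluster at the extremes of the brightness histogram are flagged as candidates. The filename, thresholds, and sampling rate are all illustrative assumptions.

```python
# A heuristic sketch of intertitle detection, not the Digital Formalism
# team's actual method: flag frames that are mostly black with a small
# share of bright pixels (i.e., light text on a dark card).
import cv2

def looks_like_intertitle(frame, dark_thresh=0.75, light_thresh=0.02):
    """Return True if the frame is mostly near-black with some bright pixels."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    total = gray.size
    dark_ratio = (gray < 40).sum() / total    # near-black background
    light_ratio = (gray > 200).sum() / total  # bright lettering
    return dark_ratio > dark_thresh and light_ratio > light_thresh

cap = cv2.VideoCapture("vertov_eleventh_year.mp4")  # hypothetical filename
candidates, frame_idx = [], 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % 24 == 0 and looks_like_intertitle(frame):  # sample ~1 fps
        candidates.append(frame_idx)
    frame_idx += 1
cap.release()
print(f"{len(candidates)} candidate intertitle frames")
```

Even a crude heuristic like this shows why intertitles were delegated to the machine while motion tracking stayed with human annotators: the former reduces to a simple pixel statistic, the latter does not.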

Visualization of all face close-ups in Vertov’s The Eleventh Year (1928)

In a similar vein, the Software Studies Initiative has offered a tentative solution to this problem, particularly in their project How to Compare One Million Images?, wherein the researchers set out to compare 1,074,790 manga pages and rightly point out that the only way to do so is through automated computer analysis. In essence, the project consisted of running digital image-processing software on supercomputers at the National Energy Research Scientific Computing Center (NERSC) to measure a number of visual features for each of the images, and then using the data from this process to produce visualizations that present the patterns found across time and among different editions and manga series. The results of this project are encouraging insofar as they show how to work with large numbers of visual artifacts and how to present the results in manageable forms. However, the process falls short of solving the original problem: like the manual inputs in the Digital Formalism project, the Software Studies Initiative’s pipeline is designed to search for and annotate characteristics in the images that the researchers are already looking for. But what if we don’t know what we are looking for?
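
For a sense of what “measuring visual features” means in practice, here is a single-machine sketch of that kind of feature extraction. The project’s actual pipeline ran custom software at supercomputer scale; the directory name, file pattern, and the particular features measured here are assumptions for illustration.

```python
# A small-scale sketch of per-image feature extraction of the kind the
# project describes; directory, file pattern, and feature choices are
# illustrative assumptions, not the project's actual pipeline.
from pathlib import Path
import numpy as np
from PIL import Image

def visual_features(path):
    """Mean and standard deviation of brightness, saturation, and hue."""
    hsv = np.asarray(Image.open(path).convert("HSV"), dtype=float)
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    return {
        "file": path.name,
        "brightness_mean": v.mean(), "brightness_std": v.std(),
        "saturation_mean": s.mean(), "saturation_std": s.std(),
        "hue_mean": h.mean(), "hue_std": h.std(),
    }

rows = [visual_features(p) for p in sorted(Path("manga_pages").glob("*.png"))]
# These per-image measurements are what then get mapped onto the axes of
# a visualization (e.g., a brightness-mean vs. brightness-std scatter plot).
```

Note that every number this produces answers a question the researcher chose in advance, which is exactly the limitation at issue.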

Pattern-spotting

Philosopher and logician Charles Sanders Peirce proposed the principle of abduction as a formalization of intuition: the third mode of scientific reasoning, alongside deduction and induction. In short, if deduction rests on logical, a priori reasoning and induction on proof by repeated testing, abduction’s reasoning comes from spotting patterns. In disciplines driven by the scientific method, an abductive inference must eventually be confirmed by way of deduction or induction. For (digital) humanists, however, it may be that our purpose is precisely to find these patterns, and that abduction is an end in itself.

Recall my example of narcopeliculas, and B-movies in general. By definition these films are derivative: a tautology of sorts, wherein we assert that their style is generic and, because of this assertion, assume that all of their stylistic choices respond to genre conventions. Implicitly, these claims are made against the exceptionality of the films (blockbusters, art cinema, experimental films) that we do choose to analyze minutely. B-movies do not seek to be exceptional; therefore they are probably not exceptional, or so the argument goes. To be sure, I am not suggesting we radically rethink B-movies as extraordinary; quite the opposite. In relation to the hundreds of narcopeliculas out there, one of the questions I previously posed was: how many films must we watch in order to draw sound conclusions about the characteristics all of them have in common? Contained within this question is another: how do we know what these characteristics are in the first place? Or even whether they exist? Answers to these questions, I would suggest, are not possible by analyzing image features, much less by manual annotation.

Consider this randomly selected set of frames from three minutes of a film:

Shots from three minutes of El Infierno (2010)
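
(For the curious, a sample like this can be drawn with off-the-shelf tools. The sketch below pulls twelve random frames from a three-minute window; the filename, the timecodes, and the sample size are all arbitrary assumptions.)

```python
# A sketch of drawing a random frame sample from a three-minute span;
# filename, timecodes, and sample size are illustrative assumptions.
import random
import cv2

cap = cv2.VideoCapture("el_infierno.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)
start, end = int(10 * 60 * fps), int(13 * 60 * fps)  # an arbitrary window
for i, idx in enumerate(sorted(random.sample(range(start, end), 12))):
    cap.set(cv2.CAP_PROP_POS_FRAMES, idx)
    ok, frame = cap.read()
    if ok:
        cv2.imwrite(f"frame_{i:02d}.png", frame)
cap.release()
```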

Some patterns may be instantly noticeable to the human eye: the sepia color palette, say, or the prevalence of long shots of the desert. However, these patterns may not be so obvious (indeed, they may not be significant patterns at all) if we consider every frame in the film. And that is just one film. Lev Manovich’s idea of “direct visualization,” where data is reorganized into new visual representations, is helpful precisely because patterns that human perception alone could not grasp are computationally rendered for us (see, for instance, the visualization of face close-ups from Dziga Vertov’s The Eleventh Year above). But direct visualization requires that we select the characteristics on which to focus for pattern recognition a priori. As Dan Dixon has argued, digital humanities of the computational, data-driven sort seek to reinterpret texts by discovering patterns that would not be readily apparent to the human researcher. How, then, do we find noteworthy patterns in B-movies (or any group of films, for that matter) if we don’t know what those patterns will look like? How do we find out whether there are noteworthy patterns in these films at all? If, as I argued before, in telescoping the films themselves disappear and we are left with systems, trends, and patterns, what kind of tool do we need to make these visible?
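
I don’t have a full answer, but one point of comparison is worth sketching: the closest off-the-shelf approach is unsupervised clustering, which groups frames without the researcher naming the patterns in advance. Even here, though, the choice of feature (coarse color histograms, in the sketch below) is made a priori, which is precisely the limit I am describing. The filename, histogram bins, sampling rate, and cluster count are all assumptions.

```python
# A sketch of unsupervised pattern-spotting over one film: reduce sampled
# frames to coarse color-histogram vectors, then let k-means group them.
# Note the feature choice (color histograms) is still made a priori.
import numpy as np
import cv2
from sklearn.cluster import KMeans

def histogram_vector(frame, bins=8):
    """Flatten a coarse 3-D BGR color histogram into a feature vector."""
    hist = cv2.calcHist([frame], [0, 1, 2], None, [bins] * 3,
                        [0, 256, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()

cap = cv2.VideoCapture("el_infierno.mp4")  # hypothetical filename
vectors, idx = [], 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if idx % 24 == 0:  # sample roughly one frame per second
        vectors.append(histogram_vector(frame))
    idx += 1
cap.release()

labels = KMeans(n_clusters=5, n_init=10).fit_predict(np.array(vectors))
print(np.bincount(labels))  # sizes of the five dominant "looks"
```

Such a tool can surface the sepia palette or the desert long shots without being told to look for them, but only within the dimensions its features happen to encode; whatever those features cannot see stays invisible.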

This post is part of The Problems of Film, an ongoing series on DH and film studies. [ Previous Post / Next Post ]
