Telescoping, or Distant Reading for Film Studies

What would it mean to “distantly watch” films?

In this post I am interested in considering how something akin to what Franco Moretti has called distant reading in literary studies might function in film studies. If our current method of evidence gathering for film analysis is close reading—watching a film or a series of films minutely and picking the moments that best represent the characteristics we intend to write about—then what kind of method would we require to consider entire corpora of films simultaneously? More importantly, what sort of project would necessitate such a method?

To begin exploring how a distant reading method could exist in film studies, let me consider some existing digital tools for the analysis of films.

Digital Reading of Films

If we take the shot-by-shot analysis as the equivalent of cinematic close reading, then CineMetrics is a starting point in the execution of a film’s distant reading. Launched by film historian Yuri Tsivian with a program developed by Gunars Civjans, CineMetrics is a tool that aids in recording data about film shot lengths (and, more recently, types of shots), as well as in drawing statistics from this data. Since its inception in 2005, CineMetrics has been used by scholars researching the history of film style, tracking, for instance, historical changes in editing patterns or shifts across a filmmaker’s corpus. Its database has steadily grown with the help of researchers’ contributions, currently standing at over 13,000 entries. CineMetrics essentially provides a macro perspective on one of the constituent elements of a film—its shots—by presenting them as statistical data, and allows for comparative analysis at a distance. In fact, Lev Manovich’s Software Studies Initiative carried out just such a comparative analysis in 2008 with the data then available.
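To make concrete what such shot-length statistics look like, here is a minimal sketch (my own illustration, not CineMetrics’ actual code) that computes the kind of figures a CineMetrics submission yields from a list of shot durations in seconds:

```python
def shot_statistics(shot_lengths):
    """Summary statistics from a list of shot durations in seconds."""
    n = len(shot_lengths)
    asl = sum(shot_lengths) / n            # average shot length (ASL)
    ordered = sorted(shot_lengths)
    mid = n // 2                           # median shot length (MSL)
    msl = ordered[mid] if n % 2 else (ordered[mid - 1] + ordered[mid]) / 2
    return {"shots": n, "asl": asl, "msl": msl,
            "min": ordered[0], "max": ordered[-1]}
```

For example, `shot_statistics([2.5, 4.0, 1.5, 8.0])` reports an ASL of 4.0 seconds. It is figures like these, aggregated over many films, that make comparison across periods or filmmakers possible.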

Barcode created from El Infierno (2010)

Another way of looking at a film as a whole made up of representative elements from its constituent shots is the popular movie barcode visualization. These barcodes are created by taking each frame of the film and reducing it to a bar one pixel wide, collapsing the frame’s entire color distribution into its predominant color. The individual bars are then placed side by side to provide an overview of the film’s color palette in sequential order. The Tumblr that popularized these barcodes features dozens of examples, but any film can be turned into a barcode with a simple program (still in beta) or by following these step by step instructions. Finally, and decidedly different, there is the Audio-visual Cinematic Toolbox for Interaction, Organization, and Navigation (ACTION) from the Bregman Lab at Dartmouth. Currently in its initial phase, ACTION runs automatic analysis routines to extract raw image and sound data from films, and then provides a workbench for studying this data.
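The barcode construction described above can be sketched in a few lines. This is a minimal illustration, not the code behind any of the tools mentioned; it assumes the film’s frames have already been decoded (say, with ffmpeg) into RGB arrays, and it uses each frame’s average color as a stand-in for the predominant one.

```python
import numpy as np

def movie_barcode(frames, height=200):
    """Stack a 1-pixel-wide bar of each frame's average color, left to right."""
    bars = []
    for frame in frames:                           # frame: (H, W, 3) uint8 RGB array
        avg = frame.reshape(-1, 3).mean(axis=0)    # average color over all pixels
        bars.append(np.tile(avg, (height, 1, 1)))  # one column, `height` pixels tall
    return np.concatenate(bars, axis=1).astype(np.uint8)  # (height, n_frames, 3)
```

A fuller implementation would sample frames at a fixed rate rather than use every frame, and might pick the modal color of a quantized palette instead of the mean; both are simple substitutions within the same structure.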


This overview of digital tools for film reading is ordered from the tool requiring the most human input and guidance to the one requiring the least. CineMetrics, in its current incarnation, still depends very much on the researcher watching a film while simultaneously recording the values needed for analysis. Movie barcodes can be produced automatically, but the inputs need considerable refinement for the results to be anything more than colorful bars. ACTION moves furthest toward automation, working to systematically break a film down into raw data that can then be compared with that of other films.

However, I would be hesitant to call the processes these tools allow for “distant watching.” Because of their still relatively high dependence on the researcher’s input for each entry, I consider these tools computational forms of close reading, or statistical analyses of film form at the micro level. More importantly, I suggest that these methods still fall within the close reading approach precisely because their focus is the individual film or a select group of films. Even ACTION, which has the most potential for macro-level processing of films into data, is currently organized around very small units of analysis, such as an auteur’s best-known films. If, as Franco Moretti has argued for distant reading, in “distant watching” distance is a specific form of knowledge, then the tools currently available have yet to get us there, since the form of knowledge they engender is not so different from what we have long produced through close reading.

To put it another way, the specific form of knowledge that “distant watching” produces has much to do with the disappearance of the categories into which we previously broke down films for analysis. Distant watching should allow a researcher to focus on categories much smaller or much larger than the individual film text—one particular song used across all the films produced in a decade, or the number of dissolves per film across all Italian Neorealist films. In distant watching, as in distant reading, it is not just canons or individual auteurs that disappear, but the texts themselves. What does it mean, then, to watch films when there are no films to watch at all?


Ramon Lobato’s recent monograph Shadow Economies of Cinema appropriates another of Moretti’s terms, the slaughterhouse of literature, and refers to the “slaughterhouse of cinema” to argue that thousands of films—pirated, B-movies, niche, and other community-based productions—are ignored by cinema studies despite making up the vast majority of film production, distribution, and consumption worldwide. In essence, he calls for a paradigm shift in cinema studies wherein our object of study is not limited to canonical texts and mainstream high-value productions, but includes all sorts of audiovisual texts circulating through multiple distribution channels around the globe. If we take Lobato’s proposal to heart, I would contend that the only way to even begin to tackle such an endeavor is through “distant watching”. If the slaughterhouse of cinema is filled with thousands of titles that have never been considered, then approaching them can only be done on a macro level, where the individual texts disappear and we are left with impossibly large corpora from which we can decipher systems, trends, or genres.

For this reason, I think telescoping best illustrates this form of “distant watching” I am describing. On the one hand, the image of a telescope is meant to reinforce the idea of watching from a distance—often an insurmountable distance, but also a distance that produces a specific kind of knowledge not available through closeness. On the other hand, telescoping is the process of reducing the size of something as if by pressing, which is, in a more practical sense, what this distant reading practice would be doing: compressing large corpora of films to be studied into smaller, more manageable forms.

But why? The real question at stake here is whether there is a need for such a method of approaching films. After all, the mere fact that something has not been done before, or that there is a new tool for doing it, is not, I would argue, a sufficient reason to begin doing it. So while I am sure there are other arguments, for now let me offer the one that resonates most with my work:

Narcopeliculas at a video store in Austin (photo: Tamir Kalifa)

For the past year I have been studying narcopeliculas, films about drug traffickers made around the Mexico-US border. These films belong to an exclusively video-centered industry that for the past three decades has produced hundreds of films per year. The sheer volume of films, along with the fact that they are all remarkably formulaic, presents the researcher with a conundrum: why (and how) watch thousands of films when they are all the same, yet how can one tell how many are enough to draw sound conclusions about the elements the films have in common? Moreover, without representative texts, how do we make claims about historical changes in style or content? I would contend that the most convincing way to answer these questions is by tackling an immense number of these films at once. Such a problem is not exclusive to my work: any researcher working with other B-genres, video-only industries, or even highly prolific industries such as porn potentially faces an impossibly large corpus as an object of study.

This is why I see Ramon Lobato’s call to tackle the slaughterhouse of cinema not only as an argument that we should consider texts beyond the canon but also as an indication that a methodological impediment prevents us from making this epistemological shift. And this is an important problem to address. At the very least, solving it would allow researchers to empirically support claims about generic style in these large sets of films. At most, it could allow a whole new slew of research questions to arise. If, at present, so much of how we understand the cinematic corpus is established by reference to individual film texts, then once we are able to reference the entire corpus we might understand the individual texts anew.

To be sure, I don’t know that there is a tool that can solve this problem just yet—and it may be that no single tool can—but a good place to start building one might be the digital reading tools we already have. In the meantime, I am arguing that there is a need for such a tool or tools; that the form of knowledge extracted from distant watching will be significantly different from, but just as valid as, that of close reading; and that film studies will benefit from this knowledge just as much.

This post is part of the ongoing The Problems of Film series on DH and film studies.
