The Machine Learning Center invites you to a lecture by David Bamman, an assistant professor in the School of Information at UC Berkeley.
The lecture will be held at 12:45 p.m. on Friday, November 15 in Klaus 2443.
Title: The Data-Driven Analysis of Literature
Abstract: Literary novels push the limits of natural language processing. While much work in NLP has been heavily optimized toward the narrow domains of news and Wikipedia, literary novels are an entirely different animal: the long, complex sentences in novels strain the limits of syntactic parsers with super-linear computational complexity; their use of figurative language challenges representations of meaning based on neo-Davidsonian semantics; and their length (ca. 100,000 words on average) rules out existing solutions for problems like coreference resolution that expect a small set of candidate antecedents.
At the same time, fiction drives computational research questions that are uniquely interesting to that domain. In this talk, I'll outline some of the opportunities that NLP presents for research in the quantitative analysis of culture--including measuring the disparity in attention given to characters as a function of their gender over two hundred years of literary history (Underwood et al. 2018)--and describe our progress to date on two problems essential to a more complex representation of plot: recognizing the entities in literary texts, such as the characters, locations, and spaces of interest (Bamman et al. 2019) and identifying the events that are depicted as having transpired (Sims et al. 2019). Both efforts involve the creation of a new dataset of 200,000 words evenly drawn from 100 different English-language literary texts and building computational models to automatically identify each phenomenon.
This is joint work with Matt Sims, Ted Underwood, Sabrina Lee, Jerry Park, Sejal Popat, and Sheng Shen.
Bio: David Bamman is an assistant professor in the School of Information at UC Berkeley, where he works on applying natural language processing and machine learning to empirical questions in the humanities and social sciences. His research often involves adding linguistic structure (e.g., syntax, semantics, coreference) to statistical models of text, and focuses on improving NLP for a variety of languages and domains (such as literary text and social media). Before joining Berkeley, he received his PhD from the School of Computer Science at Carnegie Mellon University (LTI).