Vera Tobin

Vera Tobin
Department of Cognitive Science
Case Western Reserve University
617G Crawford Hall

vera.tobin at
vrtbn at twitter

I’m an associate professor of Cognitive Science at Case Western Reserve University, where I investigate connections between cognition, language, and narrative, with a special interest in cognitive bias and how people think about other minds. My research looks at how people interpret and construct narratives together, how literature and film capitalize on various aspects of our social cognition, and the intersection of “small” linguistic-pragmatic phenomena with sense-making at the level of narrative and interaction.

My book Elements of Surprise: Our Mental Limits and the Satisfactions of Plot was published by Harvard University Press in the spring of 2018. My current projects include Being Difficult, about the place of uncooperative behavior in various kinds of cooperative activities.

I work on irony, presupposition, and other kinds of tricky viewpoint phenomena in language. My work is also about what sorts of stories and constructions capture our imaginations and insinuate themselves into what we believe—the sorts of things that are good news for mystery writers but perhaps bad news for society.

Recently, I have also been working on projects that look at glitchy structural effects of other kinds. For instance, people trying to be sarcastic or facetious often experience “irony attrition,” in which expressions and activities they perform ironically become more sincere over time.

My research group also looks at negative transfer effects in networks. Sometimes we (and our AI models) don’t just fail to learn structures as quickly or effectively as we would like; sometimes other structures actively interfere. When people begin learning a new language, for example, structures from languages they already know can facilitate their acquisition of structures in the new language (positive transfer), but they can also interfere (negative transfer). In machine learning settings, it is easy to think of “transfer” only in positive terms. Our work attempts to distinguish the mere absence of positive transfer from the presence of negative transfer.
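One way to make that last distinction concrete is to compare a model trained with transfer against one trained from scratch on the target task alone. The sketch below is purely illustrative (the function name, margin, and numbers are hypothetical, not drawn from any actual study): negative transfer is not just the absence of a benefit, but performance that falls below the from-scratch baseline.

```python
# Toy illustration: separating "no positive transfer" from "negative
# transfer" by comparing against a from-scratch baseline on the target task.

def classify_transfer(score_transfer: float, score_scratch: float,
                      margin: float = 0.01) -> str:
    """Label the transfer effect, given held-out scores for a model
    trained with transfer from a source task and a model trained
    from scratch on the target task alone."""
    if score_transfer > score_scratch + margin:
        return "positive transfer"
    if score_transfer < score_scratch - margin:
        # Structure from the source task actively interferes.
        return "negative transfer"
    # Within the margin: mere absence of positive transfer.
    return "no measurable transfer"

# Hypothetical accuracies on a target-task test set:
print(classify_transfer(0.82, 0.90))  # negative transfer
print(classify_transfer(0.91, 0.90))  # no measurable transfer
print(classify_transfer(0.95, 0.90))  # positive transfer
```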