Modeling Empathy: Fostering Connection Through Data Visualization

As I continue my research and begin to plan out my DH project for the certificate program, I keep returning to this idea of modeling emotions, or in the case of my research specifically, modeling empathy. So much of my research, though grounded in network analysis, narrative analysis, and standard data archival processes, goes a step further in trying to understand how the stories I collect impact, more holistically, the individuals and communities they belong to. I find myself concerned not only with how to do this effectively, but also with how to visualize the more emotional components of this research in a way that is respectful and not exploitative.

Catherine D’Ignazio and Lauren F. Klein highlight this delicate push and pull in their book Data Feminism. To begin, they explain: “‘At every level contemporary technology is deeply rooted in and running on the exploitation of human bodies’” (185). Their analysis diverges slightly from my research in that they go into more detail about the various ways digital projects and technologies exploit people: from not giving full and proper credit to everyone involved in a project, to the people in other countries working long hours for low wages to keep the very technologies we rely on every day running. The sentiment, however, is the same.

In highlighting my project idea specifically, I am looking to collect the digital stories (metanarratives) of the Black Lives Matter movement and its counter-narratives on Twitter and Instagram during the summer of 2020. These stories come from real people who have experienced, and continue to experience, the multiple and unending effects of racial discrimination and socio-political inequality. These topics and conversations are deeply personal and challenging to fully encapsulate verbally, not to mention visually. Leaning into D’Ignazio and Klein’s point and taking it a step further, I often find myself thinking about the enormous network of people involved in and impacted by these digital movements. It is not just the people posting on social media, but also the people working for these platforms who have to constantly filter the abusive language and threats that we don’t end up seeing on our timelines. As they spell out later in the chapter when speaking about emotional labor and affective labor:

> We can see both emotional and affective labor at work all across the technology industry today. Consider, for instance, how call center workers and other technical support specialists must exert a combination of affective and emotional labor, as well as technical expertise, to absorb the rage of irate customers (affective labor), reflect back their sympathy (emotional labor), and then help them with, for instance, the configuration of their wireless router (technical expertise). (193)

I feel like it is my responsibility to acknowledge these contributions and the ways these filtering systems impact these online movements and create further context. But when is it too much? Is it possible to fully acknowledge and include all aspects of those involved? Is it too complicated to do for one project?
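To make the scope concrete, here is a minimal sketch of the kind of collection step I have in mind, assuming the posts have already been exported to a local JSON archive. The file name, field names, and hashtag lists below are all hypothetical placeholders, not a real platform API:

```python
import json
from datetime import date

# All names here are hypothetical placeholders for illustration only.
ARCHIVE_PATH = "posts_summer_2020.json"       # assumed local export of collected posts
MOVEMENT_TAGS = {"blacklivesmatter", "blm"}   # movement narrative hashtags
COUNTER_TAGS = {"alllivesmatter"}             # example counter-narrative hashtag
WINDOW = (date(2020, 6, 1), date(2020, 8, 31))

def in_window(iso_timestamp: str) -> bool:
    """Keep only posts from the summer 2020 collection window."""
    posted = date.fromisoformat(iso_timestamp[:10])
    return WINDOW[0] <= posted <= WINDOW[1]

def classify(post: dict) -> str | None:
    """Label a post as movement narrative, counter-narrative, or neither."""
    tags = {t.lower() for t in post.get("hashtags", [])}
    if tags & MOVEMENT_TAGS:
        return "movement"
    if tags & COUNTER_TAGS:
        return "counter"
    return None

with open(ARCHIVE_PATH, encoding="utf-8") as f:
    posts = json.load(f)

corpus = []
for post in posts:
    label = classify(post)
    if label and in_window(post["created_at"]):
        corpus.append({**post, "narrative": label})

print(f"Kept {len(corpus)} of {len(posts)} archived posts")
```

Even a filter this small already encodes editorial judgments, which hashtags count as the movement and which as the counter-narrative, so those choices deserve documentation alongside the data.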

Even within the data aggregation process, I worry about the ways we may be causing more harm than good. Regardless of how self-aware a person is, how can they be completely objective? Is it impossible? Though Crawford and Joler perform an extremely detailed analysis of the various ethical and environmental implications of AI software in their Anatomy of an AI System, they still grapple with this same conundrum: “...this kind of invisible, hidden labor, outsourced or crowdsourced, hidden behind interfaces and camouflaged within algorithmic processes is now commonplace, particularly in the process of tagging and labeling thousands of hours of digital archives for the sake of feeding the neural networks” (2018). This leads me to think about the many ways we as human beings use technology to relate to one another. I feel like we are so used to seeing and interacting with data, whether actively or passively, that we forget the human component: the people behind the numbers.
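One small, practical response is to keep the provenance of each record attached through aggregation rather than stripping it away. The sketch below is just one way to structure that, under my own assumptions; every field name is hypothetical:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class StoryRecord:
    """One collected post, with the human context kept alongside the data point."""
    post_id: str
    narrative: str       # e.g. "movement" or "counter" (hypothetical labels)
    author_handle: str   # the person behind the number
    labeled_by: str      # credits whoever did the tagging work
    notes: str = ""      # context that raw counts cannot hold

def aggregate(records: list[StoryRecord]) -> dict:
    """Summarize counts while retaining a trail back to the people involved."""
    return {
        "counts": Counter(r.narrative for r in records),
        "labelers": sorted({r.labeled_by for r in records}),  # acknowledge the labor
        "sources": {r.post_id: r.author_handle for r in records},
    }
```

This does not resolve the objectivity problem, of course, but it at least refuses to let the aggregation erase the names.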

D’Ignazio and Klein do detail various digital projects that explore these ideas, like Atlas of Caregiving, a project that documents the work of family members caring for chronically ill patients; still, the way these projects highlight the emotional labor discussed earlier is generally through a combination of data analysis and personal interviews (193). This type of affective analysis seems to be the most common. Though I see its effectiveness and the way it lends itself to further contextualization of the data, I wonder what other options and opportunities exist. Perhaps in-person exhibitions, where the experience is more visceral? I think of monuments like the Vietnam Memorial in Washington, D.C., and other spatial, tactile visual representations, but that seems like an enormous undertaking for an academic project. As we have grown so enmeshed with our screens, I wonder just what it would take to shatter the impersonal level of dissociation we have grown so accustomed to. I wish I had an answer to conclude this post with, but I find myself still searching. I will certainly report back when I find out.

Bibliography:

Crawford, Kate, and Vladan Joler. “Anatomy of an AI System.” Anatomy of an AI System, 2018, http://www.anatomyof.ai.

D’Ignazio, Catherine, and Lauren F. Klein. Data Feminism. The MIT Press, 2020.
