Against Technosolutionism: Governing Platforms as Systems of Care
Why do our digital systems break people, and why are they almost never designed to be cared for, repaired, or slowed down? Conversational AI tools such as Grok and ChatGPT are promoted as a means to democratize knowledge and expand access to information. In practice, however, they have also made sexual harassment easier, reproduced harmful stereotypes, and, in some cases, encouraged self-harm rather than preventing it. These outcomes are not rare glitches; they reveal how conversational AI and social media platforms are built, governed, and deployed at scale.
Technosolutionism is the belief that technology can automatically fix social problems, and it strongly shapes how social media and conversational AI platforms are designed and regulated. Under this logic, speed and scale are prioritized, while repair and long-term care receive far less attention. With growth as the engine of progress, social accountability is frequently delayed or deflected. Users who face the greatest risks, including children, marginalized communities, and people who cannot meaningfully opt out, are treated as secondary concerns rather than central users.
During this panel, we will examine how contemporary ways of governing digital systems systematically fail to protect vulnerable users. How might sustainability, slowness, and relational accountability reshape governance doctrines? And what would platforms be required to do if care, rather than profit, were the organizing principle?
Speakers
Dr. Delfina S. Martinez Pandiani is Assistant Professor of Cultural Data Analysis at the University of Amsterdam, at the Institute for Logic, Language and Computation (ILLC) and the Department of Media Studies. Delfina’s research explores how identity, toxicity, and vulnerability are computationally modeled and negotiated in datafied environments. They hold a PhD in Computer Science and specialize in human-centered, explainable, and queer AI for multimodal analytics, surfacing new ways of thinking about power, representation, and technological change.
Dr. Lani Hanna is a lecturer in Global Arts, Culture and Politics at the University of Amsterdam and has been part of the Interference Archive collective since 2013. She is an editor of the publication Armed by Design: Posters and Publications of Cuba’s Organization of Solidarity of the Peoples of Africa, Asia, and Latin America (OSPAAAL), and holds a PhD in Feminist Studies from the University of California, Santa Cruz, with a designated emphasis in Critical Race and Ethnic Studies. Her other research interests include internationalism, counter-institutional archives, political infrastructure, and radical pedagogy.
Mae Sosto (they/them) is a PhD candidate at the Centrum Wiskunde & Informatica (CWI) in Amsterdam, supervised by Laura Hollink and collaborating with the Human-Centered Data Analytics group. With a background in computer science specializing in AI and NLP, their research examines and mitigates bias in language models, focusing on heterocisnormativity and discrimination against LGBTQIA+ identities. As a queer intersectional activist and NLP researcher, Mae integrates computational methods with social science perspectives to advance fairness, diversity, and inclusion in language technologies.