Human(e) AI at the Faculty of Humanities
In the Human(e) AI group, we critically situate and examine the development of AI technologies within specific socio-cultural and political-economic histories and regions. Building on this research, we propose a reconfigured, situated AI ethics. This approach challenges universalist claims by being responsive to diverse societal concerns and to the perspectives of impacted communities and public institutions. Informed by historical investigations, political-economic inquiries, computational linguistics, and empirical ethics, we develop alternative scenarios, practices, and applications.
Human(e) AI researchers collaborate with colleagues across the university and with societal partners to:
- critically examine the historical and societal development of machine learning and AI;
- articulate situated AI ethics; and
- develop alternative AI applications.
To pursue these objectives and collaborations, we adopt four strategies, which are supported and guided by Prof. Dr. Thomas Poell, the Human(e) AI faculty lead.
From films and paintings to music and texts: the use of generative AI is transforming cultural production. What does this mean for artists and cultural workers? And how does it change the nature of creativity?