Human(e) AI researchers collaborate with colleagues across the university and with societal partners to critically examine the historical and societal development of machine learning and AI, articulate situated AI ethics, and develop alternative AI applications.

To pursue these objectives and collaborations, we adopt four strategies, which are supported and guided by prof. dr. Thomas Poell, the Human(e) AI faculty lead.

  1. We are collaboratively writing a position paper that draws on the expertise of different humanities disciplines. The paper, provisionally titled "Situating AI: A humanities and social science research agenda", engages with the opportunities and challenges of big data and machine learning. It begins by situating the development of AI, as a political technology, within colonial histories as well as within contemporary platform economies. The second half of the paper sets out paths to explore interventions and alternative practices along three dimensions: political economies, empirical ethics, and technical practices.
  2. Building on the position paper, the Human(e) AI group plans, in different constellations and with a variety of academic and societal partners, to apply for external funding. The aim is to operationalize our situated AI ethics approach in society-oriented research projects that take diverse societal concerns and the perspectives of impacted communities and public institutions as the starting point in developing proposals for AI regulation, as well as in articulating alternative scenarios and applications. To enable these ambitions, we are considering applying for funding through the Dutch Research Agenda (NWA) and Horizon Europe.
  3. To make sure that the envisioned research projects and situated ethics approach shape future AI policymaking and development, we plan to regularly consult with relevant Dutch and European policymakers and NGOs. We will do so in collaboration with the Public Values in the Algorithmic Society (AlgoSoc) research consortium, in which the faculty lead is involved, as well as with colleagues from the Institute of Information Law (IVIR) and the Critical Infrastructure Lab, with which the group has regular contact.
  4. To encourage Human(e) AI members to collaborate effectively in research, teaching, and funding applications across departments, research schools, and universities, we organize regular meetings. In Human(e) AI group meetings, we have made an inventory of the research interests and expertise within the group, which has informed the position paper and the situated AI ethics approach. In addition, we organize annual Human(e) AI meetings with our colleagues at the universities of Groningen (RUG), Leiden (LEI), and Maastricht (UM) to enable information exchange, sharing of resources, and consortium funding applications.