Towards trustworthy intelligent vehicle technology development

Abstract: This thesis addresses the unresolved issues of responsibility and accountability in autonomous vehicle (AV) development, advocating for human-centred approaches to enhance trustworthiness. While AVs hold the potential for improved safety, mobility, and environmental impact, poorly designed algorithms pose risks that fuel public distrust. Trust research has focused on technology-related aspects but has overlooked trust within broader social and cultural contexts. Efforts are underway to understand algorithm design practices and to acknowledge their potential unintended consequences. For example, Baumer (2017) advocates human-centred algorithm design (HCAD) to align algorithms with user perspectives and reduce risks. HCAD incorporates theoretical, participatory, and speculative approaches, emphasising user and stakeholder engagement. This aligns with broader calls for prioritising societal considerations in technology development (Stilgoe, 2013). The research in this thesis responds to these calls by integrating theories on trust and trustworthiness, autonomous vehicle development, and human-centred approaches in empirical investigations guided by the following research question: “How can human-centred approaches support the development of trustworthy intelligent vehicle technology?” The thesis approaches this question through design ethnography, grounding the explorations in people’s real-life routines, practices and anticipations, and demonstrating how design ethnographic techniques can infuse AV development with human-centred understandings of people’s trust in AVs. The studies reported in this thesis include a) interviews and participatory observations of algorithm designers, b) interviews and probing with residents, and c) staging collaborative, reflective practice through design ethnographic materials and co-creation with citizens, city, academic and industry stakeholders, including AV algorithm designers. Through these empirical explorations, the thesis answers the research question by proposing a novel and timely framework for intelligent vehicle development: trustworthy algorithm design (TAD). TAD frames trustworthiness as an ongoing process, not just a measurable outcome of human-technology interactions. It calls for considering autonomous vehicle algorithms as constituted through a network of stakeholders, practices, and technologies and therefore defines trustworthy algorithm design as a continuous process of collaborative learning and evolution across disciplines and sectors. Furthermore, the TAD framework suggests that for autonomous vehicle algorithm design to be trustworthy, it must be responsive, interventional, intentional and transdisciplinary. The TAD framework integrates ideas and strategies from well-known trajectories of research in responsible and human-centred technology development: Human-Centred Algorithm Design (Baumer, 2017), algorithms as culture (Seaver, 2017) and Responsible Innovation (Stilgoe et al., 2013). The thesis contributes to this field by empirically investigating how this integrated framework expands existing understandings of interactional trust in intelligent technologies to include participatory processes of trustworthiness, and how these processes are nurtured through cross-sector co-learning and design ethnographic materials.
