LLMs + HRI

Should Large Language Models be Embodied in Social Robotics? Identifying Ethical Considerations and Social Impact through a Design Justice Approach

My dissertation project at the University of Cambridge has turned into a larger research project, with both theoretical and empirical methodologies. The following is an abstract of the research:

Designing social embodied interfaces for Large Language Models (LLMs) is a non-trivial site of ethical enquiry. As LLM-based social AI becomes increasingly prevalent across a multitude of social applications, an emerging trend is the integration of LLMs into social robotics: embodied, and often humanoid, physical technological artefacts. This research investigates the ethical considerations and social impact of the intersection of these two technologies: LLMs and social robotics. Using LLMs in social robotics may provide benefits, such as enabling less repetitive interaction, open-domain dialogue, reduced reliance on manual Wizard-of-Oz methods, and fine-tuning of specific personalities for different applications or interventions. However, this combination also gives rise to a multitude of ethical considerations, which this study identifies through two avenues of research: a theoretical literature review and an empirical, critical, design justice-based interaction study. The physical embodiment of social robots, and how it relates to social capabilities and affordances, is a complex design space, and integrating LLMs into social robotics adds important new considerations of how to design, for example, personality and behaviour in relation to appearance and voice. This research therefore identifies social impact and ethical considerations, and examines how to conduct design justice-based research that enables situated, context-dependent, socio-technical and human-centred design considerations. Actively practising design justice involves both using critical design as a framework for analysis and practically asking methodological questions about how to incorporate people and communities into design processes through participatory design (PD) techniques.

The empirical study was a qualitative co-design and interaction study with nine participants, which actively included people with indigenous, queer and disabled identities. Its purpose was to explore ethical considerations relevant to the process of co-designing, and interacting with, a humanoid social robot as the interface of an LLM, and to evaluate how a design justice-based methodology can be used in the context of designing and implementing LLMs in social robotics. Each participant took part in three sessions over the course of two weeks, each of which involved co-design workshops and open-domain dialogue interaction with an LLM-based humanoid social robot (Furhat Robotics). The findings identified ethical considerations in four main domains: i. behaviour and non-verbal cues, ii. emotional disruption and dependence, iii. deception and manipulation, and iv. appearance/personality, bias and stereotypes. These domains, and their respective sub-domains, were critically evaluated and discussed to allow for an improved understanding of the social implications these technologies pose in society and of how to move forward when designing, developing and regulating them. This research contributes to the disciplines of social robotics and AI ethics in the following ways: firstly, by providing a critical review of ethical considerations relevant to the implementation of LLMs in social robotics; secondly, by critically examining these considerations in light of design justice principles, challenging logics of inequality and marginalisation in their design; and thirdly, by empirically investigating socio-technical considerations and imaginations related to the design of, and interaction with, an LLM-based humanoid social robot, allowing for the identification, confirmation or contradiction of ethical considerations previously explored in theory and practice.

Image: Abstract microscopic photography of a Graphics Processing Unit resembling a satellite image of a big city. Fritzchens Fritz / Better Images of AI / GPU shot etched 5 / Licenced by CC-BY 4.0

References

2024

  1. An Empirical Design Justice Approach to Identifying Ethical Considerations in the Intersection of Large Language Models and Social Robotics
    Alva Markelius
    Oxford Intersections: AI in Society. Oxford, England: Oxford University Press, forthcoming, 2024