Emotion Science

Group leader
Arvid Kappas
Emotion, Cognition, and Social Context Lab (X Lab)
Specific themes and goals
  • Trust in robots: In the context of the EU-funded project ANIMATAS (Advancing intuitive human-machine interaction with human-like social capabilities for education in schools), we studied trust in social robots. Social robots tend to make mistakes, such as not understanding a comment or not recognising a person with whom they have interacted before. What happens in the mind of a child when a robot displays such “faulty” behavior? Dr. Rebecca Stower, in collaboration with Australian colleagues, investigated the effects of different types of “errors” in a series of studies in which videos were shown to children. In other studies, robots interacted directly with children and made mistakes. These findings help us understand the ways in which robots can be used for education in schools and at home.
  • Confiding in robots: Will adults share their worries and concerns with a machine? Informal caregivers of chronic patients experience considerable stress, and there is ongoing research to assess and support the health of these individuals. In one study in our lab, researchers conducted 10 repeated interactions between a humanoid robot and caregivers over the course of five weeks to see whether there was evidence of self-disclosure. Guy Laban, a visiting scholar from the University of Glasgow in Scotland, showed that self-disclosure increased, suggesting that social robots could make a valuable contribution to social support, which is known to have beneficial effects on health.
  • Deciding whether a robot has a self: In the DFG-funded project Reconstructing the Naïve Theory of the Self, researchers showed participants videos of small autonomous driving robots. The robots manifested several behaviors, such as moving at varying speeds or colliding with other objects. Study participants had to report whether they felt that the robot exhibited agency and was a sentient being that wanted to do something, or whether it was mindlessly pursuing some action. The study aimed to find the minimal conditions that create the impression of a self in robots. The results of these studies will help us better understand how individuals construct the concept of self in others, not only in machines.
Highlights and impact
  • Our research has demonstrated that children and adults often interact with robots as if they were people. Children prefer robots that make mistakes, apparently because they appear more human. Even if robots make mistakes, children are willing to continue to interact with them.
  • A literature review has shown that previous research in this area is highly unsystematic and characterized by statistical problems, as studies are often too small to support reliable conclusions.
  • The findings are relevant to psychology, social robotics, and artificial intelligence. The research has been presented at international meetings and published in renowned international journals.
Group composition & projects/funding

The group has received funding from the EU Marie Skłodowska-Curie Actions Innovative Training Network and the DFG, among others. Prof. Kappas co-supervised several doctoral candidates from across Europe and collaborated with international scholars over the course of his research.

Selected publications
  • Krumhuber, E. G., & Kappas, A. (2022). More what Duchenne smiles do, less what they express. Perspectives on Psychological Science, 17(6), 1566-1575. doi: 10.1177/17456916211071083
  • Dukes, D., Abrams, K., Adolphs, R., Ahmed, M. E., Beatty, A., Berridge, K. C., Broomhall, S., Brosch, T., Campos, J. J., Clay, Z., Clément, F., Cunningham, W. A., Damasio, A., Damasio, H., D’Arms, J., Davidson, J. W., de Gelder, B., Deonna, J., de Sousa, R., Ekman, P., Ellsworth, P. C., Fehr, E., Fischer, A., Foolen, A., Frevert, U., Grandjean, D., Gratch, J., Greenberg, L., Greenspan, P., Gross, J. J., Halperin, E., Kappas, A., Keltner, D., Knutson, B., Konstan, D., Kret, M. E., LeDoux, J. E., Lerner, J. S., Levenson, R. W., Loewenstein, G., Manstead, A. S. R., Maroney, T. A., Moors, A., Niedenthal, P., Parkinson, B., Pavlidis, I., Pelachaud, C., Pollak, S. D., Pourtois, G., Roettger-Roessler, B., Russell, J. A., Sauter, D., Scarantino, A., Scherer, K. R., Stearns, P., Stets, J. E., Tappolet, C., Teroni, F., Tsai, J., Turner, J., Van Reekum, C., Vuilleumier, P., Wharton, T., & Sander, D. (2021). The rise of affectivism. Nature Human Behaviour, 5(7), 816-820. doi: 10.1038/s41562-021-01130-8
  • Stower, R., Calvo-Barajas, N., Castellano, G., & Kappas, A. (2021). A meta-analysis on children’s trust in social robots. International Journal of Social Robotics, 13(8), 1979-2001. doi: 10.1007/s12369-020-00736-8
  • Kappas, A., Stower, R., & Vanman, E. J. (2020). Communicating with robots: What we do wrong and what we do right in artificial social intelligence, and what we need to do better. In R. J. Sternberg & A. Kostić (Eds.), Social Intelligence and Nonverbal Communication (pp. 233-254). Cham, Switzerland: Palgrave Macmillan. doi: 10.1007/978-3-030-34964-6_8
  • Vanman, E. J., & Kappas, A. (2019). “Danger, Will Robinson!” The challenges of social robots for intergroup relations. Social and Personality Psychology Compass, 13(8), e12489.