Workshop at the ROBO-Philosophy conference 2018

Conference "Envisioning Robots in Society—Politics, Power, and Public Space"

February 14-17, 2018

Oliver Schürer hosted the workshop "Cultural Spaces and humanoid (s)care"

Speakers:

Prof. Martina Mara
Prof. John P. Sullins
Christoph Hubatschke
and Oliver Schürer
(see all abstracts below)

Webpages: Conference and Workshop

Workshop abstract

"Cultural Spaces and humanoid (s)care"

Host: Oliver Schürer

In its urgency to govern emergent robot development, the European Parliament's Committee on Legal Affairs published a draft report on "Civil Law Rules on Robotics". Privacy, general well-being, and job loss through automation are the main issues raised there. Surprisingly, the report says next to nothing about changes in public, semi-public, private, and intimate spaces – that is, in cultural spaces. Human fantasies for the use of humanoid robots come in a variety of guises: from workers, soldiers, and servants to entertainers, nurses, and playmates, including sex partners. Yet the policy agenda pushes the social aspect of assistive robots for care work (elderly care, dementia care) to the forefront. In any case, humanoid robots will not only become functional assistants and the next media, but also socio-cultural actors in cultural spaces, always already transgressing the thresholds between culturally conditioned public, semi-private, private, and intimate spaces.

The workshop will explore the spatial and socio-political consequences in three respects. The literally most visible problem in robotic care work is the human urge to anthropomorphize these machines. According to the uncanny valley effect, this urge is prone to fail time and again, so how human-like should a humanoid be?

Beyond this visual uneasiness, there are all kinds of suspicions towards robots, especially in intimate spaces. Yet robots are about to take up a completely novel position alongside existing technical objects, plants, and animals. How can we think about relationships with these new artifacts in cultural spaces?

Any concept of space relies on the properties of the system perceiving it. Machinic and human perception systems differ sharply. Yet humanoid robots are about to establish relationships with spatial and social structures, act within them, and shape them to a certain extent. How can we share the same cultural space with humanoid robots?

Speaker abstracts

Between empathy and fright: The complicated issue of human-likeness in machines

Prof. Martina Mara, Linz Institute of Technology, Johannes Kepler Universität, and head of the RoboPsychology research division, Ars Electronica

We humans have a natural tendency to anthropomorphize objects, that is, we imbue "the imagined or real behavior of nonhuman agents with humanlike characteristics, motivations, intentions, and emotions" (Epley, Waytz, & Cacioppo, 2007). We give names to our cars, we pay compliments to computers (Reeves & Nass, 1996), and we feel with robots when they are tortured (Rosenthal-von der Pütten, Krämer, Hoffmann, Sobieraj, & Eimler, 2013). At the same time, many people are afraid of robots, in particular when they are humanlike. According to the "uncanny valley" phenomenon (Mori, 1970; Mara, 2015), humanoid agents that reach a certain point of high but not perfect visual realism elicit aversive user responses; more specifically, they give us the creeps. Recent research on the "uncanny valley of the mind" (Stein & Ohler, 2017; Appel, Weber, Krause, & Mara, 2016) suggests that people even experience unease when faced with virtual chatbots that appear too humanlike, that is, too intelligent or "emotional". People's desire to see human-likeness in artifacts on the one hand and their fear of highly humanlike robots on the other lead to the question: Is there a right level of human-likeness in machines? This question is of increasing relevance from a technical, psychological, and ethical point of view.


Artificial Phronēsis: What It Is and What It Is Not

Prof. John P. Sullins, Sonoma State University

Artificial Phronēsis (AP) claims that phronēsis, or practical wisdom, plays a primary role in high-level moral reasoning, and it further asks whether a functional equivalent of phronēsis can be programmed into machines. The theory is agnostic on the eventuality of machines ever achieving this ability, but it does claim that achieving AP is necessary for machines to be human-equivalent moral agents. AP is not an attempt to fully describe the phronēsis of classical ethics. Nor is it an attempt to derive a full account of phronēsis in humans, whether at the theoretical or the neurological level. AP is not a claim that machines can become perfect moral agents. Instead, AP is an attempt to describe an intentionally designed computational system that interacts ethically with human and artificial agents even in novel situations that require creative solutions. AP is to be achieved across multiple modalities and most likely in an evolutionary fashion. AP acknowledges that machines may only be able to simulate ethical judgement for some time, and that the danger of creating a seemingly ethical simulacrum is ever present. This means that AP sets a very high bar against which to judge machine ethical reasoning and behavior. It is an ultimate goal, but real systems will fall far short of it for the foreseeable future.


‘Konfidenz’ in Robot Companions? Towards a political understanding of human-robot interactions

Christoph Hubatschke, researcher at the Department of Philosophy, University of Vienna

Perfection, serialized doppelgangers, exact movements at incredible speed, effortless endurance, sleek designs – these and many more traits have made robots the protagonists of dreams and nightmares alike. They lie at the core not only of robot cultures but also of coming economic and societal changes, from the automation of work to robots as supposed care workers for an ever-ageing society. Critically examining the economic, political, and ethical interests at stake, this paper challenges the very idea that, for humanoid robots to be accepted, trust and canniness should be evoked.

The paper asks about the possibility of a different relationship to humanoids, built neither on trust nor mistrust, neither on uncanniness nor familiarity, but on a relationship that does not try to resolve the ambivalences: a hybrid companionship. It proposes to extend Donna Haraway's concept of "cross-species trust", which she developed with regard to domesticated animals, to the question of robots. For Haraway, companionship is the ground of a deep relationship in which the agency of the other is respected and the perspective of the other is internalized and included.

Expanding on Haraway, the paper therefore proposes the concept of Konfidenz. What does Konfidenz mean in the context of care work, which is mostly feminized and both economically and socially marginalized? Is something like a post-work society possible in a capitalist system?


An Architecture Space-Game

Oliver Schürer, Architecture Theory and Philosophy of Technics (ATTP), Vienna University of Technology; H.A.U.S.

Architecture is dedicated to the spatial aspects of life. The domain understands orientation, communication, and interaction as rooted in perception, which is always both culturally biased and situated in material space. When humanoid robots are brought into humans' living spaces as technological assistive systems, the consideration of culturally biased meaning becomes crucial.

For this rich context, a generative, synthetic understanding of space is proposed, here called a cultural space model: multifaceted relations among meanings, objects, and their localizations are subject to constant processes of negotiation. This constant negotiation of relations is an interactive process, enacted by each participant according to the properties of her, his, or its perceiving system.

But the commonly used term perception elegantly veils the fact that technical perception systems bear only a metaphorical similarity to the human perception system, and their differences produce far-reaching consequences. As humanoid robots are endowed with specialized perception systems, ultimately meant to serve humans, a paradox unfolds: humanoids are being developed for the most private and intimate spaces of human life, yet they cannot participate in the negotiation process and hence cannot participate in human spaces.

In order to trigger the construction of a cultural space model, the second proposal of the talk is an Architecture Space-Game. Inspired by the philosophical language-game, the space-game uses the connection between linguistic expressions and human practices. By contrast, however, the space-game resonates with human spaces and involves humanoids in everyday life by human means.

Will the model and the game provide grounds for humanoids to evolve into everyday elements of human spaces, as satisfying systems that care for and assist humans?