When using assistive systems, the consideration of individual and cultural meaning is crucial for the utility and acceptance of the technology. Orientation, communication, and interaction are rooted in perception and therefore always take place in material space. In our understanding, a major problem lies in the differences between human and technological perception of space. Cultural policies are based on meanings, their spatial situatedness, and the rich relationships among them. We have therefore developed an approach in which the different perception
systems share a hybrid space model, generated jointly by humans and assistive systems with the aid of artificial intelligence. The aim of our project is to generate a spatial model of cultural meaning based on the interaction between human and robot. The role of the humanoid robot is defined as that of a “companion”. This should allow technical systems to incorporate previously ungraspable human and cultural agendas
into their perception of space. In an experiment, we tested a first prototype of the communication module, which allows a humanoid to learn cultural meanings by means of a machine learning system. The humanoid and the test persons interact through non-verbal and natural-language communication. The results lead us to a further understanding of how to develop a spatial model of cultural meaning.