Selfie with Romeo
Pictures serve as a memory of the situations and experiences people have. This project places a smartphone in the hands of the humanoid robot Romeo and transfers the act of taking a picture to the robot. The robot is no longer the object being photographed; instead, it takes pictures of itself, of other people, and of its surroundings.
This project picks up the debate about "selfies" in social media and develops an experiment with a humanoid robot. The website artnet.com reports on "statue selfies": museum visitors posted pictures of ancient Greek and Roman sculptures, placing the camera in a statue's outstretched arm so that the statue seems to be posing for a selfie.
"Selfies" have already become a topic in contemporary art. As a response to the growing selfie-stick phenomenon, the artists Aric Snee and Justin Crowe address the topic of self-staging in social media. They created a prototype of a "selfie arm": the cut-off end of the artificial arm, which is made of fiberglass, holds a smartphone and provides the illusion of another person taking the picture. By holding on to the "selfie arm" while taking a selfie, one no longer appears to be alone in the picture.
Romeo taking a selfie
To me, the interesting part of humanoid robotics is where the robot does not only do things "for" us but "with" us. The idea of the project is therefore to create an experience in which one engages in a social situation with the robot.
The questions leading to the experiment were: What could be a collaborative task with a robot? How can a humanoid robot be an active part in a social situation?
A. Project Idea
Most people meeting the humanoid robot Romeo instantly get out their smartphones to take a picture or video of the robot. Similarly, Romeo now in turn takes pictures, so-called "selfies", of itself with its visitors. While most visitors only photograph Romeo's performance, the idea of the project is that the robot engages visitors in a social interaction, so that the visitors become part of the performance itself. As an incentive, the visitors take home a picture showing that they have met Romeo.
The smartphone has become a social object that connects one to the world beyond. Usually, only humans possess smartphones and take pictures with them; Romeo using such an everyday object gives the robot a "human touch". Through the narcissistic act of taking a selfie, one could also attribute feelings and emotions to Romeo. Yet while selfies usually involve some kind of facial expression from the people in the picture, Romeo's face always looks the same: it cannot smile, look excited, or look surprised. Besides the interaction with other people, this project also combines the real and the virtual: Romeo, a physical robot, creates a digital picture which is then distributed to social networks via the internet.
Workflow from triggering the picture to Facebook profile
B. Hardware and Software
The experiment is carried out with the humanoid robot Romeo from Aldebaran, located at the Vienna University of Technology, Institute of Automation and Control. The software used is Choregraphe (version 2.17) to control the robot, together with two free iOS apps, "Selfie Snap" and "IF", to trigger the pictures and upload them to a Facebook account.
C. Steps and Challenges of the Experiment
The challenges that had to be solved for Romeo to take a selfie were:
- how to grab and hold the camera,
- how to trigger the camera,
- how the interaction with another person is designed,
- how to distribute the selfie picture to the web.
(1) How to grab and hold the camera: Having Romeo hold a smartphone directly with his hands seemed too risky, as the four fingers do not provide enough stability. Moreover, the distance from Romeo’s outstretched arm to his head is not ideal for a portrait picture. The solution was to use a selfie stick, which provides a more stable grip on the smartphone and a better camera angle. The handle of the selfie stick had to be modified so that Romeo can grab the stick safely. Although the selfie stick and the smartphone are relatively light, the weight is unbalanced and requires a firm grip.
(2) How to trigger the camera: Romeo’s hands are not capable of holding the camera while pressing a trigger button, neither on the smartphone nor on the selfie stick. Therefore, triggering the camera was solved by voice recognition. I used the iOS app "Selfie Snap", which supports voice recognition and takes a picture on the keywords "picture", "snap", and "take picture". This works reliably with Romeo saying these keywords. For setup reasons, the front-facing camera of the smartphone is used to take the pictures.
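The keyword-based trigger can be sketched as follows. This is a minimal illustration of the matching logic only; the function name and implementation are assumptions for clarity, not the actual code of the "Selfie Snap" app.

```python
# Keywords that the "Selfie Snap" app reacts to (per the app's voice trigger).
# Longer phrases come first so "take picture" is matched as a whole.
TRIGGER_KEYWORDS = ("take picture", "picture", "snap")

def should_trigger(transcript: str) -> bool:
    """Return True if the recognized speech contains a trigger keyword."""
    text = transcript.lower().strip()
    return any(keyword in text for keyword in TRIGGER_KEYWORDS)
```

With this logic, Romeo simply saying one of the keywords out loud is enough for the smartphone to take a picture; no physical button press is required.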
(3) The interaction with the other person is done via speech. The idea is that Romeo starts the conversation once it recognizes a face close by. Romeo looks at the person and asks whether he or she would like to take a selfie. If the person answers "yes", Romeo asks the person to come next to it, raises the arm holding the selfie stick with the camera, and says the keyword "picture". This triggers the smartphone camera to take a picture.
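The dialog above can be sketched as a short sequence with injected callbacks, so the flow can be followed (and tested) without the robot. On the real robot, these callbacks would map to NAOqi modules such as ALTextToSpeech, ALSpeechRecognition, and motion control; all names here are illustrative assumptions, not the actual Choregraphe behavior.

```python
def selfie_interaction(say, listen_yes_no, raise_selfie_arm):
    """Run one selfie dialog; return True if a picture was triggered.

    say(text)          -- speak a sentence (e.g. ALTextToSpeech on the robot)
    listen_yes_no()    -- recognize a yes/no answer, return True for "yes"
    raise_selfie_arm() -- lift the arm holding the selfie stick
    """
    say("Hello! Would you like to take a selfie with me?")
    if not listen_yes_no():
        say("Maybe next time!")
        return False
    say("Great, please come and stand next to me.")
    raise_selfie_arm()   # bring the camera into position
    say("picture")       # keyword that makes the camera app take the photo
    return True
```

Note that the final spoken word doubles as the camera trigger: the same speech channel used for the dialog also operates the smartphone app.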
(4) Romeo develops a social network identity and uploads selfie pictures of itself, alone and with others, to its Facebook account. The iOS app "IF" is used to automatically upload the selfies to Romeo’s Facebook account via the smartphone’s network connection.
Grasping objects with Romeo’s hands is a challenging task; objects whose weight is unbalanced are especially hard to grasp. The project shows that the interaction with people works very reliably via speech recognition when simple yes/no answers are used.
While most "selfies" in social media are pictures of real persons, the project "Selfie with Romeo" transfers the subject of the selfie to a humanoid robot: the robot itself becomes the subject of interest. Similar to Snee and Crowe’s "selfie arm" and to the "statue selfies", this project is a critical contribution to the discussion about selfies and selfie sticks.
Selfie with Romeo
The author would like to thank Prof. Marcus Vincze and Dimitrios Prodromou for the opportunity to work with the robots Romeo and Nao. This project has also been supported by Stefan Aiglstorfer and Christoph Müller, to whom the author would like to give credit.