A Robot to Guide Library Customers to Books

Futurice, for client Oodi – Helsinki's central library
2019

"How googly eyes solved one of today's trickiest UX problems" – article about the project in Fast Company

Original blog post, which is adapted here

The Oodi library

Our team at Futurice designed and built a social robot to guide people to books at Helsinki’s new central library, Oodi. Opened in 2018, Oodi is the biggest of Helsinki’s 37 public libraries. It receives around 10,000 visitors a day and an estimated 2 million a year, a significant share given Finland’s population of 5.5 million.

Oodi is big on automation and robotics. It has an automatic returns system: customers place their books on a conveyor belt, which carries them to the basement, where they are sorted into boxes. A mobile MiR200 robot then picks up the boxes and brings them to the 3rd floor, where librarians place the books back on the shelves.

At the start of our project, we brainstormed how Oodi could use social robots: helping kids learn to read, instructing people on using equipment such as 3D printers, giving information about the library in several languages, and helping people find their way at the library.

We eventually settled on a robot that would help customers find the books and book categories they want. Since Oodi is so big, customers have a hard time getting around, and library employees spend a significant amount of time giving directions. But this is not the work librarians are meant to do, or want to do. Librarians are very knowledgeable about literature, and their expertise is better used in in-depth service: helping visitors find the specific books that fit their needs best. That kind of work can take 30–40 minutes. In comparison, answering “Where is the psychology section?” takes 1–3 minutes. Stacked together, a whole day of 1–3 minute tasks becomes tedious, and a waste of skills.

This is where the robot steps (or rather, rolls) in. A whole day of menial tasks would not bother a robot. We realized we could repurpose the MiR200 mobile robots the library already owned and was using to move books between the basement and the 3rd floor.

The robot design team: Oodi librarians, Oodi’s customers, and Futurice’s roboticists

The robot would have the advantage of being able to access Oodi’s database directly and provide real-time information on which books are currently on the shelf. It could be more approachable to people who have social anxiety and are reluctant to approach library employees. It could also save both the customers’ time (no need to queue for a librarian) and the librarians’ time, freeing them to help customers with more meaningful tasks.
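As a rough sketch of what that direct catalogue access could look like, the snippet below checks whether a title is currently on the shelf. The endpoint, parameters, and field names are illustrative assumptions, not Oodi’s actual API:

    import requests

    # Hypothetical catalogue endpoint; Oodi's real integration would differ.
    CATALOGUE_URL = "https://catalogue.example.org/items"

    def is_on_shelf(title: str) -> bool:
        """Return True if at least one copy of the title is currently shelved."""
        response = requests.get(CATALOGUE_URL, params={"title": title}, timeout=5)
        response.raise_for_status()
        items = response.json().get("items", [])
        return any(item.get("status") == "on_shelf" for item in items)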

First draft

A Mobile Robot with (the Illusion of) a Personality

The design team, consisting of Oodi’s librarians, Oodi’s customers, and Futurice’s roboticists, defined design guidelines for the robot to be built on top of the MiR200, using a set of social robot co-design canvases (available as open source).

The design team decided that the robot should not be too humanoid. We wanted a more abstract form for the robot, with expressive, non-speaking forms of communication. We wanted a design with a bit of imagination and whimsy.

The team also wanted to make sure that the robot aligned with Oodi’s strategy and policies, and underlined a set of ethical considerations to guide the design.

We started testing the robot at the library, mapping out appropriate routes, and building the user journey. Luckily, we had some very excited testers.

Our voluntary testers, and a trial run with a laptop

Googly Eyes and an Emotion Matrix

The making of some of the googliest eyes

The mobile robot by itself was too abstract to form a social bond with people. I wanted to give users something to look at, a sort of face. I searched for mechanical googly eyes online, finding them suitably humorous, but not too wacky.

Googly eyes bring to mind classic animation and comic book characters. There is a relation between robotics design and animation: Disney’s 12 principles of animation have previously been applied to robotics (Ribeiro & Paiva, 2012). In this case, I planned the eyes to follow the principle of “Exaggeration”, used here to emphasize the robot’s expressions (looking around to indicate boredom when there is a lack of missions) and its actions (glancing in the direction it starts moving toward).
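As an illustration of how that principle can translate into code, the sketch below points the eyes toward the travel heading with a slight overshoot before the robot moves. The servo interface and gain values are my assumptions, not the project’s actual eye-control code:

    # Sketch: exaggerated gaze toward the direction of travel.
    # Gain and limit values are illustrative assumptions.

    EXAGGERATION_GAIN = 1.4   # >1 overshoots the glance so it reads from afar
    MAX_PAN_DEG = 45.0        # mechanical limit of the eye mechanism

    def gaze_toward(heading_deg: float, set_eye_pan) -> None:
        """Point the eyes toward a heading (degrees, 0 = straight ahead)."""
        pan = heading_deg * EXAGGERATION_GAIN
        pan = max(-MAX_PAN_DEG, min(MAX_PAN_DEG, pan))
        set_eye_pan(pan)  # e.g. a callback that drives the eye servos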

Luckily, Glen Akins had posted instructions online on how to build mechanical googly eyes: https://bikerglen.com/blog/build-a-pair-of-robotic-googly-eyes/. After much fitting of wires into tiny holes, soldering, laser-cutting, and fussing with power supplies, we had a fresh pair of eyes.

The eyes immediately had an effect: I noticed myself looking and smiling at them. When they looked left, I followed their gaze. It is easy to trick a human brain into thinking something is alive if it appears to be moving with intention (Heider & Simmel, 1944).

Our team also figured the robot should have some internal drives to make it more social, perhaps even emotions. The coding of emotions, or affective computing, has been researched at several universities, notably MIT. MIT’s Cynthia Breazeal has researched how a robot’s emotions could be situated in a 3-dimensional matrix (Breazeal, 2004).

I took inspiration from this, and situated the robot’s emotions in a 2-dimensional matrix. While a simplification, it was accurate enough for our purposes.

A crude visualization of the emotion matrix I implemented

Emotions are approximated onto two axes: one ranging from negative valence (bad feelings) to positive valence (good feelings), and one from high arousal to low arousal (level of stimulation).

The emotional state was initialized in the middle of the matrix, at coordinates (0, 0). The robot’s emotional state moved in the matrix according to events that happened to it. If its mission succeeded, it moved toward positive valence and high arousal. If its mission failed, it moved toward negative valence. If it was unused for long periods of time, it moved toward low arousal.
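A minimal sketch of this update logic follows; the event names and step sizes are illustrative assumptions rather than the exact values used on the robot (the real implementation is linked further below):

    # 2D emotion state: valence and arousal each range from -1.0 to 1.0,
    # starting at (0, 0). Events nudge the state around the matrix.
    # Event names and step sizes are illustrative assumptions.

    EMOTION_EVENTS = {
        "mission_succeeded": (+0.2, +0.2),  # toward positive valence, high arousal
        "mission_failed":    (-0.2,  0.0),  # toward negative valence
        "idle_tick":         ( 0.0, -0.1),  # long disuse drifts toward low arousal
    }

    class EmotionState:
        def __init__(self):
            self.valence = 0.0
            self.arousal = 0.0

        def handle(self, event: str) -> None:
            dv, da = EMOTION_EVENTS[event]
            # Clamp both axes so the state stays inside the matrix.
            self.valence = max(-1.0, min(1.0, self.valence + dv))
            self.arousal = max(-1.0, min(1.0, self.arousal + da))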

The robot’s internal moods affected the way it behaved. The robot’s action functions (travel.py and idle.py) queried its emotions at certain points, and changed the robot’s sounds, lights, and eye movements accordingly. If it was happy, it would chirp happily when guiding a person to a book. If it was bored, it would try to attract attention by looking around, dancing in place, and flashing its lights (though not too disruptively; this was a library, after all).
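As an example of that querying, an action script could map the EmotionState sketched above to an expression along these lines (the quadrant thresholds and behavior names are assumptions for illustration):

    # Sketch of how an action script might query the emotion state
    # and choose an expression. Thresholds and names are assumptions.

    def choose_expression(state: EmotionState) -> str:
        if state.valence > 0.3 and state.arousal > 0.3:
            return "happy_chirp"        # chirp while guiding a customer
        if state.arousal < -0.3:
            return "bored_look_around"  # look around, dance, flash lights
        if state.valence < -0.3:
            return "sad_slow_blink"
        return "neutral"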

You might wonder how a user benefits from a sad robot. The answer is, we’re not sure yet either. This emotion system is very experimental, and we wanted to explore how it would affect the interactions between the robot and user. We are still collecting data on this as the pilot continues.

The code for the emotion system (written by me, copyright Futurice, MIT license) is available at https://github.com/minjaaxelsson/oodipoc/blob/master/emotions.py.

“Oh look, it’s cute!”

The robot at Oodi

We tested the robot for 3 days. Customers were able to search for a specific book or author via an open search string, or choose a book category to be guided to, such as travel or cooking. The robot attracted a lot of attention: even people who hadn’t personally sent it on a mission started following it when it was guiding someone else, forming bigger and bigger groups trailing a small robot around the library. Several customers even wanted to pet the robot. Reactions were mainly positive and surprised (not counting one 60-year-old woman, who declared loudly “I won’t try any robot!” and walked off in a huff).

A lot of customers were curious, asking questions about what the robot did, and how it worked. It attracted a lot of children: three girls made a game of taking turns asking the robot to take them somewhere. Others gathered around it in wonderment, exclaiming “Oh look, it’s cute!”.

Waiting for users

The abstract form of the robot received positive feedback, with the googly eyes a particular point of admiration. The fact that it was abstract rather than human made it more approachable. “It makes nice sounds, and doesn’t feel dangerous,” was one customer’s comment.

Some even talked to the robot itself. A 50-year-old woman thanked the robot after a successful mission, saying “Very nice, thank you, you’re wonderful!”. The same woman remarked how smart it was to have a robot showing people around Oodi, since it’s so big. Another customer cursed the robot out, calling it a “Creepy McCreepface” after it managed to sneak up on her. When she encountered it a bit later, she apologized for her earlier outburst.

Visitors got invested in the future of the robot and suggested improvements, such as having it help carry bags, or giving it a walker support for older people. Some hoped the robot could reserve a book for them if it was already checked out, or help them in several languages.

The Future of the Little Robot

I am happy that Oodi’s customers had such a positive response to the robot, and adopted it as their own straight away. The robot will be developed further with Oodi, and future versions will make greater use of the emotion system.

A prototype of eyebrows for the robot, which could help communicate emotions more effectively

Oodi’s librarians have sometimes encountered worries from their customers regarding robotics. Since Oodi has an automated returns system, some customers have worried that “robots are taking the librarians’ jobs”. Some research shows that people’s attitudes toward robots may be becoming more negative (Gnambs & Appel, 2019).

The librarians, however, do not share these concerns, whether about the conveyor belt accepting returns or our robot guiding customers. During this project I interviewed 4 librarians, all of whom said they would rather avoid the repetitive physical task of accepting returns and repetitive guidance questions such as “Where are the fishing books?”. The guidance robot and its application were designed together with the librarians and the library’s customers, specifically to avoid building unhelpful or even detrimental applications. This democratization of robotics design is something I hope to see more of in the future.

-

Breazeal, C. (2004). Function meets style: insights from emotion theory applied to HRI. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 34(2), 187–194.

Gnambs, T., & Appel, M. (2019). Are robots becoming unpopular? Changes in attitudes towards autonomous robotic systems in Europe. Computers in Human Behavior, 93, 53–61.

Heider, F., & Simmel, M. (1944). An experimental study of apparent behavior. The American Journal of Psychology, 57(2), 243–259.

Ribeiro, T., & Paiva, A. (2012, March). The illusion of robotic life: principles and practices of animation for robots. In 2012 7th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (pp. 383–390). IEEE.
