"A playful app that brings life into a lifeless 'face' by placing AR speech balloons in the virtual world based on GPS."

 

Boom!

Team project with Namsoo Kim (my role: Art Director and Lead Developer) | iOS Mobile Lab with Professor Nien Lam and Professor Sebastian Buys | July 2018

AR development | UX Design

Tools: Swift, ARKit, CocoaPods, Sketch, InVision, Adobe Creative Suite

Shortlisted for the IxDA Interaction Awards 2019 and the Communication Arts Interactive Competition 2019

 

Synopsis

“Bringing life into a lifeless face!”

BOOM! is a playful, location-based AR product that captures users' thoughts and feelings, applies them to lifeless faces in the streets, such as posters and statues, and shares them with other people. It detects human face shapes of any kind (faces in posters, pictures, drawings, sculptures, statues, etc.) and places a speech balloon of the user's thoughts next to the detected face in the virtual world, anchored by GPS. It uses face detection technology to find a face and speech recognition technology to transcribe the user's speech into text.
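As a rough sketch of how such a shared, GPS-tagged balloon could be modeled in code, here is a minimal Swift example; the type and field names are illustrative assumptions, not the app's actual data model.

```swift
import CoreLocation

// A sketch of a shared balloon record: the transcribed text plus the
// GPS coordinate where it was placed, so nearby users can load it.
// All names here are illustrative, not the app's actual data model.
struct SpeechBalloon: Codable {
    let text: String        // transcribed user speech
    let latitude: Double    // coordinate stored as plain doubles
    let longitude: Double   // so the record stays Codable
    let createdAt: Date
}

// Example: tag a new balloon with the device's current location.
// `locationManager` is an already-authorized CLLocationManager.
func makeBalloon(text: String, locationManager: CLLocationManager) -> SpeechBalloon? {
    guard let coord = locationManager.location?.coordinate else { return nil }
    return SpeechBalloon(text: text,
                         latitude: coord.latitude,
                         longitude: coord.longitude,
                         createdAt: Date())
}
```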

 

Concept Background

[Image: giant posters in Times Square, New York]

This concept was inspired by everyday scenes in New York City, a hub for art and culture. Every day we are surrounded by hundreds of human faces, whether they are part of posters and pictures on walls, statues in parks, or paintings and sculptures in museums, and whether they are alive or lifeless. Those faces, especially the inanimate ones, led us to think about the potential for communication between people in the city through virtual augmentation of the faces.

[Image: Starbucks poster]

 

Research

Today, we are surrounded by countless faces. No matter where we are in the world, with a small smart device we can log into "facebook", "instagram", or "snapchat" and encounter many faces: faces we see every day, faces we have seen a couple of times, and faces we have never seen before.

Living faces in 2D digital world

Lifeless faces in 3D real world

These faces in the digital world are flattened 2D images of living people in the real world. They are placed within a rectangular screen, and as we scroll down the feed, we unconsciously assume that the people behind those faces are breathing somewhere in this world. In this fashion, we encounter countless faces of living people, as well as their small notes, every day. What would it look like if there were a virtual world where lifeless faces could communicate with us? What if posters, statues, sculptures, and mannequins could talk? What if there were a "facebook" in spacetime?

Something like this? Below is a simple sketch to visualize the virtual "facebook" in 4D (spacetime).

Lifeless faces in 4D (spacetime) virtual world

 

Ideation

To have fun with the lifeless faces around us, people need to find them in objects like posters or statues and bring them to life by giving them the ability to speak. We held a brainstorming session to examine what motivates people to interact with these lifeless faces (why), the different ways they could interact with the faces (how), and what messages they would want to communicate through the faces (what).

We then expanded on this idea with a post-it brainstorming session, freely tossing around ideas about which elements we should consider.

[Image: post-it brainstorming session]

Finally, we decided to create a mobile app that uses AR technology to capture these moments and to write things in the virtual world next to lifeless-faced objects in the real world. We chose AR because it gives users far greater freedom in the design, content, and composition of their speech balloons than traditional methods like physical stickers. The AR speech balloons live in a virtual space that users can see only through their mobile devices, which serve as magic windows. We also wanted the interaction to be as seamless and fluid as possible, so that users can use the app with minimal effort and time; we therefore implemented voice recognition to automatically convert speech into text. In short, users can augment any object with a human face shape and bring it to life by capturing and leaving their thoughts in an AR speech bubble, simply by talking to their mobile devices.
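To make the balloon idea concrete, here is a minimal SceneKit sketch of such a speech-balloon node, with the transcribed text rendered on a rounded plane; makeBalloonNode, facePosition, and the styling are our illustration, not the shipped implementation.

```swift
import ARKit
import SceneKit
import UIKit

// A sketch of the speech balloon as SceneKit geometry: the transcribed
// text rendered as SCNText on a rounded white plane.
func makeBalloonNode(text: String) -> SCNNode {
    let textGeometry = SCNText(string: text, extrusionDepth: 0.5)
    textGeometry.font = UIFont.systemFont(ofSize: 4)
    textGeometry.firstMaterial?.diffuse.contents = UIColor.black

    let textNode = SCNNode(geometry: textGeometry)
    textNode.scale = SCNVector3(0.005, 0.005, 0.005)  // shrink to scene scale
    textNode.position = SCNVector3(-0.1, -0.03, 0.01) // rough centering

    let plane = SCNPlane(width: 0.25, height: 0.1)    // balloon background
    plane.cornerRadius = 0.03
    plane.firstMaterial?.diffuse.contents = UIColor.white

    let balloon = SCNNode(geometry: plane)
    balloon.addChildNode(textNode)
    return balloon
}

// Usage: offset the balloon from the detected face and add it to the scene.
// `sceneView` is an ARSCNView; `facePosition` is a world-space SCNVector3.
// let balloon = makeBalloonNode(text: transcribedText)
// balloon.position = SCNVector3(facePosition.x + 0.2, facePosition.y, facePosition.z)
// sceneView.scene.rootNode.addChildNode(balloon)
```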

 

User Scenario

 
 

Prototype

[Images: BOOM! prototypes]

 

Build & Iterations

Below are some pictures of play-testing the key features with various designs.

We also played with putting various animated images (.gif) onto different face parts, e.g. eyes, eyebrows, nose, and lips. Below are some screenshots of iterations of the app.

[Screenshots: app iterations]

And here is a video clip of play-testing the key features. (Sound on!)

 

UX Flow

 

UX Design


 

Technology

We used face detection technology to detect a human face. 
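On iOS, one standard way to do this is Apple's Vision framework; the sketch below assumes VNDetectFaceRectanglesRequest and is illustrative rather than the app's exact code.

```swift
import Vision

// A sketch of face detection with Vision: find face-shaped regions in a
// camera frame. In ARKit, the frame comes from
// sceneView.session.currentFrame?.capturedImage (a CVPixelBuffer).
func detectFaces(in pixelBuffer: CVPixelBuffer,
                 completion: @escaping ([VNFaceObservation]) -> Void) {
    let request = VNDetectFaceRectanglesRequest { request, _ in
        completion(request.results as? [VNFaceObservation] ?? [])
    }
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try? handler.perform([request])
}

// Usage: boundingBox is in normalized (0...1) image coordinates, which
// the app would project into the 3D scene to position the balloon.
// detectFaces(in: frame.capturedImage) { faces in
//     for face in faces { print(face.boundingBox) }
// }
```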

We also used speech recognition technology to recognize speech and convert it into text.
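And here is a minimal sketch of live transcription using Apple's Speech framework (SFSpeechRecognizer); authorization and error handling are abbreviated, and the class name is illustrative.

```swift
import Speech
import AVFoundation

// A sketch of live speech-to-text with SFSpeechRecognizer. Microphone and
// speech-recognition permission (SFSpeechRecognizer.requestAuthorization)
// are omitted for brevity.
final class SpeechTranscriber {
    private let audioEngine = AVAudioEngine()
    private let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))
    private let request = SFSpeechAudioBufferRecognitionRequest()
    private var task: SFSpeechRecognitionTask?

    func start(onText: @escaping (String) -> Void) throws {
        // Stream microphone buffers into the recognition request.
        let inputNode = audioEngine.inputNode
        let format = inputNode.outputFormat(forBus: 0)
        inputNode.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
            self.request.append(buffer)
        }
        audioEngine.prepare()
        try audioEngine.start()

        // Partial results arrive while the user is still talking, so the
        // balloon text can update live.
        task = recognizer?.recognitionTask(with: request) { result, _ in
            if let text = result?.bestTranscription.formattedString {
                onText(text)
            }
        }
    }

    func stop() {
        audioEngine.stop()
        audioEngine.inputNode.removeTap(onBus: 0)
        request.endAudio()
        task?.cancel()
    }
}
```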


 

Everything was built in Xcode using Swift. At each phase, we ran user tests through Apple's TestFlight, which let invited users download and test the app.


 

Final Demo

Case 1. Single User

Case 2. Multi User