"A playful app that brings life into a lifeless 'face' by placing AR speech balloons in the virtual world based on GPS."

 

Boom!

Team project with Namsoo Kim (my role: lead developer and designer) | July 2018

AR Development | UX Design

Tools: Swift, ARKit, CocoaPods, Sketch, InVision, Adobe Creative Suite

 

Synopsis

"Bringing life into a lifeless face!"

BOOM! is a location-based, playful AR product that captures users' thoughts and feelings, applies them to lifeless faces in the streets, such as posters and statues, and shares them with other people. It detects human face shapes of any kind (faces in posters, pictures, drawings, sculptures, statues, etc.) and, based on GPS, places a speech balloon of the user's thoughts next to the detected face in the virtual world. It uses face detection technology to find a face, and speech recognition technology to recognize the user's speech and convert it into text.

 

Concept Background

[Image: giant posters in Times Square, New York]

This concept grew out of our daily scenes in New York City, a hub of art and culture. Every day we are surrounded by hundreds of human faces, whether as part of posters and pictures on the wall, statues in the park, or paintings and sculptures in the museum, and whether they are alive or lifeless. Those faces, especially the inanimate ones, inspired us to think about the potential for communication between people in the city by augmenting them.


 

Research

Today, we are surrounded by countless faces. No matter where we are in the world, as long as we have a small smart device we can log into Facebook, Instagram, or Snapchat and encounter many faces: faces we see every day, faces we have seen a couple of times, and faces we have never seen in our lives.

Living faces in 2D digital world

Lifeless faces in 3D real world

These faces in the digital world are flattened 2D images of living people in the real world. They are placed within a rectangular screen, and as we scroll down the feed we take them in, unconsciously assuming that the people behind those faces are breathing somewhere in the real world. In this way, we confront and consume countless faces of living people, as well as their small notes, every day. What would it look like if there were a virtual world where lifeless faces could communicate with us? What if posters, statues, sculptures, and mannequins could talk? What if there were a spacetime version of "facebook"?

Something like this? Below is a simple sketch visualizing the virtual "facebook" in a 4D (spacetime) version.

Lifeless faces in 4D (spacetime) virtual world

 

Ideation

To have fun with the lifeless faces around us, people need to find them in objects like posters or statues and bring them to life by giving them the ability to speak. We held a brainstorming session to examine what motivates people to interact with these lifeless faces (why), the different ways they could interact with the faces (how), and what messages people would want to communicate through the faces (what).

We then expanded on this idea with a post-it brainstorming session, freely tossing around the elements we should consider.

[Image: post-it brainstorming session]

Finally, we decided to create a mobile app that uses AR technology to capture moments or write random things in the virtual world, next to objects with lifeless faces in the real world. We chose AR because it gives users far more freedom in the design, content, and composition of their speech balloons than traditional methods such as physical stickers. The AR speech balloons live in a virtual space that users can see only through their mobile devices, which serve as magic windows. We also wanted the interaction to be as seamless and fluid as possible, so that the app feels simple and handy to use without much effort or time; to that end, we implemented voice recognition technology to convert speech into text automatically. In short, users can augment any object with a human face shape and bring it to life by leaving their thoughts in an AR speech bubble, simply by talking to their mobile device.
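To make this concrete, here is a minimal sketch of how a speech balloon might be placed in the scene, assuming an ARKit + SceneKit setup (consistent with the tool list above); the function name and parameters are hypothetical, not the app's actual code:

```swift
import ARKit
import SceneKit
import UIKit

// A minimal sketch, assuming ARKit + SceneKit: place the transcribed text
// as a balloon node at a world position next to a detected face.
func addSpeechBalloon(text: String,
                      at worldPosition: SCNVector3,
                      in sceneView: ARSCNView) {
    // Render the user's words as shallow extruded 3D text.
    let textGeometry = SCNText(string: text, extrusionDepth: 0.01)
    textGeometry.font = UIFont.systemFont(ofSize: 0.1) // size in scene units (meters)
    textGeometry.firstMaterial?.diffuse.contents = UIColor.black

    let balloonNode = SCNNode(geometry: textGeometry)
    balloonNode.position = worldPosition

    // Keep the balloon turned toward the viewer as they walk around it.
    balloonNode.constraints = [SCNBillboardConstraint()]
    sceneView.scene.rootNode.addChildNode(balloonNode)
}
```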

 

User Scenario

Prototype

[Images: BOOM! prototypes]

 

Build & Iterations

Below are some pictures of play-testing the key features with various designs.

And a video clip of play-testing the key features. (Sound On!) 

We also played with overlaying various animated images (.gif) onto different face parts: eyes, eyebrows, nose, and lips. Below are some screenshots of those iterations of the app.


 

UX Flow

 

UX Design


 

Technology

 

We used face detection technology to detect a human face. 
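The write-up doesn't name the exact API, so below is a minimal sketch of this step, assuming Apple's Vision framework (a natural fit alongside ARKit): it finds face bounding boxes in the current camera frame.

```swift
import ARKit
import Vision

// A minimal sketch, assuming Apple's Vision framework: detect face
// rectangles in the current ARKit camera frame.
func detectFaces(in frame: ARFrame, completion: @escaping ([CGRect]) -> Void) {
    let request = VNDetectFaceRectanglesRequest { request, error in
        guard error == nil,
              let faces = request.results as? [VNFaceObservation] else {
            completion([])
            return
        }
        // Bounding boxes are normalized (0...1) relative to the image.
        completion(faces.map { $0.boundingBox })
    }
    // ARFrame.capturedImage is the raw camera pixel buffer;
    // .right assumes a portrait device orientation.
    let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage,
                                        orientation: .right,
                                        options: [:])
    DispatchQueue.global(qos: .userInitiated).async {
        try? handler.perform([request])
    }
}
```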

 

We also used speech recognition technology to recognize speech and convert it into text.
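Again as a hedged sketch (the write-up doesn't name the API), this is roughly what live speech-to-text looks like with Apple's Speech framework; it requires microphone and speech-recognition usage descriptions in Info.plist.

```swift
import AVFoundation
import Speech

// A minimal sketch, assuming Apple's Speech framework: stream microphone
// audio into a recognizer and report partial transcriptions as text.
final class SpeechTranscriber {
    private let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))
    private let audioEngine = AVAudioEngine()
    private var request: SFSpeechAudioBufferRecognitionRequest?
    private var task: SFSpeechRecognitionTask?

    func start(onText: @escaping (String) -> Void) throws {
        let request = SFSpeechAudioBufferRecognitionRequest()
        self.request = request

        // Feed microphone buffers into the recognition request.
        let inputNode = audioEngine.inputNode
        let format = inputNode.outputFormat(forBus: 0)
        inputNode.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
            request.append(buffer)
        }
        audioEngine.prepare()
        try audioEngine.start()

        // Stream partial transcriptions as the user speaks.
        task = recognizer?.recognitionTask(with: request) { result, _ in
            if let result = result {
                onText(result.bestTranscription.formattedString)
            }
        }
    }

    func stop() {
        audioEngine.stop()
        audioEngine.inputNode.removeTap(onBus: 0)
        request?.endAudio()
        task?.cancel()
    }
}
```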

 

Everything was built in Xcode using Swift, Apple's programming language for iOS. At each phase, we user-tested with Apple's TestFlight, which allows invited users to download and test the app.
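The synopsis notes that balloons are shared "based on GPS." As an illustration only, here is a hypothetical data model (the actual storage and backend aren't described in this write-up) for tagging each balloon with a CoreLocation coordinate so nearby users can fetch and re-place it:

```swift
import CoreLocation

// A hypothetical record: each speech balloon is tagged with the device's
// GPS coordinate at creation time.
struct BalloonRecord: Codable {
    let text: String
    let latitude: Double
    let longitude: Double
    let createdAt: Date
}

func makeRecord(text: String, location: CLLocation) -> BalloonRecord {
    BalloonRecord(text: text,
                  latitude: location.coordinate.latitude,
                  longitude: location.coordinate.longitude,
                  createdAt: Date())
}

// Keep only the balloons within `radius` meters of the user's location.
func nearbyBalloons(_ records: [BalloonRecord],
                    around user: CLLocation,
                    radius: CLLocationDistance = 50) -> [BalloonRecord] {
    records.filter { record in
        let loc = CLLocation(latitude: record.latitude, longitude: record.longitude)
        return loc.distance(from: user) <= radius
    }
}
```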


 

Final Demo

 

Case 1. Single User

Case 2. Multi-User