UX Design. 2022.
Accessible social media app designed for the Deaf and Hard of Hearing community using React Native, Firebase, and Figma
Awards: Best Concept, Most Novel Idea, Greatest Social Impact, Best Poster, Second Overall Project
ALTiO is an app that translates the emotions associated with hearing into visuals, adding context to the audio-visual landscape of social media for Deaf and Hard of Hearing people. Its name combines the words alternate and audio, representing the main goal of the app. See the final prototype video below.
In the winter of 2021, I attended a choir concert called the “Memories of Home Concert,” where families and friends are invited to return home and take part in the tradition of choral music for the holidays. This time, though, at the musical peak, as nearly 300 students sang “Memories of home, draw me back in winter. Home to the place that I love,” all 300 began signing in American Sign Language to include the Deaf families of two choir members in the moment. It was an incredibly moving experience, and when I returned to school the following quarter I made it my mission to create more spaces like it.
Below is an abbreviated look at the process through which ALTiO was born. For a more detailed account, see our website:
We started our journey with the people.
We interviewed six participants, each with a unique connection to the Deaf or Hard of Hearing community; their connections ranged from identifying as Deaf to identifying as Hard of Hearing to identifying as a CODA (child of a Deaf adult), to name a few. In this initial phase, we asked many questions about entertainment, mobile applications, and their experiences in a hearing-dominated world to gauge how we could best serve communities that are often left out of design work.
I particularly enjoyed running nearly all of these interviews, drawing on my past experience with the "Buzzword Bios" project with Mach49. We learned about the strength and closeness of the Deaf community, their frustrations with inadequate captioning, and, surprisingly, the additional challenges the pandemic had placed on members of this community: masks hinder the ability to lip-read.
PARTICIPANT 002: “If there’s a video without captions, I’ll just skip it.”
PARTICIPANT 003: “Hearing is something that is difficult for me, visual is something that is easy.”
These discovery research interviews surfaced themes of visual emphasis, inadequate captioning, and the importance of being included in shared experiences, along with throughlines of independence, inclusion, and representation, all of which propelled our design forward into the brainstorming phase.
Based on what we heard about visual emphasis, shared experiences, and the shortcomings of captioning, we decided to pursue "alternative text as drawings of how music and audio make you feel." This solution formed the basis of our soon-to-be app and helped propel us into the prototyping stage.
To test our key assumption, that a unique audio interpretation could effectively convey the nuance of information and meaning in content, we designed an experience prototype.
"Two interviewees take turns between: interviewee A listening to a song and drawing how it ‘feels,’ and interviewee B interpreting said drawing. After the exercise, both interviewees listen to the audio together and compare their experience and expectations based on the drawing of interviewee A."
We found that interviewees really enjoyed the process, and we concluded that a unique audio interpretation could in fact effectively convey nuanced information in audio. Each participant was able to infer some kind of meaning from each drawing, despite the diverse and unique approaches each interviewee took to translating the audio. One participant even noted:
“She got more out of it than I even realized was there”
As an animator and video editor on previous projects, including Mixed Company and WEC Telecommunications, I was excited to take on creating our concept video, which outlines the main problem and solution of our project. Informed by our initial research, this video served as an early vision for what we wanted our app to achieve going forward.
After creating this prototype video to outline the goals of our project, we debated as a team between several ways of building our product. The two most contested designs were an audio-recognition plug-in, usable across the phone and within other applications, that would detect what song is playing and display interpretations (or "altios"); and a native social media app that would display interpretations alongside posted content.
It was important to us that posted content be required to include interpretations, ensuring that our core goals of inclusion and accessibility were present throughout the app.
To implement the next stage of the design, I led our use of Figma to lay out and visualize the application. We also addressed the previously mentioned issues in this stage by adding an onboarding flow, refining and expanding the explanation pop-ups throughout the app, and redesigning the post flow to help users see where they were in the process. See the full app screen layout below.
We tested this prototype with three participants, both Deaf and hearing. This stage surfaced many issues: notably, users struggled to understand the overall purpose of our app, certain actions such as clarifications were not clearly explained, and the posting flow gave users little feedback about their current status.
This user feedback led us to introduce a new onboarding flow to clarify the app's intention and key features, such as 'clarifications'.
After developing this prototype, we collaborated with other groups in our accessible design studio and got their feedback in the form of a heuristic evaluation. This feedback helped us refine finer details, such as adding progress bars, polishing the onboarding flow, and standardizing components and organization throughout the app.
Using the feedback provided by our classmates, we improved and fleshed out our design as we developed our app using React Native and Expo. As a designer, I was responsible both for creating our app's screens and logic in Figma and for supporting our developers, Emily and Kyle, by creating objects and assets in React Native, including posts, clarifications, audios, captions, and more. The video below shows the product working via screen recording.
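To give a sense of that work, here is a minimal sketch of what one of those reusable objects might look like as a React Native component. The component name, props, and styling are hypothetical stand-ins for illustration, not our production code.

```tsx
// Hypothetical sketch of a reusable post component; names and props are illustrative.
import React from "react";
import { View, Text, Image, StyleSheet } from "react-native";

// A post pairs its caption with an "altio": a visual interpretation of the audio.
type PostProps = {
  username: string;
  caption: string;
  altioUri: string; // URI of the drawn interpretation image
};

export function Post({ username, caption, altioUri }: PostProps) {
  return (
    <View style={styles.card}>
      <Text style={styles.username}>{username}</Text>
      {/* The altio is rendered alongside the content rather than hidden behind it. */}
      <Image source={{ uri: altioUri }} style={styles.altio} resizeMode="cover" />
      <Text style={styles.caption}>{caption}</Text>
    </View>
  );
}

const styles = StyleSheet.create({
  card: { padding: 16, borderRadius: 12, backgroundColor: "#fff" },
  username: { fontWeight: "600", marginBottom: 8 },
  altio: { width: "100%", aspectRatio: 1, borderRadius: 8 },
  caption: { marginTop: 8 },
});
```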
After a 10-week design and development process, teams presented at the final expo at the Stanford d.school, where 55 teams displayed the products they had created through this design process. I presented our project to the nearly 200 attendees in a 30-second pitch, and then as a team we demoed the app and answered questions from passersby and judges.
At the end of the evening, awards were distributed, and we were thrilled to receive Best Concept, Most Novel Idea, Greatest Social Impact, Best Poster, and Second Overall Project.
Following the conclusion of the course, three of our group members, Emily, Jared, and I, decided to continue working on this project in a follow-up course, where we continued developing and designing our product.
To continue our development process in the follow-up course (CS194H), we tested our high-fidelity prototype with users, specifically Deaf and Hard of Hearing users, who remained our target users.
Before testing, we made a few changes to how audio could be interpreted that we were excited about.
In addition to drawing how the music makes you feel, we added ASL and creative captioning (using captions to tell a story) as interpretation options to better support our users. First and foremost, we wanted to make sure that Deaf and Hard of Hearing users could post content without being required to upload audio, so these additional methods gave them a space to contribute.
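As a sketch of how this might be modeled, assuming hypothetical type and field names rather than our actual schema: a post carries optional audio and a list of interpretations, and a simple check enforces the required-interpretation rule described earlier.

```ts
// Hypothetical data model; type and field names are illustrative, not our actual schema.
type Interpretation =
  | { kind: "drawing"; imageUri: string }       // a drawn "altio"
  | { kind: "asl"; videoUri: string }           // an ASL video interpretation
  | { kind: "creativeCaptions"; text: string }; // captions that tell a story

type Post = {
  id: string;
  authorId: string;
  audioUri?: string;                 // audio is optional, so Deaf users can post without it
  interpretations: Interpretation[]; // every post must ship with at least one
};

// Enforces the rule that content cannot be published without an interpretation.
function canPublish(post: Post): boolean {
  return post.interpretations.length > 0;
}
```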
After showing users our app in our lab usability study, we came away with a few major takeaways that informed our next iteration:
1. Users needed help understanding the high-level concept of our app. After a supplemental explanation, though, users understood and were excited about the platform.
2. Scaling issues during testing hindered some users; we needed more thorough styling for different screen sizes (see the sketch after this list).
3. Certain tasks, such as posting, took users longer than we had hoped. Users needed more guidance and feedback about where they were in the process.
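A minimal sketch of one way to address the screen-size issue in React Native is to scale design-time sizes against the current window width. The helper name and base width here are assumptions for illustration, not our exact fix.

```ts
// Hypothetical responsive-scaling hook; the base width and name are illustrative.
import { useWindowDimensions } from "react-native";

const BASE_WIDTH = 390; // width of the device the layouts were originally designed for

// Scales a design-time size to the current window so layouts hold up across devices.
export function useScaled(size: number): number {
  const { width } = useWindowDimensions();
  return (size * width) / BASE_WIDTH;
}
```

A component can then call useScaled(16) in place of a hard-coded 16 so that spacing and type scale with the screen.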
The second version of our high-fidelity prototype featured a complete design overhaul in response to feedback from users.
One challenge for our Deaf users was that switching back and forth between videos was distracting; as a result, we prioritized the video size and removed the feature of switching between them, as seen here.
We also introduced a login screen, made the layouts of the posting process more consistent, and built a new and improved onboarding flow that presented the problem our app addresses and its solution.
Based on another round of testing, the third and final iteration of our app featured a simplified home screen with fewer buttons and fewer distracting colors, as well as more intentional emphasis on the posted content and captions. We refined our onboarding sequence to mimic a mobile video player social media app and better set the context for our app. We also fleshed out the designs of alternate interpretation methods, such as creative captions and American Sign Language (ASL), along with our account creation and profile screens. See below for key screens as well as a demo video highlighting the use of the app.
Perhaps the most rewarding part of this whole project occurred after we had implemented the ASL video upload feature. During one of our final user tests a Deaf user paused and signed,
"You could use this to teach American Sign Language"
While teaching hadn't been a primary use case in our development, the app's flexibility in meeting the needs of our users was an incredible win for our team.
To conclude this two-quarter-long project, I presented to a panel of computer scientists and designers invited by Professor James Landay, recapping our experience and our progress. Panelists were invited to ask questions and engage with the three teams that participated in the second quarter of the class.
This is the most challenging and rewarding project I have worked on to date. Organizing and leading interviews was one of the most enjoyable parts of the process for me, and it affirmed that I love to design for people. I learned so much about the Deaf and Hard of Hearing communities, who I had not spent much time with prior to this project, and it was incredibly rewarding. Leading brainstorming sessions was also interesting and challenging, and it yielded novel ideas I had a lot of fun exploring, such as translating artistic experience into different mediums and playing with artistic closed captioning as a way to push our designs forward.
I learned a lot about what it means to be a designer, including recruiting participants, running interviews, running meetings, designing screens and flows, testing work with users, and collaborating with developers. But what I learned most about myself is how much I value group cohesion. I spent a lot of time checking in with my teammates, getting to know them, and making sure that everyone felt safe and encouraged to speak. I really believe that this level of trust and understanding is what helped us achieve the level of success that we did. I hope to keep this commitment and passion for healthy collaborative teamwork going forward as I meet new teams and join new projects.