Keywords: Human-Computer Interaction (HCI), User Experience (UX) Design, Interaction Design, Usability Testing, Design Research, Educational Technology, Natural Language Processing (NLP), AI Product Development
Product Poster
Made with Figma
Promo Video
Made with Adobe Premiere Pro (Voice of Collaborator Haoze Wu)
Promo Pitch
Made with Google Slides
Non-native speakers often struggle with casual conversation because of limited exposure to conversational English and its informal dynamics. Standardized language tests for non-native speakers, such as the TOEFL, focus more on the functional side of spoken English, i.e., understanding basic conversations, than on informal small talk.
Our formative research revealed key pain points: difficulty following conversations, struggles to sound natural, and a lack of common topics to discuss. Our primary target group consisted of non-native speakers who aim to improve their small talk skills with native English speakers. Their goals aligned with our app’s intended outcomes: building confidence, improving fluency, and becoming more culturally adept. This guided our focus toward creating a practical, engaging, and adaptive tool.
Our design and implementation followed three stages: Lo-Fi Prototyping, Hi-Fi Prototyping and Interaction Design, and finally the Interactive Prototype. The process was progressive and fine-grained, advancing the design and implementation step by step, with an evaluation at each stage.
We began by sketching paper prototypes to explore how SmoothYapper could address users’ challenges. These prototypes focused on two main features: Free Practice, allowing users to converse with an AI partner on chosen topics, and Scenario-Based Lessons, which offered guided small-talk exercises.
By drafting paper cards that delineated the app's basic UI and functions, we gradually established how a user would move through the app to reach the goal: learning and practicing small talk. We then conducted a first round of user interviews for evaluation, speaking with several friends who are currently studying in the United States or are about to do so.
Based on the feedback from the interviewees, two major issues were identified:
Lack of clarity in UI design: Users struggled to distinguish between free practice and scenario-based lessons.
Misleading labels and buttons: Certain UI elements were not intuitive.
A Lo-Fi Prototype Sketch for Scenario-based Lessons
Adopting the feedback from our friends, we merged the improvements into the draft. The revised Lo-Fi prototype closely resembles what we finally implemented; some screens, including the Free Practice Selection screen and the Feedback screen, are almost identical to our final product. We ultimately decided to drop the scenario-based learning features to reduce redundancy, as they closely resembled existing tools like Duolingo, from which we wanted to differentiate ourselves.
Looking back, the Lo-Fi prototyping process served as an essential foundation for developing SmoothYapper by helping us conceptualize the app’s structure and interaction flow cost-effectively and iteratively. Through sketching and constructing basic paper prototypes, we gained insights into how users might navigate the app, identify key functionalities, and engage with its primary features. Additionally, the act of creating prototypes forced us to define the app’s core goals, such as enabling intuitive interactions for small talk practice and creating clear distinctions between different practice modes.
With this valuable feedback from our friends, we proposed several potential improvements for the next stage of prototyping: Hi-Fi prototyping.
A revised Lo-Fi Prototype
Drawing on the feedback from our friends, we then started the Hi-Fi prototyping with caution. At this stage, the prototype should contain not only the skeleton of the app but also the detailed components that would ultimately appear in our implementation. We therefore used Figma, a tool for building interface prototypes that mimic real products. Based on the revised Lo-Fi prototypes, we made the prototype look more like an iOS mobile app, while also integrating interaction logic by linking the screens together in Figma.
Hi-Fi Prototype of the Free Practice Function
Alongside the Hi-Fi prototyping, we conducted our second heuristic evaluation. We collected several important pieces of feedback from our interviewees, particularly concerning the post-practice feedback feature. According to an ESL teacher we interviewed, the single-page feedback shown above after a practice session was insufficient, as it lacked transparency and provided limited information. Finer-grained feedback was needed for the practice to be effective.
Transcripts and Improvement Suggestions added in the 2nd Hi-Fi Prototype
The professional's suggestion was incorporated into the Transcripts screen and the Improvement screen. Users can now review the transcribed conversation history and receive improvement suggestions from the LLM, as we intended.
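To give a sense of how the Improvement screen could obtain its suggestions, here is a minimal sketch that sends a practice transcript to OpenAI's Chat Completions API. The transcript type, helper name, model, and prompt are illustrative assumptions rather than our exact implementation.

```typescript
// Minimal sketch: asking an LLM for improvement suggestions on a practice
// transcript. The endpoint and payload follow OpenAI's Chat Completions API;
// the transcript type, helper name, and prompt are hypothetical.
type TranscriptTurn = { speaker: 'user' | 'partner'; text: string };

async function getImprovementSuggestions(
  transcript: TranscriptTurn[],
  apiKey: string,
): Promise<string> {
  // Flatten the turns into a plain-text conversation for the model.
  const conversation = transcript
    .map((turn) => `${turn.speaker}: ${turn.text}`)
    .join('\n');

  const response = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: 'gpt-4o-mini', // placeholder model name
      messages: [
        {
          role: 'system',
          content:
            'You are an ESL coach. Point out grammar, fluency, and word-choice ' +
            "issues in the learner's turns and suggest more natural phrasings.",
        },
        { role: 'user', content: conversation },
      ],
    }),
  });

  const data = await response.json();
  // The first choice holds the model's suggestions as plain text.
  return data.choices[0].message.content;
}
```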
In our final prototype, we incorporated richer interaction flows, aligning more closely with real-world experiences:
Screenshots of the Final Interactive Prototype
Our design process culminated in the Interactive Prototype, which progressed from conceptual designs to a fully functional application. Built with React Native, this stage carried the lessons from the Lo-Fi and Hi-Fi prototypes into a final interface. The app incorporated a refined workflow designed to mimic realistic small-talk environments while providing comprehensive feedback to support users' learning. The features of the interactive prototype include the following:
1. Streamlined Navigation: The final app design features an intuitive navigation system, making it easy to switch between the Free Practice and feedback screens.
2. Live Captions: To support understanding during small talk, we integrated live captions that let users view a live transcript of the conversation, making it easier to spot errors and keep the conversation flowing (a sketch of this view appears after this list).
3. Comprehensive Feedback: Based on earlier user feedback, we expanded the feedback system to include:
Transcripts: Users can review their conversations with highlighted areas indicating suboptimal performance.
Improvement Suggestions: Detailed suggestions pertinent to grammar, fluency, and lexical choice.
Try-again Options: Users can instantly redo conversations to practice improved responses and reinforce learning.
4. Topic Variety and Customization: Users are now able to select from a wide variety of topics, ranging from casual conversations to culturally relevant topics such as sports and news headlines. This flexibility lets learners focus their practice on conversations relevant to their needs or interests.
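As referenced in the live-captions item above, here is a minimal sketch of how the caption view might be structured in React Native. The component and prop names are illustrative, and the lines would be fed by whatever speech-to-text stream powers the practice session.

```tsx
// Sketch of the live-caption area; component and prop names are illustrative.
import React from 'react';
import { ScrollView, Text, StyleSheet } from 'react-native';

type CaptionLine = { speaker: 'You' | 'Partner'; text: string };

export function LiveCaptions({ lines }: { lines: CaptionLine[] }) {
  return (
    <ScrollView style={styles.container}>
      {lines.map((line, i) => (
        <Text key={i} style={styles.line}>
          <Text style={styles.speaker}>{line.speaker}: </Text>
          {line.text}
        </Text>
      ))}
    </ScrollView>
  );
}

const styles = StyleSheet.create({
  container: { maxHeight: 160, padding: 8 },
  line: { fontSize: 14, marginBottom: 4 },
  speaker: { fontWeight: 'bold' },
});
```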
Furthermore, we evaluated a partially implemented interactive prototype that fully supported the app’s practice functions. During this phase, we identified many areas for improvement from user feedback. One common piece of feedback was the absence of a Motivational Dashboard to help users track their progress and stay engaged. To address this, we created a new page to display user metrics, such as hours practiced and topics covered, which served to motivate and retain users.
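As a rough illustration of the data behind the Motivational Dashboard, here is a sketch of the metrics it could track and a minimal summary view. The field and component names are assumptions for illustration, not our exact implementation.

```tsx
// Rough sketch of the progress metrics behind the Motivational Dashboard;
// field and component names are illustrative assumptions.
import React from 'react';
import { View, Text } from 'react-native';

type PracticeStats = {
  hoursPracticed: number;     // total conversation time
  topicsCovered: string[];    // distinct topics the user has practiced
  sessionsCompleted: number;  // finished practice sessions
};

export function DashboardSummary({ stats }: { stats: PracticeStats }) {
  return (
    <View>
      <Text>{stats.hoursPracticed.toFixed(1)} hours practiced</Text>
      <Text>{stats.topicsCovered.length} topics covered</Text>
      <Text>{stats.sessionsCompleted} sessions completed</Text>
    </View>
  );
}
```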
Additionally, we conducted a controlled user experiment to compare two different interface layouts for selecting conversation partners: a grid format and a card format.
Two Variants of UI Interfaces
The experiment addressed the design question: How does the format of presenting conversation partners (grid vs. card) affect the speed of selection and user satisfaction during interface interactions? Participants completed tasks using both formats, and the results revealed that the card interface significantly outperformed the grid in task completion time, with a paired t-test yielding a statistically significant result (p = 0.003). While there were no statistically significant differences in ease-of-use and satisfaction scores, qualitative feedback showed a clear preference for the card interface, citing its more intuitive navigation and fewer distractions. These findings informed our decision to prioritize the card interface for the final project. The testing process also highlighted the importance of data-driven decisions in devising a user-friendly interface.
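For transparency about the analysis, here is a sketch of the paired t-test arithmetic applied to per-participant completion times. In practice we relied on standard statistics tooling, so this is only meant to illustrate the computation; the function name and argument names are ours.

```typescript
// Sketch of the paired t-test statistic for per-participant completion times
// under the two layouts; illustrative only, not our analysis code.
function pairedTStatistic(gridTimes: number[], cardTimes: number[]): number {
  if (gridTimes.length !== cardTimes.length || gridTimes.length < 2) {
    throw new Error('Need the same number of paired observations (>= 2).');
  }
  // Per-participant differences between conditions.
  const diffs = gridTimes.map((g, i) => g - cardTimes[i]);
  const n = diffs.length;
  const mean = diffs.reduce((sum, d) => sum + d, 0) / n;
  // Sample variance of the differences (n - 1 denominator).
  const variance =
    diffs.reduce((sum, d) => sum + (d - mean) ** 2, 0) / (n - 1);
  const standardError = Math.sqrt(variance / n);
  // The statistic follows a Student t distribution with n - 1 degrees of
  // freedom, from which the p-value is looked up.
  return mean / standardError;
}
```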
The iterative development process of SmoothYapper is a good demonstration of the value of continuous user feedback and a user-centered approach. Through each stage, we refined our understanding of user needs and adapted our designs accordingly.
We learned a couple of valuable lessons:
1. The Power of Iterative Prototyping
Starting with paper-based Lo-Fi prototypes allowed us to quickly test and refine ideas. Moving to Hi-Fi prototypes helped us incorporate detailed visual and functional elements, preparing us for the final development. Each iteration revealed new insights, demonstrating the importance of phased, incremental development.
2. User-Centered Design as a Guiding Principle
Engaging directly with our target audience, who are non-native English speakers, helped us identify pain points we hadn’t initially considered. Feedback about clarity, functionality, and realism in small talk scenarios was instrumental in shaping SmoothYapper into an app that truly and comprehensively addressed user needs.
3. The Role of Feedback in Learning
Both user and AI feedback played crucial roles in the app’s effectiveness. While user feedback shaped the app’s design and features, the AI’s contextual feedback on practice sessions was critical in helping users improve their small talk skills.
4. Challenges in Simulating Real-Life Conversations
One of the most challenging aspects of development was making AI interactions feel natural. Real-life conversations involve dynamic conversational turns, non-verbal cues, and nuanced lexical usage that are difficult to replicate. While we addressed some of these challenges through features like hints and transcripts, this area remains a target for future modifications.
SmoothYapper is a significant step toward bridging linguistic and cultural gaps for non-native speakers. However, there is still room for growth. Potential future directions include:
Voice-Based AI Responses: Transitioning from text-based to voice-based interactions for more natural conversations using the brand-new OpenAI Realtime API.
Non-Verbal Communication Training: Incorporating body language and tone analysis to simulate real-life interactions.
Gamification: Adding rewards, badges, or challenges to further engage users and make learning fun.
Expanded User Base: Adapting the app for other groups, such as business professionals or travelers, to broaden its impact.
Throughout the course of the semester, SmoothYapper has become much more than just a language-learning app; it has evolved into a platform that empowers non-native speakers to engage confidently and naturally in small talk. By harnessing the power of LLMs and a rigorous user-centered design process, we have created an app that addresses real challenges faced by non-native English speakers, challenges we have also faced ourselves.
The journey of developing SmoothYapper has been as enlightening as it has been rewarding. Each prototype, each piece of feedback, and each iteration brought us closer to a solution that balances functionality, user experience, and real-world applicability. Through collaboration and innovation, tools like SmoothYapper can help break down barriers and bring people closer together, one conversation at a time. We also extend our sincere gratitude to Professor Metaxa and the Penn HCI Group for the time they dedicated to offering suggestions and evaluating our project.