An AI assistant that helps medical students grow communication skills based on real-life conversations in the hospital.
for CMU MDes Interaction Studio II
3 months, Feb – May 2018
Research & Strategy lead
(smart watch & voice)
Interview / Survey
Usability Testing / Wizard of Oz
Today's medical students and physicians struggle to build communication skills due to suboptimal learning tools throughout their education.
Ora is a medical communication assistant that leverages AI's natural language processing capabilities to help medical students prepare, practice, and evaluate their communication skills based on their real conversations in the hospital.
I worked primarily on the in-hospital interactions – smart watch and voice/audio UX/UI.
In hospital: smart watch
The smart watch enables hands-free access to personalized learning materials and instant feedback in the hospital.
01 Best practice checklist
Before any conversation, the student can view a list of best practice items populated by Ora based on what the student has recently learned or fallen short on.
03 Conversation critique (per convo)
After the conversation, Ora provides speech evaluations by topic and clarifies the specific terms and behaviors causing conversation breakdown.
In hospital: ear bud
The ear bud records the student's conversations with patient approval and provides access to Ora's virtual assistant.
02 In-moment conversation correction
During the conversation, the ear bud sends a distinct notification as an in-the-moment correction without major interruption.
04 Conversation critique (summary)
Ora’s voice assistant offers detailed feedback on any past conversation and creates the student’s learning schedule upon request.
The tablet is the principal learning tool for any at-home learning.
05–08 Accumulative analysis, practice with virtual avatar, and learning modules
At home, the student practices with a virtual avatar based on real (anonymized) conversations they had during the day. The student may also track their learning progress and review learning modules as needed.
I provided support in CX design for the tablet but this was not my primary focus area.
How to measure success in learning?
We plan to measure success based on how well our system identifies errors and how much the recommended critiques and learning materials lower communication breakdowns. Specifically:
the number of corrections decreases after students receive feedback and practice on those corrections
in-convo notifications and corrections decrease over time
students rely less on the pre-convo checklist to achieve good communication
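As a rough illustration (not part of the original design), the first two metrics could be computed from logged correction events. The event schema and field names below are purely hypothetical assumptions:

```python
# Hypothetical sketch: check whether in-convo corrections decrease over time.
# The event format ({"week": <int>} per logged correction) is an assumption
# for illustration, not Ora's actual data model.

from collections import defaultdict

def corrections_per_week(events):
    """Count logged in-convo corrections per week."""
    counts = defaultdict(int)
    for event in events:
        counts[event["week"]] += 1
    return dict(counts)

def is_improving(weekly_counts):
    """True if weekly correction counts are non-increasing over time."""
    weeks = sorted(weekly_counts)
    values = [weekly_counts[w] for w in weeks]
    return all(later <= earlier for earlier, later in zip(values, values[1:]))

# Example: 3 corrections in week 1, then 2, then 1 → improving
events = [
    {"week": 1}, {"week": 1}, {"week": 1},
    {"week": 2}, {"week": 2},
    {"week": 3},
]
print(is_improving(corrections_per_week(events)))  # → True
```

In practice the same trend check would apply per student and per communication topic, so improvement in one area is not masked by struggles in another.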
Please see our final presentation for more feature-specific information.
From scoping to hypothesis
Given the prompt “design a system that is a symbiosis of learning and AI”, we used many research techniques to identify and narrow down opportunity areas.
Once we defined the domain – learning for medical students – we conducted a series of exploratory research activities, including interviews, surveys, literature reviews, and market research in the medical education field.
The synthesis of the early exploratory research was a rigorous 6-step process: it started with a wall of "insights" extracted from that research, moved through two rounds of affinity mapping and one round of 2x2 matrixing, and ended with a final set of design principles.
After the synthesis we were able to further scope our project by determining the learner, the learning type, the problem statement, and the design principles.
Focused Exploratory/ Early Generative Stage
This stage involved four research activities to identify more specifically what the challenges are in rotation students’ contexts. With these learnings, we established personas to better capture the pain points.
We started by looking into existing communication learning methods in medical schools – what worked, what didn’t, and for which types of students? We found that no single method could satisfy all success factors.
Text-Based Diary Study (n=8)
After defining our target design audience, we conducted a text-based diary study with eight 3rd- and 4th-year medical students to learn about the specific communication challenges in their current rotations.
We also created a summary for each participant based on their identified stress levels with different hospital personnel. We later used these profiles as the foundation for persona building (see synthesis section).
Together, we identified common high- and low-stress scenarios and learned about medical students' own reflections on improving their communication.
Participatory Workshop (n=20)
Following the diary study, we launched a participatory workshop at UPMC (University of Pittsburgh Medical Center) to understand how medical students might use creative tools to come up with solutions to their specific communication challenges.
We interviewed a total of 20 medical professionals – 4 medical students, 3 residents, 3 physicians, and 10 nursing students. Feedback from the participants who were not medical students was particularly insightful in helping us understand the communication learning challenge from all perspectives.
Based on the previous research, we created two student personas and two corresponding AI personas. As we constructed the AI personas, we designed their speech to reflect the learning needs of particular students.
Based on the persona study, we looked further into the needs and challenges of including "existing teachers" (doctors and residents) in the communication learning framework.
Focused Generative Stage
In this stage we used storyboards to get initial green lights for our concept, then iterated on the product at gradually increasing fidelity.
4.1 Storyboarding + Speed-dating
Based on pain points and creative ideas that emerged in the generative research, we went through three sets of design iterations involving six paper storyboards and one "video storyboard".
The video storyboard was most useful in helping medical students envision and critique the concept during speed-dating. The feedback became much more concrete and actionable.
–– What is the Minimal Viable Product? ––
In the design process thus far, we had assumed that this learning system had to involve the rotation students as well as the residents and doctors to be successful – but was that true? We knew that most residents already didn’t have time to attend to rotation students, and that most rotation students felt the need to “prove” themselves to the residents who grade them at the end of rotations.
Considering all the above, we decided that our minimal viable product would be a learning system for students only, and we iterated our initial designs under this constraint.
Iteration A: In-hospital User/AI Flow
When we initially designed the user/AI flow, we kept our options open in terms of interfaces, considering both screen and voice. We started with a very sophisticated system and worked our way toward simplifying the flow, clarifying the evaluation criteria, and identifying the most suitable interface at each touchpoint.
The final design (before Wizard-of-Oz testing) featured a pre-convo checklist, a simple in-convo “ding” notification, and a post-convo scoresheet.
Design Iteration B: Checklist and Scoreboard
The design principles here were (1) succinctness, (2) hands-free use, and (3) useful content. Through our iterations, smart watches turned out to be the most suitable home for checklists and scoreboards – watches are personal, convenient, widely adopted, and hands-free.
Design iterations on the smartwatch UI focused primarily on legibility and content.
Design Iteration C: In-Convo “Ding” Notification
While our in-convo intervention in the video storyboard received lukewarm feedback, we knew from the literature review that immediate correction was a key learning opportunity.
We explored a range of interfaces – haptic feedback from the watch, voice (English phrases), and audio (a “ding”). We knew we had found the right “ding” when the user’s conversation was not visibly disrupted by the cues.
Design Iteration D: VR v. Tablet
For the at-home training module, we wanted to explore how emerging technology could address the “not realistic” pain point of existing solutions. We also developed a tablet mode as the control group.
Note: I had little involvement in the design iterations of VR and tablet UI.
Evaluative Stage – Usability Testing /Wizard of Oz
With our prototypes, we went to UPMC again and tested our concept with eight 3rd- and 4th-year medical students, each participating in an individual 20-minute session.
Overall, the participants were very excited about the system and were more optimistic and open about certain privacy issues than we had initially expected.
One key takeaway was that our “student-only” MVP system was actually applauded by the participants.
“[Ora] should be just for ourselves. You don’t want to make it another required item on which students get graded.”
Reflection & Next Steps
We learned to make the most of existing resources, recruiting both local and remote participants and compensating them for their time out of pocket. The resulting personas were very helpful in keeping us in check and avoiding “designer biases”.
We spent a lot of time researching AI’s capabilities and limitations. In the medical field, there exist many contextual learning challenges that are difficult to address with AI. Nonetheless, we wanted to push the boundary and leverage the existing communication frameworks in academia to help students follow best practices.
In our initial research process we kept getting bogged down by content inaccuracy – medical professionals had a hard time recognizing that the content was placeholder. Once we had the content nailed down, we received more feedback specific to usability. The experience highlighted the weakness of a team composed only of designers.
The critical next step would be circulating the idea with a larger group of stakeholders, including patients. We did not reach out to patients during the design process due to ethical concerns; however, their feedback would be key to validating and iterating on the concept.