An AI assistant that helps medical students develop communication skills based on real-life conversations in the hospital.
for CMU MDes Interaction Studio II
3 months, Feb – May 2018
Research & Strategy lead
UX Design (smart watch & dialogue flow)
Interview / Survey
Usability Testing / Wizard of Oz
Today's medical students and physicians struggle to gain medical communication skills due to suboptimal learning tools throughout their education.
Ora is a medical communication assistant that leverages AI's natural language processing capability to help medical students prepare, practice, and evaluate their communication skills based on their real conversations in the hospital.
// How Ora teaches
Ora's learning experience is built upon three devices – a smartwatch, an earbud, and a tablet. Together, they establish the tried-and-true "prepare–practice–evaluate" framework that is crucial to effective communication learning.
The smartwatch enables hands-free access to personalized learning materials and instant feedback in the hospital.
Before any conversation, the student can view a list of best-practice items populated by Ora based on what the student has recently learned or fallen short on.
After the conversation, Ora provides speech evaluations by topic and clarifies the specific terms and behaviors causing conversation breakdown.
The earbud records the student's conversations upon patient approval and provides access to Ora's virtual assistant.
During the conversation, the earbud sends out distinguishable notifications for different common conversation breakdowns, allowing for in-the-moment correction without major interruption.
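The idea of distinguishable cues can be sketched as a simple lookup from breakdown type to earbud tone. This is an illustrative sketch only; the breakdown categories and cue names are assumptions, not Ora's actual specification.

```python
# Hypothetical mapping of common conversation breakdowns to distinct
# earbud cues, so the student can tell them apart without pausing the
# conversation. All category and cue names here are illustrative.
BREAKDOWN_CUES = {
    "jargon": "low_double_beep",    # unexplained medical term
    "interruption": "short_chime",  # student cut the patient off
    "pace": "soft_pulse",           # speaking too fast
    "empathy": "rising_tone",       # missed emotional cue
}

def cue_for(breakdown_type: str) -> str:
    """Return the earbud cue for a detected breakdown, or a generic tone."""
    return BREAKDOWN_CUES.get(breakdown_type, "generic_tone")
```

Keeping each cue distinct but low-fidelity (tones rather than speech) matches the design goal of in-the-moment correction without major interruption.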
Ora's voice assistant offers detailed feedback on any past conversation and creates the student's learning schedule upon request.
The tablet is the principal tool for at-home learning.
Upon turning on the tablet, the student first sees visualizations of their communication growth trajectory. These help the student better identify communication strengths and weaknesses.
Ora provides a detailed analysis of each conversation segment by topic, along with a transcript of the full conversation with the patient's information anonymized.
Practice: Virtual Agent
The student may redo any conversation segment and receive in-the-moment feedback on breakdowns and corrections. A patient avatar removes personal identifiers and protects patient privacy.
Ora leverages existing teaching resources to provide teaching modules for different communication techniques.
// How Ora learns to teach
Ora's AI is built on a natural language processing model, further trained on existing communication scripts used in medical education.
As medical students use Ora, it draws on their real conversations and feedback to better identify communication breakdowns and suggest useful tips.
With expanded infrastructure, Ora could also incorporate video conversations and patient feedback.
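One concrete way such a system might flag a breakdown is by checking a transcript segment against a jargon glossary built from teaching scripts. The sketch below is a minimal assumption of this idea; the glossary entries and the `flag_jargon` helper are hypothetical, not Ora's actual model.

```python
# Illustrative sketch: flag medical jargon in a transcript segment and
# suggest plain-language alternatives. The glossary here is a tiny
# hypothetical sample of what could be extracted from teaching scripts.
JARGON_GLOSSARY = {
    "myocardial infarction": "heart attack",
    "hypertension": "high blood pressure",
    "edema": "swelling",
}

def flag_jargon(transcript: str) -> list[tuple[str, str]]:
    """Return (term, plain-language suggestion) pairs found in the transcript."""
    text = transcript.lower()
    return [(term, plain) for term, plain in JARGON_GLOSSARY.items() if term in text]
```

For example, `flag_jargon("You show signs of hypertension and mild edema.")` would surface both terms with their plain-language suggestions, which could then feed the topic-level evaluation on the watch and tablet.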
We started this project by creating a territory map to visualize opportunity areas. We initially put patients at the center of the diagram because of their frequent interactions with all stakeholders.
As our research progressed, we further focused on medical students and communication learning. The highlighted text reflects our focus after the Early Exploratory stage.
Early Exploratory Stage
I drafted the questionnaire, conducted interviews, and led the synthesis stage.
2.1 Questionnaires & Interviews
We started with two conventional research methods to understand pain points in both medical learning and teaching. These methods gave us:
flexibility/efficiency: medical students and doctors are very busy
engagement: identify passionate participants and establish relationships early on
2.2 Literature Review & Market Analysis
We also conducted a literature review and market research to understand recent design and technology interventions in healthcare. We identified five primary insights and included them in the synthesis.
The synthesis was a rigorous six-step process: it started with a wall of "insights" extracted from exploratory research, ran through two rounds of affinity mapping and a 2x2 matrix exercise, and ended with a final set of design principles.
After the synthesis we were able to further scope our project by determining the learner, the learning type, the problem statement, and the design principles.
Focused Exploratory/ Early Generative Stage
I designed and led both research activities and created all four personas myself.
3.1 Text-Based Diary Study
After defining our target design audience, we conducted a text-based diary study with eight 3rd- and 4th-year medical students to learn about the specific communication challenges in their current rotations.
We opted for a text-based format because:
students received reminders in an unobtrusive way
researchers could gauge engagement level and adjust questions daily
We also created a summary for each participant based on their reported stress levels with different hospital personnel. We later used these profiles as the foundation for persona building (see synthesis section).
Together, we identified common high- and low-stress scenarios and learned about medical students' own reflections on communication improvement.
3.2 Participatory Workshop
Following the diary study, we launched a participatory workshop at UPMC (University of Pittsburgh Medical Center) to understand how medical students might use creative tools to come up with solutions to their specific communication challenges.
We interviewed a total of 20 medical professionals – 4 medical students, 3 residents, 3 physicians, and 10 nursing students. Feedback from participants other than medical students was particularly insightful in helping us understand communication learning challenges from all perspectives.
3.3 Synthesis: Persona
Based on the previous research, we created two student personas and two corresponding AI personas. As we constructed AI personas, we designed the speech to reflect learning needs of particular students.
This study urged us to look further into the needs and challenges of including existing teachers (doctors and residents) in the communication learning framework.
Note upfront: our final design did not include doctors and residents due to student responses in the evaluative stage. See more in 05 Evaluative Stage.
Focused Generative Stage
I contributed to storyboard generation and the speed-dating study, constructed the AI workflow, and designed all in-moment interactions (voice + smartwatch).
4.1 Storyboarding + Speed-dating
Based on pain points and creative ideas that emerged from the generative research, we went through three sets of design iterations involving six paper storyboards and one "video storyboard".
The video storyboard was most useful in helping medical students envision and critique the concept during speed-dating. The feedback became much more concrete and actionable.
4.2 Prototype Iteration: Voice, Watch, and VR
After speed-dating, we started prototyping voice notification and VR – two well-received touch points. We also experimented with smart watches for checklist delivery, as many interviewees raised distraction and privacy concerns about AR glasses.
In our initial testing session, we had participants (right two) read scripts and role play patients and medical students.
We observed how participants responded to different feedback (voice/words, voice/sound, watch/vibration) and the level of disruption to conversations.
With VR, we applied simple notification cards and pre-recorded 360 videos in the hospital setting.
4.3 Visual Design & Branding
Our visual design system inherited the look and feel of existing healthcare products, including the choice of a sans-serif typeface and a blue accent as the primary color. We named our project Ora because it is short for "oral".
I co-planned the research and managed all Wizard-of-Oz testing.
5.1 Usability Testing /Wizard of Oz
To understand the usability of our design, we returned to UPMC and tested our concept with eight 3rd- and 4th-year medical students, each of whom participated in a 20-minute session.
5.2 Reflection & Next Steps
As the primary researcher for the design process, I learned to make the most of existing resources. We did our best to recruit local and remote participants and compensated them for their time out of pocket. We also used personas to keep our priorities in check and avoid "designer biases".
We spent a lot of time researching AI's capabilities and limitations. In the medical field, there were many contextual learning challenges that are difficult to address with AI. Nonetheless, we wanted to push the boundary and leverage the existing communication frameworks in academia to help students follow best practices.
I primarily designed in-hospital interactions that are succinct, intuitive, and content-sensitive. The importance of content experience (copy) really surfaced in this project and I was glad that our rich research could support the process.
The critical next step toward a minimum viable product would be building the AI model and the supporting hardware (earbud). We did not circulate the idea with patients due to ethical concerns during the design process. However, it is crucial to get feedback from all stakeholders in the design system.