AI Teacher 

AI Teacher is a human-in-the-loop explainable AI (XAI) framework for explaining robot behavior. It includes a policy summarization algorithm based on Bayesian user modeling, an interactive user interface that lets users ask questions specific to their own understanding, and a detailed user study showing the effect of interactive teaching techniques on XAI.
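To give a flavor of Bayesian user modeling for policy summarization: one can maintain a posterior over candidate models of the user and demonstrate the state where those models disagree most. This is only a rough illustrative sketch, not the algorithm from the papers; the model ids, state names, and disagreement heuristic below are invented for the example.

```python
def update_user_model(prior, likelihoods):
    """Bayes update over candidate user models.
    prior[m] = P(model m); likelihoods[m] = P(user's answer | model m)."""
    post = {m: prior[m] * likelihoods[m] for m in prior}
    z = sum(post.values())
    return {m: p / z for m, p in post.items()}

def most_informative_state(posterior, predictions, states):
    """Pick the state where the posterior-weighted user models disagree
    most about the robot's action (a simple demonstration-selection
    heuristic; predictions[m][s] = action model m expects in state s)."""
    def disagreement(s):
        mass = {}
        for m, p in posterior.items():
            a = predictions[m][s]
            mass[a] = mass.get(a, 0.0) + p
        return 1.0 - max(mass.values())  # 0 when all models agree
    return max(states, key=disagreement)
```

For instance, if the user's answers are far more likely under model A than model B, the posterior shifts toward A, and the next demonstration is chosen from a state where A and B still predict different robot actions.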

Related Papers:

  • AI Teacher 1.0:  Evaluating the Role of Interactivity on Improving Transparency in Autonomous Agents (AAMAS '22) [pdf]

  • AI Teacher 2.0: Interactively Explaining Robot Policies to Humans in Integrated Virtual and Physical Training Environments (HRI'24 LBR) [pdf] [poster]

Related Video Demos:

Robot-assisted Nursing / Robot Tutor

Robot Tutor is an intelligent robot tutoring system developed to train and assess novice nurses on medical procedures. The system includes planning, perception, and speech capabilities.

Related Papers:

  • Robotic Tutors for Nurse Training: Opportunities for HRI Researchers (RO-MAN'23) [pdf]

In the demo above, Stretch performs simple routine tasks in a hospital room, such as fetching and placing objects and pushing the button that adjusts the hospital bed's headrest. The geometric shapes can be replaced with hospital-related objects such as blood samples, masks, gloves, and tool kits.

I implemented Bayesian Knowledge Tracing (BKT) to evaluate a nurse's skill at maintaining sterile technique (in this example, the hands must always stay above the waistline). The original BKT is not designed for continuous assessment, so I modified it based on this paper, essentially by running a BKT update every 0.1 seconds. Two versions are implemented:


1. BKT w/o learning assumes the user gains no new knowledge during the assessment, so the initial prior is used at every step;

2. BKT w/ learning assumes the user continues to learn during the assessment, so the prior is updated after each step.
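The two variants above can be sketched as follows. This is a minimal illustration rather than the deployed code: it uses the standard BKT slip/guess observation update, and the parameter names and 10 Hz sampling comment are assumptions.

```python
def bkt_posterior(prior, correct, p_slip, p_guess):
    """One BKT observation update: P(skill known | observation)."""
    if correct:  # e.g., hands observed above the waistline at this tick
        num = prior * (1 - p_slip)
        den = num + (1 - prior) * p_guess
    else:
        num = prior * p_slip
        den = num + (1 - prior) * (1 - p_guess)
    return num / den

def bkt_without_learning(observations, p_init, p_slip, p_guess):
    """Version 1: no learning during assessment. The initial prior is
    reused at every step, so each estimate depends only on the current
    observation."""
    return [bkt_posterior(p_init, obs, p_slip, p_guess) for obs in observations]

def bkt_with_learning(observations, p_init, p_learn, p_slip, p_guess):
    """Version 2: the posterior is carried forward, with a learning
    transition of probability p_learn applied between steps."""
    prior, estimates = p_init, []
    for obs in observations:  # one observation every 0.1 s in the demo
        post = bkt_posterior(prior, obs, p_slip, p_guess)
        estimates.append(post)
        prior = post + (1 - post) * p_learn  # chance the skill was just learned
    return estimates
```

On a run of all-correct observations, version 1 produces a flat mastery estimate while version 2's estimate climbs toward 1, which matches the intended distinction between the two assumptions.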

* The vision system (left side of the screen) was built by my summer intern, Filip Bajraktari.


I-CEE

One of the biggest challenges facing AI researchers is the lack of interpretability of deep neural networks: the model works, but why? Or the model makes an error, but why? To address this problem, my co-authors and I propose a framework, titled I-CEE, that uses human cognition modeling to help users make sense of a machine learning model. This interdisciplinary work makes novel contributions toward generating personalized explanations for users.

Related Papers:

  • I-CEE: Tailoring Explanations of Image Classification Models to User Expertise (AAAI'24) [pdf]

  • Literature Review: Towards Human-centered Explainable AI: User Studies for Model Explanations (IEEE TPAMI) [pdf]
