Research Vision
The need to train new workers effectively and upskill the existing workforce is a challenge faced by almost every industry across the globe. The healthcare industry, in particular, is confronting a crisis. The World Health Organization (WHO) projects a shortage of 10 million healthcare workers by 2030. Although no country is exempt from this growing problem, the greatest gaps are found in countries in Africa, Southeast Asia, the Eastern Mediterranean Region, and parts of Latin America. The problem is further compounded by workers leaving their home countries to pursue opportunities elsewhere. A shortage of experienced healthcare workers and teaching faculty limits enrollment, reduces resources, shrinks student cohorts, and erodes the overall quality of patient care. To protect the health of the world’s population, we must investigate transformative solutions that achieve efficient, effective, resilient, and sustainable local and global healthcare systems.
Recognizing this pressing need, we develop AI + robotic teachers that help human learners acquire new skills, with applications primarily in healthcare.

Meet the Team

Prof. Peizhu Qian, Principal Investigator
Assistant Professor of Computer Science
Ph.D., Computer Science, Rice University
B.S., Mathematics & Computer Science, Simmons College
Email: pqian@uh.edu

Ph.D. Student, Postdoc, Research Engineer
Interested in joining our lab? Check out available positions!
Current Projects
Research Question 1: Explain tasks that cannot be modeled with the Markovian models or neural networks traditionally used in XAI.

- Research Gap: In the existing XAI literature, tasks are most commonly represented using Markovian models (e.g., reinforcement learning) or neural networks (e.g., image classification). However, many human tasks, like those in healthcare, may not be best represented by these models. How to extend existing XAI methods to explain diverse task models is an open and interesting research question.

- Proposal: First, we need to understand how humans represent tasks. In healthcare, step-by-step checklists are frequently used, but converting checklists into computational models is not trivial: steps in checklists involve complex environment observations (e.g., move the trashcan next to the patient bed), states of medical tools (e.g., keep the sterile field dry), and human-object interaction (e.g., scrub the insertion site for 30 seconds using CHG). Some steps are composed of multiple actions (e.g., use sterile techniques to open the dressing kit), where the sterile techniques are not specified and may vary based on the task and environment. To explain tasks that involve complicated environmental observations and human-object interactions, we will first extend existing task models from robotics, such as the Planning Domain Definition Language (PDDL) and Hierarchical Task Networks (HTN), as useful starting points. In the long run, we will investigate novel techniques that capture the diversity of healthcare tasks and hospital-specific practices. We will explore the use of Large Language Models (LLMs) to enable healthcare experts to translate their domain knowledge into robot-interpretable computational models, and then generate high-quality explanations. Leveraging the benefits of personalized learning, these explanations will be tailored to individual learners.
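To make the starting point concrete, a checklist step can be sketched as a PDDL-style action with preconditions and effects, and an abstract step such as "use sterile techniques to open the dressing kit" can be expanded, HTN-style, into primitive sub-steps. Everything below is an illustrative assumption: the step names, predicates, and the decomposition itself are made up for exposition, not drawn from any actual clinical protocol.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Step:
    """A checklist step modeled as a PDDL-style action (illustrative schema)."""
    name: str
    preconditions: frozenset  # predicates that must hold before the step
    effects: frozenset        # predicates that hold after the step
    duration_s: float = 0.0   # for timed steps, e.g., "scrub for 30 seconds"

def applicable(step: Step, state: set) -> bool:
    """A step may execute only when all of its preconditions hold."""
    return step.preconditions <= state

def apply_step(step: Step, state: set) -> set:
    """Executing a step adds its effects to the world state."""
    return state | step.effects

# HTN-style method library: an abstract checklist step decomposes into
# primitive sub-steps. Real decompositions would be authored per hospital;
# these entries are hypothetical placeholders.
METHODS = {
    "open_dressing_kit_sterile": [
        Step("perform_hand_hygiene", frozenset(), frozenset({"hands_clean"})),
        Step("don_sterile_gloves", frozenset({"hands_clean"}), frozenset({"gloves_on"})),
        Step("open_kit", frozenset({"gloves_on"}), frozenset({"kit_open", "sterile_field"})),
    ],
    "scrub_insertion_site": [
        Step("scrub_with_chg", frozenset({"gloves_on"}),
             frozenset({"site_scrubbed"}), duration_s=30.0),
    ],
}

def decompose(task: str) -> list:
    """Expand an abstract step into primitives (empty if unknown)."""
    return METHODS.get(task, [])
```

Stepping through `decompose("open_dressing_kit_sterile")` from an empty state yields a fully ordered, executable plan, which an explanation module could then annotate step by step for a learner.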
Research Question 2: Automate skill evaluation in challenging real-world domains.

- Research Gap: Existing Knowledge Tracing methods, traditionally applied to discrete observations such as multiple-choice answers, struggle with continuous observations. Unlike discrete domains, where the observation-to-skill mapping is relatively straightforward, evidence of skill mastery in continuous, real-world domains is often implicit and largely context-dependent.

- Proposal: To tackle this fundamental question, it is important to examine how human experts evaluate skills in such domains. For rule-based skills, such as sterile techniques, we can track the number of times a student breaks each rule using multimodal data: vision, language, and interaction. For knowledge-based skills that depend on context, such as treating dropping blood sugar levels, we propose to use physical robot intervention to create realistic scenarios for students to practice. Simulating realistic scenarios is a common practice that experienced nurses use to train new nurses, but it varies from nurse to nurse and from hospital to hospital; a standardized simulation can be realized through robot teachers. We also propose to extend existing methods into new Knowledge Tracing approaches for broad real-world domains.
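One way to ground the rule-tracking idea is to treat each observed rule check (kept vs. broken) as a binary observation and update a per-rule mastery estimate with standard Bayesian Knowledge Tracing. This is a minimal sketch under stated assumptions: the parameter values below are placeholders rather than fitted estimates, and mapping raw multimodal sensor data to "rule kept/broken" events is precisely the open problem described above.

```python
from dataclasses import dataclass

@dataclass
class RuleBKT:
    """Bayesian Knowledge Tracing for one rule (e.g., "keep the sterile field dry").

    Parameter values are illustrative placeholders, not fitted estimates.
    """
    p_mastery: float = 0.3  # prior probability the student has mastered the rule
    p_learn: float = 0.2    # chance of learning the rule after each opportunity
    p_slip: float = 0.1     # chance a master still breaks the rule
    p_guess: float = 0.2    # chance a non-master keeps the rule by luck

    def update(self, rule_kept: bool) -> float:
        """Bayes-update mastery after observing whether the rule was kept."""
        p = self.p_mastery
        if rule_kept:
            post = p * (1 - self.p_slip) / (
                p * (1 - self.p_slip) + (1 - p) * self.p_guess)
        else:
            post = p * self.p_slip / (
                p * self.p_slip + (1 - p) * (1 - self.p_guess))
        # Account for learning between practice opportunities.
        self.p_mastery = post + (1 - post) * self.p_learn
        return self.p_mastery
```

Rule events extracted from vision, language, and interaction logs would then drive these per-rule updates, yielding a continuously maintained mastery profile for each student: keeping a rule raises the estimate, breaking it lowers the estimate.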