Research Vision


The need to train new workers effectively and to upskill the existing workforce is a challenge faced by almost every industry across the globe. The healthcare industry, in particular, is confronting a crisis: the World Health Organization (WHO) projects a shortage of 10 million healthcare workers by 2030. Although no country is exempt from this growing problem, the largest gaps are found in Africa, Southeast Asia, the Eastern Mediterranean Region, and parts of Latin America, and the problem is further compounded by workers leaving their home countries to pursue opportunities elsewhere. A shortage of experienced healthcare workers and teaching faculty limits enrollment, stretches resources, produces fewer graduates, and ultimately erodes the overall quality of patient care. To protect the health of the world's population, we must investigate transformative solutions that make local and global healthcare systems efficient, effective, resilient, and sustainable.


Recognizing this pressing need, we create AI + robotic teachers that help human learners acquire new skills, with applications primarily in healthcare.

Meet the Team


Prof. Peizhu Qian, Principal Investigator

Assistant Professor of Computer Science


Ph.D., Computer Science, Rice University (2025)

B.S., Mathematics & Computer Science, Simmons College (2019)


Email: pqian@uh.edu


Ummey Tanin, PhD Student

Ummey Tanin is a PhD student in Computer Science at the University of Houston. She earned her master's degree in Computer Science, with a specialization in Data Science, from Carleton University in Ontario, Canada, in 2022. Her research focuses on developing advanced deep learning techniques for surgical workflow analysis, multimodal data integration, and performance evaluation in healthcare contexts. She is passionate about creating human-centered and explainable AI systems, with particular interests in real-time surgical video analysis, automated skill assessment, and interactive robotic tutors designed to enhance medical training.


Email: utanin@cougarnet.uh.edu


Gopi Trinadh Maddikunta, MS Researcher

Gopi Trinadh Maddikunta is a Master’s student in Engineering Data Science at the University of Houston and a 2025 Google Summer of Code contributor with the Scala Center. His work focuses on retrieval-augmented generation (RAG), embeddings, and vector search, with applications in information retrieval and healthcare AI. He builds end-to-end, evaluation-driven systems that move from prototype to deployment, emphasizing reliability and explainability. He is especially interested in rigorous benchmarking, latency/throughput trade-offs, and making AI tools usable beyond demos.

 

Email: gmaddiku@cougarnet.uh.edu


Nandika Kohli, Undergraduate Researcher

Hi! I’m Nandika, a Computer Engineering undergrad at Georgia Tech with a minor in Robotics. I love learning about different aspects of robotics and am particularly interested in exploring ways that robots can improve healthcare, especially in rural areas. I’ve moved a couple of times but most recently have been in Cupertino, California. In my free time I love watching movies & TV shows (please feel free to reach out if you have any recommendations :)), drinking rose green tea or boba, and dancing. Feel free to reach out if you’re ever in the Atlanta area or just wanna chat about robots :)


Email: nandikakohli05@gmail.com

Current Projects


Research Question 1: Explain tasks that cannot traditionally be modeled with Markovian models or neural networks.

 

  • Research Gap: In the existing explainable AI (XAI) literature, tasks are most commonly represented using Markovian models (e.g., in reinforcement learning) or neural networks (e.g., in image classification). However, many human tasks, like those in healthcare, may not be best represented by these models. How to extend existing XAI methods to explain diverse task models is an open and interesting research question.


  • Proposal: First, we need to understand how humans represent tasks. In healthcare, step-by-step checklists are frequently used, but converting checklists into computational models is not trivial: steps involve complex environment observations (e.g., move the trashcan next to the patient bed), states of medical tools (e.g., keep the sterile field dry), and human-object interaction (e.g., scrub the insertion site for 30 seconds using CHG). Some steps are composed of multiple actions (e.g., use sterile techniques to open the dressing kit), where the sterile techniques are not specified and may vary with the task and environment. To explain tasks that involve complicated environmental observations and human-object interactions, I will begin by extending existing task models in robotics, such as the Planning Domain Definition Language (PDDL) and Hierarchical Task Networks (HTNs), which offer useful starting points; a sketch of this style of representation appears below. In the long run, I will investigate novel techniques that capture the diversity of healthcare tasks and hospital-specific practices. I will explore the use of Large Language Models (LLMs) to enable healthcare experts to translate their domain knowledge into robot-interpretable computational models, and then to generate high-quality explanations. Leveraging the benefits of personalized learning, these explanations will be tailored to individual learners.
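To make this concrete, the Python sketch below shows one way a checklist step might be encoded as a PDDL-style action and a compound step as an HTN-style decomposition. All predicate, action, and method names are illustrative assumptions (loosely based on a central-line dressing change), not an implemented system; in our vision, an LLM would draft such encodings from checklist text for expert review.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    """A PDDL-style action: one checklist step with symbolic preconditions and effects."""
    name: str
    preconditions: frozenset
    effects: frozenset

@dataclass(frozen=True)
class Method:
    """An HTN-style method: decomposes a compound checklist step into ordered sub-steps."""
    task: str
    subtasks: tuple

# Hypothetical encoding of "scrub the insertion site for 30 seconds using CHG".
scrub_site = Action(
    name="scrub-insertion-site",
    preconditions=frozenset({"(sterile-gloves nurse)", "(holding nurse chg-applicator)"}),
    effects=frozenset({"(disinfected insertion-site)"}),
)

# "Use sterile techniques to open the dressing kit" is compound: the checklist
# never spells out the sterile technique, so an HTN method makes one
# hospital-specific decomposition explicit (and swappable).
open_kit_sterile = Method(
    task="open-dressing-kit-sterile",
    subtasks=("perform-hand-hygiene", "don-sterile-gloves",
              "peel-wrapper-by-corners", "place-kit-on-sterile-field"),
)

def applicable(action: Action, state: set) -> bool:
    """A step can be executed (or explained) only when all its preconditions hold."""
    return action.preconditions <= state

state = {"(sterile-gloves nurse)", "(holding nurse chg-applicator)"}
if applicable(scrub_site, state):
    state |= scrub_site.effects   # apply the step's effects to the world state
print(sorted(state))
```

Representing steps this way would let an explanation point at the exact unmet precondition when a learner skips or reorders a step, rather than only flagging that something went wrong.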


Research Question 2: Automate skill evaluation in challenging real-world domains.

 

  • Research Gap: Existing Knowledge Tracing methods, traditionally used for discrete observations such as multiple-choice answers, struggle with continuous observations. Unlike discrete domains, where the observation-to-skill mapping is relatively straightforward, evidence of skill mastery in continuous, real-world domains is often implicit and largely depends on the context.


  • Proposal: To tackle this fundamental question, it is important to look at how human experts evaluate skills in such domains. For rule-based skills, such as sterile techniques, we can track the number of times a student breaks each rule using multimodal data (vision, language, and interaction); see the sketch below. For knowledge-based skills that depend on context, such as treating a falling blood sugar level, I propose to use physical robot intervention to create realistic scenarios for students to practice in. Simulating realistic scenarios is a common practice that experienced nurses use to train new nurses, but it varies from nurse to nurse and from hospital to hospital; a standardized simulation can be realized through robot teachers. I also propose to extend existing Knowledge Tracing methods to these broad real-world domains.
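As a minimal sketch of how rule-violation counts could drive skill estimates, the Python below adapts classic Bayesian Knowledge Tracing, treating "step completed without a detected violation" as a correct observation. The multimodal perception pipeline that flags violations is assumed rather than shown, and the parameter values are placeholders, not fitted estimates.

```python
def bkt_update(p_know: float, violated: bool,
               p_slip: float = 0.10, p_guess: float = 0.20,
               p_learn: float = 0.15) -> float:
    """One Bayesian Knowledge Tracing update for a rule-based skill.

    p_know:   prior probability the student has mastered the rule
    violated: True if the (assumed) multimodal pipeline flagged a rule
              violation on this step, treated as an "incorrect" observation
    """
    if violated:
        # P(mastered | violation): a master can still slip
        num = p_know * p_slip
        den = num + (1.0 - p_know) * (1.0 - p_guess)
    else:
        # P(mastered | clean step): a non-master can still get lucky
        num = p_know * (1.0 - p_slip)
        den = num + (1.0 - p_know) * p_guess
    posterior = num / den
    # Learning transition: some chance the skill was acquired during the step
    return posterior + (1.0 - posterior) * p_learn

# Example: two breaks of "keep the sterile field dry", then three clean steps.
p = 0.30  # hypothetical prior mastery
for violated in (True, True, False, False, False):
    p = bkt_update(p, violated)
    print(f"violation={violated!s:5}  P(mastered)={p:.2f}")
```

A knowledge-based extension would replace the binary violation flag with context-dependent evidence extracted from the robot-staged scenario, which is exactly where existing Knowledge Tracing methods fall short.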

© 2023-2025 by Peizhu Qian.


Visit

Philip Guthrie Hoffman Hall (PGH) 550 A

University of Houston

