EngageSense: a hybrid approach for real time engagement detection for virtual classrooms

Irfan, Muhammad, Patel, Preeti and Hassan, Bilal (2025) EngageSense: a hybrid approach for real time engagement detection for virtual classrooms. In: IEEE EDUCON 2025 - 16th Global Engineering Education Conference, 22nd April - 25th April 2025, Queen Mary University of London. (Unpublished)

Abstract

Advancements in digital education have revolutionized traditional learning environments, driving the widespread adoption of virtual and hybrid classrooms. Engagement, a vital factor for effective learning, necessitates continuous monitoring and assessment to optimize outcomes. This study introduces EngageSense, a hybrid real-time engagement detection system leveraging facial biometrics, computer vision, and deep learning. First, a new dataset is created from user eye images captured with a laptop webcam. Then, Dlib’s HOG + Linear SVM face detector, a CNN trained on the resulting dataset of 4,453 eye images (classified into left, right, and center gaze directions), and OpenPose MobileNetV1 body pose estimation are used. By fusing gaze direction (99.50% accuracy) and pose features, EngageSense classifies engagement into three levels: fully engaged, partially engaged, and not engaged, with an overall accuracy of 90%. By providing actionable real-time insights, EngageSense empowers educators to foster meaningful interactions and enhance learning experiences in virtual environments.
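
The abstract outlines a gaze-estimation stage built from Dlib's HOG + Linear SVM face detection followed by a CNN over cropped eye images. The Python sketch below illustrates how such a stage could be wired together; it is not the authors' implementation. The 68-point landmark model file, the eye-crop padding, the 64x64 grayscale CNN input size, the Keras-style predict call, and the class ordering are all illustrative assumptions rather than details taken from the paper.

import cv2
import dlib
import numpy as np

GAZE_CLASSES = ["left", "center", "right"]  # assumed label order

# Dlib's default frontal face detector is the HOG + Linear SVM detector named in the abstract.
detector = dlib.get_frontal_face_detector()
# The landmark model file name is an assumption; any 68-point predictor works for eye cropping.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def crop_eye(gray, shape, idx_range):
    """Crop a padded bounding box around one eye from its landmark points."""
    pts = np.array([(shape.part(i).x, shape.part(i).y) for i in idx_range], dtype=np.int32)
    x, y, w, h = cv2.boundingRect(pts)
    pad = int(0.4 * h)
    return gray[max(y - pad, 0):y + h + pad, max(x - pad, 0):x + w + pad]

def classify_gaze(frame_bgr, gaze_cnn):
    """Return a gaze label (left/center/right) for the first detected face, or None."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector(gray, 0)
    if not faces:
        return None
    shape = predictor(gray, faces[0])
    eye = crop_eye(gray, shape, range(36, 42))                    # landmarks 36-41: one eye region
    eye = cv2.resize(eye, (64, 64)).astype("float32") / 255.0    # assumed CNN input size
    probs = gaze_cnn.predict(eye[None, ..., None], verbose=0)[0]  # assumed Keras-style gaze CNN
    return GAZE_CLASSES[int(np.argmax(probs))]

In the full system described above, a label produced per frame in this way would then be fused with OpenPose MobileNetV1 pose features to assign one of the three engagement levels.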

Documents
EDUCON 2025 EngageSense.pdf - Accepted Version
Available under License Creative Commons Attribution 4.0.

Download (1MB)