Eye-Guard Research Documentation

Conference-ready technical summary covering methodology, novelty, evaluation, and deployment architecture.

1. Problem Statement

Digital eye strain is increasingly prevalent among students, developers, clinicians, and remote workers. Existing tools are mostly static timer-based reminders that ignore individualized physiological responses and temporal fatigue dynamics.

2. Methodology

Eye-Guard captures webcam video, extracts ocular biomarkers (eye aspect ratio (EAR), blink rate, eye-closure duration, and gaze variance), and computes fatigue through both a weighted physiological score and temporal deep-learning models. Adaptive baseline calibration is performed per user.
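
The biomarkers above can be sketched as follows. The EAR formula is the standard one from Soukupová & Čech (vertical landmark distances over the horizontal distance); the weighted score, including its weights and normalization constants, is an illustrative assumption, not the calibrated production formula.

```python
import numpy as np

def eye_aspect_ratio(eye):
    # EAR = (||p2-p6|| + ||p3-p5||) / (2 * ||p1-p4||), from six eye landmarks
    p1, p2, p3, p4, p5, p6 = np.asarray(eye, dtype=float)
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = np.linalg.norm(p1 - p4)
    return vertical / (2.0 * horizontal)

def fatigue_score(ear, blink_rate, closure_dur, gaze_var,
                  weights=(0.4, 0.2, 0.25, 0.15)):
    # Weighted physiological score; weights and normalizers are illustrative.
    # Each term is mapped to [0, 1], where 1 means "more fatigued".
    w_ear, w_blink, w_close, w_gaze = weights
    ear_term = np.clip((0.30 - ear) / 0.30, 0, 1)   # low EAR -> droopy eyes
    blink_term = np.clip(blink_rate / 30.0, 0, 1)   # blinks/min vs. assumed max
    close_term = np.clip(closure_dur / 0.5, 0, 1)   # seconds per closure
    gaze_term = np.clip(gaze_var / 0.05, 0, 1)      # gaze-position variance
    return (w_ear * ear_term + w_blink * blink_term
            + w_close * close_term + w_gaze * gaze_term)
```

In practice the landmark points would come from a face-mesh detector; the score is then fed to the temporal models alongside the raw feature stream.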

3. ML Pipeline

The pipeline fuses four public datasets with real-world JisTech cohort sessions. Features are engineered in sliding windows, labels are generated through hybrid supervision (model + self-report), and an LSTM/GRU sequence classifier predicts fatigue levels (Normal/Mild/Moderate/Severe).
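
A minimal sketch of the windowing and hybrid-supervision steps, assuming a per-frame feature matrix of shape (frames, features); the window length, stride, and confidence threshold are placeholder values, and the classifier itself (LSTM/GRU) is omitted.

```python
import numpy as np

FATIGUE_CLASSES = ["Normal", "Mild", "Moderate", "Severe"]

def make_windows(features, window=30, stride=10):
    # Slice a (T, F) per-frame feature stream into overlapping
    # (N, window, F) sequences for the LSTM/GRU classifier.
    T = len(features)
    starts = range(0, T - window + 1, stride)
    return np.stack([features[i:i + window] for i in starts])

def hybrid_label(model_probs, self_report, conf_threshold=0.8):
    # Hybrid supervision: keep the model's label when it is confident,
    # otherwise fall back to the participant's self-report.
    pred = int(np.argmax(model_probs))
    if model_probs[pred] >= conf_threshold:
        return FATIGUE_CLASSES[pred]
    return self_report
```

The resulting (N, window, F) tensors and hybrid labels form the training pairs for the sequence classifier.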

4. Dataset Strategy

We aggregate MRL Eye, Blink Detection, MRL+CEW, and Open/Closed datasets, then standardize metadata and class mappings. New participant sessions are appended as telemetry records, enabling continual learning and cross-domain robustness checks.
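
The class-mapping step can be sketched as a lookup from each dataset's native label vocabulary to one unified eye-state schema. The per-dataset label strings below are hypothetical stand-ins, since each corpus ships its own naming convention.

```python
# Hypothetical per-dataset label vocabularies, unified into one schema.
CLASS_MAPS = {
    "mrl_eye":     {"open": "open", "closed": "closed"},
    "cew":         {"openeyes": "open", "closedeyes": "closed"},
    "blink":       {"no_blink": "open", "blink": "closed"},
    "open_closed": {"open": "open", "close": "closed"},
}

def standardize_label(dataset, raw_label):
    # Map a dataset-specific label string onto the unified schema.
    return CLASS_MAPS[dataset][raw_label.strip().lower()]

def telemetry_record(user_id, dataset, raw_label, ear):
    # New participant sessions are appended in the same standardized form,
    # so continual-learning runs see one consistent label space.
    return {"user": user_id, "source": dataset,
            "label": standardize_label(dataset, raw_label), "ear": ear}
```

Keeping the mapping in one table makes it easy to audit which raw labels each corpus contributes to a unified class.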

5. Novelty Contribution

First, personalized baseline adaptation reduces population-bias error. Second, temporal modeling captures fatigue progression that frame-level methods miss. Third, hybrid labeling combines model confidence with human self-report to construct richer ground truth.
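
The first contribution, personalized baseline adaptation, can be sketched as a running per-user estimate of feature statistics (Welford's online algorithm here), so each new measurement is scored relative to the individual rather than to population norms. The class name and z-score framing are illustrative, not the exact production design.

```python
class BaselineCalibrator:
    # Running per-user baseline via Welford's online mean/variance.
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0

    def update(self, x):
        # Fold one calibration-phase measurement into the baseline.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def zscore(self, x):
        # Score a new measurement relative to this user's own baseline.
        std = (self.m2 / (self.n - 1)) ** 0.5 if self.n > 1 else 1.0
        return (x - self.mean) / (std or 1.0)
```

A user with naturally narrow eyes (low resting EAR) then produces near-zero z-scores when alert, instead of being misread as fatigued by a population threshold.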

6. Results

Evaluation includes confusion matrix, precision, recall, F1-score, and 5-fold cross-validation. We benchmark against EAR-threshold heuristics and CNN baseline models, and compare improvements with published ocular-fatigue studies.
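
The core metrics above can be computed from a single confusion matrix; a minimal sketch for the four fatigue classes (in practice a library such as scikit-learn would report the same quantities):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes=4):
    # cm[t, p] counts samples of true class t predicted as class p.
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def per_class_prf(cm):
    # Precision = TP / column sum, recall = TP / row sum, F1 = harmonic mean.
    tp = np.diag(cm).astype(float)
    precision = tp / np.maximum(cm.sum(axis=0), 1)
    recall = tp / np.maximum(cm.sum(axis=1), 1)
    f1 = np.where(precision + recall > 0,
                  2 * precision * recall / np.maximum(precision + recall, 1e-12),
                  0.0)
    return precision, recall, f1
```

For 5-fold cross-validation, the same metrics are computed per fold and averaged; macro-averaging the per-class scores guards against the Normal class dominating.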

7. Future Work

Next phases include multimodal fusion (posture, ambient light), uncertainty-aware calibration, federated personalization, and prospective clinical validation for occupational health deployment.