# Project Report: Multimodal Driver State Analysis

## 1) Project Scope

This repository implements an end-to-end workflow for multimodal driver-state analysis in a simulator setup.

The system combines:

- Facial Action Units (AUs)
- Eye-tracking features (fixations, saccades, blinks, pupil behavior)

It covers:

- Data ingestion and conversion
- Sliding-window feature generation
- Exploratory analysis
- Model training experiments
- Real-time inference from SQLite
- MQTT publishing
- Optional Linux `systemd` scheduling

## 2) End-to-End Workflow

### 2.1 Data Ingestion and Conversion

Main scripts:

- `dataset_creation/create_parquet_files_from_owncloud.py`
- `dataset_creation/parquet_file_creation.py`

Purpose:

- Read source recordings (`.h5` and/or ownCloud-fetched files)
- Keep relevant simulator/physiology columns
- Filter invalid samples (e.g., invalid level segments)
- Export subject-level parquet files

### 2.2 Feature Engineering (Offline)

Main script:

- `dataset_creation/combined_feature_creation.py`

Behavior:

- Builds fixed-size sliding windows over subject time series
- Aggregates AU statistics per window (e.g., `FACE_AUxx_mean`)
- Computes eye-feature aggregates (fixation/saccade/blink/pupil metrics)
- Produces training-ready feature tables

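
The windowing idea can be sketched as follows. This is a minimal illustration, not the script's actual implementation: window size, step, and the chosen statistics are assumptions, and only the `FACE_AUxx_mean` naming pattern comes from the repository.

```python
import numpy as np
import pandas as pd

def sliding_window_features(df, window=100, step=25, au_cols=None):
    """Aggregate per-window statistics over a subject time series.

    `window` and `step` are in samples; the real script's values may differ.
    """
    au_cols = au_cols or [c for c in df.columns if c.startswith("FACE_AU")]
    rows = []
    for start in range(0, len(df) - window + 1, step):
        win = df.iloc[start:start + window]
        row = {"start_index": start}
        for c in au_cols:
            row[f"{c}_mean"] = win[c].mean()  # e.g. FACE_AU01_mean
            row[f"{c}_std"] = win[c].std()
        rows.append(row)
    return pd.DataFrame(rows)

# Tiny synthetic example: 300 samples, two AU channels
df = pd.DataFrame({"FACE_AU01": np.random.rand(300),
                   "FACE_AU12": np.random.rand(300)})
features = sliding_window_features(df, window=100, step=50)
```

With 300 samples, a 100-sample window, and a 50-sample step, this produces five overlapping windows.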
### 2.3 Online Camera + Eye + AU Feature Extraction

Main scripts:

- `dataset_creation/camera_handling/camera_stream_AU_and_ET_new.py`
- `dataset_creation/camera_handling/eyeFeature_new.py`
- `dataset_creation/camera_handling/db_helper.py`

Runtime behavior:

- Captures the webcam stream with OpenCV
- Extracts gaze/iris-based signals via MediaPipe
- Records overlapping windows (`VIDEO_DURATION=50s`, `START_INTERVAL=5s`, `FPS=25`)
- Runs AU extraction (`py-feat`) on recorded video segments
- Computes eye-feature summaries from the generated gaze parquet
- Writes merged rows to the SQLite table `feature_table`

Operational note:

- `DB_PATH` and other paths are currently configured in code and must be adapted per deployment.

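
The final step above, writing a merged row into SQLite, might look roughly like this. Only the table name `feature_table` is taken from the pipeline; the schema and column names are illustrative, and the real helper lives in `db_helper.py`.

```python
import sqlite3

def write_feature_row(con, row):
    """Insert one merged AU + eye-feature row into `feature_table`.

    Creates the table on first use with REAL columns derived from the
    row's keys (illustrative schema, not the repository's actual one).
    """
    cols = ", ".join(row)
    placeholders = ", ".join("?" for _ in row)
    con.execute(
        "CREATE TABLE IF NOT EXISTS feature_table "
        f"(_id INTEGER PRIMARY KEY AUTOINCREMENT, "
        f"{', '.join(f'{c} REAL' for c in row)})"
    )
    cur = con.execute(
        f"INSERT INTO feature_table ({cols}) VALUES ({placeholders})",
        tuple(row.values()),
    )
    return cur.lastrowid

# Demo with an in-memory database; the real pipeline uses a file at DB_PATH
con = sqlite3.connect(":memory:")
rowid = write_feature_row(con, {"start_time": 0.0, "FACE_AU01_mean": 0.42})
```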
### 2.4 Model Training

Location:

- `model_training/` (primarily notebook-driven)

Included model families:

- CNN variants (different fusion strategies)
- XGBoost
- Isolation Forest
- OCSVM
- DeepSVDD

Supporting utilities:

- `model_training/tools/scaler.py`
- `model_training/tools/performance_split.py`
- `model_training/tools/mad_outlier_removal.py`
- `model_training/tools/evaluation_tools.py`

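
As an example of the shared utilities, MAD-based outlier removal (cf. `mad_outlier_removal.py`) typically uses the modified z-score. The sketch below is a generic version of that technique; the threshold value and function name are assumptions, not the repository's actual code.

```python
import numpy as np

def mad_outlier_mask(x, thresh=3.5):
    """Return True for values whose modified z-score exceeds `thresh`.

    Uses the median absolute deviation (MAD); the 0.6745 factor scales
    the MAD to be comparable to a standard deviation under normality.
    """
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med))
    if mad == 0:
        # All deviations identical: nothing can be flagged robustly
        return np.zeros(len(x), dtype=bool)
    modified_z = 0.6745 * (x - med) / mad
    return np.abs(modified_z) > thresh

# The extreme value 10.0 is flagged; the rest pass
x = np.array([1.0, 1.1, 0.9, 1.05, 10.0])
mask = mad_outlier_mask(x)
```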
### 2.5 Real-Time Prediction and Messaging

Main script:

- `predict_pipeline/predict_sample.py`

Pipeline:

- Loads runtime config (`predict_pipeline/config.yaml`)
- Pulls the latest row from SQLite (`database.path/table/key`)
- Replaces missing values using the `fallback` map
- Optionally applies a scaler (`.pkl`/`.joblib`)
- Loads the model (`.keras`, `.pkl`, `.joblib`) and predicts
- Publishes a JSON payload to the MQTT topic

Expected payload form:

```json
{
  "valid": true,
  "_id": 123,
  "prediction": 0
}
```

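
The fallback-fill, predict, and payload steps can be condensed into a sketch. `build_payload` and its arguments are illustrative names, not the actual functions in `predict_sample.py`; the model is stood in by a plain callable.

```python
import json

def build_payload(row, feature_order, fallback, model):
    """Fill missing values from the fallback map, predict, and build
    the JSON payload in the expected form shown above."""
    features = []
    valid = True
    for col in feature_order:
        val = row.get(col)
        if val is None:                 # missing value -> fallback default
            val = fallback.get(col)
            valid = valid and val is not None
        features.append(val)
    prediction = model([features])[0]
    return json.dumps({"valid": valid, "_id": row["_id"], "prediction": prediction})

# Toy stand-in for a loaded model's predict function
payload = build_payload(
    row={"_id": 123, "f1": 0.5, "f2": None},
    feature_order=["f1", "f2"],
    fallback={"f2": 0.0},
    model=lambda X: [0],
)
```

The resulting string would then be passed to an MQTT publish call (e.g. via `paho-mqtt`).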
### 2.6 Scheduled Prediction (Linux)

Files:

- `predict_pipeline/predict.service`
- `predict_pipeline/predict.timer`
- `predict_pipeline/predict_service_timer_documentation.md`

Role:

- Run inference repeatedly without manual execution
- Timer/service configuration can be customized per target machine

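
A minimal service/timer pair might look like the fragment below. This is an illustrative shape only: the interval, description strings, and the `ExecStart` path are placeholders, and the repository's actual units (see `predict_service_timer_documentation.md`) may differ.

```ini
# predict.service (illustrative)
[Unit]
Description=Driver-state prediction run

[Service]
Type=oneshot
ExecStart=/usr/bin/python3 /opt/project/predict_pipeline/predict_sample.py

# predict.timer (illustrative)
[Unit]
Description=Run driver-state prediction periodically

[Timer]
OnBootSec=1min
OnUnitActiveSec=30s
Unit=predict.service

[Install]
WantedBy=timers.target
```

After copying the units to `/etc/systemd/system/`, they would be activated with `systemctl enable --now predict.timer`.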
## 3) Runtime Configuration

Primary config file:

- `predict_pipeline/config.yaml`

Sections:

- `database`: SQLite location + table + sort key
- `model`: model path
- `scaler`: scaler usage + path
- `mqtt`: broker and publish format
- `sample.columns`: expected feature order
- `fallback`: default values for NaN replacement

Important:

- The repository currently uses environment-specific absolute paths in some scripts/configs.
- Paths should be normalized before deployment to a new machine.

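
An illustrative shape for `config.yaml`, based on the sections listed above. The top-level section names mirror the list; the nested key names (`use`, `broker`, `topic`, etc.) and all values are guesses, so consult the actual file for the real keys.

```yaml
# Illustrative structure only; values are placeholders.
database:
  path: /path/to/features.db
  table: feature_table
  key: _id                    # sort key used to fetch the latest row
model:
  path: /path/to/model.keras
scaler:
  use: true
  path: /path/to/scaler.pkl
mqtt:
  broker: localhost
  port: 1883
  topic: driver/state
sample:
  columns: [FACE_AU01_mean, fixation_count]   # expected feature order
fallback:
  FACE_AU01_mean: 0.0         # default used when the value is NaN
```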
## 4) Data and Feature Expectations

Prediction expects SQLite rows containing:

- `_Id`
- `start_time`
- All configured model features (AUs + eye metrics)

Common feature groups:

- `FACE_AUxx_mean` columns
- Fixation counters and duration statistics
- Saccade count/amplitude/duration statistics
- Blink count/duration statistics
- Pupil mean and IPA

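
Fetching the most recent row by the configured sort key can be sketched with the standard `sqlite3` module. The function name and the demo schema are illustrative; only `feature_table`, `_Id`, and `start_time` come from this report.

```python
import os
import sqlite3
import tempfile

def latest_row(db_path, table="feature_table", key="_Id"):
    """Return the most recent row as a dict, ordered by the sort key."""
    con = sqlite3.connect(db_path)
    con.row_factory = sqlite3.Row  # rows accessible by column name
    row = con.execute(
        f"SELECT * FROM {table} ORDER BY {key} DESC LIMIT 1"
    ).fetchone()
    con.close()
    return dict(row) if row else None

# Demo with a throwaway database file
path = os.path.join(tempfile.mkdtemp(), "demo.db")
con = sqlite3.connect(path)
con.execute("CREATE TABLE feature_table (_Id INTEGER, start_time REAL, FACE_AU01_mean REAL)")
con.executemany("INSERT INTO feature_table VALUES (?, ?, ?)",
                [(1, 0.0, 0.1), (2, 5.0, 0.2)])
con.commit()
con.close()
row = latest_row(path)
```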
## 5) Installation and Dependencies

Install base requirements:

```bash
pip install -r requirements.txt
```

Typical key packages in this project:

- `numpy`, `pandas`, `scikit-learn`, `scipy`, `pyarrow`, `pyyaml`, `joblib`
- `opencv-python`, `mediapipe`, `torch`, `py-feat`, `pygazeanalyser`
- `paho-mqtt`
- Optional data-access stack (`pyocclient`, `h5py`, `tables`)

## 6) Repository File Inventory

### 6.1 Root

- `.gitignore` - Git ignore rules
- `readme.md` - minimal quickstart documentation
- `project_report.md` - full technical documentation (this file)
- `requirements.txt` - Python dependencies

### 6.2 Dataset Creation

- `dataset_creation/parquet_file_creation.py` - local source-to-parquet conversion
- `dataset_creation/create_parquet_files_from_owncloud.py` - ownCloud download + parquet conversion
- `dataset_creation/combined_feature_creation.py` - sliding-window multimodal feature generation
- `dataset_creation/maxDist.py` - helper/statistical utility script

#### AU Creation

- `dataset_creation/AU_creation/AU_creation_service.py` - AU extraction service workflow
- `dataset_creation/AU_creation/pyfeat_docu.ipynb` - py-feat exploratory notes

#### Camera Handling

- `dataset_creation/camera_handling/camera_stream_AU_and_ET_new.py` - current camera + AU + eye online pipeline
- `dataset_creation/camera_handling/eyeFeature_new.py` - eye-feature extraction from gaze parquet
- `dataset_creation/camera_handling/db_helper.py` - SQLite helper functions (camera pipeline)
- `dataset_creation/camera_handling/camera_stream_AU_and_ET.py` - older pipeline variant
- `dataset_creation/camera_handling/camera_stream.py` - baseline camera streaming script
- `dataset_creation/camera_handling/db_test.py` - DB test utility

### 6.3 EDA

- `EDA/EDA.ipynb` - main EDA notebook
- `EDA/distribution_plots.ipynb` - distribution visualization
- `EDA/histogramms.ipynb` - histogram analysis
- `EDA/researchOnSubjectPerformance.ipynb` - subject-level analysis
- `EDA/owncloud_file_access.ipynb` - ownCloud exploration/access notebook
- `EDA/calculate_replacement_values.ipynb` - fallback/median computation notebook
- `EDA/login.yaml` - local auth/config artifact for EDA workflows

### 6.4 Model Training

#### CNN

- `model_training/CNN/CNN_simple.ipynb`
- `model_training/CNN/CNN_crossVal.ipynb`
- `model_training/CNN/CNN_crossVal_EarlyFusion.ipynb`
- `model_training/CNN/CNN_crossVal_EarlyFusion_Filter.ipynb`
- `model_training/CNN/CNN_crossVal_EarlyFusion_Test_Eval.ipynb`
- `model_training/CNN/CNN_crossVal_faceAUs.ipynb`
- `model_training/CNN/CNN_crossVal_faceAUs_eyeFeatures.ipynb`
- `model_training/CNN/CNN_crossVal_HybridFusion.ipynb`
- `model_training/CNN/CNN_crossVal_HybridFusion_Test_Eval.ipynb`
- `model_training/CNN/deployment_pipeline.ipynb`

#### XGBoost

- `model_training/xgboost/xgboost.ipynb`
- `model_training/xgboost/xgboost_groupfold.ipynb`
- `model_training/xgboost/xgboost_new_dataset.ipynb`
- `model_training/xgboost/xgboost_regulated.ipynb`
- `model_training/xgboost/xgboost_with_AE.ipynb`
- `model_training/xgboost/xgboost_with_MAD.ipynb`

#### Isolation Forest

- `model_training/IsolationForest/iforest_training.ipynb`

#### OCSVM

- `model_training/OCSVM/ocsvm_with_AE.ipynb`

#### DeepSVDD

- `model_training/DeepSVDD/deepSVDD.ipynb`

#### MAD Outlier Removal

- `model_training/MAD_outlier_removal/mad_outlier_removal.ipynb`
- `model_training/MAD_outlier_removal/mad_outlier_removal_median.ipynb`

#### Shared Training Tools

- `model_training/tools/scaler.py`
- `model_training/tools/performance_split.py`
- `model_training/tools/mad_outlier_removal.py`
- `model_training/tools/evaluation_tools.py`

### 6.5 Prediction Pipeline

- `predict_pipeline/predict_sample.py` - runtime prediction + MQTT publish
- `predict_pipeline/config.yaml` - runtime database/model/scaler/mqtt config
- `predict_pipeline/fill_db.ipynb` - helper notebook for DB setup/testing
- `predict_pipeline/predict.service` - systemd service unit
- `predict_pipeline/predict.timer` - systemd timer unit
- `predict_pipeline/predict_service_timer_documentation.md` - Linux service/timer guide

### 6.6 Generic Tools

- `tools/db_helpers.py` - common SQLite utilities used by the prediction path

## 7) Known Technical Notes

- Several paths are hardcoded for a specific runtime environment and should be parameterized for portability.
- Camera and AU processing are resource-intensive; version pinning and hardware validation are recommended.
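
One common way to remove the hardcoded paths is to read them from the environment with a local default. The variable names below (`DRIVER_STATE_DB`, `DRIVER_STATE_CONFIG`) are suggestions only; the repository does not define them yet.

```python
import os
from pathlib import Path

# Hypothetical environment variables; defaults fall back to repo-local paths.
DB_PATH = Path(os.environ.get("DRIVER_STATE_DB", "features.db"))
CONFIG_PATH = Path(os.environ.get("DRIVER_STATE_CONFIG",
                                  "predict_pipeline/config.yaml"))
```

Scripts would then import these constants instead of embedding machine-specific absolute paths.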