
Project Report: Multimodal Driver State Analysis

1) Project Scope

This repository implements an end-to-end workflow for multimodal driver-state analysis in a simulator setup.
The system combines:

  • Facial Action Units (AUs)
  • Eye-tracking features (fixations, saccades, blinks, pupil behavior)

In addition, several machine-learning model architectures are presented and evaluated.

Contents:

  • Dataset generation
  • Exploratory data analysis
  • Model training experiments
  • Real-time inference with SQLite, systemd and MQTT
  • Repository file inventory
  • Additional information

2) Dataset generation

2.1 Data Access, Filtering, and Data Conversion

Main scripts:

  • dataset_creation/create_parquet_files_from_owncloud.py
  • dataset_creation/parquet_file_creation.py

Purpose:

  • Download and/or access dataset files (either download them first via EDA/owncloud_file_access.ipynb, or do everything in one step with dataset_creation/create_parquet_files_from_owncloud.py)
  • Keep relevant columns (FACE_AUs and eye-tracking raw values)
  • Filter invalid samples (e.g., invalid level segments). Make sure not to drop rows whose NaN values are needed for later feature creation; use the subset argument of dropna()!
  • Export subject-level parquet files
  • Before running the scripts, be aware that the full dataset consists of 30 files of roughly 900 MB each; provide enough storage and expect the process to take a while.
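The subset-based filtering described above can be sketched as follows. This is a minimal illustration of the dropna(subset=...) idea, not the actual script; the column names (gaze_x, gaze_y, pupil_diameter, level) are placeholders for the real AdaBase schema:

```python
import pandas as pd

KEY_COLS = ["gaze_x", "gaze_y"]  # illustrative key columns

def filter_subject_frame(df: pd.DataFrame) -> pd.DataFrame:
    """Keep AU/eye columns, drop invalid level segments, and drop rows
    only when the *key* gaze columns are NaN. Other NaNs (e.g., pupil
    values during blinks) must survive for later feature creation,
    hence the subset argument instead of a blanket dropna()."""
    keep = [c for c in df.columns if c.startswith("FACE_AU")] + KEY_COLS + [
        "pupil_diameter", "level", "timestamp"
    ]
    df = df[[c for c in keep if c in df.columns]]
    df = df[df["level"] >= 0]            # remove invalid level segments
    return df.dropna(subset=KEY_COLS)    # NOT df.dropna()!

def export_parquet(df: pd.DataFrame, parquet_path: str) -> None:
    """Write a subject-level parquet file (requires pyarrow)."""
    filter_subject_frame(df).to_parquet(parquet_path)
```

Note how a row with a missing pupil value is kept, while rows with missing gaze coordinates or an invalid level are dropped.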

2.2 Feature Engineering (Offline)

Main script:

  • dataset_creation/combined_feature_creation.py

Behavior:

  • Builds fixed-size sliding windows over subject time series (window size and step size can be adjusted)
  • Uses prepared parquet files from 2.1
  • Aggregates AU statistics per window (e.g., FACE_AUxx_mean)
  • Computes eye-feature aggregates (fix/sacc/blink/pupil metrics)
  • Produces training-ready feature tables = dataset
  • The parameter MIN_DUR_BLINKS can be adjusted, but its value must make sense in combination with your sampling frequency
  • With low video-stream rates, consider reevaluating the meaningfulness of some eye-tracking features, especially the fixations
  • Running the script requires a manual installation of the PyGazeAnalyser library from GitHub
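The sliding-window aggregation can be sketched roughly like this. The window and step sizes are placeholders to be adjusted to the sampling rate, and only the AU mean aggregation is shown; the real script additionally computes fixation/saccade/blink/pupil metrics:

```python
import numpy as np
import pandas as pd

WINDOW_SIZE = 500  # samples per window (illustrative, adjust to sampling rate)
STEP_SIZE = 250    # 50% overlap between consecutive windows (illustrative)

def window_features(df: pd.DataFrame, au_cols: list[str]) -> pd.DataFrame:
    """Aggregate AU statistics over fixed-size sliding windows,
    producing one training-ready row per window."""
    rows = []
    for start in range(0, len(df) - WINDOW_SIZE + 1, STEP_SIZE):
        win = df.iloc[start:start + WINDOW_SIZE]
        feats = {f"{c}_mean": win[c].mean() for c in au_cols}
        feats["start_index"] = start  # window position in the time series
        rows.append(feats)
    return pd.DataFrame(rows)
```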

2.3 Online Camera + Eye + AU Feature Extraction

Main scripts:

  • dataset_creation/camera_handling/camera_stream_AU_and_ET_new.py
  • dataset_creation/camera_handling/eyeFeature_new.py
  • dataset_creation/camera_handling/db_helper.py

Runtime behavior:

  • Captures webcam stream with OpenCV
  • Extracts gaze/iris-based signals via MediaPipe
  • Records overlapping windows (VIDEO_DURATION=50s, START_INTERVAL=5s, FPS=25)
  • Runs AU extraction (py-feat) from recorded video segments
  • Computes eye-feature summary from generated gaze parquet
  • Writes merged rows to SQLite table feature_table

Operational note:

  • DB_PATH and other paths are currently code-configured and must be adapted per deployment.
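Writing merged feature rows into the SQLite feature_table might look roughly like this. The schema and column names are illustrative and do not reproduce the actual db_helper.py API:

```python
import sqlite3

def init_table(conn: sqlite3.Connection) -> None:
    """Create a minimal feature_table (illustrative schema)."""
    conn.execute(
        """CREATE TABLE IF NOT EXISTS feature_table (
               _id INTEGER PRIMARY KEY AUTOINCREMENT,
               start_time REAL,
               FACE_AU01_mean REAL,
               fixation_count REAL
           )"""
    )
    conn.commit()

def insert_features(conn: sqlite3.Connection,
                    start_time: float, feats: dict) -> int:
    """Insert one merged AU + eye-feature row, return its row id."""
    cols = ", ".join(feats)
    placeholders = ", ".join("?" for _ in feats)
    cur = conn.execute(
        f"INSERT INTO feature_table (start_time, {cols}) "
        f"VALUES (?, {placeholders})",
        [start_time, *feats.values()],
    )
    conn.commit()
    return cur.lastrowid
```

The prediction pipeline later reads the newest of these rows back, ordered by the primary key.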

3) EDA

The EDA directory provides several files for gaining insight into both the raw AdaBase data and your own dataset.

  • EDA.ipynb - main EDA notebook: recreates the plot from the AdaBase documentation, lists all experiments, and generally serves as a playground for getting to know the files.
  • distribution_plots.ipynb - visualizes the data distributions for each experiment; the goal is to find out whether the split of experiments into high and low cognitive load becomes clearer if some experiments are dropped.
  • histogramms.ipynb - histogram analysis of low-load vs. high-load samples per feature. Additionally, scatter plots per feature are available.
  • researchOnSubjectPerformance.ipynb - examines how the performance values range across the 30 subjects. The code creates and saves a table in CSV format, which later serves as the foundation of the performance-based split in model_training/tools/performance_split.py
  • owncloud_file_access.ipynb - accesses the files via ownCloud and saves them as .h5 files, corresponding to the parquet file creation script
  • login.yaml - stores the URL and password for accessing files from ownCloud; used in the previous notebook
  • calculate_replacement_values.ipynb - fallback/median computation notebook for deployment; creates the YAML syntax for embedding

General information:

  • Due to their size, it is strongly recommended to download and save the dataset files once at the beginning
  • For better data understanding, read the AdaBase publication

4) Model Training

Included model families:

  • CNN variants (different fusion strategies)
  • XGBoost
  • Isolation Forest*
  • OCSVM*
  • DeepSVDD*

* These models are trained unsupervised, meaning only low-cognitive-load samples are used for training. Validation then also considers high-load samples.

Supporting utilities in model_training/tools:

  • scaler.py: functions to fit, transform, save, and load either a MinMaxScaler or a StandardScaler, subject-wise and globally; for new subjects, a fallback scaler (using the mean of all subjects' scaling parameters) is used
  • performance_split.py: provides a function to split a group of subjects based on their performance in the AdaBase experiments, using the results created in researchOnSubjectPerformance.ipynb
  • mad_outlier_removal.py: functions to fit and transform data with MAD-based outlier removal
  • evaluation_tools.py: functions for ROC curves and confusion matrices, used especially for the Isolation Forest
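The fallback-scaler idea in scaler.py can be sketched as follows. This is a minimal reconstruction of the concept (averaging the per-subject scaling parameters for unseen subjects), not the module's actual API:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

def fit_subject_scalers(frames: dict) -> dict:
    """Fit one StandardScaler per subject id (subject-wise scaling)."""
    return {sid: StandardScaler().fit(X) for sid, X in frames.items()}

def fallback_scaler(scalers: dict) -> StandardScaler:
    """Build a scaler for new/unseen subjects from the mean of all
    per-subject scaling parameters (illustrative reconstruction)."""
    fb = StandardScaler()
    fb.mean_ = np.mean([s.mean_ for s in scalers.values()], axis=0)
    fb.scale_ = np.mean([s.scale_ for s in scalers.values()], axis=0)
    fb.var_ = fb.scale_ ** 2
    fb.n_features_in_ = next(iter(scalers.values())).n_features_in_
    return fb
```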

4.1 CNNs

4.2 XGBoost

4.3 Isolation Forest

4.4 OCSVM

4.5 DeepSVDD

5) Real-Time Prediction and Messaging

Main script:

  • predict_pipeline/predict_sample.py

Pipeline:

  • Loads runtime config (predict_pipeline/config.yaml)
  • Pulls latest row from SQLite (database.path/table/key)
  • Replaces missing values using fallback map
  • Optionally applies scaler (.pkl/.joblib)
  • Loads model (.keras, .pkl, .joblib) and predicts
  • Publishes JSON payload to MQTT topic

Expected payload form:

{
  "valid": true,
  "_id": 123,
  "prediction": 0
}
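The core of this pipeline can be sketched as a pure function plus I/O glue. The config keys and helper names below are illustrative, not the actual predict_sample.py code:

```python
def assemble_sample(row: dict, cfg: dict) -> list:
    """Order features as configured in sample.columns and replace
    missing (None) values from the fallback map. Real rows may also
    contain float NaN, which would need an extra check."""
    return [
        row[c] if row.get(c) is not None else cfg["fallback"][c]
        for c in cfg["sample"]["columns"]
    ]

def build_payload(row: dict, cfg: dict, model, scaler=None) -> dict:
    """Build the MQTT payload for one database row."""
    sample = [assemble_sample(row, cfg)]
    if scaler is not None:
        sample = scaler.transform(sample)
    return {"valid": True, "_id": row["_id"],
            "prediction": int(model.predict(sample)[0])}

# At runtime the surrounding script would load cfg via yaml.safe_load,
# pull the newest row with
#   SELECT * FROM <table> ORDER BY <key> DESC LIMIT 1,
# load the model/scaler via joblib or keras, and publish
# json.dumps(payload) to the MQTT topic with paho-mqtt.
```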

5.1 Scheduled Prediction (Linux)

Files:

  • predict_pipeline/predict.service
  • predict_pipeline/predict.timer
  • predict_pipeline/predict_service_timer_documentation.md

Role:

  • Run inference repeatedly without manual execution
  • Timer/service configuration can be customized per target machine
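A timer unit of this kind could look roughly as follows; the interval values are illustrative and should be adapted per target machine as described in predict_service_timer_documentation.md:

```ini
# predict.timer -- illustrative values, adapt per machine
[Unit]
Description=Periodic driver-state prediction

[Timer]
OnBootSec=30s          ; first run shortly after boot
OnUnitActiveSec=10s    ; then re-run predict.service periodically
Unit=predict.service

[Install]
WantedBy=timers.target
```

The timer is activated with `systemctl enable --now predict.timer`.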

5.2 Runtime Configuration

Primary config file:

  • predict_pipeline/config.yaml

Sections:

  • database: SQLite location + table + sort key
  • model: model path
  • scaler: scaler usage + path
  • mqtt: broker and publish format
  • sample.columns: expected feature order
  • fallback: default values for NaN replacement
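A skeleton of such a config could look like the sketch below; the key names follow the sections listed above, but the exact keys and values in the repository's config.yaml may differ:

```yaml
# Illustrative config.yaml skeleton -- all paths/values are placeholders
database:
  path: /path/to/features.db
  table: feature_table
  key: _id            # sort key used to fetch the newest row
model:
  path: /path/to/model.keras
scaler:
  use: true
  path: /path/to/scaler.joblib
mqtt:
  broker: localhost
  topic: driver_state/prediction
sample:
  columns: [FACE_AU01_mean, fixation_count, blink_count]
fallback:
  FACE_AU01_mean: 0.12   # defaults used for NaN replacement
  fixation_count: 8.0
```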

Important:

  • The repository currently uses environment-specific absolute paths in some scripts/configs.
  • Paths should be normalized before deployment to a new machine.

5.3 Data and Feature Expectations

Prediction expects SQLite rows containing:

  • _id
  • start_time
  • All configured model features (AUs + eye metrics)

Common feature groups:

  • FACE_AUxx_mean columns
  • Fixation counters and duration statistics
  • Saccade count/amplitude/duration statistics
  • Blink count/duration statistics
  • Pupil mean and IPA

6) Installation and Dependencies

Due to unresolvable dependency conflicts, several environments need to be used at the same time.

6.1 Environment for camera handling

TO DO

6.2 Environment for predictions

Install base requirements:

pip install -r requirements.txt

Typical key packages in this project:

  • numpy, pandas, scikit-learn, scipy, pyarrow, pyyaml, joblib
  • opencv-python, mediapipe, torch, py-feat, pygazeanalyser
  • paho-mqtt
  • optional data access stack (pyocclient, h5py, tables)

7) Repository File Inventory

Root

  • .gitignore - Git ignore rules
  • readme.md - minimal quickstart documentation
  • project_report.md - full technical documentation (this file)
  • requirements.txt - Python dependencies

Dataset Creation

  • dataset_creation/parquet_file_creation.py - local source to parquet conversion
  • dataset_creation/create_parquet_files_from_owncloud.py - ownCloud download + parquet conversion
  • dataset_creation/combined_feature_creation.py - sliding-window multimodal feature generation
  • dataset_creation/maxDist.py - helper/statistical utility script

AU Creation

  • dataset_creation/AU_creation/AU_creation_service.py - AU extraction service workflow
  • dataset_creation/AU_creation/pyfeat_docu.ipynb - py-feat exploratory notes

Camera Handling

  • dataset_creation/camera_handling/camera_stream_AU_and_ET_new.py - current camera + AU + eye online pipeline
  • dataset_creation/camera_handling/eyeFeature_new.py - eye-feature extraction from gaze parquet
  • dataset_creation/camera_handling/db_helper.py - SQLite helper functions (camera pipeline)
  • dataset_creation/camera_handling/camera_stream_AU_and_ET.py - older pipeline variant
  • dataset_creation/camera_handling/camera_stream.py - baseline camera streaming script
  • dataset_creation/camera_handling/db_test.py - DB test utility

EDA

  • EDA/EDA.ipynb - main EDA notebook
  • EDA/distribution_plots.ipynb - distribution visualization
  • EDA/histogramms.ipynb - histogram analysis
  • EDA/researchOnSubjectPerformance.ipynb - subject-level analysis
  • EDA/owncloud_file_access.ipynb - ownCloud exploration/access notebook
  • EDA/calculate_replacement_values.ipynb - fallback/median computation notebook
  • EDA/login.yaml - local auth/config artifact for EDA workflows

Model Training

CNN

  • model_training/CNN/CNN_simple.ipynb
  • model_training/CNN/CNN_crossVal.ipynb
  • model_training/CNN/CNN_crossVal_EarlyFusion.ipynb
  • model_training/CNN/CNN_crossVal_EarlyFusion_Filter.ipynb
  • model_training/CNN/CNN_crossVal_EarlyFusion_Test_Eval.ipynb
  • model_training/CNN/CNN_crossVal_faceAUs.ipynb
  • model_training/CNN/CNN_crossVal_faceAUs_eyeFeatures.ipynb
  • model_training/CNN/CNN_crossVal_HybridFusion.ipynb
  • model_training/CNN/CNN_crossVal_HybridFusion_Test_Eval.ipynb
  • model_training/CNN/deployment_pipeline.ipynb

XGBoost

  • model_training/xgboost/xgboost.ipynb
  • model_training/xgboost/xgboost_groupfold.ipynb
  • model_training/xgboost/xgboost_new_dataset.ipynb
  • model_training/xgboost/xgboost_regulated.ipynb
  • model_training/xgboost/xgboost_with_AE.ipynb
  • model_training/xgboost/xgboost_with_MAD.ipynb

Isolation Forest

  • model_training/IsolationForest/iforest_training.ipynb

OCSVM

  • model_training/OCSVM/ocsvm_with_AE.ipynb

DeepSVDD

  • model_training/DeepSVDD/deepSVDD.ipynb

MAD Outlier Removal

  • model_training/MAD_outlier_removal/mad_outlier_removal.ipynb
  • model_training/MAD_outlier_removal/mad_outlier_removal_median.ipynb

Shared Training Tools

  • model_training/tools/scaler.py
  • model_training/tools/performance_split.py
  • model_training/tools/mad_outlier_removal.py
  • model_training/tools/evaluation_tools.py

Prediction Pipeline

  • predict_pipeline/predict_sample.py - runtime prediction + MQTT publish
  • predict_pipeline/config.yaml - runtime database/model/scaler/mqtt config
  • predict_pipeline/fill_db.ipynb - helper notebook for DB setup/testing
  • predict_pipeline/predict.service - systemd service unit
  • predict_pipeline/predict.timer - systemd timer unit
  • predict_pipeline/predict_service_timer_documentation.md - Linux service/timer guide

Generic Tools

  • tools/db_helpers.py - common SQLite utilities used to get newest sample for prediction

8) Additional Information

  • Several paths are hardcoded on purpose to ensure compatibility with the Jetson board at the OHM-UX driving simulator.
  • Camera and AU processing are resource-intensive; version pinning and hardware validation are recommended.