
Multimodal Driver State Analysis

Short overview: this repository contains the data, feature, training, and inference pipeline for multimodal driver-state analysis using facial AUs and eye-tracking signals.

For full documentation, see project_report.md.

Quickstart

1) Setup

Activate the conda environment "":

    conda activate

Make sure another environment that satisfies prediction_env.yaml is available and matches predict_pipeline/predict.service. See predict_pipeline/predict_service_timer_documentation.md for details. To get an overview of all available conda environments on your device, run this command in an Anaconda Prompt terminal:

    conda info --envs

Optionally, create a new environment based on the YAML file:

    conda env create -f prediction_env.yaml

Ohm-UX driving simulator Jetson board only: the conda environment p310_FS_TF is used for predictions.

2) Camera AU + Eye Pipeline (camera_stream_AU_and_ET_new.py)

  1. Open dataset_creation/camera_handling/camera_stream_AU_and_ET_new.py and adjust:
  • DB_PATH
  • CAMERA_INDEX
  • OUTPUT_DIR (optional)
  2. Start camera capture and feature extraction:

         python dataset_creation/camera_handling/camera_stream_AU_and_ET_new.py

  3. Stop with q in the camera window.
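The capture script writes extracted features to the database at DB_PATH. The actual schema is defined inside the script; as a rough sketch, per-frame facial-AU and eye-tracking features could be stored with Python's built-in sqlite3 module like this (table and column names here are illustrative assumptions, not the repository's real schema):

```python
import sqlite3
import time

def store_features(db_path, au_values, gaze_x, gaze_y):
    """Append one frame's facial-AU and eye-tracking features to a SQLite DB.

    Table and column names are illustrative, not the repository's schema.
    """
    con = sqlite3.connect(db_path)
    con.execute(
        """CREATE TABLE IF NOT EXISTS features (
               timestamp REAL,   -- capture time (seconds since epoch)
               au_values TEXT,   -- comma-separated AU intensities
               gaze_x REAL,      -- normalized horizontal gaze position
               gaze_y REAL       -- normalized vertical gaze position
           )"""
    )
    con.execute(
        "INSERT INTO features VALUES (?, ?, ?, ?)",
        (time.time(), ",".join(f"{v:.3f}" for v in au_values), gaze_x, gaze_y),
    )
    con.commit()
    con.close()
```

A downstream prediction step can then select the most recent rows by the timestamp key.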

3) Predict Pipeline (predict_pipeline/predict_sample.py)

  1. Edit predict_pipeline/config.yaml and set:
  • database.path, database.table, database.key
  • model.path
  • scaler.path (if use_scaling: true)
  • MQTT settings under mqtt
  2. Run one prediction cycle:

         python predict_pipeline/predict_sample.py

  3. See predict_service_timer_documentation.md for how to use the service and timer for automation. On the Ohm-UX driving simulator's Jetson board, the service starts automatically at boot.
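The keys listed in step 1 could be laid out as in the sketch below. All paths and values are illustrative placeholders, and the exact nesting (e.g. whether use_scaling sits at the top level or under scaler) should be checked against the actual predict_pipeline/config.yaml:

```yaml
# Illustrative config.yaml layout -- adjust paths and values to your setup
database:
  path: /path/to/features.db      # SQLite database written by the camera pipeline
  table: features                 # table holding the extracted AU/ET features
  key: timestamp                  # column used to select the latest rows

model:
  path: /path/to/model.pkl        # trained driver-state model

scaler:
  path: /path/to/scaler.pkl       # only read when use_scaling is true
use_scaling: true

mqtt:
  host: localhost                 # broker the predictions are published to
  port: 1883
  topic: driver_state/prediction
```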