# Multimodal Driver State Analysis
Short overview: this repository contains the data, feature-extraction, training, and inference pipeline for multimodal driver-state analysis using facial Action Units (AUs) and eye-tracking signals.

For full documentation, see `project_report.md`.
## Quickstart

### 1) Setup
Activate the conda environment:

```
conda activate
```

Make sure another environment that satisfies `prediction_env.yaml` is available and matches the one referenced in `predict_pipeline/predict.service`; see `predict_pipeline/predict_service_timer_documentation.md`.

To get an overview of all conda environments available on your device, run this in an Anaconda Prompt:

```
conda info --envs
```

Optionally, create a new environment from the YAML file:

```
conda env create -f prediction_env.yaml
```

Ohm-UX driving simulator Jetson board only: the conda environment `p310_FS_TF` is used for predictions.
### 2) Camera AU + Eye Pipeline (`camera_stream_AU_and_ET_new.py`)
- Open `dataset_creation/camera_handling/camera_stream_AU_and_ET_new.py` and adjust:
  - `DB_PATH`
  - `CAMERA_INDEX`
  - `OUTPUT_DIR` (optional)
- Start camera capture and feature extraction:

  ```
  python dataset_creation/camera_handling/camera_stream_AU_and_ET_new.py
  ```

- Stop with `q` in the camera window.
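The script stores extracted features in the SQLite database at `DB_PATH`, which the predict pipeline later reads. As a minimal sketch of that storage step — the table name `features` and all column names here are assumptions for illustration, not taken from the repository:

```python
import sqlite3
import time

# Hypothetical path -- in the real script this is the DB_PATH constant
# you adjusted above, and the column set matches the extracted features.
DB_PATH = "driver_state.db"

def store_features(db_path, au_values, gaze_x, gaze_y):
    """Append one row of AU + eye-tracking features with a timestamp."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS features (
               ts REAL, au01 REAL, au02 REAL, gaze_x REAL, gaze_y REAL)"""
    )
    conn.execute(
        "INSERT INTO features VALUES (?, ?, ?, ?, ?)",
        (time.time(), au_values[0], au_values[1], gaze_x, gaze_y),
    )
    conn.commit()
    conn.close()

# Example: one frame's worth of (made-up) feature values.
store_features(DB_PATH, [0.4, 0.1], 0.52, 0.48)
```

Appending one row per frame with a timestamp column is what lets the predict pipeline pick out the most recent sample by key.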
### 3) Predict Pipeline (`predict_pipeline/predict_sample.py`)
- Edit `predict_pipeline/config.yaml` and set:
  - `database.path`, `database.table`, `database.key`
  - `model.path`
  - `scaler.path` (if `use_scaling: true`)
  - MQTT settings under `mqtt`
- Run one prediction cycle:

  ```
  python predict_pipeline/predict_sample.py
  ```
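A `config.yaml` covering those keys might look like the following sketch. Every value is a placeholder, and the nesting of `use_scaling` is a guess — check the file shipped in `predict_pipeline/` for the actual structure:

```yaml
database:
  path: /path/to/features.db   # SQLite DB written by the camera pipeline
  table: features
  key: timestamp
model:
  path: models/driver_state_model.h5
use_scaling: true
scaler:
  path: models/scaler.pkl      # only read when use_scaling is true
mqtt:
  broker: localhost
  port: 1883
  topic: driver_state/prediction
```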
- See `predict_service_timer_documentation.md` for how to use the service and timer for automation. On the Ohm-UX driving simulator's Jetson board, the service starts automatically when the device boots.
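For the automation mentioned above, a systemd service plus timer pair typically looks like the sketch below. Unit names, paths, and intervals here are illustrative only — the repository's actual units are described in `predict_service_timer_documentation.md`:

```ini
# predict.service -- runs one prediction cycle, then exits
[Unit]
Description=Driver-state prediction cycle

[Service]
Type=oneshot
ExecStart=/opt/conda/envs/p310_FS_TF/bin/python /opt/repo/predict_pipeline/predict_sample.py

# predict.timer -- triggers predict.service periodically
[Unit]
Description=Timer for driver-state prediction

[Timer]
OnBootSec=30s          # first run shortly after boot
OnUnitActiveSec=10s    # then repeat on an interval
Unit=predict.service

[Install]
WantedBy=timers.target
```

Enabling the timer (`systemctl enable --now predict.timer`) is what makes the prediction cycle start automatically at boot, as on the Jetson board.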