# Multimodal Driver State Analysis
Short overview: this repository contains the data, feature, training, and inference pipeline for multimodal driver-state analysis using facial Action Units (AUs) and eye-tracking signals.

For full documentation, see `project_report.md`.
## Quickstart
### 1) Setup

Activate the conda environment:

```bash
conda activate <env-name>
```

Make sure another environment that satisfies `requirement.txt` is available and matches `predict_pipeline/predict.service` — see `predict_pipeline/predict_service_timer_documentation.md`.
### 2) Camera AU + Eye Pipeline (`camera_stream_AU_and_ET_new.py`)

- Open `dataset_creation/camera_handling/camera_stream_AU_and_ET_new.py` and adjust `DB_PATH`, `CAMERA_INDEX`, and `OUTPUT_DIR` (optional).
- Start camera capture and feature extraction:

  ```bash
  python dataset_creation/camera_handling/camera_stream_AU_and_ET_new.py
  ```

- Stop with `q` in the camera window.
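The capture script's internals are not shown in this README; as orientation, here is a minimal sketch of how per-frame AU and eye-tracking values could be appended to the SQLite database at `DB_PATH`. The table name, column names, and feature values are assumptions for illustration, not the script's actual schema:

```python
import sqlite3
import time

DB_PATH = ":memory:"  # the real script points this at a database file on disk


def init_db(conn):
    # Hypothetical schema: one row per captured camera frame.
    conn.execute(
        """CREATE TABLE IF NOT EXISTS features (
               ts REAL,       -- capture timestamp
               au12 REAL,     -- e.g. lip-corner-puller intensity
               gaze_x REAL,   -- horizontal gaze coordinate
               gaze_y REAL    -- vertical gaze coordinate
           )"""
    )


def store_frame(conn, au12, gaze_x, gaze_y):
    # Append one frame's extracted features.
    conn.execute(
        "INSERT INTO features (ts, au12, gaze_x, gaze_y) VALUES (?, ?, ?, ?)",
        (time.time(), au12, gaze_x, gaze_y),
    )
    conn.commit()


conn = sqlite3.connect(DB_PATH)
init_db(conn)
store_frame(conn, au12=0.42, gaze_x=0.1, gaze_y=-0.3)
n_rows = conn.execute("SELECT COUNT(*) FROM features").fetchone()[0]
print(n_rows)  # 1
```

The predict pipeline below reads from the same database, so whatever the real schema is, `database.table` and `database.key` in `config.yaml` must match it.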
### 3) Predict Pipeline (`predict_pipeline/predict_sample.py`)

- Edit `predict_pipeline/config.yaml` and set:
  - `database.path`, `database.table`, `database.key`
  - `model.path`
  - `scaler.path` (if `use_scaling: true`)
  - MQTT settings under `mqtt`
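Put together, a `config.yaml` might look like the following sketch. Only the key names listed above come from this README; the concrete values and the exact sub-keys under `mqtt` are placeholders:

```yaml
database:
  path: /path/to/features.db
  table: features
  key: id

model:
  path: /path/to/model.pkl

scaler:
  path: /path/to/scaler.pkl

use_scaling: true

mqtt:            # sub-keys below are assumptions
  host: localhost
  port: 1883
  topic: driver_state/prediction
```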
- Run one prediction cycle:

  ```bash
  python predict_pipeline/predict_sample.py
  ```
- See `predict_service_timer_documentation.md` for how to use the service and timer for automation.
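A single prediction cycle, as configured above, boils down to: fetch the newest feature row, optionally scale it, run the model, and publish the result over MQTT. The following self-contained sketch shows that flow; the scaling parameters and the stand-in threshold "model" are placeholders for the real pickled artifacts, and the MQTT publish is only indicated in a comment:

```python
import json
import sqlite3


def fetch_latest_row(conn, table, key):
    # Newest row, ordered by the configured key column.
    cur = conn.execute(f"SELECT au12, gaze_x FROM {table} ORDER BY {key} DESC LIMIT 1")
    return cur.fetchone()


def scale(features, mean, std):
    # Stand-in for the pickled scaler (applied when use_scaling: true).
    return [(x - m) / s for x, m, s in zip(features, mean, std)]


def predict(features):
    # Stand-in for the real model: flags the driver when the first feature is high.
    return "alert" if features[0] > 0.5 else "ok"


# One cycle against an in-memory database holding a single demo row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE features (id INTEGER PRIMARY KEY, au12 REAL, gaze_x REAL)")
conn.execute("INSERT INTO features (au12, gaze_x) VALUES (0.9, 0.1)")

row = fetch_latest_row(conn, table="features", key="id")
scaled = scale(row, mean=[0.2, 0.0], std=[1.0, 1.0])
payload = json.dumps({"state": predict(scaled)})
# A real run would publish `payload` via e.g. paho-mqtt to the topic configured under `mqtt`.
print(payload)  # {"state": "alert"}
```

The service/timer setup documented in `predict_service_timer_documentation.md` simply re-runs such a cycle on a schedule.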