# Multimodal Driver State Analysis
Short overview: this repository contains the data, feature, training, and inference pipeline for multimodal driver-state analysis using facial Action Units (AUs) and eye-tracking signals.

For full documentation, see `project_report.md`.
## Quickstart
### 1) Setup

Activate the conda environment "":

```
conda activate
```

Make sure a second environment that satisfies `requirement.txt` is also available and matches `predict_pipeline/predict.service`; see `predict_pipeline/predict_service_timer_documentation.md`.

To get an overview of all available conda environments on your device, run this command in an Anaconda Prompt:

```
conda info --envs
```
### 2) Camera AU + Eye Pipeline (`camera_stream_AU_and_ET_new.py`)

- Open `dataset_creation/camera_handling/camera_stream_AU_and_ET_new.py` and adjust:
  - `DB_PATH`
  - `CAMERA_INDEX`
  - `OUTPUT_DIR` (optional)
- Start camera capture and feature extraction:

  ```
  python dataset_creation/camera_handling/camera_stream_AU_and_ET_new.py
  ```

- Stop with `q` in the camera window.
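The capture step above follows the common OpenCV pattern of reading frames until `q` is pressed. A minimal sketch of that loop, for orientation only (the structure is an assumption; the actual script additionally runs AU and eye-tracking feature extraction on each frame):

```python
def should_stop(key_code):
    # The capture loop exits when the user presses 'q' in the camera window.
    # cv2.waitKey returns -1 when no key is pressed; mask to the low byte first.
    return key_code & 0xFF == ord("q")

def run_capture(camera_index=0):
    """Minimal capture-loop sketch; not the repository's actual code."""
    import cv2  # imported here so should_stop() stays usable without OpenCV

    cap = cv2.VideoCapture(camera_index)  # CAMERA_INDEX from the script config
    try:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            # ... AU + eye-tracking feature extraction would happen here ...
            cv2.imshow("camera", frame)
            if should_stop(cv2.waitKey(1)):
                break
    finally:
        cap.release()
        cv2.destroyAllWindows()
```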
### 3) Predict Pipeline (`predict_pipeline/predict_sample.py`)
- Edit `predict_pipeline/config.yaml` and set:
  - `database.path`, `database.table`, `database.key`
  - `model.path`
  - `scaler.path` (if `use_scaling: true`)
  - MQTT settings under `mqtt`
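A `config.yaml` following the keys listed above might look like this (all values are illustrative placeholders, not the project's actual settings):

```yaml
database:
  path: /path/to/features.db    # SQLite database written by the camera pipeline
  table: features               # table holding the extracted AU/eye features
  key: timestamp                # column used to select the latest row
model:
  path: /path/to/model.pkl
use_scaling: true
scaler:
  path: /path/to/scaler.pkl     # only read when use_scaling is true
mqtt:
  host: localhost
  port: 1883
  topic: driver_state/prediction
```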
- Run one prediction cycle:

  ```
  python predict_pipeline/predict_sample.py
  ```
- See `predict_service_timer_documentation.md` for how to use the service and timer for automation. On the Ohm-UX driving simulator's Jetson board, the service starts automatically when the device boots.
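For orientation, a systemd service/timer pair for this kind of periodic prediction typically looks like the following sketch. Unit names, paths, and intervals here are illustrative assumptions; the actual units are described in `predict_service_timer_documentation.md`.

```ini
# predict.service (sketch)
[Unit]
Description=Run one driver-state prediction cycle

[Service]
Type=oneshot
ExecStart=/path/to/env/bin/python /path/to/predict_pipeline/predict_sample.py
```

```ini
# predict.timer (sketch)
[Unit]
Description=Trigger predict.service periodically

[Timer]
OnBootSec=1min          # first run shortly after boot
OnUnitActiveSec=10s     # then repeat after each completed run

[Install]
WantedBy=timers.target
```

Enabling the timer (`systemctl enable --now predict.timer`) is what makes the service run automatically at boot.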