
# Multimodal Driver State Analysis
Short overview: this repository contains the data, feature-extraction, training, and inference pipeline for multimodal driver-state analysis based on facial Action Units (AUs) and eye-tracking signals.
For full documentation, see `project_report.md`.
## Quickstart
### 1) Setup
Activate the project's conda environment (its name is not specified in this README):
```bash
conda activate <env_name>
```
**Make sure a second environment that satisfies `prediction_env.yaml` is available**, matching the environment referenced in `predict_pipeline/predict.service`.
See `predict_pipeline/predict_service_timer_documentation.md` for details.
To get an overview of all conda environments available on your device, run this command in an Anaconda Prompt:
```bash
conda info --envs
```
Optionally, create a new environment from the YAML file:
```bash
conda env create -f prediction_env.yaml
```
Ohm-UX driving simulator Jetson board only: the conda environment `p310_FS_TF` is used for predictions.
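For reference, a conda environment file generally looks like the sketch below. These contents are illustrative only, with the Python and TensorFlow versions guessed from the environment name `p310_FS_TF`; they are not the actual contents of `prediction_env.yaml`:
```yaml
# Illustrative sketch only -- NOT the actual prediction_env.yaml.
# Python 3.10 and TensorFlow are assumptions inferred from the
# environment name p310_FS_TF used on the Jetson board.
name: prediction_env
channels:
  - conda-forge
dependencies:
  - python=3.10
  - pip
  - pip:
      - tensorflow
      - paho-mqtt  # MQTT client, assumed from the mqtt section in predict_pipeline/config.yaml
```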
### 2) Camera AU + Eye Pipeline (`camera_stream_AU_and_ET_new.py`)
1. Open `dataset_creation/camera_handling/camera_stream_AU_and_ET_new.py` and adjust the following constants (a sketch of what these might look like follows the steps below):
- `DB_PATH`
- `CAMERA_INDEX`
- `OUTPUT_DIR` (optional)
2. Start camera capture and feature extraction:
```bash
python dataset_creation/camera_handling/camera_stream_AU_and_ET_new.py
```
3. Stop the capture by pressing `q` in the camera window.
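As a rough illustration of step 1, the adjustable constants at the top of the script might look like the following. The names match the list above, but the values are placeholders, not the script's actual defaults:
```python
# Hypothetical configuration block for camera_stream_AU_and_ET_new.py.
# All values are placeholders -- adjust them to your setup.
from pathlib import Path

DB_PATH = Path("data/driver_state.db")  # database the extracted features are written to (path assumed)
CAMERA_INDEX = 0                        # OpenCV-style device index; 0 is typically the first camera
OUTPUT_DIR = Path("output")             # optional directory for additional outputs
```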
### 3) Predict Pipeline (`predict_pipeline/predict_sample.py`)
1. Edit `predict_pipeline/config.yaml` and set the following keys (an illustrative example follows the steps below):
- `database.path`, `database.table`, `database.key`
- `model.path`
- `scaler.path` (if `use_scaling: true`)
- MQTT settings under `mqtt`
2. Run one prediction cycle:
```bash
python predict_pipeline/predict_sample.py
```
3. See `predict_pipeline/predict_service_timer_documentation.md` for how to use the service and timer for automation. On the Ohm-UX driving simulator's Jetson board, the service starts automatically at boot.
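For orientation, an illustrative `predict_pipeline/config.yaml` could look like the sketch below. The key names follow the list in step 1; the values and the exact nesting of `use_scaling` are assumptions, not the repository's actual configuration:
```yaml
# Illustrative example -- all values are placeholders.
database:
  path: data/driver_state.db   # database produced by the camera pipeline (assumed)
  table: features
  key: id
model:
  path: models/driver_state_model.h5
scaler:
  path: models/feature_scaler.pkl   # only used if use_scaling is true
use_scaling: true                   # top-level placement is an assumption
mqtt:
  host: localhost
  port: 1883
  topic: driver_state/prediction
```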