fixed typos and added clickable links to doc files

This commit is contained in:
Michael Weig 2026-03-18 12:29:44 +01:00
parent 4df1187f84
commit eba9b07487
2 changed files with 8 additions and 8 deletions


@@ -47,7 +47,7 @@ Behavior:
- Produces training-ready feature tables (= the dataset)
- The parameter ```MIN_DUR_BLINKS``` can be adjusted, although this value needs to make sense in combination with your sampling frequency
- With low video stream rates, consider reevaluating the meaningfulness of some eye-tracking features, especially fixations
- Running the script requires a manual installation of the [PyGaze Analyser library](https://github.com/esdalmaijer/PyGazeAnalyser.git) from GitHub
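The interplay between ```MIN_DUR_BLINKS``` and the sampling frequency can be sketched as follows. This is an illustrative stdlib snippet, not code from the repository; the 100 ms duration and 60 Hz rate are assumed example values:

```python
# Hypothetical sketch: relating a minimum blink duration (ms) to the
# sampling frequency of the eye-tracking stream. The concrete values
# below are illustrative, not taken from the repository.

MIN_DUR_BLINKS_MS = 100   # assumed minimum blink duration in milliseconds
SAMPLING_FREQ_HZ = 60     # assumed video/eye-tracking stream rate

def min_blink_samples(min_dur_ms: float, freq_hz: float) -> int:
    """Number of consecutive samples a blink must span to be detectable."""
    return max(1, round(min_dur_ms * freq_hz / 1000))

n = min_blink_samples(MIN_DUR_BLINKS_MS, SAMPLING_FREQ_HZ)
print(n)  # at 60 Hz, a 100 ms blink spans about 6 samples
if n < 3:
    print("warning: blink window spans fewer than 3 samples; "
          "increase MIN_DUR_BLINKS or the stream rate")
```

At very low stream rates the window collapses to one or two samples, which is why the bullet above advises matching the parameter to your sampling frequency.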
### 2.3 Online Camera + Eye + AU Feature Extraction
@@ -71,13 +71,13 @@ Operational note:
## 3) EDA
The directory EDA provides several files to get insights into both the raw data from AdaBase and your own dataset.
- `EDA.ipynb` - Main EDA notebook: recreates the plot from the AdaBase documentation, lists all experiments, and generally serves as a playground for getting to know the files.
- `distribution_plots.ipynb` - Visualizes the data distributions for each experiment; the goal is to find out whether the split of experiments into high and low cognitive load becomes clearer if some experiments are dropped.
- `histogramms.ipynb` - Histogram analysis of low load vs. high load per feature. Additionally, scatter plots per feature are available.
- `researchOnSubjectPerformance.ipynb` - Examines how the performance values range across the 30 subjects. The code creates and saves a table in CSV format, which is later used as the foundation of the performance-based split in ```model_training/tools/performance_based_split```.
- `owncloud_file_access.ipynb` - Accesses the files via ownCloud and saves them as .h5 files, corresponding to the parquet file creation script.
- `login.yaml` - Stores the URL and password used to access files from ownCloud in the previous notebook.
- `calculate_replacement_values.ipynb` - Fallback/median computation notebook for deployment; also creates the YAML syntax embedding.
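A `login.yaml` of the kind described above might look like the following. The key names and URL are assumptions for illustration; the actual keys expected by `owncloud_file_access.ipynb` may differ:

```yaml
# Hypothetical layout of login.yaml; check owncloud_file_access.ipynb
# for the key names it actually reads.
url: "https://owncloud.example.org/remote.php/webdav/"
password: "your-share-password"
```

Keeping the credentials in this file (and out of version control) avoids hard-coding them in the notebook.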
General information:
- Due to their size, it is strongly recommended to download and save the dataset files once at the beginning
@@ -93,7 +93,7 @@ Included model families:
- OCSVM*
- DeepSVDD*
\* These training strategies are unsupervised, which means only low cognitive load samples are used for training. Validation then also considers high cognitive load samples.
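The one-class idea behind these unsupervised trainings can be illustrated with a minimal stdlib sketch (not the repository's OCSVM/DeepSVDD implementation): fit a boundary on low-load samples only, then score samples from both classes at validation time. The feature values below are made up for the example:

```python
# Minimal illustration of one-class training: fit statistics on
# low-cognitive-load samples only, then flag validation samples that
# fall outside the learned boundary as high load.
from statistics import mean, stdev

def fit_one_class(low_load_values, k=3.0):
    """Learn a simple [mu - k*sigma, mu + k*sigma] boundary from low-load data."""
    mu, sigma = mean(low_load_values), stdev(low_load_values)
    return (mu - k * sigma, mu + k * sigma)

def predict(value, bounds):
    """Return 'high' if the value leaves the low-load region, else 'low'."""
    lo, hi = bounds
    return "low" if lo <= value <= hi else "high"

# Training uses low-load samples only; validation sees both classes.
bounds = fit_one_class([0.9, 1.0, 1.1, 1.0, 0.95])
print(predict(1.02, bounds), predict(5.0, bounds))  # low high
```

OCSVM and DeepSVDD learn far richer boundaries, but the train/validate asymmetry is the same.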
Supporting utilities in ```model_training/tools```:
@@ -244,7 +244,7 @@ Role:
- Run inference repeatedly without manual execution
- Timer/service configuration can be customized
More information on how to use and interact with the system service and timer can be found in [predict_service_timer_documentation.md](/predict_pipeline/predict_service_timer_documentation.md)
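For orientation, a periodic systemd service/timer pair generally looks like the sketch below. The unit names, paths, and intervals here are hypothetical; the project's actual units are described in `predict_service_timer_documentation.md`:

```ini
# predict.service (hypothetical) - runs one inference pass and exits
[Unit]
Description=Run cognitive-load inference once

[Service]
Type=oneshot
ExecStart=/usr/bin/python3 /opt/repo/predict_pipeline/predict_sample.py
```

```ini
# predict.timer (hypothetical) - triggers predict.service periodically
[Unit]
Description=Trigger predict.service periodically

[Timer]
OnBootSec=1min
OnUnitActiveSec=30s

[Install]
WantedBy=timers.target
```

Enabling the timer (`systemctl enable --now predict.timer`) is what makes the inference run repeatedly without manual execution.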
## 5.2 Runtime Configuration


@@ -2,7 +2,7 @@
Short overview: this repository contains the data, feature, training, and inference pipeline for multimodal driver-state analysis using facial AUs and eye-tracking signals.
For full documentation, see [project_report.md](project_report.md).
## Quickstart
@@ -51,4 +51,4 @@ python dataset_creation/camera_handling/camera_stream_AU_and_ET_new.py
python predict_pipeline/predict_sample.py
```
3. Use [predict_service_timer_documentation.md](/predict_pipeline/predict_service_timer_documentation.md) to see how to use the service and timer for automation. On the Ohm-UX driving simulator's Jetson board, the service starts automatically when the device boots.