diff --git a/project_report.md b/project_report.md
index c1bdf67..babb24d 100644
--- a/project_report.md
+++ b/project_report.md
@@ -132,9 +132,9 @@ To establish a performance baseline, a classical Extreme Gradient Boosting (XGBo
 
 | Metric / Model | Classical XGBoost |
 | --- | --- |
-| Accuracy | |
-| AUC | |
-| F1-Score | |
+| Accuracy | 0.581 |
+| AUC | 0.562 |
+| F1-Score | 0.652 |
 
 ### 4.2.2 XGBoost with GroupKFold Validation
 
@@ -142,9 +142,9 @@ To address the challenge of inter-subject variability, the validation strategy w
 
 | Metric / Model | XGBoost (GroupKFold) |
 | --- | --- |
-| Accuracy | |
-| AUC | |
-| F1-Score | |
+| Accuracy | 0.586 |
+| AUC | 0.573 |
+| F1-Score | 0.651 |
 
 ### 4.2.3 Hybrid XGBoost with Autoencoder
 
@@ -152,9 +152,9 @@ To improve feature quality, a hybrid approach was introduced by pre-training a d
 
 | Metric / Model | XGBoost + Autoencoder |
 | --- | --- |
-| Accuracy | |
-| AUC | |
-| F1-Score | |
+| Accuracy | 0.589 |
+| AUC | 0.575 |
+| F1-Score | 0.650 |
 
 ### 4.2.4 Robust XGBoost with MAD Outlier Removal
 
@@ -162,9 +162,9 @@ Recognizing that physiological and AU data often contain sensor artifacts, a rob
 
 | Metric / Model | XGBoost + MAD |
 | --- | --- |
-| Accuracy | |
-| AUC | |
-| F1-Score | |
+| Accuracy | 0.641 |
+| AUC | 0.610 |
+| F1-Score | 0.733 |
 
 ### 4.2.5 Combined Dataset of Action Units and EyeTracking
 
@@ -174,9 +174,9 @@ By applying performance-based subject splitting, we ensured that the training an
 
 | Metric / Model | Final Combined Model |
 | --- | --- |
-| Accuracy | |
-| AUC | |
-| F1-Score | |
+| Accuracy | 0.659 |
+| AUC | 0.621 |
+| F1-Score | 0.715 |
 
 ### 4.2.6 Regularized XGBoost with Complexity Control
 
@@ -186,9 +186,9 @@ By penalizing large weights and promoting feature sparsity, the model is forced
 
 | Metric / Model | Regularized XGBoost |
 | --- | --- |
-| Accuracy | |
-| AUC | |
-| F1-Score | |
+| Accuracy | 0.665 |
+| AUC | 0.646 |
+| F1-Score | 0.727 |
 
 ### 4.3 Isolation Forest
 To begin with unsupervised learning techniques, `IsolationForest.ipynb` was created to investigate how well a simple ensemble method performs on the created dataset.
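The GroupKFold validation referenced in §4.2.2 of the patched report assigns whole subjects to folds, so no subject contributes samples to both the training and test side of a split. A minimal sketch on synthetic data (the arrays and fold count here are illustrative stand-ins, not the project's actual data or settings):

```python
import numpy as np
from sklearn.model_selection import GroupKFold

# Synthetic stand-in data: 10 subjects, 10 samples each.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = rng.integers(0, 2, size=100)
subjects = np.repeat(np.arange(10), 10)

gkf = GroupKFold(n_splits=5)
for train_idx, test_idx in gkf.split(X, y, groups=subjects):
    # Every subject falls entirely on one side of the split, so test
    # scores measure generalization to subjects never seen in training.
    assert set(subjects[train_idx]).isdisjoint(subjects[test_idx])
```

This is what addresses inter-subject variability: a plain KFold would let the same subject leak into both splits and inflate the reported metrics.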
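The MAD-based outlier removal of §4.2.4 can be sketched with the modified z-score formulation, a common way to apply MAD filtering; the constant 0.6745 and the threshold 3.5 are conventional defaults assumed here, not values taken from the project code:

```python
import numpy as np

def mad_keep_mask(x, thresh=3.5):
    """True for samples to keep; False for MAD-flagged outliers.

    Uses the modified z-score 0.6745 * |x - median| / MAD; the
    threshold 3.5 is a conventional default, assumed here.
    """
    med = np.median(x)
    mad = np.median(np.abs(x - med))
    if mad == 0:  # constant feature: nothing to flag
        return np.ones(len(x), dtype=bool)
    modified_z = 0.6745 * np.abs(x - med) / mad
    return modified_z <= thresh

# A single sensor artifact (10.0) among otherwise stable readings.
x = np.array([1.0, 1.1, 0.9, 1.05, 10.0])
print(mad_keep_mask(x))  # only the 10.0 reading is flagged
```

Because both the center (median) and the spread (MAD) are medians, a handful of extreme artifacts cannot distort the statistics used to detect them, which is why MAD filtering is preferred over mean/std z-scores for sensor data.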
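The Isolation Forest experiment of §4.3 can be sketched as follows; the data is synthetic and the hyperparameters (`n_estimators`, `contamination`) are illustrative assumptions, not the notebook's actual configuration:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic data: 200 inliers plus 10 clearly separated anomalies.
rng = np.random.default_rng(42)
inliers = rng.normal(loc=0.0, scale=1.0, size=(200, 4))
anomalies = rng.normal(loc=6.0, scale=1.0, size=(10, 4))
X = np.vstack([inliers, anomalies])

# `contamination` is the assumed fraction of anomalies, not learned.
iso = IsolationForest(n_estimators=100, contamination=0.05, random_state=42)
labels = iso.fit_predict(X)  # +1 = inlier, -1 = flagged anomaly
n_flagged = int((labels == -1).sum())
```

Isolation Forest isolates points by random axis-aligned splits; anomalies sit in sparse regions and are isolated in fewer splits, so no labels are needed, which is what makes it a natural first unsupervised baseline here.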