- added accuracy results for xgboost models
Parent: 145a5ecf78 · Commit: 3701d11c77
To establish a performance baseline, a classical Extreme Gradient Boosting (XGBoost) …

| Metric / Model | Classical XGBoost |
| --- | --- |
| Accuracy | 0.581 |
| AUC | 0.562 |
| F1-Score | 0.652 |
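The three metrics reported in the table can be reproduced with standard scikit-learn calls. The toy labels and scores below are illustrative, not the thesis data:

```python
# Sketch: computing accuracy, AUC, and F1 from a fitted classifier's
# predictions. The arrays are made-up examples, not the thesis results.
from sklearn.metrics import accuracy_score, roc_auc_score, f1_score

y_true  = [0, 1, 1, 0, 1, 0, 1, 1]                    # ground-truth labels
y_pred  = [0, 1, 0, 0, 1, 1, 1, 1]                    # hard predictions
y_score = [0.2, 0.9, 0.4, 0.1, 0.8, 0.6, 0.7, 0.9]   # P(class = 1)

acc = accuracy_score(y_true, y_pred)   # fraction of correct labels
auc = roc_auc_score(y_true, y_score)   # ranking quality of the scores
f1  = f1_score(y_true, y_pred)         # harmonic mean of precision/recall
```

Note that AUC is computed from the continuous scores (`predict_proba` in practice), while accuracy and F1 use the thresholded predictions.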
### 4.2.2 XGBoost with GroupKFold Validation
To address the challenge of inter-subject variability, the validation strategy was …

| Metric / Model | XGBoost (GroupKFold) |
| --- | --- |
| Accuracy | 0.586 |
| AUC | 0.573 |
| F1-Score | 0.651 |
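A minimal sketch of the GroupKFold idea, assuming subjects are identified by an ID array (toy data, not the thesis features):

```python
# Subject-wise cross-validation: GroupKFold guarantees that no subject
# appears in both the training and the test fold, so the model cannot
# score well by memorizing subject-specific patterns.
import numpy as np
from sklearn.model_selection import GroupKFold

X = np.arange(12).reshape(12, 1)     # 12 samples, 1 toy feature
y = np.array([0, 1] * 6)             # toy binary labels
subjects = np.array([1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4])  # subject IDs

gkf = GroupKFold(n_splits=4)
for train_idx, test_idx in gkf.split(X, y, groups=subjects):
    # Every fold is subject-disjoint by construction.
    assert set(subjects[train_idx]).isdisjoint(subjects[test_idx])
```

Inside the loop one would fit the XGBoost model on `X[train_idx]` and score it on `X[test_idx]`, averaging the metrics over folds.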
### 4.2.3 Hybrid XGBoost with Autoencoder
To improve feature quality, a hybrid approach was introduced by pre-training a d…

| Metric / Model | XGBoost + Autoencoder |
| --- | --- |
| Accuracy | 0.589 |
| AUC | 0.575 |
| F1-Score | 0.650 |
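The hybrid idea — learn a compressed representation first, then train XGBoost on the code — can be sketched with a toy linear autoencoder in plain NumPy. The thesis presumably uses a deeper network; every size and hyperparameter here is illustrative:

```python
# Toy linear autoencoder: encode 8 raw features into a 3-D code by
# minimizing reconstruction error with plain gradient descent.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))              # 200 samples, 8 raw features

d_in, d_code = 8, 3
W_enc = rng.normal(scale=0.1, size=(d_in, d_code))
W_dec = rng.normal(scale=0.1, size=(d_code, d_in))

def forward(X):
    code = X @ W_enc                       # linear encoder
    recon = code @ W_dec                   # linear decoder
    return code, recon

_, recon = forward(X)
loss_before = np.mean((recon - X) ** 2)

lr = 0.01
for _ in range(500):                       # gradient descent on MSE
    code, recon = forward(X)
    grad_out = 2 * (recon - X) / len(X)
    g_dec = code.T @ grad_out              # dLoss/dW_dec
    g_enc = X.T @ (grad_out @ W_dec.T)     # dLoss/dW_enc (chain rule)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

code, recon = forward(X)
loss_after = np.mean((recon - X) ** 2)
# The learned 3-D `code` would replace or augment the raw features
# fed into the downstream XGBoost classifier.
```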
### 4.2.4 Robust XGBoost with MAD Outlier Removal
Recognizing that physiological and AU data often contain sensor artifacts, a robust …

| Metric / Model | XGBoost + MAD |
| --- | --- |
| Accuracy | 0.641 |
| AUC | 0.610 |
| F1-Score | 0.733 |
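MAD-based filtering can be sketched as a modified z-score rule. The 3.5 cutoff and the 0.6745 constant are the common Iglewicz–Hoaglin defaults; the thesis's exact threshold is an assumption:

```python
# Flag outliers per feature using the Median Absolute Deviation (MAD).
import numpy as np

def mad_mask(x, thresh=3.5):
    """Return True for samples to KEEP (modified z-score within threshold)."""
    med = np.median(x)
    mad = np.median(np.abs(x - med))
    if mad == 0:                           # degenerate column: keep all
        return np.ones_like(x, dtype=bool)
    modified_z = 0.6745 * (x - med) / mad  # 0.6745 ~ z-score scaling factor
    return np.abs(modified_z) <= thresh

x = np.array([1.0, 1.1, 0.9, 1.05, 0.95, 12.0])   # 12.0 mimics an artifact
keep = mad_mask(x)
x_clean = x[keep]                                  # artifact removed
```

Unlike mean/standard-deviation filtering, the median-based rule is itself robust: a single extreme artifact barely moves the threshold.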
### 4.2.5 Combined Dataset of Action Units and EyeTracking
By applying performance-based subject splitting, we ensured that the training and …

| Metric / Model | Final Combined Model |
| --- | --- |
| Accuracy | 0.659 |
| AUC | 0.621 |
| F1-Score | 0.715 |
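At its simplest, combining the two modalities means aligning both feature blocks on the same sample index and concatenating columns. The feature counts below are placeholders, not the thesis dimensions:

```python
# Sketch: column-wise concatenation of Action-Unit and eye-tracking
# features. Both blocks must share one row per sample, in the same order.
import numpy as np

au_features  = np.random.default_rng(1).normal(size=(100, 17))  # e.g. AU intensities
eye_features = np.random.default_rng(2).normal(size=(100, 6))   # e.g. gaze statistics

assert len(au_features) == len(eye_features)       # rows must align
combined = np.hstack([au_features, eye_features])  # one wide feature matrix
```

The subject-splitting step would then be applied to `combined` before training, so that no subject contributes rows to both the training and the test set.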
### 4.2.6 Regularized XGBoost with Complexity Control
By penalizing large weights and promoting feature sparsity, the model is forced to …

| Metric / Model | Regularized XGBoost |
| --- | --- |
| Accuracy | 0.665 |
| AUC | 0.646 |
| F1-Score | 0.727 |
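In XGBoost this kind of complexity control maps to the `reg_alpha` (L1), `reg_lambda` (L2), `max_depth`, and `min_child_weight` parameters. These are real XGBoost parameter names, but the values below are placeholders, not the tuned thesis settings:

```python
# Hypothetical parameter set for a regularized XGBoost model.
params = {
    "objective": "binary:logistic",
    "reg_alpha": 1.0,        # L1 penalty -> promotes feature sparsity
    "reg_lambda": 5.0,       # L2 penalty -> shrinks large leaf weights
    "max_depth": 4,          # shallower trees = lower model complexity
    "min_child_weight": 10,  # minimum hessian sum required per leaf
    "eval_metric": "auc",
}
# model = xgboost.XGBClassifier(**params)   # then fit/predict as usual
```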
### 4.3 Isolation Forest

To start with unsupervised learning techniques, `IsolationForest.ipynb` was created to research how well a simple ensemble classifier performs on the created dataset.
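A minimal version of what such a notebook might start from, using scikit-learn's `IsolationForest` on synthetic data — the real notebook's features and settings are unknown to this sketch:

```python
# Isolation Forest isolates anomalies with random axis-aligned splits:
# points that are separated in few splits get low (anomalous) scores.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
X_normal = rng.normal(0, 1, size=(200, 4))   # inlier cluster
X_anom   = rng.normal(6, 1, size=(10, 4))    # obvious outliers
X = np.vstack([X_normal, X_anom])

iso = IsolationForest(n_estimators=100, contamination=0.05, random_state=0)
labels = iso.fit_predict(X)                  # +1 = inlier, -1 = outlier
n_flagged = int((labels == -1).sum())
```

Because the method is unsupervised, labels are only needed afterwards, to evaluate how well the flagged anomalies line up with the true classes.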