Model Tuner Library Instructions

This notebook provides a guide on how to install and use the model_tuner library in a notebook environment like Google Colab.

Model Tuner Description

The model_tuner library is designed to streamline the process of hyperparameter tuning and model optimization for machine learning algorithms. It provides an easy-to-use interface for defining, tuning, and evaluating models.

Documentation

For detailed documentation and advanced usage of the model_tuner library, please refer to the model_tuner documentation.

By following these steps, you should be able to install and use the model_tuner library effectively in your notebook environment. If you encounter any issues or have further questions, feel free to reach out for support.

Installation

To install the model_tuner library, use the following command:

In [1]:
! pip install model_tuner
! pip install seaborn  # note: the PyPI package "sns" is not seaborn
Collecting model_tuner
  Downloading model_tuner-0.0.20a0-py3-none-any.whl.metadata (5.7 kB)
Collecting joblib==1.3.2 (from model_tuner)
  Downloading joblib-1.3.2-py3-none-any.whl.metadata (5.4 kB)
Collecting tqdm==4.66.4 (from model_tuner)
  Downloading tqdm-4.66.4-py3-none-any.whl.metadata (57 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 57.6/57.6 kB 1.9 MB/s eta 0:00:00
Collecting catboost==1.2.7 (from model_tuner)
  Downloading catboost-1.2.7-cp310-cp310-manylinux2014_x86_64.whl.metadata (1.2 kB)
Collecting pip==24.2 (from model_tuner)
  Downloading pip-24.2-py3-none-any.whl.metadata (3.6 kB)
Requirement already satisfied: setuptools==75.1.0 in /usr/local/lib/python3.10/dist-packages (from model_tuner) (75.1.0)
Collecting wheel==0.44.0 (from model_tuner)
  Downloading wheel-0.44.0-py3-none-any.whl.metadata (2.3 kB)
Requirement already satisfied: numpy<2.0.0,>=1.19.5 in /usr/local/lib/python3.10/dist-packages (from model_tuner) (1.26.4)
Requirement already satisfied: pandas<2.2.3,>=1.3.5 in /usr/local/lib/python3.10/dist-packages (from model_tuner) (2.2.2)
Collecting scikit-learn<1.4.0,>=1.0.2 (from model_tuner)
  Downloading scikit_learn-1.3.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (11 kB)
Collecting scipy<1.11,>=1.6.3 (from model_tuner)
  Downloading scipy-1.10.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (58 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 58.9/58.9 kB 846.9 kB/s eta 0:00:00
Collecting scikit-optimize==0.10.2 (from model_tuner)
  Downloading scikit_optimize-0.10.2-py2.py3-none-any.whl.metadata (9.7 kB)
Requirement already satisfied: imbalanced-learn==0.12.4 in /usr/local/lib/python3.10/dist-packages (from model_tuner) (0.12.4)
Requirement already satisfied: xgboost==2.1.2 in /usr/local/lib/python3.10/dist-packages (from model_tuner) (2.1.2)
Requirement already satisfied: graphviz in /usr/local/lib/python3.10/dist-packages (from catboost==1.2.7->model_tuner) (0.20.3)
Requirement already satisfied: matplotlib in /usr/local/lib/python3.10/dist-packages (from catboost==1.2.7->model_tuner) (3.8.0)
Requirement already satisfied: plotly in /usr/local/lib/python3.10/dist-packages (from catboost==1.2.7->model_tuner) (5.24.1)
Requirement already satisfied: six in /usr/local/lib/python3.10/dist-packages (from catboost==1.2.7->model_tuner) (1.16.0)
Requirement already satisfied: threadpoolctl>=2.0.0 in /usr/local/lib/python3.10/dist-packages (from imbalanced-learn==0.12.4->model_tuner) (3.5.0)
Collecting pyaml>=16.9 (from scikit-optimize==0.10.2->model_tuner)
  Downloading pyaml-24.9.0-py3-none-any.whl.metadata (11 kB)
Requirement already satisfied: packaging>=21.3 in /usr/local/lib/python3.10/dist-packages (from scikit-optimize==0.10.2->model_tuner) (24.2)
Requirement already satisfied: nvidia-nccl-cu12 in /usr/local/lib/python3.10/dist-packages (from xgboost==2.1.2->model_tuner) (2.23.4)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.10/dist-packages (from pandas<2.2.3,>=1.3.5->model_tuner) (2.8.2)
Requirement already satisfied: pytz>=2020.1 in /usr/local/lib/python3.10/dist-packages (from pandas<2.2.3,>=1.3.5->model_tuner) (2024.2)
Requirement already satisfied: tzdata>=2022.7 in /usr/local/lib/python3.10/dist-packages (from pandas<2.2.3,>=1.3.5->model_tuner) (2024.2)
Requirement already satisfied: PyYAML in /usr/local/lib/python3.10/dist-packages (from pyaml>=16.9->scikit-optimize==0.10.2->model_tuner) (6.0.2)
Requirement already satisfied: contourpy>=1.0.1 in /usr/local/lib/python3.10/dist-packages (from matplotlib->catboost==1.2.7->model_tuner) (1.3.1)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.10/dist-packages (from matplotlib->catboost==1.2.7->model_tuner) (0.12.1)
Requirement already satisfied: fonttools>=4.22.0 in /usr/local/lib/python3.10/dist-packages (from matplotlib->catboost==1.2.7->model_tuner) (4.55.0)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.10/dist-packages (from matplotlib->catboost==1.2.7->model_tuner) (1.4.7)
Requirement already satisfied: pillow>=6.2.0 in /usr/local/lib/python3.10/dist-packages (from matplotlib->catboost==1.2.7->model_tuner) (11.0.0)
Requirement already satisfied: pyparsing>=2.3.1 in /usr/local/lib/python3.10/dist-packages (from matplotlib->catboost==1.2.7->model_tuner) (3.2.0)
Requirement already satisfied: tenacity>=6.2.0 in /usr/local/lib/python3.10/dist-packages (from plotly->catboost==1.2.7->model_tuner) (9.0.0)
Downloading model_tuner-0.0.20a0-py3-none-any.whl (24 kB)
Downloading catboost-1.2.7-cp310-cp310-manylinux2014_x86_64.whl (98.7 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 98.7/98.7 MB 6.8 MB/s eta 0:00:00
Downloading joblib-1.3.2-py3-none-any.whl (302 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 302.2/302.2 kB 10.1 MB/s eta 0:00:00
Downloading pip-24.2-py3-none-any.whl (1.8 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.8/1.8 MB 35.3 MB/s eta 0:00:00
Downloading scikit_optimize-0.10.2-py2.py3-none-any.whl (107 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 107.8/107.8 kB 7.2 MB/s eta 0:00:00
Downloading tqdm-4.66.4-py3-none-any.whl (78 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 78.3/78.3 kB 4.3 MB/s eta 0:00:00
Downloading wheel-0.44.0-py3-none-any.whl (67 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 67.1/67.1 kB 2.0 MB/s eta 0:00:00
Downloading scikit_learn-1.3.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (10.8 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 10.8/10.8 MB 26.5 MB/s eta 0:00:00
Downloading scipy-1.10.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (34.4 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 34.4/34.4 MB 14.6 MB/s eta 0:00:00
Downloading pyaml-24.9.0-py3-none-any.whl (24 kB)
Installing collected packages: wheel, tqdm, scipy, pyaml, pip, joblib, scikit-learn, scikit-optimize, catboost, model_tuner
  Attempting uninstall: wheel
    Found existing installation: wheel 0.45.0
    Uninstalling wheel-0.45.0:
      Successfully uninstalled wheel-0.45.0
  Attempting uninstall: tqdm
    Found existing installation: tqdm 4.66.6
    Uninstalling tqdm-4.66.6:
      Successfully uninstalled tqdm-4.66.6
  Attempting uninstall: scipy
    Found existing installation: scipy 1.13.1
    Uninstalling scipy-1.13.1:
      Successfully uninstalled scipy-1.13.1
  Attempting uninstall: pip
    Found existing installation: pip 24.1.2
    Uninstalling pip-24.1.2:
      Successfully uninstalled pip-24.1.2
  Attempting uninstall: joblib
    Found existing installation: joblib 1.4.2
    Uninstalling joblib-1.4.2:
      Successfully uninstalled joblib-1.4.2
  Attempting uninstall: scikit-learn
    Found existing installation: scikit-learn 1.5.2
    Uninstalling scikit-learn-1.5.2:
      Successfully uninstalled scikit-learn-1.5.2
Successfully installed catboost-1.2.7 joblib-1.3.2 model_tuner-0.0.20a0 pip-24.2 pyaml-24.9.0 scikit-learn-1.3.2 scikit-optimize-0.10.2 scipy-1.10.1 tqdm-4.66.4 wheel-0.44.0

Importing the Library

After installation, you can import the necessary components from the model_tuner library as shown below:

In [2]:
from model_tuner import Model
import seaborn as sns
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer

Binary Classification with the Titanic Dataset and a Pipeline

In [3]:
titanic = sns.load_dataset('titanic')
titanic.head()
Out[3]:
   survived  pclass     sex   age  sibsp  parch     fare embarked  class    who  adult_male deck  embark_town alive  alone
0         0       3    male  22.0      1      0   7.2500        S  Third    man        True  NaN  Southampton    no  False
1         1       1  female  38.0      1      0  71.2833        C  First  woman       False    C    Cherbourg   yes  False
2         1       3  female  26.0      0      0   7.9250        S  Third  woman       False  NaN  Southampton   yes   True
3         1       1  female  35.0      1      0  53.1000        S  First  woman       False    C  Southampton   yes  False
4         0       3    male  35.0      0      0   8.0500        S  Third    man        True  NaN  Southampton    no   True
In [4]:
X = titanic[[col for col in titanic.columns if col != "survived"]]
### Drop columns that duplicate information in other columns ('alive', 'class', 'embarked')
X = X.drop(columns=['alive', 'class', 'embarked'])
y = titanic['survived']
In [5]:
rf = RandomForestClassifier(class_weight="balanced")

estimator_name = "rf"

rf_pipeline_hyperparams_grid = {
    f"{estimator_name}__max_depth": [3, 5, 10, None],
    f"{estimator_name}__n_estimators": [10, 100, 200],
    f"{estimator_name}__max_features": [1, 3, 5, 7],
    f"{estimator_name}__min_samples_leaf": [1, 2, 3],
}

Defining pipeline steps

Here we look at the columns of the data and work out which features need which sort of preprocessing. For example, we may want to scale the continuous input data. Ordinal data needs converting to appropriate numbers, e.g. A -> 0, B -> 1, C -> 2 (or the reverse ordering). The remaining categorical data needs one-hot encoding.

This can be done easily through the pipeline so that we can ensure there is no data leakage.

This also allows us to handle unseen values when it comes to predicting. Using the OneHotEncoder with handle_unknown set to "ignore" means that categories not seen during fitting are encoded as all zeros rather than raising an error.

Missing data is also handled here: the numeric pipeline imputes missing values with the mean, while the categorical and ordinal pipelines impute with a constant and the most frequent value, respectively. Any of these imputers can be removed, or a custom imputer swapped in through pipeline_steps, if necessary. A small sketch of the encoder behavior follows below.
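
To make the encoder behavior concrete, here is a small standalone sketch (not part of the Titanic pipeline; the toy arrays are invented for illustration):

import numpy as np
from sklearn.preprocessing import OneHotEncoder, OrdinalEncoder

# Fit on three known categories, then transform a category unseen at fit time
train = np.array([["A"], ["B"], ["C"]])
unseen = np.array([["D"]])

ohe = OneHotEncoder(handle_unknown="ignore", sparse_output=False)
ohe.fit(train)
print(ohe.transform(unseen))         # [[0. 0. 0.]] -- all zeros, no error raised

ord_enc = OrdinalEncoder()
print(ord_enc.fit_transform(train))  # [[0.] [1.] [2.]], i.e. A -> 0, B -> 1, C -> 2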

In [6]:
X.head()
Out[6]:
   pclass     sex   age  sibsp  parch     fare    who  adult_male deck  embark_town  alone
0       3    male  22.0      1      0   7.2500    man        True  NaN  Southampton  False
1       1  female  38.0      1      0  71.2833  woman       False    C    Cherbourg  False
2       3  female  26.0      0      0   7.9250  woman       False  NaN  Southampton   True
3       1  female  35.0      1      0  53.1000  woman       False    C  Southampton  False
4       3    male  35.0      0      0   8.0500    man        True  NaN  Southampton   True
In [7]:
from sklearn.preprocessing import OneHotEncoder, OrdinalEncoder
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import MinMaxScaler
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline

# Define columns
ohcols = [
    "embark_town",
    "who",
    "sex",
    "adult_male"
]

ordcols = [
    "deck"
]

scalercols = [
    "parch",
    "fare",
    "age",
    "pclass"
]

# Create the pipeline for categorical features
categorical_transformer = Pipeline(
    steps=[
        ("imputer", SimpleImputer(strategy="constant", fill_value="missing")),
        ("onehot", OneHotEncoder(handle_unknown="ignore")),
    ]
)

# Create the pipeline for ordinal features
ordinal_transformer = Pipeline(
    steps=[
        ("imputer", SimpleImputer(strategy="most_frequent")),
        ("ordinal", OrdinalEncoder())
    ]
)

# Create the pipeline for numeric features (imputation followed by scaling)
numeric_transformer = Pipeline(
    steps=[
        ("imputer", SimpleImputer(strategy="mean")),
        ("scaler", MinMaxScaler())
    ]
)

# Define the ColumnTransformer
ct = ColumnTransformer(
    transformers=[
        ("OneHotEncoder", categorical_transformer, ohcols),
        ("OrdinalEncoder", ordinal_transformer, ordcols),
        ("Numeric", numeric_transformer, scalercols),
    ],
    remainder='passthrough'  # Keep other columns unchanged
)
In [8]:
# Initialize titanic_model
titanic_model_rf = Model(
    name="RandomForest_Titanic",
    estimator_name=estimator_name,
    calibrate=True,
    model_type="classification",
    estimator=rf,
    kfold=False,
    pipeline_steps=[("Preproccesor", ct)],
    stratify_y=True,
    grid=rf_pipeline_hyperparams_grid,
    randomized_grid=True,
    n_iter=5,
    scoring=["roc_auc"],
    random_state=42,
    n_jobs=-1,
)
In [9]:
titanic_model_rf.grid_search_param_tuning(X, y, f1_beta_tune=True)
Pipeline Steps:

┌──────────────────────────────────────────────────────┐
│ Step 1: preprocess_column_transformer_Preprocessor   │
│ ColumnTransformer                                    │
└──────────────────────────────────────────────────────┘
                           │
                           ▼
┌──────────────────────────────────────────────────────┐
│ Step 2: rf                                           │
│ RandomForestClassifier                               │
└──────────────────────────────────────────────────────┘

100%|██████████| 5/5 [00:01<00:00,  3.10it/s]
Fitting model with best params and tuning for best threshold ...
100%|██████████| 2/2 [00:01<00:00,  1.58it/s]
Best score/param set found on validation set:
{'params': {'rf__max_depth': 5,
            'rf__max_features': 5,
            'rf__min_samples_leaf': 1,
            'rf__n_estimators': 200},
 'score': 0.8780080213903744}
Best roc_auc: 0.878 


In [10]:
X_train, y_train = titanic_model_rf.get_train_data(X, y)
X_valid, y_valid = titanic_model_rf.get_valid_data(X, y)
X_test, y_test = titanic_model_rf.get_test_data(X, y)

titanic_model_rf.fit(X_train, y_train)
In [11]:
prob_uncalibrated = titanic_model_rf.predict_proba(X_test)[:, 1]

if titanic_model_rf.calibrate:
    titanic_model_rf.calibrateModel(X, y)
Confusion matrix on validation set:
--------------------------------------------------------------------------------
          Predicted:
            Pos  Neg
--------------------------------------------------------------------------------
Actual: Pos 51 (tp)  17 (fn)
        Neg 11 (fp)  99 (tn)
--------------------------------------------------------------------------------

              precision    recall  f1-score   support

           0       0.85      0.90      0.88       110
           1       0.82      0.75      0.78        68

    accuracy                           0.84       178
   macro avg       0.84      0.82      0.83       178
weighted avg       0.84      0.84      0.84       178

--------------------------------------------------------------------------------
In [12]:
metrics = titanic_model_rf.return_metrics(X_test, y_test)
Confusion matrix on set provided: 
--------------------------------------------------------------------------------
          Predicted:
             Pos   Neg
--------------------------------------------------------------------------------
Actual: Pos  50 (tp)   19 (fn)
        Neg  10 (fp)  100 (tn)
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
{'AUC ROC': 0.8617918313570486,
 'Average Precision': 0.8524617332964686,
 'Brier Score': 0.12723114064261673,
 'Precision/PPV': 0.8333333333333334,
 'Sensitivity': 0.7246376811594203,
 'Specificity': 0.9090909090909091}
--------------------------------------------------------------------------------

              precision    recall  f1-score   support

           0       0.84      0.91      0.87       110
           1       0.83      0.72      0.78        69

    accuracy                           0.84       179
   macro avg       0.84      0.82      0.82       179
weighted avg       0.84      0.84      0.84       179

--------------------------------------------------------------------------------
In [13]:
titanic_model_rf.threshold
Out[13]:
{'roc_auc': 0.32}
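
This is the probability cut-off selected during threshold tuning (f1_beta_tune=True). If you want to apply it to predicted probabilities yourself, a minimal sketch reusing the prob_uncalibrated array computed above:

# Classify as positive where the predicted probability meets the tuned threshold
thr = titanic_model_rf.threshold["roc_auc"]
y_pred_custom = (prob_uncalibrated >= thr).astype(int)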

Assessing Model Calibration

In [14]:
from matplotlib import pyplot as plt
from sklearn.calibration import calibration_curve
In [15]:
## Get the predicted probabilities for the test data from the calibrated model
y_prob_calibrated = titanic_model_rf.predict_proba(X_test)[:, 1]

## Compute the calibration curve for the calibrated model
prob_true_calibrated, prob_pred_calibrated = calibration_curve(
    y_test,
    y_prob_calibrated,
    n_bins=4,
)
prob_true_uncalibrated, prob_pred_uncalibrated = calibration_curve(
    y_test,
    prob_uncalibrated,
    n_bins=4,
)

## Plot the calibration curves
plt.figure(figsize=(5, 5))
plt.plot(
    prob_pred_uncalibrated,
    prob_true_uncalibrated,
    marker="o",
    label="Uncalibrated XGBoost",
)
plt.plot(
    prob_pred_calibrated,
    prob_true_calibrated,
    marker="o",
    label="Calibrated XGBoost",
)
plt.plot([0, 1], [0, 1], linestyle="--", label="Perfectly calibrated")
plt.xlabel("Predicted probability")
plt.ylabel("True probability in each bin")
plt.title("Calibration plot (reliability curve)")
plt.legend()
plt.show()

KFold Cross-Validation

If we want to use K-fold cross-validation, we can simply set the kfold parameter to True; the data will then be split into folds automatically.

In [16]:
## Initialize titanic_model

titanic_model_kf = Model(
    name="RandomForest_Titanic",
    estimator_name=estimator_name,
    calibrate=True,
    model_type="classification",
    estimator=rf,
    kfold=True,
    pipeline_steps=[("ColumnTransformer", ct)],
    stratify_y=False,
    n_splits=10,
    grid=rf_pipeline_hyperparams_grid,
    randomized_grid=True,
    n_iter=5,
    scoring=["roc_auc"],
    random_state=42,
    n_jobs=-1,
)
In [17]:
#### When using KFold, X and y are passed in whole to grid_search_param_tuning
#### and fit, as they are split into the separate folds internally.
#### The metrics are assessed over each fold and averaged.

titanic_model_kf.grid_search_param_tuning(X, y, f1_beta_tune=True)
Pipeline Steps:

┌───────────────────────────────────────────────────────────┐
│ Step 1: preprocess_column_transformer_ColumnTransformer   │
│ ColumnTransformer                                         │
└───────────────────────────────────────────────────────────┘
                             │
                             ▼
┌───────────────────────────────────────────────────────────┐
│ Step 2: rf                                                │
│ RandomForestClassifier                                    │
└───────────────────────────────────────────────────────────┘

# Tuning hyper-parameters for roc_auc
Fitting 10 folds for each of 5 candidates, totalling 50 fits

Best score/param set found on development set:
{0.8733055025293572: {'rf__max_depth': 5,
                      'rf__max_features': 5,
                      'rf__min_samples_leaf': 1,
                      'rf__n_estimators': 200}}

Grid scores on development set:
0.850 (+/-0.100) for {'rf__n_estimators': 10, 'rf__min_samples_leaf': 1, 'rf__max_features': 3, 'rf__max_depth': None}
0.862 (+/-0.093) for {'rf__n_estimators': 100, 'rf__min_samples_leaf': 1, 'rf__max_features': 5, 'rf__max_depth': 3}
0.868 (+/-0.086) for {'rf__n_estimators': 100, 'rf__min_samples_leaf': 1, 'rf__max_features': 3, 'rf__max_depth': 10}
0.873 (+/-0.090) for {'rf__n_estimators': 100, 'rf__min_samples_leaf': 3, 'rf__max_features': 5, 'rf__max_depth': 10}
0.873 (+/-0.092) for {'rf__n_estimators': 200, 'rf__min_samples_leaf': 1, 'rf__max_features': 5, 'rf__max_depth': 5}
Fitting model with best params and tuning for best threshold ...
100%|██████████| 2/2 [00:00<00:00,  2.05it/s]
Fitting model with best params and tuning for best threshold ...
100%|██████████| 2/2 [00:00<00:00,  3.30it/s]
Fitting model with best params and tuning for best threshold ...
100%|██████████| 2/2 [00:00<00:00,  3.28it/s]
Fitting model with best params and tuning for best threshold ...
100%|██████████| 2/2 [00:00<00:00,  3.29it/s]
Fitting model with best params and tuning for best threshold ...
100%|██████████| 2/2 [00:00<00:00,  3.27it/s]
Fitting model with best params and tuning for best threshold ...
100%|██████████| 2/2 [00:00<00:00,  3.20it/s]
Fitting model with best params and tuning for best threshold ...
100%|██████████| 2/2 [00:00<00:00,  3.23it/s]
Fitting model with best params and tuning for best threshold ...
100%|██████████| 2/2 [00:00<00:00,  3.28it/s]
Fitting model with best params and tuning for best threshold ...
100%|██████████| 2/2 [00:00<00:00,  3.33it/s]
Fitting model with best params and tuning for best threshold ...
100%|██████████| 2/2 [00:00<00:00,  3.30it/s]
In [18]:
#### When using KFold, X and y are passed in whole to fit, as they are split
#### into the separate folds internally.
#### The metrics are assessed over each fold and averaged.

titanic_model_kf.fit(X, y)
In [19]:
titanic_model_kf.threshold
Out[19]:
{'roc_auc': 0.28400000000000003}
In [20]:
titanic_model_kf.return_metrics(X, y)
Detailed classification report for RandomForest_Titanic:

Confusion Matrix Average Across 10 Folds for roc_auc:
--------------------------------------------------------------------------------
          Predicted:
            Pos  Neg
--------------------------------------------------------------------------------
Actual: Pos 25 (tp)   8 (fn)
        Neg  6 (fp)  48 (tn)
--------------------------------------------------------------------------------

Classification Report Averaged Across All Folds for roc_auc:
              precision    recall  f1-score   support

           0       0.85      0.88      0.86       549
           1       0.79      0.75      0.77       342

    accuracy                           0.83       891
   macro avg       0.82      0.81      0.82       891
weighted avg       0.83      0.83      0.83       891

--------------------------------------------------------------------------------
The model is trained on the full development set.
The scores are computed on the full evaluation set.

--------------------------------------------------------------------------------
Average performance across 10 Folds:
{'AUC ROC': 0.8708905824481441,
 'Average Precision': 0.8571712206838351,
 'Brier Score': 0.12943665899615878,
 'Precision/PPV': 0.6572898324552856,
 'Sensitivity': 0.8731093599580442,
 'Specificity': 0.7161151820915013}
--------------------------------------------------------------------------------