Hands-on Machine Learning: Medical Insurance Data

A practical exercise to hone your machine learning skills using real-world data.
Author

Julien Fouret

Introduction

This exercise immerses you in a real-world scenario. You will:

  • explore the nuances of the dataset
  • apply machine learning techniques
  • gain a tangible feel for data manipulation and modeling
  • understand the full ML pipeline

1. Dataset Familiarization

1.1 Import and quick exploration

Your first task is to get familiar with the practical steps of data handling.

Context of the dataset

Insurance companies leverage data to make informed decisions about policy pricing. Predicting future insurance charges for new subscriptions is vital. By understanding features like BMI (Body Mass Index), sex, and age, companies can adjust pricing strategies to stay competitive while managing risks and profitability.

Given our dataset, which likely represents insurance charges over a year, our goal is to predict these charges for potential new subscribers.

Attributes:

The dataset we’ll be working with has been sourced from Kaggle. It provides various attributes of individuals and their corresponding medical insurance charges.

Dataset Link

License: Open

  • age: Age of the insured

  • sex: Gender of the insured

  • bmi: Body Mass Index

  • children: Number of children/dependents

  • smoker: Smoking status

  • region: Residential region

  • charges: Medical insurance cost

  • (Optionally, the dataset is available on e-campus.)

  • Load the dataset into a Colab session

  • Import the dataset

    Hints

    import pandas as pd
    data = pd.read_csv('insurance.csv')

Dive into the dataset briefly to recognize its structure:

  • Data Types

    Hints

    use the dtypes attribute

  • Dataset Size

    Hints

    use the shape attribute

  • Column names

    Hints

    use the columns attribute

  • Look at the first lines

    Hints

    use the .head() method

  • Summarize the columns

    Hints

    use the describe() method.

    use the include argument to cover dtypes that do not appear in the first output

  • Identify the key features and the target variable:

    Target                  charges
    Discrete features       age, sex, children, smoker, region
    Continuous features     bmi

  • What are the possible values for children, region, or smoker? (A code sketch covering these exploration steps follows this list.)
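
A minimal sketch of these exploration steps, assuming data is the DataFrame loaded above:

# Quick structural overview of the dataset.
print(data.dtypes)        # data types of each column
print(data.shape)         # (number of rows, number of columns)
print(data.columns)       # column names
print(data.head())        # first rows

# Numeric summary, then the same for the non-numeric (object) columns.
print(data.describe())
print(data.describe(include="object"))

# Possible values of the discrete columns.
for col in ["children", "region", "smoker"]:
    print(col, data[col].unique())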

1.2 Data Visualisation

  • Use dataviz techniques to explore potential relations between the target and the features (see the sketch after these hints).

    Hints

    use a combination of pandas.melt and seaborn.relplot for dataviz.

    keep the target as the id variable in the long format

    use kind="scatter" in relplot

    free the plot scales with facet_kws={"sharex": False}

    other relplot options of interest: alpha, col_wrap.
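
A minimal sketch following the hints above, assuming data is the DataFrame loaded earlier; it restricts the scatter panels to the numeric features and uses smoker for the colour:

import seaborn as sns

# Melt the numeric features to long format, keeping the target (and one
# categorical feature for colouring) as id variables.
long_df = data.melt(
    id_vars=["charges", "smoker"],
    value_vars=["age", "bmi", "children"],
    var_name="feature", value_name="value",
)

# One scatter panel per feature; free the x scales so each feature keeps
# its own range.
sns.relplot(
    data=long_df, x="value", y="charges", hue="smoker",
    col="feature", col_wrap=3, kind="scatter",
    alpha=0.5, facet_kws={"sharex": False},
)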

2. Basic Linear Modeling

2.1 Data Encoding

Linear models do not accept strings as input. That is where encoding helps.

  • Encoding Categorical Variables

    Hints

    use pd.get_dummies

    decide whether to set drop_first to True or False

  • Feature-Target Split: Set up your X and y.

    Hints

    Look at drop method for dataframes.

    A pandas DataFrame or a NumPy array is usually fine (see the sketch after this list).
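
A minimal sketch of the encoding and the feature-target split, assuming data is the raw DataFrame:

import pandas as pd

# One-hot encode the categorical columns; drop_first=True removes one
# redundant (collinear) column per category.
df_enc = pd.get_dummies(data, columns=["sex", "smoker", "region"], drop_first=True)

# Feature-target split.
X = df_enc.drop(columns="charges")
y = df_enc["charges"]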

2.2 Model Training

Apply your knowledge of linear models to train on the dataset:

  1. Train the Model
    Use linear algebra techniques or ML libraries as you see fit.

    Hints

    Use linear algebra: @ for matrix multiplication and scipy.linalg.inv to invert a matrix.

    Alternatively, set up a score function that takes a vector of parameters as its argument and use scipy.optimize.minimize

  2. Predictions
    Generate predictions based on the model you trained (a sketch of both steps follows).
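
A minimal sketch of the linear-algebra route, assuming X and y come from the previous step:

import numpy as np
from scipy.linalg import inv

# Design matrix: intercept column plus the encoded features cast to float.
X_mat = np.column_stack([np.ones(len(X)), X.astype(float).to_numpy()])
y_vec = y.to_numpy()

# Normal equations: beta = (X^T X)^{-1} X^T y
beta = inv(X_mat.T @ X_mat) @ X_mat.T @ y_vec

# Predictions from the fitted parameters.
y_pred = X_mat @ beta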

2.3 Diagnostics

Quickly gauge the performance of your model:

  • Inspect Residuals: A histogram can provide insights.

    Hints

    Use seaborn.histplot

  • Evaluate the regression using the R2 score

    Hints

    Use sklearn.metrics.r2_score

  • Plot predicted values vs targets

    Hints

    Use seaborn.regplot or seaborn.scatterplot

  • Use Dataviz to plot residuals vs features

    Hints

    Use seaborn.relplot after pd.melt

    The hue argument might be useful. (A diagnostics sketch follows these hints.)
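
A minimal sketch of the diagnostics, assuming y and y_pred come from the previous step; the residuals-vs-features panels can reuse the melt/relplot pattern from 1.2:

import seaborn as sns
from sklearn.metrics import r2_score

# Residuals of the fitted model.
residuals = y - y_pred

# Distribution of the residuals.
sns.histplot(residuals)

# Goodness of fit.
print("R2:", r2_score(y, y_pred))

# Predicted values vs targets.
sns.scatterplot(x=y, y=y_pred)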

3. Some Feature Engineering

3.1 A derived feature?

Challenge yourself: can you derive any new features that might be relevant for the model, given your previous observations?

Hints

How is BMI used in real life?

Sometimes a feature holds too much non-essential information and is better reduced and categorized (see the sketch below).
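
One possible answer, as a minimal sketch: BMI is commonly categorised in practice, with BMI >= 30 the usual obesity threshold. The column name obesity and the dataframe name df2 match what section 4 expects; the rest is an assumption.

# New binary feature: obesity status derived from BMI (threshold 30).
df2 = data.copy()
df2["obesity"] = (df2["bmi"] >= 30).map({True: "yes", False: "no"})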

3.2 Interaction Term

Given your observations, is there one interaction that might be of particular interest? Considering all interactions at once might add too much complexity to the model.

Hints

Use the new feature from 3.1, which should be binary.
  • Encoding the interaction

    Hints

    In linear regression, an interaction between \(x_1\) and \(x_2\) is modelled as \(y = \alpha + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_1 x_2\), where \(x_1 x_2\) is the interaction term and \(\beta_3\) the associated parameter. (A short encoding sketch follows these hints.)
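
A minimal sketch of the encoding, assuming df2 holds the binary obesity feature from 3.1; the column name obesity_smoker is illustrative:

# Interaction term: product of the two binary indicators (1 only when the
# insured is both obese and a smoker).
df2["obesity_smoker"] = (
    (df2["obesity"] == "yes") & (df2["smoker"] == "yes")
).astype(int)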

3.3 Retrain with the new model

  • Train the model and make predictions

  • Evaluate the performance

4. Leveraging the statsmodels package

4.1 OLS with Formula API

import statsmodels.formula.api as smf
results1 = smf.ols('charges ~ age + sex + bmi +  children + smoker + region + obesity', data=df2).fit()
results1.summary()

4.2 Model selection

  • Add a second model including the interaction term (+ obesity:smoker in the formula)

  • Compare the models with AIC, BIC and a likelihood ratio test (LRT)

    Hints

    compute the stat: D = -2 * (results1.llf - results2.llf)

    compute the difference in complexity: df = results2.df_model - results1.df_model

    compute the p-value as the area under the pdf that is more extreme than the observed statistic.

    use scipy.stats.chi2

    “area under the pdf more extreme than the observed statistic” ==> 1 - cdf

    cdf: cumulative distribution function

    pdf: probability density function

    Do not forget that df is the argument of the chi2 distribution here. (A sketch of the comparison follows these hints.)
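
A minimal sketch of the comparison, assuming results1 (without the interaction) and results2 (with it) are the two fitted statsmodels results:

from scipy.stats import chi2

# Information criteria (lower is better).
print("AIC:", results1.aic, results2.aic)
print("BIC:", results1.bic, results2.bic)

# Likelihood ratio test: results1 is nested in results2.
D = -2 * (results1.llf - results2.llf)
df_diff = results2.df_model - results1.df_model
p_value = chi2.sf(D, df_diff)  # sf(x, df) == 1 - cdf(x, df)
print("LRT statistic:", D, "p-value:", p_value)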

5. In-sample and out-of-sample errors

5.1 North vs South

  • Split the dataset between south and north based on region.
  • Train a model on the south data
  • Compare its performance on the two subsets using the sum of squared errors and R2.
  • What is the difference? Comment. (See the sketch after this list.)
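
A minimal sketch following this list, assuming data is the raw DataFrame; sklearn's LinearRegression is used here for brevity, but any of the earlier training approaches works:

import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

# Encode once, then split on the region prefix.
enc = pd.get_dummies(data, columns=["sex", "smoker"], drop_first=True)
south = enc[enc["region"].str.startswith("south")].drop(columns="region")
north = enc[enc["region"].str.startswith("north")].drop(columns="region")

# Train on the south data only.
model = LinearRegression().fit(south.drop(columns="charges"), south["charges"])

# Compare in-sample (south) vs out-of-sample (north) performance.
for name, part in [("south", south), ("north", north)]:
    pred = model.predict(part.drop(columns="charges"))
    sse = ((part["charges"] - pred) ** 2).sum()
    print(name, "SSE:", sse, "R2:", r2_score(part["charges"], pred))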

5.2 Sample 1 vs Sample 2

  • Do the same with 2 random samples of the data frame

    Hints

    use the sample() method of the pd dataframe object.

  • start with 2 samples of 100

  • then try 20 and 50

  • Each time, look at the sum of squared errors and R2 (a short sampling sketch follows this list)

  • Comment.
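
A minimal sketch of the sampling step, reusing enc and the comparison loop from 5.1; the sample size n is the value to vary:

# Two random samples of the same size (they may overlap).
s1 = enc.drop(columns="region").sample(n=100, random_state=0)
s2 = enc.drop(columns="region").sample(n=100, random_state=1)

# Train on s1, then evaluate on both s1 (in-sample) and s2 (out-of-sample),
# exactly as in 5.1; repeat with n=20 and n=50.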

6. Full ML workflow

  • Familiarize yourself with the terms
    • training dataset
    • validation dataset
    • test dataset
    • adversarial test dataset
  • Let us make a proper setup:

We expect df2 to be the dataframe carrying the new obesity feature, not yet encoded.

from sklearn.linear_model import LinearRegression
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import mean_squared_error

df_adv = pd.get_dummies(df2[df2["region"] == "southeast"].drop(columns="region"), columns=['sex', 'smoker', "obesity"], drop_first=True)
df_trn = pd.get_dummies(df2[df2["region"] != "southeast"].drop(columns="region"), columns=['sex', 'smoker', "obesity"], drop_first=True)

X_adv = df_adv.drop(columns='charges')
y_adv = df_adv['charges']

# Splitting the data
X = df_trn.drop(columns='charges')
y = df_trn['charges']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

Let’s define a full ML pipeline:

# Create the preprocessing steps using ColumnTransformer
preprocessor = ColumnTransformer(
    transformers=[
        ('poly', PolynomialFeatures(include_bias=False), X.columns)
    ])

# Create the pipeline with SequentialFeatureSelector
pipeline = Pipeline([
    ('preprocessor', preprocessor),
    ('feature_selection', SelectKBest(f_regression, k='all')),
    ('regressor', LinearRegression())
])

# Grid search including a parameter for number of features to select
param_grid = [
    {
        'regressor': [LinearRegression()],
        'preprocessor__poly__degree': [], # TO COMPLETE
        'feature_selection__k': [], # TO COMPLETE
    },
    {
        'regressor': [], # TO COMPLETE find an alternative regressor model on sklearn
        'preprocessor__poly__degree': [], # TO COMPLETE
        'feature_selection__k': [5,10,"all"], # TO COMPLETE
    }
]
  • Use GridSearchCV to find the best hyperparameters
    Hints: Create the object and use the fit method with the training dataset.
  • Gather the best pipeline and the best parameters
    Hints: best pipeline: best_estimator_ attribute; best parameters: best_params_ attribute
  • Refit the best pipeline on the training dataset, and comment on why retraining is necessary.
  • Evaluate the predictions on the different datasets (training, test, adversarial test) with the MSE (mean squared error) and R2 scores. (A sketch of these final steps follows.)
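
A minimal sketch of these final steps, assuming the empty lists in param_grid have been completed:

from sklearn.metrics import r2_score

# Cross-validated grid search over the pipeline hyperparameters.
grid = GridSearchCV(pipeline, param_grid, cv=5, scoring="neg_mean_squared_error")
grid.fit(X_train, y_train)

# Best pipeline and best parameters found by the search.
best_pipeline = grid.best_estimator_
print(grid.best_params_)

# Refit on the full training split so the final model sees all training data.
best_pipeline.fit(X_train, y_train)

# Evaluate on the training, test and adversarial test sets.
for name, (Xs, ys) in {"train": (X_train, y_train),
                        "test": (X_test, y_test),
                        "adversarial": (X_adv, y_adv)}.items():
    pred = best_pipeline.predict(Xs)
    print(name, "MSE:", mean_squared_error(ys, pred), "R2:", r2_score(ys, pred))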