Hands-on Machine Learning: Medical Insurance Data
Introduction
This exercise immerses you in a real-world scenario. You will:
- explore the nuances of the dataset
- apply machine learning techniques
- gain a tangible feel for data manipulation and modeling
- understand the full ML pipeline
1. Dataset Familiarization
1.1 Import and quick exploration
Your first task is to get familiar with the practical steps of data handling.
Context of the dataset
Insurance companies leverage data to make informed decisions about policy pricing. Predicting future insurance charges for new subscriptions is vital. By understanding features like BMI (Body Mass Index), sex, and age, companies can adjust pricing strategies to stay competitive while managing risks and profitability.
Given our dataset, which likely represents insurance charges over a year, our goal is to predict these charges for potential new subscribers.
The dataset we'll be working with has been sourced from Kaggle. It provides various attributes of individuals and their corresponding medical insurance charges.
License: Open
Attributes:
- `age`: Age of the insured
- `sex`: Gender of the insured
- `bmi`: Body Mass Index
- `children`: Number of children/dependents
- `smoker`: Smoking status
- `region`: Residential region
- `charges`: Medical insurance cost
(Optionally, the dataset is available on e-campus.)
Load the dataset into a Colab session
Import the dataset
Hints
import pandas as pd
data = pd.read_csv('insurance.csv')
Dive into the dataset briefly to recognize its structure:
- Data types
  Hints: use the `dtypes` attribute
- Dataset size
  Hints: use the `shape` attribute
- Column names
  Hints: use the `columns` attribute
- Look at the first lines
  Hints: use the `.head()` method
- Summarize the columns
  Hints: use the `describe()` method; use the `include` argument for the dtypes that are not in the first output
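A minimal sketch of these checks, assuming the dataframe loaded above is named `data`:

```python
# Quick structural checks, assuming the dataframe is named `data`
print(data.dtypes)    # data types of each column
print(data.shape)     # dataset size (rows, columns)
print(data.columns)   # column names
print(data.head())    # first lines
print(data.describe())                  # summary of the numeric columns
print(data.describe(include="object"))  # summary of the categorical columns
```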
Identify the key features and the target variable:
- Target: `charges`
- Discrete features: `age`, `sex`, `children`, `smoker`, `region`
- Continuous features: `bmi`
- What are the possible values for `children`, `region`, or `smoker`?
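For instance, a quick way to list the possible values (again assuming `data`):

```python
# Inspect the distinct values of the discrete columns
for col in ["children", "region", "smoker"]:
    print(col, sorted(data[col].unique()))
```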
1.2 Data Visualization
- Use dataviz techniques to explore potential relations between the target and the features.
Hints
- use a combination of `pandas.melt` and `seaborn.relplot` for dataviz
- keep the target as the id in the long format
- use `kind="scatter"` in `relplot`
- free the plot scales with `facet_kws={"sharex": False}`
- the `alpha` and `col_wrap` arguments might help
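One possible way to combine these hints (a sketch: the feature list and styling choices are yours, and `data` is assumed from above):

```python
import seaborn as sns

# Long format: keep the target as the identifier column
long = pd.melt(data, id_vars="charges",
               value_vars=["age", "bmi", "children"])

# One scatter panel per feature, with free x scales
sns.relplot(data=long, x="value", y="charges", col="variable",
            kind="scatter", alpha=0.5, col_wrap=3,
            facet_kws={"sharex": False})
```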
2. Basic Linear Modeling
2.1 Data Encoding
A linear model does not accept strings as input; this is where encoding helps.
Encoding categorical variables
Hints
- use `pd.get_dummies`
- decide whether to set `drop_first` to `True` or `False`
Feature-target split: set up your `X` and `y`.
Hints
- look at the `drop` method for dataframes
- a pandas DataFrame or a NumPy array is usually fine
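A minimal sketch of the encoding and split, assuming `data` from section 1 (whether to use `drop_first=True` is your call):

```python
# Dummy-encode the categorical columns, then split features from target
df = pd.get_dummies(data, columns=["sex", "smoker", "region"], drop_first=True)
X = df.drop(columns="charges")
y = df["charges"]
```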
2.2 Model Training
Apply your knowledge of linear models to train on the dataset:
Train the Model
Use linear algebra techniques or ML libraries as you see fit.Hints
Use linear algebra:
@
for matrix multiplication andscipy.linalg.inv
to invert a matrix.Alternatively setup a score function that take a vector of parameters as argument and use
scipy.optimize.minimize
Predictions
Generate predictions based on the model you trained.
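For the linear-algebra route, a sketch of ordinary least squares via the normal equations, assuming `X` and `y` from the previous step:

```python
import numpy as np
from scipy import linalg

# Add an intercept column, then solve beta = (X^T X)^{-1} X^T y
X_mat = np.column_stack([np.ones(len(X)), X.to_numpy(dtype=float)])
beta = linalg.inv(X_mat.T @ X_mat) @ (X_mat.T @ y.to_numpy(dtype=float))

# Predictions from the fitted coefficients
y_pred = X_mat @ beta
```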
2.3 Diagnostics
Quickly gauge the performance of your model:
Inspect Residuals A histogram can provide insights.
Hints
Use
seaborn.histplot
Evaluation the regression using the R2 score
Hints
Use
sklear.metrics.r2_score
Plot predicted values vs targets
Hints
Use
seaborn.regplot
orseaborn.scatterplot
Use Dataviz to plot residuals vs features
Hints
Use
seaborn.relplot
afterpd.melt
The
hue
argument might be good.
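A minimal sketch of these diagnostics, assuming `y` and `y_pred` from the previous step:

```python
import seaborn as sns
from sklearn.metrics import r2_score

residuals = y - y_pred

sns.histplot(residuals)              # residual distribution
print("R2:", r2_score(y, y_pred))    # goodness of fit
sns.scatterplot(x=y, y=y_pred)       # predicted values vs targets
```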
3. Some Feature Engineering
3.1 A derived feature?
Challenge yourself: can you derive any new features that might be relevant for the model, given your previous observations?
Hints
- How is BMI used in real life?
- Sometimes a feature holds too much non-essential information and is better used once reduced and categorized.
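One possibility, consistent with the `obesity` column used later in this exercise (the BMI >= 30 cut-off is an assumption based on the common clinical definition):

```python
# Derived binary feature: obesity flag from BMI, assuming `data` from section 1
df2 = data.copy()
df2["obesity"] = (df2["bmi"] >= 30).astype(int)
```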
3.2 Interaction Term
Given your observations, is there an interaction that might be of particular interest? Considering all interactions at once might add too much complexity to the model.
Hints
- Add the new feature, which should be binary.
Encoding the interaction
Hints
In linear regression an interaction between \(x_1\) and \(x_2\) is modelled as follows: \(y = \alpha + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_1 x_2\), \(x_1 x_2\) being the interaction term and \(\beta_3\) the associated parameter.
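As an illustration only (the choice of interaction is yours), an obesity x smoker interaction could be encoded as the product of two binary columns:

```python
# Interaction term x1 * x2, assuming the binary `obesity` feature above
# and `smoker` coded as "yes"/"no"
df2["obesity_smoker"] = df2["obesity"] * (df2["smoker"] == "yes").astype(int)
```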
3.3 Retrain with the new model
- Train the model and make predictions.
- Evaluate the performance.
4. Leveraging the statsmodels package
4.1 OLS with Formula API
import statsmodels.formula.api as smf
results1 = smf.ols('charges ~ age + sex + bmi + children + smoker + region + obesity', data=df2).fit()
results1.summary()
4.2 Model selection
- Add a model with the interaction term `+ obesity:smoker`.
- Compare the models with AIC, BIC, and a likelihood-ratio test (LRT).
Hints
- compute the statistic: `D = -2 * (results1.llf - results2.llf)`
- compute the difference in complexity: `df = results2.df_model - results1.df_model`
- compute the p-value as the area under the pdf that is more extreme than the observed statistic
- use `scipy.stats.chi2`
- "area under the pdf more extreme than the observed statistic" ==> `1 - cdf`
- cdf: cumulative distribution function; pdf: probability density function
- do not forget that `df` is the argument of the chi2 law here
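A sketch of the comparison, assuming `results1` from above and a second fit `results2` with the interaction term:

```python
from scipy import stats

results2 = smf.ols('charges ~ age + sex + bmi + children + smoker + region'
                   ' + obesity + obesity:smoker', data=df2).fit()

print("AIC:", results1.aic, results2.aic)
print("BIC:", results1.bic, results2.bic)

# Likelihood-ratio test
D = -2 * (results1.llf - results2.llf)
df_diff = results2.df_model - results1.df_model
p_value = 1 - stats.chi2.cdf(D, df_diff)
print("LRT p-value:", p_value)
```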
5. In-sample and out-of-sample errors
5.1 North vs South
- Split the dataset between south and north based on the region.
- Train a model with the south data.
- Compare its performance between the two datasets using the sum of squared errors and the R2 score.
- What is the difference? Comment.
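One possible setup (a sketch, assuming `df2` from section 3; the grouping of regions and the encoding choices are yours):

```python
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

def encode(frame):
    # Drop region after splitting, dummy-encode the remaining categoricals
    return pd.get_dummies(frame.drop(columns="region"),
                          columns=["sex", "smoker"], drop_first=True)

south = encode(df2[df2["region"].isin(["southeast", "southwest"])])
north = encode(df2[df2["region"].isin(["northeast", "northwest"])])

model = LinearRegression().fit(south.drop(columns="charges"), south["charges"])

for name, part in [("south (in-sample)", south), ("north (out-of-sample)", north)]:
    pred = model.predict(part.drop(columns="charges"))
    sse = ((part["charges"] - pred) ** 2).sum()
    print(name, "SSE:", sse, "R2:", r2_score(part["charges"], pred))
```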
5.2 Sample 1 vs Sample 2
Do the same with two samples of the dataframe.
Hints
- use the `sample()` method of the pandas DataFrame object
- start with two samples of 100
- then try 20 and 50
- each time, look at the sum of squared errors and the R2 score
Comment.
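A short sketch, reusing the same fit-and-compare logic as in 5.1:

```python
# Two random subsamples of the dataframe; repeat with n=50 and n=20
sample1 = df2.sample(n=100, random_state=0)
sample2 = df2.sample(n=100, random_state=1)
# Train on sample1, then compare SSE and R2 on sample1 (in-sample)
# and sample2 (out-of-sample), as in 5.1.
```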
6. Full ML workflow
- Familiarize yourself with the terms:
  - training dataset
  - validation dataset
  - test dataset
  - adversarial test dataset
- Let us make a proper setup. We expect `df2` to be the dataframe with the new feature, not yet dummy-encoded.
from sklearn.linear_model import LinearRegression
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import mean_squared_error
df_trn = pd.get_dummies(df2[df2["region"] == "southeast"].drop(columns="region"), columns=['sex', 'smoker', "obesity"], drop_first=True)
df_adv = pd.get_dummies(df2[df2["region"] != "southeast"].drop(columns="region"), columns=['sex', 'smoker', "obesity"], drop_first=True)

X_adv = df_adv.drop(columns='charges')
y_adv = df_adv['charges']

# Splitting the data
X = df_trn.drop(columns='charges')
y = df_trn['charges']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
Let’s define a full ML pipeline
# Create the preprocessing steps using ColumnTransformer
preprocessor = ColumnTransformer(
    transformers=[
        ('poly', PolynomialFeatures(include_bias=False), X.columns)
    ])

# Create the pipeline with SelectKBest feature selection
pipeline = Pipeline([
    ('preprocessor', preprocessor),
    ('feature_selection', SelectKBest(f_regression, k='all')),
    ('regressor', LinearRegression())
])
# Grid search including a parameter for the number of features to select
param_grid = [
    {'regressor': [LinearRegression()],
     'preprocessor__poly__degree': [],  # TO COMPLETE
     'feature_selection__k': [],        # TO COMPLETE
    },
    {'regressor': [],                   # TO COMPLETE: find an alternative regressor model in sklearn
     'preprocessor__poly__degree': [],  # TO COMPLETE
     'feature_selection__k': [5, 10, "all"],  # TO COMPLETE
    },
]
- Use `GridSearchCV` to find the best hyperparameters.
  Hints: create the object and use its `fit` method with the training dataset.
- Gather the best pipeline and the best parameters.
  Hints: best pipeline: `best_estimator_` attribute; best parameters: `best_params_` attribute.
- Refit the best pipeline with the training dataset, and comment on why retraining is necessary.
- Evaluate the predictions on the different datasets (training, test, adversarial test) with the MSE (mean squared error) and the R2 score.
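A sketch of the search and the final evaluation, assuming the pipeline and `param_grid` above with the TO COMPLETE entries filled in:

```python
from sklearn.metrics import r2_score

# Hyperparameter search on the training split
search = GridSearchCV(pipeline, param_grid, cv=5,
                      scoring="neg_mean_squared_error")
search.fit(X_train, y_train)

best_pipeline = search.best_estimator_
print(search.best_params_)

# Refit the best pipeline on the full training split before the final evaluation
best_pipeline.fit(X_train, y_train)

for name, (Xs, ys) in {"training": (X_train, y_train),
                       "test": (X_test, y_test),
                       "adversarial test": (X_adv, y_adv)}.items():
    pred = best_pipeline.predict(Xs)
    print(name, "MSE:", mean_squared_error(ys, pred),
          "R2:", r2_score(ys, pred))
```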