Clustering using K-Means with the Titanic Dataset


In [1]:
#https://pythonprogramming.net/static/downloads/machine-learning-data/titanic.xls
import matplotlib.pyplot as plt
from matplotlib import style
style.use('ggplot')
import numpy as np
from sklearn.cluster import KMeans
from sklearn import preprocessing
import pandas as pd

'''
Pclass Passenger Class (1 = 1st; 2 = 2nd; 3 = 3rd)
survival Survival (0 = No; 1 = Yes)
name Name
sex Sex
age Age
sibsp Number of Siblings/Spouses Aboard
parch Number of Parents/Children Aboard
ticket Ticket Number
fare Passenger Fare (British pound)
cabin Cabin
embarked Port of Embarkation (C = Cherbourg; Q = Queenstown; S = Southampton)
boat Lifeboat
body Body Identification Number
home.dest Home/Destination
'''

df = pd.read_excel('titanic.xls')
#print(df.head())
df.drop(['body', 'name'], axis=1, inplace=True)
# convert_objects is deprecated; per the FutureWarning, use the pd.to_* converters instead
df = df.apply(pd.to_numeric, errors='ignore')
df.fillna(0, inplace=True)
#print(df.head())
In [2]:
df.head()
Out[2]:
   pclass  survived     sex      age  sibsp  parch  ticket      fare    cabin embarked boat                        home.dest
0       1         1  female  29.0000      0      0   24160  211.3375       B5        S    2                     St Louis, MO
1       1         1    male   0.9167      1      2  113781  151.5500  C22 C26        S   11  Montreal, PQ / Chesterville, ON
2       1         0  female   2.0000      1      2  113781  151.5500  C22 C26        S    0  Montreal, PQ / Chesterville, ON
3       1         0    male  30.0000      1      2  113781  151.5500  C22 C26        S    0  Montreal, PQ / Chesterville, ON
4       1         0  female  25.0000      1      2  113781  151.5500  C22 C26        S    0  Montreal, PQ / Chesterville, ON

Replace nominal (categorical) values with numeric codes.

In [3]:
def handle_non_numerical_data(df):
    columns = df.columns.values

    for column in columns:
        # map each unique non-numeric value in this column to an integer code
        text_digit_vals = {}
        def convert_to_int(val):
            return text_digit_vals[val]

        if df[column].dtype != np.int64 and df[column].dtype != np.float64:
            column_contents = df[column].values.tolist()
            unique_elements = set(column_contents)
            x = 0
            # assign the next unused integer to each value we haven't seen yet
            for unique in unique_elements:
                if unique not in text_digit_vals:
                    text_digit_vals[unique] = x
                    x += 1

            df[column] = list(map(convert_to_int, df[column]))

    return df

df = handle_non_numerical_data(df)
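
As an aside, pandas can produce the same kind of integer encoding more concisely with its categorical dtype. A minimal sketch, not part of the original tutorial; note that the codes it assigns will generally differ from handle_non_numerical_data's, which is harmless here since the integers are arbitrary labels either way:

for column in df.columns:
    if df[column].dtype not in (np.int64, np.float64):
        # .cat.codes maps each unique value in the column to an integer code
        df[column] = df[column].astype('category').cat.codes
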
In [4]:
df.head()
Out[4]:
   pclass  survived  sex      age  sibsp  parch  ticket      fare  cabin  embarked  boat  home.dest
0       1         1    0  29.0000      0      0     764  211.3375     24         1     1        328
1       1         1    1   0.9167      1      2     525  151.5500    181         1     3        154
2       1         0    0   2.0000      1      2     525  151.5500    181         1     0        154
3       1         0    1  30.0000      1      2     525  151.5500    181         1     0        154
4       1         0    0  25.0000      1      2     525  151.5500    181         1     0        154

From here, we can do the clustering right away:

KMeans

In [2]:
X = np.array(df.drop(['survived'], axis=1).astype(float))
y = np.array(df['survived'])

clf = KMeans(n_clusters=2)
clf.fit(X)
Out[2]:
KMeans(algorithm='auto', copy_x=True, init='k-means++', max_iter=300,
    n_clusters=2, n_init=10, n_jobs=1, precompute_distances='auto',
    random_state=None, tol=0.0001, verbose=0)

Great, now let's see if the groups match each other. One note: in this case, survived is either a 0, which means non-survival, or a 1, which means survival. A clustering algorithm will find the clusters, but will then assign arbitrary labels to them in the order it discovers them. Thus, the cluster containing the survivors might be labeled 0 or 1, depending on random initialization. Consequently, if your score flips between 30% and 70% from run to run, your model is really 70% accurate. Let's see what we get:
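
Since the labels are arbitrary, a label-invariant accuracy for two clusters is simply the better of the raw score and its complement. A small helper to make that explicit (the name label_invariant_accuracy is mine, not from the tutorial):

def label_invariant_accuracy(raw_score):
    # Flipping the two cluster labels turns a score s into 1 - s,
    # so the meaningful accuracy is whichever of the two is larger.
    return max(raw_score, 1 - raw_score)

print(label_invariant_accuracy(0.30))  # 0.7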

In [3]:
correct = 0
for i in range(len(X)):
    predict_me = np.array(X[i].astype(float))
    predict_me = predict_me.reshape(-1, len(predict_me))
    prediction = clf.predict(predict_me)
    if prediction[0] == y[i]:
        correct += 1

print(correct/len(X))
0.5087853323147441

Okay, so the raw score hovers around 49%-51% from run to run... not very good! Remember the idea of preprocessing from a few tutorials ago? When we used it back then, it didn't seem to matter much, but how about here?

In [7]:
X = np.array(df.drop(['survived'], axis=1).astype(float))
X = preprocessing.scale(X)
y = np.array(df['survived'])

clf = KMeans(n_clusters=2)
clf.fit(X)

correct = 0
for i in range(len(X)):
    predict_me = np.array(X[i].astype(float))
    predict_me = predict_me.reshape(-1, len(predict_me))
    prediction = clf.predict(predict_me)
    if prediction[0] == y[i]:
        correct += 1

print(correct/len(X))
0.2773109243697479

Looks like preprocessing made a big difference here. Recall that preprocessing.scale standardizes each feature to zero mean and unit variance, which helps distance-based algorithms like K-Means. I've never seen preprocessing make a large negative impact; usually it makes almost no impact at all, but here it has made a very large positive one: since the cluster labels are arbitrary, the raw score of 0.277 corresponds to roughly 72% accuracy.
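
You can verify what preprocessing.scale actually does to the features; a quick illustrative snippet, not from the original tutorial:

X_raw = np.array(df.drop(['survived'], axis=1).astype(float))
X_scaled = preprocessing.scale(X_raw)
# Every column should now have (approximately) zero mean and unit variance.
print(X_scaled.mean(axis=0).round(6))
print(X_scaled.std(axis=0).round(6))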

Curiously, I wonder how much of this comes down to whether or not the person got onto a lifeboat. The machine may simply be separating people with a lifeboat from those without one. We can see if that makes a big difference by adding df.drop(['boat'], axis=1, inplace=True) before we define X:

0.6844919786096256

Nothing major, but there is a slight impact. What about sex? The dataset has two values here, male and female; maybe that's mostly what the clusterer is picking up on. Now we try df.drop(['sex'], axis=1, inplace=True):

0.6982429335370511

Nothing significant here either.
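
If you want to probe feature influence more systematically, the drop-one-column experiment can be automated. A rough sketch under the same setup as above (kmeans_score is my helper, and the exact numbers will vary from run to run):

def kmeans_score(frame):
    # Fit KMeans on everything except 'survived' and return a label-invariant accuracy.
    X = preprocessing.scale(np.array(frame.drop(['survived'], axis=1).astype(float)))
    y = np.array(frame['survived'])
    labels = KMeans(n_clusters=2).fit_predict(X)
    raw = (labels == y).mean()
    return max(raw, 1 - raw)

for col in ['boat', 'sex', 'fare', 'pclass']:
    print(col, kmeans_score(df.drop([col], axis=1)))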

Before the full code, here's a quick look at the scaled feature array X and the labels y:

In [8]:
X
Out[8]:
array([[-1.54609786, -1.34499549,  0.29131302, ..., -0.61896813,
        -0.50016507,  1.92788468],
       [-1.54609786,  0.74349692, -1.30576934, ..., -0.61896813,
        -0.25722775,  0.46086104],
       [-1.54609786, -1.34499549, -1.24416265, ..., -0.61896813,
        -0.62163373,  0.46086104],
       ..., 
       [ 0.84191642,  0.74349692,  0.14913935, ...,  1.83255767,
        -0.62163373, -0.83753919],
       [ 0.84191642,  0.74349692,  0.17757408, ...,  1.83255767,
        -0.62163373, -0.83753919],
       [ 0.84191642,  0.74349692,  0.29131302, ..., -0.61896813,
        -0.62163373, -0.83753919]])
In [9]:
y
Out[9]:
array([1, 1, 0, ..., 0, 0, 0], dtype=int64)

Next, add y to the original dataframe as a column and, for each value of y, examine the groups using statistics such as the means of the other columns.

Full code up to this point:

In [6]:
#https://pythonprogramming.net/static/downloads/machine-learning-data/titanic.xls
import matplotlib.pyplot as plt
from matplotlib import style
style.use('ggplot')
import numpy as np
from sklearn.cluster import KMeans
from sklearn import preprocessing
import pandas as pd

'''
Pclass Passenger Class (1 = 1st; 2 = 2nd; 3 = 3rd)
survival Survival (0 = No; 1 = Yes)
name Name
sex Sex
age Age
sibsp Number of Siblings/Spouses Aboard
parch Number of Parents/Children Aboard
ticket Ticket Number
fare Passenger Fare (British pound)
cabin Cabin
embarked Port of Embarkation (C = Cherbourg; Q = Queenstown; S = Southampton)
boat Lifeboat
body Body Identification Number
home.dest Home/Destination
'''

df = pd.read_excel('titanic.xls')
#print(df.head())
df.drop(['body', 'name'], axis=1, inplace=True)
df = df.apply(pd.to_numeric, errors='ignore')  # convert_objects is deprecated; pd.to_numeric is its replacement
df.fillna(0, inplace=True)
#print(df.head())

def handle_non_numerical_data(df):
    columns = df.columns.values

    for column in columns:
        # map each unique non-numeric value in this column to an integer code
        text_digit_vals = {}
        def convert_to_int(val):
            return text_digit_vals[val]

        if df[column].dtype != np.int64 and df[column].dtype != np.float64:
            column_contents = df[column].values.tolist()
            unique_elements = set(column_contents)
            x = 0
            # assign the next unused integer to each value we haven't seen yet
            for unique in unique_elements:
                if unique not in text_digit_vals:
                    text_digit_vals[unique] = x
                    x += 1

            df[column] = list(map(convert_to_int, df[column]))

    return df

df = handle_non_numerical_data(df)


df.drop(['sex', 'boat'], axis=1, inplace=True)
X = np.array(df.drop(['survived'], axis=1).astype(float))
X = preprocessing.scale(X)
y = np.array(df['survived'])

clf = KMeans(n_clusters=2)
clf.fit(X)

correct = 0
for i in range(len(X)):
    predict_me = np.array(X[i].astype(float))
    predict_me = predict_me.reshape(-1, len(predict_me))
    prediction = clf.predict(predict_me)
    if prediction[0] == y[i]:
        correct += 1

print(correct/len(X))
0.3155080213903743
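
Following the heading above, we can attach the predicted cluster labels to the dataframe and profile each group; pd.crosstab also shows how the clusters line up with the actual outcome. A sketch of what that could look like (the column name 'cluster' is my choice):

df['cluster'] = clf.predict(X)
print(df.groupby('cluster').mean())                 # per-cluster means of every column, including survived
print(pd.crosstab(df['cluster'], df['survived']))   # clusters vs. actual survival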

It appears that this clustering algorithm automatically separates these people into groups that align with whether they survived the ship's sinking. Interesting. We don't have much insight into exactly why the machine chose these particular groups, but they appear to correlate strongly with survivability.

In the next tutorial, we're going to dive into creating our own custom K-Means algorithm from scratch.