K Nearest Neighbours Practice: Breast Cancer Classifier¶

Practice using the K Nearest Neighbours algorithm to predict breast cancer from a set of patient data¶

Import Libraries¶

In [1]:
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
import matplotlib.pyplot as plt
import numpy as np

The breast cancer dataset is inbuilt in the sklearn library¶

In [2]:
from sklearn.datasets import load_breast_cancer

breast_cancer_data = load_breast_cancer()

Inspect the Data¶

In [3]:
print(breast_cancer_data.feature_names)
print(breast_cancer_data.data[0])
print('')
print(breast_cancer_data.target_names)
print(breast_cancer_data.target[0])
['mean radius' 'mean texture' 'mean perimeter' 'mean area'
 'mean smoothness' 'mean compactness' 'mean concavity'
 'mean concave points' 'mean symmetry' 'mean fractal dimension'
 'radius error' 'texture error' 'perimeter error' 'area error'
 'smoothness error' 'compactness error' 'concavity error'
 'concave points error' 'symmetry error' 'fractal dimension error'
 'worst radius' 'worst texture' 'worst perimeter' 'worst area'
 'worst smoothness' 'worst compactness' 'worst concavity'
 'worst concave points' 'worst symmetry' 'worst fractal dimension']
[1.799e+01 1.038e+01 1.228e+02 1.001e+03 1.184e-01 2.776e-01 3.001e-01
 1.471e-01 2.419e-01 7.871e-02 1.095e+00 9.053e-01 8.589e+00 1.534e+02
 6.399e-03 4.904e-02 5.373e-02 1.587e-02 3.003e-02 6.193e-03 2.538e+01
 1.733e+01 1.846e+02 2.019e+03 1.622e-01 6.656e-01 7.119e-01 2.654e-01
 4.601e-01 1.189e-01]

['malignant' 'benign']
0

Split the data into training and test data¶

In [4]:
training_data, test_data, training_labels, test_labels = train_test_split(
    breast_cancer_data.data, breast_cancer_data.target, test_size=0.2, random_state=100
)
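As a side note, `train_test_split` also accepts a `stratify` argument, which keeps the malignant/benign ratio the same in both splits. This is often useful for classification data. A small self-contained sketch (re-loading the dataset so it runs on its own):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

data = load_breast_cancer()

# stratify=data.target preserves the class proportions in both splits
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=100, stratify=data.target
)
print(X_train.shape, X_test.shape)
```

With 569 samples and `test_size=0.2`, this yields 455 training rows and 114 test rows.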

Compare the accuracy of the training/test data using a value of K between 1 and 100¶

This model achieves its highest test accuracy with a K value of 24¶

In [9]:
accuracies = []
for k in range(1,101):
  classifier = KNeighborsClassifier(n_neighbors=k)
  classifier.fit(training_data, training_labels)
  accuracies.append(classifier.score(test_data, test_labels))

k_list = range(1,101)

plt.plot(k_list, accuracies)
plt.axvline(x=24, color='r', linestyle='--')
plt.xlabel('k (Number of Neighbours)')
plt.ylabel('Accuracy')
plt.title('Classifier Accuracy using K Nearest Neighbour Algorithm')
plt.show()
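Rather than reading the best K off the plot, it can be computed directly from the `accuracies` list with `np.argmax`. A minimal sketch using dummy scores (since the real list lives in the notebook session above):

```python
import numpy as np

accuracies = [0.90, 0.93, 0.95, 0.94]  # example accuracy scores for k = 1..4
k_list = range(1, len(accuracies) + 1)

# np.argmax returns the index of the highest accuracy;
# indexing k_list converts that index back into a k value
best_k = k_list[int(np.argmax(accuracies))]
print(best_k)  # -> 3
```

Applied to the notebook's own `accuracies` list, this would return the K value marked by the red dashed line.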
In [6]:
classifier = KNeighborsClassifier(n_neighbors=24)
classifier.fit(training_data, training_labels)
Out[6]:
KNeighborsClassifier(n_neighbors=24)

Collect Data From a New Patient¶

In [7]:
NewPatientData = [[0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 0.08, 0.09, 0.10,
                   0.11, 0.12, 0.13, 0.14, 0.15, 0.16, 0.17, 0.18, 0.19, 0.20,
                   0.21, 0.22, 0.23, 0.24, 0.25, 0.26, 0.27, 0.28, 0.29, 30.0]]

Run the Model on this New Patient Data¶

In [8]:
predicted_label = classifier.predict(NewPatientData)

if predicted_label[0] == 0:
    print("You may not have breast cancer.")
else:
    print("You may have breast cancer.")
You may have breast cancer.
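One caveat worth noting: KNN is distance-based, and the 30 features here sit on very different scales (e.g. `mean area` is in the thousands while `smoothness error` is below 0.01), so unscaled features can dominate the distance computation. A hedged sketch of how scaling could be folded in with scikit-learn's `StandardScaler` and `make_pipeline` (the pipeline fits the scaler on the training data only, then applies it to anything passed to `predict` or `score`):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=100
)

# The scaler standardises each feature to zero mean / unit variance
# before the distances are computed by the KNN classifier
model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=24))
model.fit(X_train, y_train)
accuracy = model.score(X_test, y_test)
print(accuracy)
```

The best K may differ once the features are scaled, so the K-sweep above would ideally be rerun inside this pipeline.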