scikit-learn Classification: Classification using Logistic Regression


Example

In the Logistic Regression (LR) classifier, the probabilities describing the possible outcomes of a single trial are modeled using a logistic function. It is implemented in the linear_model module:

from sklearn.linear_model import LogisticRegression
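
For intuition, the logistic (sigmoid) function mentioned above maps a real-valued score to a probability in (0, 1). A small standalone sketch for illustration only, not part of the sklearn API:

import numpy as np

def sigmoid(z):
    # logistic function: squashes any real score z into the interval (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

print(sigmoid(0.0))  # 0.5, i.e. maximum uncertainty between the two classes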

The sklearn LogisticRegression implementation can fit binary, One-vs-Rest, or multinomial logistic regression with optional L2 or L1 regularization. For example, let us consider a binary classification problem on a sample sklearn dataset:

from sklearn.datasets import make_hastie_10_2

X, y = make_hastie_10_2(n_samples=1000)

where X is an n_samples x 10 array and y holds the target labels, -1 or +1.
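
As a quick sanity check (the expected shape and label values follow from the call above), you can inspect the generated arrays:

import numpy as np

# 1000 samples with 10 features each; labels are only -1 and +1
print(X.shape)        # (1000, 10)
print(np.unique(y))   # [-1.  1.]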

Use train_test_split to divide the input data into training and test sets (70%/30%):

from sklearn.model_selection import train_test_split 
#sklearn.cross_validation in older scikit versions

data_train, data_test, labels_train, labels_test = train_test_split(X, y, test_size=0.3)
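
For reproducible experiments you can fix the seed of the split; random_state is a standard train_test_split parameter (the value 0 below is an arbitrary choice):

# Same split, but reproducible across runs
data_train, data_test, labels_train, labels_test = train_test_split(
    X, y, test_size=0.3, random_state=0)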

Using the LR classifier follows the same pattern as the other classifiers:

# Initialize Classifier. 
LRC = LogisticRegression()
LRC.fit(data_train, labels_train)
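
As noted above, the type of regularization can be chosen when constructing the classifier. A minimal sketch; the penalty, C and solver values below are illustrative choices, not tuned recommendations:

# L2 regularization is the default; C is the inverse of the regularization strength
LRC_l2 = LogisticRegression(penalty='l2', C=1.0)

# L1 regularization needs a solver that supports it, e.g. 'liblinear'
LRC_l1 = LogisticRegression(penalty='l1', C=1.0, solver='liblinear')
LRC_l1.fit(data_train, labels_train)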

# Test classifier with the test data
predicted = LRC.predict(data_test)
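
Besides the hard -1/+1 labels, the class probabilities modeled by the logistic function are available through predict_proba, and score gives the mean accuracy on the test set (a short illustrative sketch):

# Mean accuracy on the held-out test set
print(LRC.score(data_test, labels_test))

# Per-class probabilities for the first five test samples (columns ordered as in LRC.classes_)
print(LRC.predict_proba(data_test[:5]))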

Use a confusion matrix to visualise the results:

from sklearn.metrics import confusion_matrix

confusion_matrix(labels_test, predicted)
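
Note that confusion_matrix expects the true labels first and the predictions second. As a short follow-up using the same arrays, classification_report from sklearn.metrics prints per-class precision, recall and F1 scores to complement the raw counts:

from sklearn.metrics import classification_report

# Per-class precision, recall and F1 for the -1 and +1 classes
print(classification_report(labels_test, predicted))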