# Sensitivity and Specificity in Machine Learning

To understand these concepts, you first need to know the fundamentals of a confusion matrix. Assuming you are already familiar with it, let us dive into sensitivity, specificity, and the ROC curve.

Suppose a confusion matrix in which TP = 100, TN = 50, FP = 10, and FN = 5.

## Sensitivity

Definition: out of all the samples whose actual class was positive, how often were we correct?

Formula: Sensitivity = TP / (TP + FN)

This is the same as recall for the positive class.

## Specificity

Definition: out of all the samples whose actual class was negative, how often were we correct?

Formula: Specificity = TN / (TN + FP)

This is the same as recall for the negative class.
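As a minimal sketch in plain Python (the function names are mine, not a library API; the counts are illustrative), both metrics fall directly out of the raw counts:

```python
def sensitivity(tp, fn):
    # Recall of the positive class: of all actual positives,
    # the fraction we predicted as positive.
    return tp / (tp + fn)

def specificity(tn, fp):
    # Recall of the negative class: of all actual negatives,
    # the fraction we predicted as negative.
    return tn / (tn + fp)

# Illustrative counts: 90 TP with 10 FN, 80 TN with 20 FP
print(sensitivity(90, 10))  # -> 0.9
print(specificity(80, 20))  # -> 0.8
```

With scikit-learn, the same numbers come out of `recall_score(y_true, y_pred)` for sensitivity and `recall_score(y_true, y_pred, pos_label=0)` for specificity.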

For the sake of easier understanding, assume that your confusion matrix (laid out as [[TN, FP], [FN, TP]]) is

[[0, 60]
[0, 105]]

Here, Sensitivity = 105/(105 + 0) = 1 and Specificity = 0/(0 + 60) = 0.

In the matrix above, the model has predicted every sample as a single class, so sensitivity and specificity sit at opposite extremes. The ideal matrix would instead look like this:
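The degenerate case can be checked numerically. A sketch, assuming a hypothetical split of 60 actual negatives and 105 actual positives (some actual negatives are needed for specificity to be defined at all) and a [[TN, FP], [FN, TP]] layout:

```python
# Confusion matrix [[TN, FP], [FN, TP]] for a model that predicts
# the positive class for every sample: every actual negative
# lands in the FP cell, so specificity collapses to zero.
cm = [[0, 60],
      [0, 105]]
(tn, fp), (fn, tp) = cm

print(tp / (tp + fn))  # sensitivity -> 1.0
print(tn / (tn + fp))  # specificity -> 0.0
```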

[[60, 0]
[0, 105]]

Here, Sensitivity = 105/(105 + 0) = 1 and Specificity = 60/(60 + 0) = 1. Both metrics are perfect, meaning every sample is classified correctly. (A perfect score like this on training data is usually a warning sign of overfitting rather than evidence of a truly perfect model.)
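The same result can be reproduced from label lists rather than a pre-built matrix. A sketch, assuming 60 actual negatives and 105 actual positives and a model whose predictions match the truth exactly:

```python
# Perfect predictions: the predicted labels equal the true labels.
y_true = [0] * 60 + [1] * 105
y_pred = list(y_true)

# Tally the four confusion-matrix cells from the label pairs.
tp = sum(1 for y, p in zip(y_true, y_pred) if y == 1 and p == 1)
tn = sum(1 for y, p in zip(y_true, y_pred) if y == 0 and p == 0)
fp = sum(1 for y, p in zip(y_true, y_pred) if y == 0 and p == 1)
fn = sum(1 for y, p in zip(y_true, y_pred) if y == 1 and p == 0)

print(tp / (tp + fn))  # sensitivity -> 1.0
print(tn / (tn + fp))  # specificity -> 1.0
```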

For the first confusion matrix in this article (TP = 100, TN = 50, FP = 10, FN = 5), Sensitivity = 100/(100 + 5) ≈ 0.95 and Specificity = 50/(50 + 10) ≈ 0.83. From there, we can tune the model to push both scores as high as possible without overfitting.
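A quick check of those values with the counts from the first matrix (plain Python; the rounding is just for display):

```python
# Counts from the first confusion matrix in the article
tp, tn, fp, fn = 100, 50, 10, 5

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)

print(round(sensitivity, 3))  # -> 0.952
print(round(specificity, 3))  # -> 0.833
```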

Note: we can also tune the model toward one type of error. In some applications we can afford false positives but not false negatives; in medical screening, for example, a missed disease case (a false negative) is far more costly than a false alarm.
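One common way to make that trade is to lower the decision threshold of a model that outputs probabilities. A sketch with made-up scores and labels (the threshold values are illustrative, not a recommendation):

```python
# Illustrative probability scores and true labels (1 = positive);
# both lists are invented for this sketch.
scores = [0.95, 0.80, 0.60, 0.40, 0.30, 0.10]
labels = [1,    1,    1,    0,    1,    0]

def predict(scores, threshold):
    # Label a sample positive when its score clears the threshold.
    return [1 if s >= threshold else 0 for s in scores]

def false_negatives(labels, preds):
    return sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)

def false_positives(labels, preds):
    return sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)

# At the default threshold of 0.5, the positive sample scored 0.30
# is missed: one false negative, no false positives.
print(false_negatives(labels, predict(scores, 0.5)))   # -> 1
print(false_positives(labels, predict(scores, 0.5)))   # -> 0

# Lowering the threshold removes the false negative, at the cost
# of one new false positive.
print(false_negatives(labels, predict(scores, 0.25)))  # -> 0
print(false_positives(labels, predict(scores, 0.25)))  # -> 1
```

Lowering the threshold raises sensitivity and lowers specificity; raising it does the opposite, which is exactly the trade-off the ROC curve plots.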

