This is Part 3 of my series of tutorials about the math behind Support Vector Machines.
If you did not read the previous articles, you might want to start the series from the beginning by reading this article: an overview of Support Vector Machine.
What is this article about?
The main focus of this article is to show you the reasoning allowing us to select the optimal hyperplane.
Here is a quick summary of what we will see:
- How can we find the optimal hyperplane?
- How do we calculate the distance between two hyperplanes?
- What is the SVM optimization problem?
How to find the optimal hyperplane?
At the end of Part 2 we computed the distance between a point and a hyperplane, and from it the resulting margin.
However, even though that hyperplane did quite a good job of separating the data, it was not the optimal hyperplane.
Figure 1: The margin we calculated in Part 2 is shown as M1
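The distance computation mentioned above can be sketched in a few lines. This is a minimal illustration, not the article's own code: the vector `w`, the offset `b`, and the sample point are hypothetical values chosen for the example, and the formula used is the standard point-to-hyperplane distance |w·x + b| / ‖w‖.

```python
import math

# A hypothetical separating hyperplane w·x + b = 0 in 2D.
# (w and b are illustrative values, not the ones from Part 2.)
w = (2.0, 1.0)
b = -4.0

def dot(u, v):
    """Dot product of two 2D vectors."""
    return u[0] * v[0] + u[1] * v[1]

def distance_to_hyperplane(x, w, b):
    """Distance from point x to the hyperplane w·x + b = 0,
    computed as |w·x + b| / ||w||."""
    return abs(dot(w, x) + b) / math.sqrt(dot(w, w))

# For the point (3, 4): |2*3 + 1*4 - 4| / sqrt(5) = 6 / sqrt(5)
print(distance_to_hyperplane((3.0, 4.0), w, b))
```

Doubling this distance for a point sitting on the margin boundary gives the margin itself, which is the quantity we want to maximize.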
This is Part 2 of my series of tutorials about the math behind Support Vector Machines.
If you did not read the previous article, you might want to start the series from the beginning by reading this article: an overview of Support Vector Machine.
In the first part, we saw what the aim of the SVM is: to find the hyperplane which maximizes the margin.
But how do we calculate this margin?
SVM = Support VECTOR Machine
In Support Vector Machine, there is the word vector.
That means it is important to understand vectors well and how to use them.
Here is a short summary of what we will see today:
- What is a vector?
- How to add and subtract vectors?
- What is the dot product?
- How to project a vector onto another?
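As a preview of the list above, all four operations fit in a few lines of code. This is a minimal sketch with made-up vectors, using only the textbook definitions: component-wise addition and subtraction, the dot product as a sum of component-wise products, and the projection of v onto u as (u·v / ‖u‖²) u.

```python
import math

u = (3.0, 4.0)
v = (1.0, 2.0)

# Addition and subtraction work component by component.
s = (u[0] + v[0], u[1] + v[1])   # (4.0, 6.0)
d = (u[0] - v[0], u[1] - v[1])   # (2.0, 2.0)

# The dot product is the sum of the component-wise products.
dot_uv = u[0] * v[0] + u[1] * v[1]   # 3*1 + 4*2 = 11

# The norm (length) of u.
norm_u = math.sqrt(u[0] ** 2 + u[1] ** 2)   # sqrt(9 + 16) = 5

# Projection of v onto u: scale u by (u·v / ||u||^2).
k = dot_uv / (norm_u ** 2)
proj = (k * u[0], k * u[1])   # (1.32, 1.76)
```

The projection in particular is what lets us measure how far a data point sits from the hyperplane, which is why it shows up again when we compute the margin.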
Once we have all these tools in our toolbox, we will then see:
- What is the equation of the hyperplane?
- How to compute the margin?
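To give a taste of where the two questions above lead, here is a minimal sketch, with an illustrative weight vector and offset that are not taken from the article. A hyperplane is the set of points x where w·x + b = 0, the sign of w·x + b tells us on which side a point falls, and the margin between the two parallel hyperplanes w·x + b = 1 and w·x + b = -1 works out to 2 / ‖w‖.

```python
import math

# An illustrative weight vector and offset (hypothetical values).
w = (3.0, 4.0)
b = -2.0

def side(x):
    """Sign of w·x + b: positive on one side of the
    hyperplane w·x + b = 0, negative on the other."""
    return w[0] * x[0] + w[1] * x[1] + b

# Margin between w·x + b = 1 and w·x + b = -1: 2 / ||w||.
margin = 2.0 / math.sqrt(w[0] ** 2 + w[1] ** 2)   # 2 / 5 = 0.4
```

Shrinking ‖w‖ therefore widens the margin, which is exactly the knob the SVM optimization problem turns.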