SVM - Understanding the math - Part 2


This is Part 2 of my series of tutorials about the math behind Support Vector Machines.
If you did not read the previous article, you might want to take a look before reading this one:

SVM - Understanding the math

Part 1: What is the goal of the Support Vector Machine (SVM)?
Part 2: How to compute the margin?
Part 3: How to find the optimal hyperplane?
Part 4: Unconstrained minimization
Part 5: Convex functions
Part 6: Duality and Lagrange multipliers


In the first part, we saw what the aim of the SVM is: to find the hyperplane which maximizes the margin.

But how do we calculate this margin?

SVM = Support VECTOR Machine

In Support Vector Machine, there is the word vector.
That means it is important to understand vectors well and how to use them.

Here is a short summary of what we will see today:

  • What is a vector?
    • its norm
    • its direction
  • How to add and subtract vectors?
  • What is the dot product?
  • How to project a vector onto another?

Once we have all these tools in our toolbox, we will then see:

  • What is the equation of the hyperplane?
  • How to compute the margin?

What is a vector?

If we define a point A(3,4) in \mathbb{R}^2, we can plot it like this.

Figure 1: a point

Definition: Any point x = (x_1, x_2), x\neq0, in \mathbb{R}^2 specifies a vector in the plane, namely the vector starting at the origin and ending at x.

This definition means that there exists a vector between the origin and A.

Figure 2: a vector

If we say that the point at the origin is the point O (0,0) then the vector above is the vector \vec{OA}. We could also give it an arbitrary name such as  \mathbf{u}.

Note: You can notice that we write vectors either with an arrow on top of them, or in bold. In the rest of this text I will use the arrow notation when there are two letters, like \vec{OA}, and the bold notation otherwise.

Ok so now we know that there is a vector, but we still don't know what IS a vector.

Definition: A vector is an object that has both a magnitude and a direction.

We will now look at these two concepts.

1) The magnitude

The magnitude or length of a vector x is written \|x\| and is called its norm.
For our vector \vec{OA}, \|OA\| is the length of the segment OA.

Figure 3

From Figure 3 we can easily calculate the distance OA using Pythagoras' theorem:

OA^2 = OB^2 + AB^2

OA^2 = 3^2 + 4^2

OA^2 = 25

OA = \sqrt{25}

\|OA\| =OA=5
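This computation is easy to check in code. Here is a small sketch (Python, not part of the original article):

```python
import math

def norm(v):
    """Euclidean norm (length) of a 2D vector, via Pythagoras' theorem."""
    return math.sqrt(v[0] ** 2 + v[1] ** 2)

OA = (3, 4)
print(norm(OA))  # 5.0
```

(Python's built-in `math.hypot` computes the same thing.)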

2) The direction

The direction is the second component of a vector.

Definition: The direction of a vector \mathbf{u}(u_1,u_2) is the vector \mathbf{w}(\frac{u_1}{\|u\|}, \frac{u_2}{\|u\|})

Where do the coordinates of \mathbf{w} come from?

Understanding the definition

To find the direction of a vector, we need to use its angles.

Figure 4: direction of a vector

Figure 4 displays the vector \mathbf{u} (u_1,u_2) with u_1=3 and u_2=4

We could say that:

Naive definition 1: The direction of the vector \mathbf{u} is defined by the angle \theta with respect to the horizontal axis and by the angle \alpha with respect to the vertical axis.

This is tedious. Instead, we will use the cosines of the angles.

In a right triangle, the cosine of an angle \beta is defined by:

cos(\beta)=\frac{adjacent}{hypotenuse}

In Figure 4 we can see that we can form two right triangles, and in both cases the adjacent side will be on one of the axes. This means that the definition of the cosine implicitly contains the axis related to an angle. We can rephrase our naive definition as:

Naive definition 2: The direction of the vector \mathbf{u} is defined by the cosine of the angle \theta and the cosine of the angle \alpha.

Now if we look at their values:

cos(\theta)=\frac{u_1}{\|u\|}

cos(\alpha)=\frac{u_2}{\|u\|}

Hence the original definition of the vector \mathbf{w}. That's why its coordinates are also called direction cosines.

Computing the direction vector

We will now compute the direction of the vector \mathbf{u} from Figure 4:

cos(\theta)=\frac{u_1}{\|u\|}=\frac{3}{5} =0.6

and

cos(\alpha)=\frac{u_2}{\|u\|}=\frac{4}{5}=0.8

The direction of \mathbf{u}(3,4) is the vector \mathbf{w}(0.6,0.8)

If we draw this vector we get Figure 5:

Figure 5: the direction of u

We can see that \mathbf{w} has indeed the same direction as \mathbf{u}, except it is smaller. Something interesting about direction vectors like \mathbf{w} is that their norm is equal to 1. That's why we often call them unit vectors.
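In code, computing the direction vector is just a division by the norm (a small sketch, not part of the original article):

```python
import math

def direction(u):
    """Unit vector of a 2D vector u; its coordinates are the direction cosines."""
    n = math.sqrt(u[0] ** 2 + u[1] ** 2)
    return (u[0] / n, u[1] / n)

w = direction((3, 4))
print(w)                          # (0.6, 0.8)
print(round(math.hypot(*w), 10))  # 1.0 -- a direction vector always has norm 1
```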

The sum of two vectors

Figure 6: two vectors u and v

Given two vectors \mathbf{u}(u_1, u_2) and \mathbf{v}(v_1, v_2):

\mathbf{u}+\mathbf{v}= (u_1+v_1, u_2+v_2)

This means that adding two vectors gives us a third vector whose coordinates are the sums of the coordinates of the original vectors.

You can convince yourself with the example below:

Figure 7: the sum of two vectors

The difference between two vectors

The difference works the same way:

\mathbf{u}-\mathbf{v}= (u_1-v_1, u_2-v_2)

Figure 8: the difference of two vectors

Since subtraction is not commutative, we can also consider the other case:

\mathbf{v}-\mathbf{u}= (v_1-u_1, v_2-u_2)

Figure 9: the difference v-u
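Both operations are straightforward to code, coordinate by coordinate (the vectors below are made up for the example; they are not the ones from the figures):

```python
def add(u, v):
    """Sum of two 2D vectors, coordinate by coordinate."""
    return (u[0] + v[0], u[1] + v[1])

def subtract(u, v):
    """Difference of two 2D vectors, coordinate by coordinate."""
    return (u[0] - v[0], u[1] - v[1])

u, v = (2, 1), (1, 3)
print(add(u, v))       # (3, 4)
print(subtract(u, v))  # (1, -2)
print(subtract(v, u))  # (-1, 2) -- subtraction is not commutative
```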

The last two pictures describe the "true" vectors generated by the difference of \mathbf{u} and \mathbf{v}.

However, since a vector has a magnitude and a direction, we often consider that parallel translates of a given vector (vectors with the same magnitude and direction but with a different origin) are the same vector, just drawn in a different place in space.

So don't be surprised if you meet the following:

Figure 10: another way to view the difference v-u

and

Figure 11: another way to view the difference u-v

If you do the math, it looks wrong, because the end of the vector \mathbf{u-v} does not lie at the right point; but it is a convenient way of thinking about vectors which you'll encounter often.

The dot product

One very important notion for understanding SVMs is the dot product.

Definition: Geometrically, it is the product of the Euclidean magnitudes of the two vectors and the cosine of the angle between them.

This means that if we have two vectors \mathbf{x} and \mathbf{y} with an angle \theta (theta) between them, their dot product is:

 \mathbf{x} \cdot \mathbf{y} = \|x\| \|y\|cos(\theta)

Why?

To understand why, let's look at the problem geometrically.

Figure 12

The definition talks about cos(\theta); let's see what it is.

By definition we know that in a right-angled triangle:

 cos(\theta)=\frac{adjacent}{hypotenuse}

In our example, we don't have a right-angled triangle.

However, if we take a different look at Figure 12, we can find two right-angled triangles formed by each vector with the horizontal axis.

Figure 13

and

Figure 14

So now we can view our original schema like this:

Figure 15

We can see that

 \theta = \beta - \alpha

So computing cos(\theta) is the same as computing cos(\beta - \alpha).

There is a special formula called the difference identity for cosine which says that:

cos(\beta - \alpha) = cos(\beta)cos(\alpha) + sin(\beta)sin(\alpha)

(if you want you can read  the demonstration here)

Let's use this formula!

 cos(\beta) =\frac{adjacent}{hypotenuse} =\frac{x_1}{\|x\|}

 sin(\beta) =\frac{opposite}{hypotenuse} =\frac{x_2}{\|x\|}

 cos(\alpha) =\frac{adjacent}{hypotenuse} =\frac{y_1}{\|y\|}

 sin(\alpha) =\frac{opposite}{hypotenuse} = \frac{y_2}{\|y\|}

So if we replace each term:

cos(\theta) = cos(\beta - \alpha) = cos(\beta)cos(\alpha) + sin(\beta)sin(\alpha)

cos(\theta) = \frac{x_1}{\|x\|}\frac{y_1}{\|y\|}+ \frac{x_2}{\|x\|}\frac{y_2}{\|y\|}

cos(\theta) = \frac{x_1y_1 + x_2y_2}{\|x\|\|y\|}

If we multiply both sides by \|x\|\|y\| we get:

\|x\|\|y\|cos(\theta) = x_1y_1 + x_2y_2

Which is the same as:

\|x\|\|y\|cos(\theta) = \mathbf{x} \cdot \mathbf{y}

We just found the geometric definition of the dot product!

Finally, from the last two equations, we can see that:

\mathbf{x} \cdot \mathbf{y} =x_1y_1 + x_2y_2 = \sum_{i=1}^{2}(x_iy_i)

This is the algebraic definition of the dot product!
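We can verify numerically that the two definitions agree. A quick sketch (the example vectors are arbitrary, chosen just for the check):

```python
import math

def dot(x, y):
    """Algebraic definition: sum of the products of the coordinates."""
    return x[0] * y[0] + x[1] * y[1]

def geometric_dot(x, y):
    """Geometric definition: ||x|| ||y|| cos(theta)."""
    theta = math.atan2(y[1], y[0]) - math.atan2(x[1], x[0])
    return math.hypot(*x) * math.hypot(*y) * math.cos(theta)

x, y = (3, 5), (8, 2)
print(dot(x, y))                      # 34
print(round(geometric_dot(x, y), 6))  # 34.0
```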

A few words on notation

The dot product is so called because we write a dot between the two vectors.
Talking about the dot product \mathbf{x} \cdot \mathbf{y} is the same as talking about:

  • the inner product \langle x,y \rangle (in linear algebra)
  • the scalar product, because we take the product of two vectors and it returns a scalar (a real number)

The orthogonal projection of a vector

Given two vectors \mathbf{x} and \mathbf{y}, we would like to find the orthogonal projection of \mathbf{x} onto \mathbf{y}.

Figure 16

To do this, we project the vector \mathbf{x} onto \mathbf{y}.

Figure 17

This gives us the vector \mathbf{z}.

Figure 18: z is the projection of x onto y

By definition:

cos(\theta)= \frac{\|z\|}{\|x\|}

\|z\|=\|x\|cos(\theta)

We saw in the section about the dot product that

 cos(\theta) = \frac{\mathbf{x} \cdot \mathbf{y}}{\|x\|\|y\|}

So we replace cos(\theta) in our equation:

\|z\|=\|x\|\frac{\mathbf{x} \cdot \mathbf{y}}{\|x\|\|y\|}

\|z\|=\frac{\mathbf{x} \cdot \mathbf{y}}{\|y\|}

If we define the vector \mathbf{u} as the direction of \mathbf{y} then

\mathbf{u}=\frac{\mathbf{y}}{\|y\|}

and

\|z\|=\mathbf{u} \cdot \mathbf{x}

We now have a simple way to compute the norm of the vector \mathbf{z}.
Since this vector is in the same direction as \mathbf{y}, it has the direction \mathbf{u}:

\mathbf{u}=\frac{\mathbf{z}}{\|z\|}

\mathbf{z}=\|z\|\mathbf{u}

And we can say:

The vector \mathbf{z} = (\mathbf{u} \cdot \mathbf{x})\mathbf{u} is the orthogonal projection of \mathbf{x} onto \mathbf{y}.

Why are we interested in the orthogonal projection? Well, in our example, it allows us to compute the distance between \mathbf{x} and the line which goes through \mathbf{y}.

Figure 19

We see that this distance is \|x-z\|:

\|x-z\| = \sqrt{(3-4)^2 + (5-1)^2}=\sqrt{17}
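Here is the projection as code. I assume the coordinates x=(3,5) and y=(8,2) (these are my guesses for the figures; they reproduce the \sqrt{17} distance computed above):

```python
import math

def project(x, y):
    """Orthogonal projection of x onto y: z = (u . x) u, with u = y / ||y||."""
    norm_y = math.hypot(*y)
    u = (y[0] / norm_y, y[1] / norm_y)   # direction of y
    norm_z = u[0] * x[0] + u[1] * x[1]   # ||z|| = u . x
    return (norm_z * u[0], norm_z * u[1])

x, y = (3, 5), (8, 2)
z = project(x, y)
print([round(c, 6) for c in z])                        # [4.0, 1.0]
print(round(math.hypot(x[0] - z[0], x[1] - z[1]), 4))  # 4.1231 -- sqrt(17)
```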

The SVM hyperplane

Understanding the equation of the hyperplane

You probably learned that the equation of a line is y = ax + b. However, when reading about hyperplanes, you will often find that the equation of a hyperplane is defined by:

\mathbf{w}^T\mathbf{x} = 0

How do these two forms relate?
In the hyperplane equation you can see that the names of the variables are in bold, which means that they are vectors! Moreover, \mathbf{w}^T\mathbf{x} is how we compute the inner product of two vectors, and if you recall, the inner product is just another name for the dot product!

Note that

 y = ax + b

is the same thing as

y - ax - b= 0

Given two vectors  \mathbf{w}\begin{pmatrix}-b\\-a\\1\end{pmatrix} and \mathbf{x}\begin{pmatrix}1\\x\\y\end{pmatrix}

\mathbf{w}^T\mathbf{x} = -b\times (1) + (-a)\times x + 1 \times y

\mathbf{w}^T\mathbf{x} = y - ax - b

The two equations are just different ways of expressing the same thing.
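You can convince yourself numerically with an arbitrary line (the values of a, b and the point below are made up for the check):

```python
def wtx(w, x):
    """Dot product w^T x of two 3-dimensional vectors."""
    return w[0] * x[0] + w[1] * x[1] + w[2] * x[2]

a, b = 2.0, 1.0        # the line y = 2x + 1
x_val = 3.0
y_val = a * x_val + b  # a point (x_val, y_val) on the line

w = (-b, -a, 1.0)
x = (1.0, x_val, y_val)
print(wtx(w, x))  # 0.0 -- any point on the line satisfies w^T x = 0
```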

It is interesting to note that w_0 is -b, which means that this value determines the intersection of the line with the vertical axis.

Why do we use the hyperplane equation \mathbf{w}^T\mathbf{x} = 0 instead of y = ax + b?

For two reasons:

  • it is easier to work in more than two dimensions with this notation,
  • the vector \mathbf{w} will always be normal to the hyperplane

And this last property will come in handy to compute the distance from a point to the hyperplane.

Compute the distance from a point to the hyperplane

In Figure 20 we have a hyperplane which separates two groups of data.

Figure 20

To simplify this example, we have set w_0 = 0.

As you can see in Figure 20, the equation of the hyperplane is:

x_2 = -2x_1

which is equivalent to

\mathbf{w}^T\mathbf{x}=0

with \mathbf{w}\begin{pmatrix}2 \\1\end{pmatrix}  and \mathbf{x} \begin{pmatrix}x_1 \\ x_2\end{pmatrix}

Note that the vector \mathbf{w} is shown in Figure 20 (\mathbf{w} is not a data point).

We would like to compute the distance between the point A(3,4) and the hyperplane.

This is the distance between A and its projection onto the hyperplane

Figure 21

We can view the point A as a vector from the origin to A.
If we project it onto the normal vector \mathbf{w}

Figure 22: projection of a onto w

We get the vector \mathbf{p}

Figure 23: p is the projection of a onto w

Our goal is to find the distance between the point A(3,4) and the hyperplane.
We can see in Figure 23 that this distance is the same thing as \|p\|.
Let's compute this value.

We start with two vectors: \mathbf{w}=(2,1), which is normal to the hyperplane, and \mathbf{a}=(3,4), which is the vector between the origin and A.

\|w\|=\sqrt{2^2+1^2}=\sqrt{5}

Let the vector \mathbf{u} be the direction of \mathbf{w}:

\mathbf{u} = (\frac{2}{\sqrt{5}},\frac{1}{\sqrt{5}})

\mathbf{p} is the orthogonal projection of \mathbf{a} onto \mathbf{w}, so:

\mathbf{p} = (\mathbf{u} \cdot \mathbf{a})\mathbf{u}

\mathbf{p} = ( 3 \times \frac{2}{\sqrt{5}} + 4 \times \frac{1}{\sqrt{5}}) \mathbf{u}

\mathbf{p} = (\frac{6}{\sqrt{5}} + \frac{4}{\sqrt{5}})\mathbf{u}

\mathbf{p} = \frac{10}{\sqrt{5}}\mathbf{u}

\mathbf{p} = (\frac{10}{\sqrt{5}}\times\frac{2}{\sqrt{5}},\frac{10}{\sqrt{5}}\times\frac{1}{\sqrt{5}})

\mathbf{p} = (\frac{20}{5},\frac{10}{5})

\mathbf{p} = (4,2)

\|p\| =\sqrt{4^2+2^2} = 2\sqrt{5}

Compute the margin of the hyperplane

Now that we have the distance \|p\| between A and the hyperplane, the margin is defined by:

margin = 2\|p\| = 4\sqrt{5}
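The whole computation, from the normal vector to the margin, fits in a few lines of code (a sketch reproducing the numbers above):

```python
import math

w = (2, 1)  # normal vector of the hyperplane x2 = -2 * x1
a = (3, 4)  # the point A

norm_w = math.hypot(*w)              # sqrt(5)
u = (w[0] / norm_w, w[1] / norm_w)   # direction of w
norm_p = u[0] * a[0] + u[1] * a[1]   # ||p|| = u . a
p = (norm_p * u[0], norm_p * u[1])   # p = (u . a) u

print([round(c, 6) for c in p])  # [4.0, 2.0]
print(round(norm_p, 4))          # 4.4721 -- i.e. 2 * sqrt(5)
print(round(2 * norm_p, 4))      # 8.9443 -- the margin, 4 * sqrt(5)
```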

We did it! We computed the margin of the hyperplane!

Conclusion

This ends Part 2 of this tutorial about the math behind SVM.
There was a lot more math this time, but I hope you were able to follow the article without problems.

What's next?

Now that we know how to compute the margin, we might want to know how to select the best hyperplane. This is described in Part 3 of the tutorial: How to find the optimal hyperplane?

I am passionate about machine learning and Support Vector Machine. When I am not writing this blog, you can find me on Kaggle participating in some competition.
