SVM - Understanding the math - Duality and Lagrange multipliers

This is Part 6 of my series of tutorials about the math behind Support Vector Machines.
Today we will learn about duality, optimization problems and Lagrange multipliers.

If you did not read the previous articles, you might want to start the series from the beginning by reading this article: an overview of Support Vector Machine.

Duality

In mathematical optimization theory, duality means that optimization problems may be viewed from either of two perspectives, the primal problem or the dual problem (the duality principle). The solution to the dual problem provides a lower bound to the solution of the primal (minimization) problem.  (Wikipedia)

The concept of duality is pretty simple to understand if you know what a lower bound is.

What is a lower bound?

If you have a partially ordered set K (a set having comparable elements, where the relation "less than or equal to" can be used), a lower bound of a subset S of K is an element of K which is less than or equal to every element of S.

To be less abstract: if you pick a real number (an element of the partially ordered set \mathbb{R}) and it is less than or equal to every element of a subset S of \mathbb{R}, then you can call this number a lower bound of S.

Example:

Let us consider the subset of \mathbb{R}:

 S = \{2, 4, 8, 12\}

  • Because 1 is less than or equal to 2, 4, 8, and 12, I can say that 1 is a lower bound of S.
  • The same is true of -3, for instance.
  • And even though it belongs to S, we can also call 2 a lower bound of S.

Moreover, because 2 is greater than any other lower bound, we give it a special name: we call it the infimum (or greatest lower bound).

So in our example, you can find infinitely many lower bounds, but there is only one infimum.

Note: The same logic applies with the relation "greater than or equal to", and we get the concepts of upper bound and supremum.
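To make this even more concrete, here is a small Python sketch (purely illustrative) that checks whether a candidate number is a lower bound of S. Note that for a finite set like ours, the infimum is simply the minimum:

    S = {2, 4, 8, 12}

    def is_lower_bound(candidate, subset):
        # A lower bound must be less than or equal to every element of the subset.
        return all(candidate <= element for element in subset)

    print(is_lower_bound(1, S))   # True: 1 is a lower bound of S
    print(is_lower_bound(-3, S))  # True: so is -3
    print(is_lower_bound(2, S))   # True: 2 is a lower bound even though it is in S
    print(is_lower_bound(3, S))   # False: 3 is greater than 2
    print(min(S))                 # 2, the infimum (greatest lower bound)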

Coming back to duality

Now that we know what a lower bound is, what do we understand about the definition of duality? Well, this definition means that if you have a minimization problem, you can also see it as a maximization problem.  And when you find the maximum of this problem, it will be a lower bound to the solution of the minimization problem, i.e. it will always be less than or equal to the minimum of the minimization problem.

Why do we care about duality?

It turns out that sometimes solving the dual problem is simpler than solving the primal problem. And from what we saw about lower bounds, we can see that for some problems solving the dual problem gives exactly the same result as solving the primal problem! But when?

Let us look at a visual illustration.

Duality gap

In the figure above, imagine that in our primal problem we are trying to minimize the function at the top of the graph. Its minimum is the point P. If we search for a dual function, we could end up with the one at the bottom of the graph, whose maximum is the point D. In this case, we clearly see that D is a lower bound. We define the value P-D and call it the duality gap. In this example, P-D > 0, and we say that weak duality holds.

In the figure below, we see that P-D = 0: there is no duality gap, and we say that strong duality holds.

No duality gap


Optimization problems with constraints

Notation

An optimization problem is typically written:

\begin{aligned}&{\underset {x}{\operatorname {minimize} }}&&f(x)\\&\operatorname {subject\;to} &&g_{i}(x)=0,\quad i=1,\dots ,p\\&&&h_{i}(x)\leq 0,\quad i=1,\dots ,m\end{aligned}

This notation is called the standard form. You should know that there are other notations as well.

In this notation, f is called the objective function (it is also sometimes called the cost function). By changing x (the optimization variable) we wish to find a value x^* for which f is at its minimum.

There are also p functions g_i, which define equality constraints, and m functions h_i, which define inequality constraints.

The value we find MUST respect these constraints!

What does it mean to respect the constraints?

Imagine you try to solve the following optimization problem:

\begin{aligned}&{\underset {x}{\operatorname {minimize} }}&&x^2\end{aligned}

There is no constraint, so finding the minimum is easy: the function x^2 equals 0 when x = 0. This is shown with a red star on the graph below:

When there is no constraint, the minimum is zero

Equality constraints

However, what if we try to add an equality constraint? For instance, we want to find the minimum, but we must ensure that x=1. It means we need to solve this optimization problem:

\begin{aligned}&{\underset {x}{\operatorname {minimize} }}&&x^2\\&\operatorname {subject\;to} &&x=1\end{aligned}

This time, when we try x=0 we see that the function returns its minimum; however, we cannot say this is the solution of our optimization problem, because the constraint x=1 is violated. In this example, our only choice is x=1, and this is the solution.

With an equality constraint x=1, the minimum is 1
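If you want to verify this numerically, here is a minimal sketch using SciPy's minimize function (assuming SciPy is installed); the equality constraint is passed as a function that must return zero at a feasible point:

    from scipy.optimize import minimize

    f = lambda x: x[0] ** 2                         # objective function
    eq = {'type': 'eq', 'fun': lambda x: x[0] - 1}  # x = 1 rewritten as x - 1 = 0

    result = minimize(f, x0=[0.0], constraints=[eq])
    print(result.x, result.fun)  # ~ [1.0] 1.0: the solution is x = 1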

Looking at this example, you might feel like equality constraints are useless. This is not the case, because most of the time optimization problems are performed in more than one dimension. You could, for instance, minimize a function f(x,y) with an equality constraint on x only.

Inequality constraint

What if we now use an inequality constraint? It gives us a problem of the form:

\begin{aligned}&{\underset {x}{\operatorname {minimize} }}&&x^2\\&\operatorname {subject\;to} &&x\geq1\end{aligned}

This time, we can try more values of x. For instance, x=2 respects the constraint, so it could potentially be a solution to the problem. In the end, we find that under the constraint the function f once again has its minimum at x=1.

Inequality constraint

On the graph above, the feasible region is shown in bold black: it is the set of values of x we are allowed to use. It is also called the feasible set.

In mathematical notation we can write it:

R=\{x \in \mathbb{R} | x \geq 1 \}

R is the set of values of x for which the constraints are satisfied.
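We can solve this version numerically too. One thing to be careful about in the sketch below: SciPy's 'ineq' convention is fun(x) >= 0, the opposite sign of the h_i(x) \leq 0 used in our standard form, so x \geq 1 is written as x - 1 >= 0:

    from scipy.optimize import minimize

    f = lambda x: x[0] ** 2
    ineq = {'type': 'ineq', 'fun': lambda x: x[0] - 1}  # x >= 1 as x - 1 >= 0

    result = minimize(f, x0=[2.0], constraints=[ineq])
    print(result.x)  # ~ [1.0]: the minimum sits on the boundary of the feasible set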

Combining constraints

It is possible to add several constraints to an optimization problem. Here is an example with two inequality constraints and its visual representation:

\begin{aligned}&{\underset {x}{\operatorname {minimize} }}&&x^2\\&\operatorname {subject\;to} &&x\geq1\\&&&x\leq2\end{aligned}

Combining constraints restricts the feasible region

Note that we can also mix equality and inequality constraints together. The only restriction is that if we use contradictory constraints, we can end up with a problem whose feasible set is empty:

\begin{aligned}&{\underset {x}{\operatorname {minimize} }}&&x^2\\&\operatorname {subject\;to} &&x=1\\&&&x\leq0\end{aligned}

Indeed, for a constraint to be respected, it must be true; when there are several constraints, they must all be true at the same time. In the example above, it is impossible for x to equal 1 and to be less than or equal to zero at the same time.

For a function f:\mathcal{X}\to\mathcal{Y} with one equality constraint g(x) and one inequality constraint h(x), the feasible set is:

R=\{x \in \mathcal{X}\;|\;g(x)=0,\;h(x) \leq 0 \}
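As a tiny sketch of this definition (illustrative only, and using exact comparisons that only make sense for this toy case), checking whether a point belongs to the feasible set is just a matter of testing both conditions. Using the contradictory example above, we can see that no point passes both tests:

    def in_feasible_set(x, g, h):
        # x is feasible when the equality constraint holds
        # and the inequality constraint is satisfied.
        return g(x) == 0 and h(x) <= 0

    g = lambda x: x - 1  # the constraint x = 1, written as g(x) = 0
    h = lambda x: x      # the constraint x <= 0, written as h(x) <= 0

    print(in_feasible_set(1.0, g, h))  # False: h(1) = 1 > 0
    print(in_feasible_set(0.0, g, h))  # False: g(0) = -1 != 0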

What is the solution of an optimization problem?

The optimal value p^* of the optimization problem:

\begin{aligned}&{\underset {x}{\operatorname {minimize} }}&&f(x)\\&\operatorname {subject\;to} &&g_{i}(x)=0,\quad i=1,\dots ,p\\&&&h_{i}(x)\leq 0,\quad i=1,\dots ,m\end{aligned}

is:

p^* = \inf\{f(x) \mid g_i(x)=0,\; i=1,\dots,p,\;h_{i}(x)\leq 0,\; i=1,\dots,m\}

(Source (page 2))

Basically, it says in mathematical notation that the optimal value p^* is the infimum of f(x) with all the constraints respected. (The point x^* at which this value is attained is called the optimal point.)
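To make this definition tangible, here is a brute-force sketch (purely illustrative; real solvers are much smarter): we sample candidate values of x, keep only the feasible ones, and take the smallest value of f over them. We reuse the problem "minimize x^2 subject to x \geq 1":

    import numpy as np

    f = lambda x: x ** 2
    candidates = np.linspace(-5, 5, 10001)

    # Keep only the points satisfying x >= 1, i.e. h(x) = 1 - x <= 0.
    feasible = candidates[1 - candidates <= 0]

    p_star = f(feasible).min()               # the optimal value: ~1.0
    x_star = feasible[f(feasible).argmin()]  # a feasible point achieving it: ~1.0
    print(p_star, x_star)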

How do we find the solution to an optimization problem with constraints?

We will be using Lagrange multipliers, a method introduced by the mathematician Joseph-Louis Lagrange at the end of the 18th century.

Joseph Louis Lagrange

Lagrange multipliers

As often, we can find a pretty clear definition on Wikipedia:

In mathematical optimization, the method of Lagrange multipliers is a strategy for finding the local maxima and minima of a function subject to equality constraints. (Wikipedia)

The critical thing to note from this definition is that the method of Lagrange multipliers only works with equality constraints. So we can use it to solve some optimization problems: those having one or several equality constraints.

But don't worry: Lagrange multipliers will also be the basis for solving problems with inequality constraints, so it is worth understanding this simpler case 🙂

Before explaining what the idea behind Lagrange multipliers is, let us refresh our memory about contour lines.

Contour lines

It is important to understand contour lines to better appreciate Lagrange multipliers.

A contour plot is a way to visualize a three-dimensional surface in two dimensions.

For instance, the picture below shows the function f(x,y) = x+y displayed in 3D (left) and as a contour plot (right):

contours

Key concepts regarding contour lines:

  • along each contour line, the function returns the same value
  • the darker the area, the smaller the value of the function

You can see the last point illustrated in the two figures below. Even if they have the same lines, the change of color allows us to draw a 3D picture of the function in our mind.
contours_4

contours_1

 

Moreover, the gradient of a function can be visualized as a vector field, with each arrow pointing in the direction in which the function increases:

gradient_vector_field

The gradient of the function is projected as a vector field (Wikipedia)

It turns out we can easily draw gradient vectors on a contour plot:

  • they are perpendicular to the contour lines
  • they point in the direction in which the function is increasing

Here is a representation on two contour plots:

gradient_and_contour
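If you want to draw this kind of figure yourself, here is a minimal matplotlib sketch (using f(x,y)=x^2+y^2 as an example of my choosing) that plots contour lines with the gradient vector field on top; you can check that the arrows are perpendicular to the lines and point uphill:

    import numpy as np
    import matplotlib.pyplot as plt

    # Fine grid for the contour lines.
    x, y = np.meshgrid(np.linspace(-2, 2, 200), np.linspace(-2, 2, 200))
    f = x ** 2 + y ** 2

    # Coarser grid for the arrows so the plot stays readable.
    xq, yq = np.meshgrid(np.linspace(-2, 2, 15), np.linspace(-2, 2, 15))
    gx, gy = 2 * xq, 2 * yq  # gradient of f is (2x, 2y)

    plt.contour(x, y, f, levels=10)
    plt.quiver(xq, yq, gx, gy)
    plt.gca().set_aspect('equal')
    plt.show()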

Back to Lagrange multipliers

Let us consider the following optimization problem:

\begin{aligned}&{\underset {x,y}{\operatorname {minimize} }}&&f(x,y)=x^2+y^2\\&\operatorname {subject\;to} &&g(x,y)=x+y-1=0\end{aligned}

The objective function f(x,y) and the constraint function g(x,y) can each be visualized as a contour plot in the figure below.

It is interesting to note that we can combine both contour plots to visualize how the two functions behave on a single plot. Below, the constraint function is depicted by blue lines.
I also drew some gradient vectors of the objective function (in black) and some gradient vectors of the constraint function (in white).

function_and_constraint_3

However, we are not interested in the whole constraint function. We are only interested in the points where the constraint is satisfied, i.e. g(x,y)=0.

It means that we want points where:

x+y-1=0

x+y=1

y=1-x

In the graph below, we plot the line y=1-x on top of the objective function. As we saw earlier, this line is the feasible set.

function_and_constraint_4

What did Lagrange find? He found that the minimum of f(x,y) under the constraint g(x,y)=0 is obtained when their gradients point in the same direction. Let us look at how we can reach a similar conclusion.

In the figure below, we can see the point where the objective function and the feasible set are tangent. I also added some objective function gradient vectors in black and some constraint gradient vectors in white.

We can see that there is only one point where the two vectors point in the same direction: it is the minimum of the objective function under the constraint.

Imagine you put your finger on the first point with arrows at the top of the figure. There, you evaluate the objective function and find that it returns some value v. Now you wish to know if there is a smaller value. You must stay on the blue line, or the constraint will be violated, so you can only go to the left or to the right.

You would like to go left, but you see that the gradient of the objective function points towards the left (remember, it always points towards greater values), so it might not be a good idea. To be sure, you try going left anyway, and find that the objective function returns v+1: it is increasing, which is not what we want. Going to the right instead, the objective function returns v-1: good, you are going in the correct direction. So you continue like that until you reach the central point, where the two arrows are parallel. But once you move a little further, you find that the objective function is increasing again. As you cannot move right or left anymore without increasing the objective function, you conclude that this point is the minimum.

How does it translate mathematically?

Lagrange told us that to find the minimum of a constrained function, we need to look for points where \nabla f(x,y) = \lambda \nabla g(x,y).

But what is \lambda and where does it come from?

It is what we call a Lagrange multiplier. Indeed, even if the two gradient vectors point in the same direction, they might not have the same length, so there must be some factor \lambda allowing us to transform one into the other.

Note that this formula does not require the gradients to have the same direction (multiplying by a negative number changes the direction), but only to be parallel. This is because the method can be used to find both maxima and minima at the same time (see Example 1 of this article if you want to see how).

How do we find points for which \nabla f(x,y) = \lambda \nabla g(x,y) ?

Note that,  \nabla f(x,y) = \lambda \nabla g(x,y) is equivalent to:

\nabla f(x,y) -\lambda \nabla g(x,y) = 0

To make things a little easier, we notice that if we define a function:

L(x,y,\lambda) = f(x,y)-\lambda g(x,y)

then its gradient is:

\nabla L(x,y,\lambda) = \nabla f(x,y)-\lambda \nabla g(x,y)

This function L is called the Lagrangian, and solving for the gradient of the Lagrangian (solving \nabla L(x,y,\lambda) = 0) means finding the points where the gradients of f and g are parallel.

Let us solve this example using the Lagrange multiplier method!

Remember, the problem we wish to solve is:

\begin{aligned}&{\underset {x,y}{\operatorname {minimize} }}&&f(x,y)=x^2+y^2\\&\operatorname {subject\;to} &&g(x,y)=x+y-1=0\end{aligned}

Step 1: We introduce the Lagrangian function

L(x,y,\lambda) = f(x,y)-\lambda g(x,y)

and its gradient is:

\nabla L(x,y,\lambda) = \nabla f(x,y)-\lambda \nabla g(x,y)

Step 2: We solve for its gradient

We solve:

\nabla L(x,y,\lambda) = 0

which means solving the following system of equations:

\begin{cases} \frac{\partial{L}}{\partial x} = 0\\ \frac{\partial{L}}{\partial y} = 0 \\ \frac{\partial{L}}{\partial \lambda} = 0 \end{cases}

\begin{cases} 2x-\lambda = 0\\ 2y-\lambda = 0 \\ -x-y+1= 0 \end{cases}

By multiplying the second equation by -1 and adding it to the first, we get:
2x-2y=0
so:
x=y

We replace y by x in the third equation:
-x-x+1=0
-2x+1=0
-2x=-1
x=\frac{1}{2}

We replace x by \frac{1}{2} in the first equation:
2\cdot\frac{1}{2}-\lambda=0
1-\lambda=0
\lambda=1

We have found a solution: x=\frac{1}{2}, y=\frac{1}{2} and \lambda=1

We conclude that the minimum of f under the constraint g(x,y)=0 is obtained for x=\frac{1}{2}, y=\frac{1}{2}.

Let us verify if it makes sense with our graphical analysis on the graph below:

function_and_constraint_7

The minimum of f is at x=1/2 and y=1/2

We can see that the point (x,y)=(\frac{1}{2},\frac{1}{2}) is indeed on the feasible set (the blue line), and it is the one we identified earlier.
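As a final sanity check, we can let a computer algebra system redo the computation. Here is a sketch with SymPy (assuming it is installed) that builds the Lagrangian and solves \nabla L(x,y,\lambda) = 0:

    import sympy as sp

    x, y, lam = sp.symbols('x y lambda')
    f = x ** 2 + y ** 2
    g = x + y - 1

    L = f - lam * g  # the Lagrangian
    solution = sp.solve([sp.diff(L, v) for v in (x, y, lam)], [x, y, lam], dict=True)
    print(solution)  # x = 1/2, y = 1/2, lambda = 1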

Conclusion

In this article, we learned an important concept in optimization: duality. Moreover, we discovered that an optimization problem can have equality and inequality constraints. Eventually, we learned what Lagrange multipliers are and how we can use them to solve an optimization problem with one equality constraint.

If you wish to learn more about Lagrange multipliers in the context of SVM, you can read this very good paper showing how to use them with more equality constraints and with inequality constraints.

When will next part be ready?

I do not have a plan yet for writing the next part of this series as I am currently writing a free e-book about Support Vector Machines. It is close to completion, so I will now focus my time on finishing it before continuing anything else. If you wish to be notified when it is released, subscribe to the mailing list below.

EDIT: 15/02/2017

I finished writing the ebook, and it is now being read by one of my coworkers. As soon as I have his feedback, I will make the last corrections and send the manuscript to my editor. 🙂

EDIT: 07/04/2017

I sent the ebook to my editor and it is currently being reviewed before publication.

EDIT: 24/04/2017
The ebook has been reviewed and approved! It will be published soon (I don't have a date yet, sorry). I will announce it when it is ready.

EDIT: 27/05/2017
The ebook is now in the editor's hands. It will be added to their publication schedule; I do not have a publication date yet. I am waiting for a reply on this matter.

EDIT: 06/06/2017
The ebook will be available on 07/08/2017!
If you wish to be notified when it is released, subscribe to the mailing list below.
I am passionate about machine learning and Support Vector Machines. I like to explain things simply to share my knowledge with people from around the world. If you wish, you can connect with me on LinkedIn; I like to connect with my readers.
