Support Vector Machines Succinctly released

My ebook Support Vector Machines Succinctly is available for free.

Download Support Vector Machines Succinctly

About Support Vector Machines Succinctly

While I was working on my series of articles about the mathematics behind SVMs, I was contacted by Syncfusion and asked to write an ebook for their "Succinctly" series. The goal of the series is to cover a particular subject in about 100 pages. I gladly accepted the proposal and started working on the book.

It took me almost one year to finish writing this ebook: hundreds of hours spent reading about Support Vector Machines in several books and papers, trying to untangle the complex parts in order to give you a clear view, and just as many hours drawing figures and writing code.

What will you find in this book?

  • A refresher about some prerequisites to understand the subject more easily
  • An introduction with the Perceptron
  • An explanation of the SVM optimization problem
  • How to solve the SVM optimization problem with a quadratic programming solver (see the sketch after this list)
  • A description of kernels
  • An explanation of the SMO algorithm
  • An overview of multi-class SVMs
  • A lot of Python code to show how it works (everything is available in this Bitbucket repository)
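
To give a taste of the kind of code you will find in the book, here is a minimal hard-margin SVM trained with the CVXOPT quadratic programming solver. This is my own illustrative sketch with made-up toy points, not code from the repository:
```
import numpy as np
from cvxopt import matrix, solvers

# Toy, linearly separable data with labels -1 and +1.
X = np.array([[4., 6.], [3., 3.], [8., 6.], [9., 9.]])
y = np.array([-1., -1., 1., 1.])
m = X.shape[0]

# Dual problem: minimize (1/2) a^T P a - 1^T a
# subject to a_i >= 0 and sum_i a_i y_i = 0.
K = X @ X.T                   # Gram matrix of inner products
P = matrix(np.outer(y, y) * K)
q = matrix(-np.ones(m))
G = matrix(-np.eye(m))        # -a_i <= 0, i.e. a_i >= 0
h = matrix(np.zeros(m))
A = matrix(y.reshape(1, -1))  # equality constraint y^T a = 0
b = matrix(0.0)

solvers.options['show_progress'] = False
alphas = np.ravel(solvers.qp(P, q, G, h, A, b)['x'])

# Recover the hyperplane from the dual solution.
w = ((alphas * y)[:, None] * X).sum(axis=0)  # w = sum_i a_i y_i x_i
sv = alphas.argmax()                         # index of a support vector
bias = y[sv] - w @ X[sv]                     # b = y_s - <w, x_s>
print("w =", w, "b =", bias)
```
Only the support vectors end up with non-zero alphas, which is why any index with a large alpha can be used to recover the bias.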

I hope this book will give everyone struggling to understand this subject a fresh and clear understanding of it.

Is it hard to read?

My goal was to keep the book as easy to understand as possible. However, because of the limited space available, I was not able to go into as much detail as in this blog.

Chapter 4, about the optimization problem, will probably be the most challenging to read. Do not worry if you do not catch everything the first time, and feel free to use it as a base to dive deeper into the subject if you wish to understand it fully.

If you struggle with some part, you can ask your question on stats.stackexchange if it is about machine learning, or on math.stackexchange if it is about the mathematics. Both communities are great.

What now?

If you read the book, I would love to hear your feedback!

Do not hesitate to post a comment and to share the book with your friends! 

I am passionate about machine learning and Support Vector Machines. I like to explain things simply to share my knowledge with people from around the world. If you wish, you can connect with me on LinkedIn; I like to connect with my readers.

15 thoughts on “Support Vector Machines Succinctly released”

  1. S.M. Hosseini

    Dear Alexandre, Congratulations!
    This is superb! I had no idea a book could look this good. Smooth and easy to read.

  2. Dmitry Lifshitz

    Thank you Alexandre,

    Awesome book. It is the most comprehensive material I have ever found on the net.
    It has everything: a refresher on linear algebra, convex optimization basics, code examples, and of course SVMs.

    Thank you for sharing your passion,

    Dmitry

  3. C

    Hi there Alexandre,

    First and foremost, I read through your book over the course of the last two weeks, and all of the concepts were brilliantly explained. This book is an incredible reference that I would definitely recommend to anyone interested in learning about SVMs.

    When working through your Python examples, I think I may have found a very small issue. Within `smo_algorithm.py`, in the Second Heuristic (lines 209-216), I noticed that the `randrange` function appears to be causing the algorithm to begin at the same point in a list every single time the program is run. For instance:
    ```
    rand_i = randrange(self.m)
    print(rand_i)
    ```
    will always begin at the following indices every time the program is run:
    ```
    $ python run_smo.py
    8
    13
    13
    12
    6
    ```
    For reference, I am using the `linearly_separable` dataset, and I'm running this on Python version 2.7.13. Is this behavior by design?

    1. Alexandre KOWALCZYK Post author

      Hello.

      Thank you for your comment. The randrange function always returns the same numbers because at line 16 of run_smo.py we call the function seed(5). You can read more about the seed function here. I did that so that my results are reproducible, but the algorithm states that we select the numbers randomly :).
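
      For instance, a minimal sketch of the effect (not the book's code, just an illustration):
      ```
      from random import randrange, seed

      seed(5)  # fix the RNG state, as run_smo.py does at line 16

      # These five draws are "random", but identical on every run,
      # which is what makes the SMO results reproducible.
      print([randrange(20) for _ in range(5)])
      ```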

      Best,

  4. Tetsuya

    Hello Alexandre,

    Thank you so much for the book. I had read some books about SVMs before, but your book is the most comprehensive and the easiest to understand for a beginner like me.

    So far, I have finished reading up to chapter 3, and here I have a question.

    When a hyperplane is given, I think that there could be two w vectors orthogonal to the hyperplane.

    For example, in Figure 25, there could be two w vectors, as follows:

    (1) w = (1, 1)

    In this case, b is computed as below:

    -b / w_{1} = 12
    b = -12 * w_{1} = -12 * 1 = -12

    As for the given data, I suppose:

    x_{1} = (4, 6), y_{1} = -1 for the blue star
    x_{2} = (8, 6), y_{2} = 1 for the red triangle

    Using these data, the functional margins, f_{1} and f_{2} are computed as follows:

    f_{1} = y_{1}((w, x_{1}) + b) = -1((1, 1)·(4, 6) - 12) = -(4 + 6 - 12) = 2
    f_{2} = y_{2}((w, x_{2}) + b) = 1((1, 1)·(8, 6) - 12) = 8 + 6 - 12 = 2

    Since both functional margins are positive, those points are correctly classified.

    (2) w = (-1, -1)

    In this case, b is computed as below:

    -b / w_{1} = 12
    b = -12 * w_{1} = -12 * -1 = 12

    In the same way, the functional margins, f_{1} and f_{2} are computed as follows:

    f_{1} = y_{1}((w, x_{1}) + b) = -1((-1, -1)·(4, 6) + 12) = -(-4 - 6 + 12) = -2
    f_{2} = y_{2}((w, x_{2}) + b) = 1((-1, -1)·(8, 6) + 12) = -8 - 6 + 12 = -2

    Since both functional margins are negative, those points are incorrectly classified.

    From the above considerations, I think w = (1, 1) is correct and should be taken for Figure 25. Therefore, the functional margin of the data set for Figure 25 is +2. (Yes, I agree with you on this point.)

    Then, let's compute the functional margin for Figure 26. Following the same idea, there could be two w vectors for Figure 26.

    (1) w = (-1, 1)

    In this case, b is computed as below:

    -b / w_{1} = 0
    b = 0

    So, the functional margins are computed as follows:

    f_{1} = y_{1}((w, x_{1}) + b) = -1((-1, 1)·(4, 6)) = -(-4 + 6) = -2
    f_{2} = y_{2}((w, x_{2}) + b) = 1((-1, 1)·(8, 6)) = -8 + 6 = -2

    Since both functional margins are negative, those points are incorrectly classified. Thus, I think we shouldn't take the vector w = (-1, 1) for Figure 26.

    (2) w = (1, -1)

    In this case, b is computed as below:

    -b / w_{1} = 0
    b = 0

    In the same way, the functional margins are computed as follows:

    f_{1} = y_{1}((w, x_{1}) + b) = -1((1, -1)·(4, 6)) = -(4 - 6) = 2
    f_{2} = y_{2}((w, x_{2}) + b) = 1((1, -1)·(8, 6)) = 8 - 6 = 2

    Since both functional margins are positive, those points are correctly classified. It means that we should take the vector w = (1, -1) for Figure 26 as the correct vector, instead of w = (-1, 1). If this is true, the functional margin of the data set for Figure 26 is also +2. (Sorry, I can't agree with you on this point.)
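
    A short numpy sketch reproduces both cases (my own check; I take b = 0 since the hyperplane is x1 - x2 = 0):
    ```
    import numpy as np

    # Blue star and red triangle from Figures 25 and 26.
    points = np.array([[4, 6], [8, 6]])
    labels = np.array([-1, 1])

    # The two candidate normal vectors for the hyperplane x1 - x2 = 0.
    for w in (np.array([-1, 1]), np.array([1, -1])):
        f = labels * (points @ w)  # functional margins, with b = 0
        print(w, f)                # the signs flip with the choice of w
    ```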

    This contradicts the description on page 43:
    =====
    Using this formula, we find that the functional margin of the hyperplane in Figure 25 is +2, while in Figure 26 it is -2. Because it has a bigger margin, we will select the first one.
    =====

    To arrive at the description "while in Figure 26 it is -2", I think you chose w = (-1, 1) for Figure 26. So, here is my question:

    Could you possibly clarify the reason why you chose w = (-1, 1) for Figure 26? Is there any criterion for choosing w, other than it being orthogonal to the hyperplane?

    I would be grateful if you could respond to my question.

    Thanks a lot in advance.
    Best regards,
    Tetsuya

  5. Michael

    Dear Alexandre,
    Congratulations! Thank you for sharing the book. I've been waiting for months and finally got it today. I'm going to read through the book all the way today.

    Best regards,
    - Michael

  6. alice

    Alexandre,

    Thanks for the great book on SVMs. May I ask a question about Figure 26 on page 42? I do not understand why the examples are incorrectly classified. The hyperplane is x1 - x2 = 0, and the red point (8, 6) gets a predicted value of +1 since its value is 2, so it seems correctly classified. Any explanation would be much appreciated. Thanks a lot.

    1. Alexandre KOWALCZYK Post author

      Hello Alice,

      Yes indeed, it looks like there is a typo and the two figures are switched:
      Figure 26 correctly classifies the data, and Figure 25 does not.

      Thanks for your message!

      I used the code below to check:

      ```
      import numpy as np

      # We associate each vector x_i with a label y_i,
      # which can have the value +1 or -1
      # (respectively the triangles and the stars in Figure 13).
      # triangle = +1
      # stars = -1

      # Hyperplane of figure 25
      w = np.array([-1, -1])
      bias = 12

      print(np.dot([4,6], w) + bias) # returns 2 so gets classified as +1
      print(np.dot([8,6], w) + bias) # returns -2 so gets classified as -1

      # Hyperplane of figure 26
      w = np.array([1, -1])
      bias = 0

      print(np.dot([4,6], w) + bias) # returns -2 so gets classified as -1
      print(np.dot([8,6], w) + bias) # returns 2 so gets classified as +1

      # [8,6] is a triangle, so its value is +1,
      # it is correctly classified by figure 26
      # and it is incorrectly classified by figure 25
      ```

  7. Md Humaun Rashid

    Dear Alexandre KOWALCZYK
    I don't know how to thank you enough. The way you teach SVMs is really fantastic. I didn't find any other tutorial better than your posts. I also want to learn about other topics like logistic regression, random forests, etc. Do you have tutorials for those as well? I am really interested.

    Thank you. Wish you a happy life.

