Linear Kernel: Why is it recommended for text classification?

The Support Vector Machine can be viewed as a kernel machine. As a result, you can change its behavior by using a different kernel function.

The most popular kernel functions are:

  • the linear kernel
  • the polynomial kernel
  • the RBF (Gaussian) kernel
  • the string kernel

The linear kernel is often recommended for text classification.

It is interesting to note that:

The original optimal hyperplane algorithm proposed by Vapnik in 1963 was a linear classifier [1].

It was only 30 years later that the kernel trick was introduced.

If it is the simplest of these algorithms, why is the linear kernel recommended for text classification?

Text is often linearly separable

Most text classification problems are linearly separable [2].

The linear kernel works well with linearly separable data, as the sketch below illustrates.
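To make this concrete, here is a minimal sketch, assuming scikit-learn is installed; the tiny corpus and its labels are invented purely for illustration:

```python
# Minimal sketch: a linear SVM on a toy, hypothetical two-class corpus.
# TF-IDF maps each document to a sparse high-dimensional vector.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

docs = [
    "the team won the match in the final minute",    # sports
    "the striker scored two goals in the game",      # sports
    "stocks fell sharply after the earnings report", # finance
    "the central bank raised interest rates again",  # finance
]
labels = ["sports", "sports", "finance", "finance"]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(docs)   # documents -> sparse word-weight vectors

clf = LinearSVC(C=1.0)               # linear kernel: a separating hyperplane
clf.fit(X, labels)

print(clf.predict(vectorizer.transform(["the team won the game"])))
# should print ['sports']: this toy corpus is linearly separable, so the
# hyperplane cleanly splits the two classes
```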

Text has a lot of features

The linear kernel is a good choice when there are a lot of features, because mapping the data to a higher-dimensional space does not really improve the performance [3]. In text classification, both the number of instances (documents) and the number of features (words) are large.
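As an illustration (assuming scikit-learn, and using the 20 Newsgroups corpus purely as an example dataset), even a modest collection of documents produces a feature space with more dimensions than there are documents:

```python
# Illustrative sketch: TF-IDF on ~11k documents already yields a feature
# space of ~130k dimensions, and the matrix is extremely sparse.
# fetch_20newsgroups downloads the dataset on first use.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer

train = fetch_20newsgroups(subset="train")
X = TfidfVectorizer().fit_transform(train.data)

print(X.shape)                             # roughly (11314, 130107)
print(X.nnz / (X.shape[0] * X.shape[1]))   # density well below 1%
```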



[Figure: decision boundary of a linear kernel vs. an RBF kernel on linearly separable data]

As we can see in the figure above, the decision boundary produced by an RBF kernel when the data is linearly separable is almost the same as the one produced by a linear kernel. Mapping the data to a higher-dimensional space with an RBF kernel was not useful.
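A quick way to check this yourself, sketched below under the assumption that scikit-learn is available, is to train both kernels on synthetic linearly separable data and measure how often their predictions agree over a grid of points:

```python
# Sketch: on linearly separable data, an RBF kernel's decision boundary
# ends up almost identical to the linear kernel's.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# Two well-separated Gaussian blobs: linearly separable by construction.
X, y = make_blobs(n_samples=200, centers=2, cluster_std=1.0, random_state=0)

linear_svm = SVC(kernel="linear", C=1.0).fit(X, y)
rbf_svm = SVC(kernel="rbf", C=1.0).fit(X, y)

# Compare predictions over a dense grid covering the data.
xx, yy = np.meshgrid(np.linspace(X[:, 0].min(), X[:, 0].max(), 100),
                     np.linspace(X[:, 1].min(), X[:, 1].max(), 100))
grid = np.c_[xx.ravel(), yy.ravel()]
agreement = (linear_svm.predict(grid) == rbf_svm.predict(grid)).mean()
print(f"boundary agreement: {agreement:.1%}")  # typically very close to 100%
```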

Linear kernel is faster

Training an SVM with a linear kernel is faster than with any other kernel, particularly when using a dedicated library such as LibLinear [3].
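Here is a rough timing sketch, assuming scikit-learn, whose LinearSVC is backed by LibLinear while its generic SVC uses LibSVM. Exact timings depend on your machine, and the SVC fit can take several minutes on this dataset:

```python
# Rough timing sketch: LinearSVC (LibLinear) vs. SVC with a linear kernel
# (LibSVM) on the same TF-IDF matrix. LinearSVC is usually much faster.
import time
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC, LinearSVC

train = fetch_20newsgroups(subset="train")
X = TfidfVectorizer().fit_transform(train.data)
y = train.target

for name, clf in [("LinearSVC (LibLinear)", LinearSVC()),
                  ("SVC linear (LibSVM)", SVC(kernel="linear"))]:
    start = time.perf_counter()
    clf.fit(X, y)
    print(f"{name}: {time.perf_counter() - start:.1f}s")
```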

Fewer parameters to optimize

When you train an SVM with a linear kernel, you only need to optimize the C regularization parameter. When training with other kernels, you also need to optimize the γ parameter, which means that performing a grid search will usually take more time.
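The sketch below (assuming scikit-learn; the candidate values are arbitrary examples) shows how the grid grows: with five values of C, the linear kernel needs 5 models to be cross-validated, while the RBF kernel needs 5 × 4 = 20 once four values of γ are added:

```python
# Sketch of the grid-search cost: one hyperparameter (C) for the linear
# kernel versus two (C and gamma) for the RBF kernel.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

C_range = [0.01, 0.1, 1, 10, 100]    # 5 candidate values
gamma_range = [0.001, 0.01, 0.1, 1]  # 4 candidate values

X, y = make_classification(n_samples=200, random_state=0)

linear_search = GridSearchCV(SVC(kernel="linear"), {"C": C_range}).fit(X, y)
rbf_search = GridSearchCV(SVC(kernel="rbf"),
                          {"C": C_range, "gamma": gamma_range}).fit(X, y)

print(len(linear_search.cv_results_["params"]))  # 5 models cross-validated
print(len(rbf_search.cv_results_["params"]))     # 20 models cross-validated
```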


The linear kernel is indeed very well suited for text categorization.

Keep in mind, however, that it is not the only solution, and in some cases using another kernel might work better.

The recommended approach for text classification is to try a linear kernel first, because of the advantages described above.
If, however, you are after the best possible classification performance, it can be worth trying the other kernels to see whether they help.


[1] Support Vector Machines Article
[2] T. Joachims, "Text Categorization with Support Vector Machines: Learning with Many Relevant Features"
[3] C.-W. Hsu, C.-C. Chang, and C.-J. Lin, "A Practical Guide to Support Vector Classification"


I am passionate about machine learning and Support Vector Machines. When I am not writing this blog, you can find me on Kaggle participating in competitions.

4 thoughts on “Linear Kernel: Why is it recommended for text classification?”

  1. Vikram Murthy

Hey Alexandre, thanks for the simple explanation. I am new to SVMs and want to focus on sentiment analysis and news analytics. In your experience, what has been the best approach to obtain the TF-IDF matrix? I have been trying bag-of-words and have also read about using n-grams to extract only the words around key words. For example, if I want to classify a news article as positive for "oil prices", I take a bunch of key words like "oil price", "decline", "pump", "opec", etc., and extract the words around them to form the final sparse matrix that is fed to training. Can you please share your opinion and experiences? I will be grateful!

    1. Alexandre KOWALCZYK Post author

Hello, I don't know a lot about sentiment analysis, but I have done some news analytics. The bag-of-words approach is usually pretty good, and TF-IDF is the way to go to improve performance. When computing TF-IDF, keep in mind that you need to save the IDF part so that you can reuse it at prediction time. Cosine normalization is also something you might want to consider. I wouldn't take a bunch of keywords, because this is strongly biased: your view of what makes a good keyword may not be the best. I prefer to use information gain or chi-squared to select the most relevant terms if necessary. N-grams do indeed improve the accuracy of the classifier, as it gains more context.
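For readers who want to try that advice, here is a minimal sketch, assuming scikit-learn; train_docs, train_labels, and new_docs are hypothetical placeholders for your own data:

```python
# Sketch: fit TF-IDF on training data only (so the learned IDF weights are
# reused at prediction time), select terms with chi-squared instead of
# hand-picked keywords, and add bigrams for context.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

pipeline = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # unigrams + bigrams; the default
                                          # l2 norm gives cosine normalization
    SelectKBest(chi2, k=10000),           # keep the 10k most relevant terms
                                          # (k must not exceed vocabulary size)
    LinearSVC(),
)

# pipeline.fit(train_docs, train_labels)  # learns IDF weights on training data
# pipeline.predict(new_docs)              # reuses those IDF weights
```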

