SVM2

SVM(Kernel)

Learning in Feature Space

  • Learning in a high-dimensional feature space can degrade generalization performance
    • This phenomenon is called the curse of dimensionality
  • By using a kernel function, which gives the inner product of training examples in the feature space, we never need to know the nonlinear map explicitly
    • We do not even need to know the dimensionality of the feature space
  • There is no free lunch
    • We have to deal with a huge and dense kernel matrix
    • A reduced kernel can avoid this difficulty

Linear Machine in Feature Space

Let $\phi :X\rightarrow F$ be a nonlinear map from the input space to the feature space. The classifier will be of the form (primal):

\[f(x)=\left(\sum_{j=1}^l w_j\phi_j(x) \right)+b\]

Dual:

\[f(x)=\left(\sum_{i=1}^l \alpha_i y_i \langle \phi(x_i) , \phi(x) \rangle \right)+b\]
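
The dual form follows from the usual SVM expansion of the weight vector as a linear combination of the mapped training examples; under that standard assumption, a one-line substitution makes the link explicit:

\[w=\sum_{i=1}^l \alpha_i y_i \phi(x_i) \quad\Longrightarrow\quad f(x)=\langle w,\phi(x)\rangle+b=\left(\sum_{i=1}^l \alpha_i y_i \langle \phi(x_i),\phi(x)\rangle \right)+b\]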

Kernel

Similarly, define a kernel as a function $K: X\times X \rightarrow \mathbb R$ such that for all $x,z\in X$

\[K(x,z)=\langle \phi(x),\phi(z)\rangle\]

The classifier will be

\[f(x)=\left(\sum_{i=1}^l \alpha_i y_i K(x_i,x) \right)+b\]
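
As a concrete illustration, here is a minimal Python sketch of evaluating this decision function from kernel values alone; the names decision_function, X_train, alpha, and b are illustrative assumptions, not notation from the notes:

```python
import numpy as np

def decision_function(x, X_train, y, alpha, b, kernel):
    """Evaluate f(x) = sum_i alpha_i * y_i * K(x_i, x) + b using only kernel values."""
    return sum(a * yi * kernel(xi, x)
               for a, yi, xi in zip(alpha, y, X_train)) + b

# Any kernel works here, e.g. the linear kernel K(x, z) = <x, z>
linear_kernel = lambda x, z: float(np.dot(x, z))

X_train = np.array([[1.0, 2.0], [-1.0, 0.5]])
y = np.array([1, -1])
alpha = np.array([0.3, 0.3])
print(decision_function(np.array([0.5, 0.5]), X_train, y, alpha, b=0.0,
                        kernel=linear_kernel))   # a positive value -> predict +1
```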

A Simple Example of a Kernel

Polynomial kernel of Degree 2:

\[K(x,z)=\langle x,z \rangle^2.\]

Let

\[x=\left[ \begin{matrix} x_1 \\ x_2 \end{matrix} \right] , \quad z=\left[ \begin{matrix} z_1 \\ z_2 \end{matrix} \right] \in \mathbb R^2\]

and the nonlinear map

\[\phi:\mathbb R^2\rightarrow \mathbb R^3 \text{ defined by } \phi(x) =\left[ \begin{matrix} x_1^2 \\ x_2^2 \\ \sqrt 2\, x_1x_2 \end{matrix} \right].\]

Then $\langle \phi(x),\phi(z)\rangle = x_1^2z_1^2+x_2^2z_2^2+2x_1x_2z_1z_2 = (x_1z_1+x_2z_2)^2 = \langle x,z\rangle^2=K(x,z)$.

  • When we compute the dual form $f(x)=\left(\sum_{i=1}^l \alpha_i y_i K(x_i,x) \right)+b$, we only need the value $K(x_i,x)=\langle \phi(x_i),\phi(x)\rangle$; there is no need to know what exactly $\phi$ is (a numerical check follows below)
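
A quick numerical check of this identity; the helper names phi and poly2 below, and the use of NumPy, are assumptions for illustration only:

```python
import numpy as np

def phi(x):
    """Explicit feature map R^2 -> R^3 from the example above."""
    return np.array([x[0]**2, x[1]**2, np.sqrt(2) * x[0] * x[1]])

def poly2(x, z):
    """Degree-2 polynomial kernel K(x, z) = <x, z>^2."""
    return np.dot(x, z) ** 2

x = np.array([1.0, 2.0])
z = np.array([3.0, -1.0])

# The two quantities agree (up to floating-point round-off)
print(np.dot(phi(x), phi(z)))   # <phi(x), phi(z)>
print(poly2(x, z))              # <x, z>^2
```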

Examples of Kernels

\[K(A,B):\mathbb R^{l\times n}\times \mathbb R^{n\times \bar l}\rightarrow \mathbb R^{l\times \bar l}\]

For $A\in \mathbb R^{l\times n}, a\in \mathbb R^l ,\mu \in \mathbb R$

  • Polynomial kernel

    \[\left(AA^T+\mu aa^T\right)_.^d \quad \left(\text{linear kernel } \mu=0,d=1 \right).\]

    Here \(\left( a_{ij} \right)_{.}^d=\left( a_{ij}^d \right)\)

  • Gaussian (Radial Basis) kernel:

    \[K(A,A^T)_{ij}=\exp(-\mu\Vert A_i-A_j\Vert^2),\quad i,j=1,\dots,l\]
  • The $ij$-entry of $K(A,A^T)$ represents the “similarity” of the data points $A_i$ and $A_j$ (both kernels are computed in the sketch below).
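
A minimal sketch of building these two kernel matrices from a data matrix A; the function names and the toy data are assumptions for illustration:

```python
import numpy as np

def polynomial_kernel(A, mu=0.0, d=1, a=None):
    """(A A^T + mu a a^T)^d with the power taken componentwise;
    mu = 0, d = 1 gives the plain linear kernel A A^T."""
    if a is None:
        a = np.ones(A.shape[0])
    K = A @ A.T + mu * np.outer(a, a)
    return K ** d                      # componentwise power (a_ij)^d

def gaussian_kernel(A, mu=1.0):
    """K_ij = exp(-mu * ||A_i - A_j||^2) for the rows A_i of A."""
    sq_dists = np.sum((A[:, None, :] - A[None, :, :]) ** 2, axis=2)
    return np.exp(-mu * sq_dists)

A = np.array([[0.0, 1.0],
              [1.0, 0.0],
              [1.0, 1.0]])
print(polynomial_kernel(A, mu=1.0, d=2))   # degree-2 polynomial kernel matrix
print(gaussian_kernel(A, mu=0.5))          # Gaussian (RBF) kernel matrix
```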

Nonlinear Support Vector Machine

1-Norm Soft Margin Linear SVM

\[\max_{\alpha \in \mathbb R^l} \mathbf 1^T\alpha -\frac{1}{2}\alpha^TDAA^TD\alpha \quad \text{s.t.} \quad \mathbf 1^TD\alpha=0, \quad 0\leq \alpha \leq C\mathbf 1\]

Here $A\in\mathbb R^{l\times n}$ is the matrix whose rows are the training points and $D=\operatorname{diag}(y_1,\dots,y_l)$ is the diagonal matrix of labels.

Applying the kernel trick lets us run the linear SVM in the feature space without ever knowing the nonlinear mapping:

\[\max_{\alpha \in \mathbb R^l} \mathbf 1^T\alpha -\frac{1}{2}\alpha^TDK(A,A^T)D\alpha \quad \text{s.t.} \quad \mathbf 1^TD\alpha=0, \quad 0\leq \alpha \leq C\mathbf 1\]

All we need to do is replace $AA^T$ with $K(A,A^T)$.
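
To make this concrete, here is a minimal sketch (assuming scikit-learn and a toy XOR-style data set, both illustrative choices) of solving this kernelized dual by passing a precomputed kernel matrix, which plays the role of $K(A,A^T)$, to SVC:

```python
import numpy as np
from sklearn.svm import SVC

def gaussian_kernel(A, B, mu=1.0):
    """K(A, B^T)_ij = exp(-mu * ||A_i - B_j||^2)."""
    sq_dists = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=2)
    return np.exp(-mu * sq_dists)

# Toy data: XOR-style labels, not linearly separable in the input space
A = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([-1, 1, 1, -1])

clf = SVC(kernel="precomputed", C=10.0)
clf.fit(gaussian_kernel(A, A, mu=3.0), y)          # pass K(A, A^T) instead of A A^T

# Prediction also needs only kernel values between test and training points
print(clf.predict(gaussian_kernel(A, A, mu=3.0)))  # should recover [-1  1  1 -1]
```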