Resource Introduction
Canonical SVMs are based on a single kernel; recent publications
have shown that using multiple kernels instead of a single one can
enhance the interpretability of the decision function and improve
classification accuracy. However, most existing approaches
reformulate multiple kernel learning as a saddle-point optimization
problem and concentrate on solving the dual. In this
paper, we show that the Multiple Kernel Learning (MKL) problem can
be reformulated as a biconvex optimization problem and can also be solved in
the primal. While the saddle-point method still lacks convergence
results, our proposed method exhibits strong convergence
properties. To solve the MKL problem, we propose a two-stage
algorithm that alternately optimizes the canonical SVMs and the
kernel weights. Since standard Newton and gradient methods are too
time-consuming, we employ the truncated Newton method to
optimize the canonical SVMs. The Hessian matrix need not be stored
explicitly, and the Newton direction can be computed efficiently
with linear conjugate gradients using only Hessian-vector products.
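
As an illustration of the two-stage alternating scheme and the Hessian-free truncated Newton step described above, here is a minimal sketch. It assumes a squared hinge loss primal objective and simplex-constrained kernel weights updated by projected gradient; these assumptions, the update rule, and all function names are illustrative, not the authors' implementation.

```python
# Illustrative sketch (not the paper's code): alternating two-stage MKL
# in the primal with a squared hinge loss. Kernel matrices K_m are
# precomputed; the weights d lie on the probability simplex.
import numpy as np


def combined_kernel(kernels, d):
    """Weighted sum K(d) = sum_m d_m * K_m."""
    return sum(w * K for w, K in zip(d, kernels))


def project_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - theta, 0.0)


def svm_primal_truncated_newton(K, y, lam, beta, newton_iters=10, cg_iters=20):
    """Inner stage: primal SVM with squared hinge loss,
    J(beta) = lam * beta' K beta + sum_i max(0, 1 - y_i (K beta)_i)^2.
    The Newton direction is obtained by conjugate gradients on
    Hessian-vector products, so the Hessian is never stored."""
    n = len(y)
    for _ in range(newton_iters):
        o = K @ beta
        margin = 1.0 - y * o
        sv = margin > 0                      # active (support) points
        grad = 2.0 * lam * (K @ beta) - 2.0 * K @ (y * margin * sv)
        if np.linalg.norm(grad) < 1e-6:
            break

        def hess_vec(v):                     # H v via two kernel products
            Kv = K @ v
            return 2.0 * lam * Kv + 2.0 * K @ (sv * Kv)

        # truncated CG: approximately solve H * step = -grad
        step = np.zeros(n)
        r = -grad.copy()
        p = r.copy()
        for _ in range(cg_iters):
            Hp = hess_vec(p)
            alpha = (r @ r) / (p @ Hp + 1e-12)
            step += alpha * p
            r_new = r - alpha * Hp
            if np.linalg.norm(r_new) < 1e-8:
                break
            p = r_new + ((r_new @ r_new) / (r @ r)) * p
            r = r_new
        beta = beta + step                   # full step (a line search would be safer)
    return beta


def mkl_biconvex(kernels, y, lam=1.0, outer_iters=20, lr=0.1):
    """Outer loop: alternate between the SVM stage and a projected
    gradient step on the kernel weights d (simplex constrained)."""
    M, n = len(kernels), len(y)
    d = np.full(M, 1.0 / M)
    beta = np.zeros(n)
    for _ in range(outer_iters):
        K = combined_kernel(kernels, d)
        beta = svm_primal_truncated_newton(K, y, lam, beta)
        # gradient of the primal objective w.r.t. each weight d_m
        o = K @ beta
        g = -2.0 * y * np.maximum(0.0, 1.0 - y * o)   # dLoss/d(output)
        grad_d = np.array([lam * beta @ (Km @ beta) + g @ (Km @ beta)
                           for Km in kernels])
        d = project_simplex(d - lr * grad_d)
    return d, beta


if __name__ == "__main__":
    # toy usage: a linear and an RBF kernel on synthetic data
    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 5))
    y = np.where(X[:, 0] + 0.1 * rng.normal(size=60) >= 0, 1.0, -1.0)
    rbf = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    d, beta = mkl_biconvex([X @ X.T, rbf], y)
    print("kernel weights:", d)
```

The point mirrors the abstract: `hess_vec` applies the Hessian to a vector through two kernel multiplications, so the truncated Newton system is solved by conjugate gradients without the Hessian ever being formed explicitly.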