Abstract:
A key algorithm (KA) of learning theory, recently presented by Poggio and Smale, is claimed to be capable of both nonlinear classification and regression. It avoids hard quadratic programming, but suffers from the fact that nearly all training samples become "support vectors". To impose sparsity on KA, a sparse KA algorithm (SKA) is put forward, which effectively prunes "support vectors" while maintaining good generalization capacity. The superiority of SKA over SVM is demonstrated on two UCI datasets.
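To make the density problem concrete, the following is a minimal sketch of kernel regularized least squares in the spirit of KA, which fits a nonlinear regressor by solving one dense linear system instead of a quadratic program. The Gaussian kernel, its width, the regularization constant, and the toy data are all illustrative assumptions, not details taken from the paper; the point of the sketch is that the resulting coefficient vector is typically fully dense, so every training sample acts as a "support vector".

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gaussian (RBF) kernel matrix between row-sample matrices A and B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def ka_fit(X, y, lam=0.1, gamma=1.0):
    # Solve (K + lam * l * I) c = y: a dense linear system, no QP needed
    l = len(X)
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * l * np.eye(l), y)

def ka_predict(X_train, c, X_new, gamma=1.0):
    # Predictor f(x) = sum_i c_i K(x, x_i) over ALL training samples
    return rbf_kernel(X_new, X_train, gamma) @ c

# Toy 1-D regression: in practice nearly every coefficient c_i is
# nonzero, i.e. nearly every sample contributes to the predictor.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(30, 1))
y = np.sin(3 * X[:, 0])
c = ka_fit(X, y)
print("fraction of nonzero coefficients:", np.mean(np.abs(c) > 1e-8))
```

A sparse variant such as SKA must additionally decide which coefficients to drive to zero, trading a slightly more involved training procedure for a predictor that evaluates far fewer kernel terms at test time.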