Safe feature elimination in l1-penalized optimization problems

Laurent El Ghaoui
University of California, Berkeley (UC Berkeley)

Talk 2: Safe feature elimination in l1-penalized optimization problems

L1-penalization has become ubiquitous in machine learning applications where a sparse solution (e.g., the vector of coefficients of a classifier) is sought. While the presence of the penalty appears to make the problem more difficult (sometimes precluding a closed-form solution), we argue that it provides a mechanism to reduce the size of the problem via a process called "safe feature elimination". In this sense, l1-penalized problems are easier than their non-penalized counterparts.
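
For concreteness, the generic problem in question can be written (in notation of our own choosing, not taken from the abstract) as
$$
\min_{w}\; L(Xw, y) + \lambda \|w\|_1,
$$
where $L$ is a convex loss function, $X$ is the data matrix whose columns correspond to features, $y$ is the response, and $\lambda > 0$ controls the strength of the penalty. A safe feature elimination test is an inexpensive certificate that $w_j = 0$ at every optimum for certain indices $j$, so that the corresponding columns of $X$ can be removed before the learning problem is ever solved.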

We describe safe feature elimination methods in the context of supervised learning problems that involve a convex loss function and an $l_1$-norm penalty. The approach leads to a potentially substantial reduction in the number of variables prior to running the supervised learning algorithm. The method is not heuristic: it eliminates only those features that are guaranteed to be absent from the solution of the learning problem. Our framework applies to a large class of l1-penalized problems, including support vector machine classification, logistic regression, and least-squares.
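
As a minimal sketch of the idea, the snippet below implements the basic SAFE screening test for the least-squares (lasso) case, $\min_w \tfrac{1}{2}\|y - Xw\|_2^2 + \lambda \|w\|_1$: feature $j$ is discarded whenever $|x_j^T y| < \lambda - \|x_j\|_2 \|y\|_2 (\lambda_{\max} - \lambda)/\lambda_{\max}$, where $\lambda_{\max} = \|X^T y\|_\infty$. The function name and interface below are our own; the talk may present the test in a different form, but any feature failing this inequality is guaranteed to have a zero coefficient at the optimum.

```python
import numpy as np

def safe_screen_lasso(X, y, lam):
    """Basic SAFE test for the lasso: min_w 0.5*||y - X w||_2^2 + lam*||w||_1.

    Returns a boolean mask over the columns (features) of X. Entries that are
    False correspond to coefficients guaranteed to be zero at the optimum, so
    those columns can be dropped before running the solver.
    """
    c = np.abs(X.T @ y)                      # |x_j^T y| for every feature j
    lam_max = c.max()                        # smallest lam for which w = 0 is optimal
    col_norms = np.linalg.norm(X, axis=0)    # ||x_j||_2
    y_norm = np.linalg.norm(y)
    # Keep feature j unless |x_j^T y| < lam - ||x_j|| * ||y|| * (lam_max - lam) / lam_max
    threshold = lam - col_norms * y_norm * (lam_max - lam) / lam_max
    return c >= threshold
```

The closer $\lambda$ is to $\lambda_{\max}$, i.e., the sparser the desired solution, the more features the test eliminates.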

The complexity of the feature elimination step is negligible compared to the typical computational effort involved in the sparse supervised learning problem: it grows linearly with the number of features times the number of examples, and is considerably lower when the data is sparse. We apply our method to data sets arising in text classification and observe a dramatic reduction in dimensionality, and hence in the computational effort required to solve the learning problem, especially when very sparse classifiers are sought. Our method immediately extends the scope of existing algorithms, allowing them to run on data sets of sizes that were previously out of their reach.
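
To make the cost claim concrete: the screening statistics only require $|x_j^T y|$ and $\|x_j\|_2$ for every feature $j$, which a single pass over the nonzero entries of the data matrix provides. A hypothetical run on a sparse, text-like matrix (sizes and density invented for illustration) might look as follows; the pass costs on the order of one matrix-vector product, which is negligible next to solving the full learning problem.

```python
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)

# Hypothetical document-term matrix: 10,000 examples, 100,000 features, ~1M nonzeros.
n_samples, n_features, nnz = 10_000, 100_000, 1_000_000
rows = rng.integers(0, n_samples, nnz)
cols = rng.integers(0, n_features, nnz)
X = sp.csc_matrix((rng.standard_normal(nnz), (rows, cols)),
                  shape=(n_samples, n_features))
y = rng.standard_normal(n_samples)

# Screening statistics: cost proportional to the number of stored nonzeros of X.
c = np.abs(X.T @ y)                                               # |x_j^T y|
col_norms = np.sqrt(np.asarray(X.power(2).sum(axis=0)).ravel())   # ||x_j||_2
lam_max = c.max()
lam = 0.5 * lam_max
keep = c >= lam - col_norms * np.linalg.norm(y) * (lam_max - lam) / lam_max
print(f"{keep.sum()} of {n_features} features survive screening")
```

The surviving columns can then be handed to any l1-penalized solver one prefers.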