2nd NIPS Workshop on Optimization for Machine Learning
December 12th, 2009, Whistler, Canada
Deadline for submission: 16th October 2009
It is fair to say that at the heart of every machine learning algorithm lies an optimization problem, yet it is only recently that this viewpoint has gained a significant following. Classical techniques based on convex optimization have occupied center stage owing to their attractive theoretical properties. But new non-smooth and non-convex problems are being posed by machine learning paradigms such as structured learning and semi-supervised learning. Moreover, machine learning is now increasingly applied to real-world problems with massive datasets, streaming inputs, and complex models, all of which pose significant algorithmic and engineering challenges. In summary, machine learning not only provides interesting applications but also challenges the underlying assumptions of most existing optimization algorithms.
Therefore, there is a pressing need for optimization “tuned” to the machine learning context. Relevant techniques include non-convex optimization (semi-supervised learning), combinatorial optimization and relaxations (structured learning), non-smooth optimization (sparsity constraints, L1, Lasso, structure learning), stochastic optimization (massive datasets, noisy data), decomposition techniques (parallel and distributed computation), and online learning (streaming inputs). These techniques naturally draw inspiration from fields such as operations research, theoretical computer science, and the broader optimization community.
Motivated by these concerns, this workshop aims to address these issues.
Background and Objectives
This workshop continues the successful PASCAL2 Workshop on Optimization for Machine Learning, held at NIPS*2008 in Whistler, Canada, which was very well received, with packed attendance almost throughout the day.
Other workshops, such as ‘Mathematical Programming in Machine Learning / Data Mining’, held from 2005 to 2007, share the spirit of our workshop. These workshops were quite extensive and provided a solid platform for encouraging exchange between machine learning and optimization researchers. Another relevant workshop was the BigML NIPS*2007 workshop, which focused on the algorithmic challenges of large-scale machine learning tasks, particularly parallelization and online learning.
Our workshop addresses the following major issues, some of which have not previously been tackled as a combined optimization and machine learning effort. In particular, the main aims of our workshop are:
+ Bring together experts from machine learning, optimization, operations research, and statistics to further an exchange of ideas and techniques
+ Focus on problems of interest to the NIPS audience (some basic examples are given below)
+ Identify a set of important open problems and issues that lie at the intersection of both machine learning and optimization
Call for Participation
We invite high quality submissions for presentation as talks or poster presentations during the workshop. We are especially interested in participants who can contribute theory / algorithms, applications, or implementations with a machine learning focus in the following areas:
* Non-Convex Optimization
– Non-negative matrix and tensor approximation
– Non-convex quadratic programming, including binary QPs
– Convex-concave decompositions, D.C. programming
– Training of deep architectures and large hidden variable models
* Optimization with Sparsity Constraints
– Combinatorial methods for L0 norm minimization
– L1 and group L1 penalized methods
– Sparse PCA
– Rank minimization methods
* Optimization in Graphical Models
– Structure learning
– MAP estimation in continuous and discrete random fields
* Combinatorial Optimization
– Clustering and graph-partitioning
– Semi-supervised and multiple-instance learning
– Feature and subspace selection
* Stochastic, Parallel and Online Optimization
– Large-scale learning, massive data sets
– Distributed learning algorithms
* Algorithms and Techniques, especially with a focus on an underlying application
– Polyhedral combinatorics, polytopes and strong valid inequalities
– Linear and higher-order relaxations
– Decomposition for large-scale, message-passing and online learning
– Global and Lipschitz optimization
– Algorithms for non-smooth optimization
– Approximation algorithms
Important Dates
* Deadline for submission of papers: 16th October 2009
* Notification of acceptance: 7th November 2009
* Final version of submission: 20th November 2009
* Workshop date: 12th December 2009
Please note that at least one author of each accepted paper must be available to present the paper at the workshop. Further details regarding the submission process are available at the workshop homepage.
Submissions should ideally be 3–4 pages long (with a hard limit of 6 pages). They should be double blind, use the NIPS format, and be made via CMT at
For more details, please see the workshop webpage.
The workshop will be a one-day event with a morning and an afternoon session. In addition to a lunch break, long coffee breaks will be offered in both the morning and the afternoon, during which posters and demonstrations can be presented.
The workshop will conclude with a panel discussion on future directions and on potential workshops that expand upon its topics, with special focus on establishing areas, methods, and problems of interest.
Invited Speakers
* Arkadi Nemirovski, Georgia Institute of Technology
* Nathan Srebro, Toyota Technological Institute at Chicago
Program Committee
* Andreas Argyriou, University College London
* Alexandre d’Aspremont, Princeton University
* Léon Bottou, NEC Laboratories America
* Tijl De Bie, University of Bristol
* Chuong Do, Stanford University
* John Duchi, University of California, Berkeley
* Vojtech Franc, Czech Technical University
* Dongmin Kim, University of Texas at Austin
* Sathiya Keerthi, Yahoo! Research
* Gert Lanckriet, University of California, San Diego
* Chih-Jen Lin, National Taiwan University
* Cheng Soon Ong, ETH Zurich
* Pradeep Ravikumar, University of Texas at Austin
* Onur Şeref, University of Florida
* Mark Schmidt, University of British Columbia
* Nathan Srebro, Toyota Technological Institute at Chicago and University of Chicago
* Sandor Szedmák, University of Southampton
Organizers
* Sebastian Nowozin, Max Planck Institute for Biological Cybernetics
* Suvrit Sra, Max Planck Institute for Biological Cybernetics
* SVN Vishwanathan, Purdue University, West Lafayette
* Stephen Wright, University of Wisconsin, Madison
The organizers can be contacted through firstname.lastname@example.org.
We gratefully acknowledge MOSEK (http://www.mosek.com) and the EU PASCAL2 network for helping to fund this workshop.