News Archives

NIPS 2009 Workshop on Approximate Learning of Large Scale Graphical Models: Theory and Applications

—————————————————————————————————————–
Call for Participation

NIPS 2009 Workshop on Approximate Learning of Large Scale Graphical Models: Theory and Applications

http://www.cs.toronto.edu/~rsalakhu/workshop_nips2009/

December 12, 2009
Whistler, Canada
—————————————————————————————————————–

DESCRIPTION:

Undirected graphical models provide a powerful framework for representing dependency structure between random variables. Learning the parameters of undirected models plays a crucial role in solving key problems in many machine learning applications, including natural language processing, visual object recognition, speech perception, information retrieval, computational biology, and many others.

Learning in undirected graphical models of large treewidth is difficult because of the hard inference problem induced by the partition function for maximum likelihood learning, or by finding the MAP assignment for margin-based loss functions. Over the last decade, there has been considerable progress in developing algorithms for approximating the partition function and MAP assignment, both via variational approaches (e.g., belief propagation) and sampling algorithms (e.g., MCMC). More recently, researchers have begun to apply these methods to learning large, densely-connected undirected graphical models that may contain millions of parameters. A notable example is the learning of Deep Belief Networks and Deep Boltzmann Machines, which employ MCMC strategies to greedily learn deep hierarchical models.
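
To make the bottleneck concrete, consider a generic exponential-family undirected model (standard notation, not tied to any particular talk):

    p(x; \theta) = \exp(\theta^\top \phi(x)) / Z(\theta),
    Z(\theta) = \sum_{x'} \exp(\theta^\top \phi(x')),
    \nabla_\theta \log p(x; \theta) = \phi(x) - \mathbb{E}_{x' \sim p(\cdot; \theta)}[\phi(x')].

The data term \phi(x) is trivial to compute, but the model expectation requires inference in the full model; for high-treewidth graphs this is exactly where MCMC or variational approximations enter the learning loop.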

The goal of this workshop is to assess the current state of the field and explore new directions in both theoretical foundations and empirical applications. In particular, we shall be interested in discussing the following topics:

State of the field: What are the existing methods and what is the relationship between them? Which problems can be solved using existing algorithms and which cannot?

The use of approximate inference in learning: There are many algorithms for approximate inference. In principle, all of these can be “plugged into” learning algorithms. What are the relative merits of using one approximation vs. another (e.g., an MCMC approximation vs. a variational one)? Are there effective combined strategies?

Learning with latent variables: Graphical models with latent (or hidden) variables often possess more expressive power than models with only observed variables. However, introducing hidden variables makes learning far more difficult. Can we develop better optimization and approximation techniques that would allow us to learn parameters in such models more efficiently?

Learning in models with deep architectures: Recently, there has been notable progress in learning deep probabilistic models, including Deep Belief Networks and Deep Boltzmann Machines, that contain many layers of hidden variables and millions of parameters. The success of these models heavily relies on the greedy layer-by-layer unsupervised learning of a densely-connected undirected model called a Restricted Boltzmann Machine (RBM). Can we develop efficient and more accurate learning algorithms for RBMs and deep multilayer generative models? How can learning be extended to the semi-supervised setting and be made more robust to highly ambiguous or missing inputs? What sort of theoretical guarantees can be obtained for such greedy learning schemes?
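
To make the RBM building block concrete, here is a minimal sketch of one contrastive-divergence (CD-1) update for a binary RBM, in Python/NumPy. The variable names (W for weights, b and c for visible and hidden biases) are ours, and a practical implementation would add minibatching, momentum, and weight decay:

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(a):
        return 1.0 / (1.0 + np.exp(-a))

    def cd1_step(v0, W, b, c, lr=0.1):
        # v0: (batch, n_visible) binary data; W: (n_visible, n_hidden).
        # Positive phase: hidden probabilities and a sample given the data.
        ph0 = sigmoid(v0 @ W + c)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)
        # One Gibbs step down and back up: the CD-1 approximation of the
        # intractable model expectation in the log-likelihood gradient.
        pv1 = sigmoid(h0 @ W.T + b)
        ph1 = sigmoid(pv1 @ W + c)
        # Gradient step: data statistics minus (approximate) model statistics.
        W += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)
        b += lr * (v0 - pv1).mean(axis=0)
        c += lr * (ph0 - ph1).mean(axis=0)
        return W, b, c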

Scalability and success in real-world applications: How well do existing approximate learning algorithms scale to large-scale problems in computer vision, bioinformatics, natural language processing, and information retrieval? How well do these algorithms perform when applied to modeling high-dimensional real-world distributions (e.g., the distribution of natural images)?

Theoretical Foundations: What theoretical guarantees do the learning algorithms have (e.g., accuracy of the learned parameters with respect to the best possible, or asymptotic convergence guarantees such as almost sure convergence to the maximum likelihood estimator)? What are the tradeoffs between running time and accuracy?

Loss functions: In the supervised learning setting, two popular loss functions are log-loss (e.g., in conditional random fields) and margin-based loss (e.g., in maximum margin Markov networks). In intractable models these approaches result in rather different approximation schemes (since the former requires partition function estimation, whereas the latter only requires MAP estimates). What can be said about the differences between these schemes? When is one loss more appropriate than the other? Can margin-based models be applied in the unsupervised case?
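
For a single training pair (x, y) with joint feature map \phi(x, y), the two losses can be written side by side (textbook definitions, shown only to make the contrast explicit):

    \ell_{\log}(\theta) = \log \sum_{y'} \exp(\theta^\top \phi(x, y')) - \theta^\top \phi(x, y)
    \ell_{\mathrm{margin}}(\theta) = \max_{y'} [\theta^\top \phi(x, y') + \Delta(y, y')] - \theta^\top \phi(x, y)

The log-loss involves a sum over all outputs (a partition function), while the margin loss requires only a single loss-augmented MAP computation; this is why the two lead to quite different approximation schemes when the model is intractable.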

Structure vs. accuracy: Which graph structures are more amenable to approximations and why? How can structure learning be combined with approximate learning to yield models that are both descriptive and learnable with good accuracy?

PROGRAM:

The program will consist entirely of invited talks. The invited speakers are:
Pedro Domingos, University of Washington
Bill Freeman, MIT
Geoffrey Hinton, University of Toronto
Daphne Koller, Stanford University
David McAllester, Toyota Technological Institute at Chicago
Ben Taskar, University of Pennsylvania
Noah Smith, Carnegie Mellon University
Eric Xing, Carnegie Mellon University

ORGANIZERS:

Ruslan Salakhutdinov, MIT
Amir Globerson, Hebrew University
David Sontag, MIT

The workshop is supported by PASCAL (a non-core workshop).

Call for Contributions: NIPS workshop “Clustering: Science or Art?”

December 2009, Whistler, Canada
http://clusteringtheory.org/
Submission deadline: Friday October 30th, 2009

Organizers:
————

Shai Ben-David, Ulrike von Luxburg, Avrim Blum, Isabelle Guyon, Robert C. Williamson, Reza Bosagh Zadeh, Margareta Ackerman

Topic of the workshop:
———————

Clustering is one of the most widely used techniques for exploratory data analysis. In the past five decades, many clustering algorithms have been developed and applied to a wide range of practical problems. However, in spite of the abundance of clustering research published every year, we are only beginning to understand some of the most basic issues in clustering. Even though there exist many claims to success, there seems to be a lack of well established methodological procedures. In particular, addressing issues that are independent of any specific clustering algorithm, objective function, or specific data generative model, is only in its infancy. The state of affairs is perhaps not dissimilar to that in computer programming at the time of Donald Knuth’s famous Turing award lecture: “It is clearly an art, but many feel that a science is possible and desirable”.

This workshop aims to initiate a dialogue between theoreticians and practitioners and to bridge the theory-practice gap in this area. We want to build the workshop around three main questions:

1. FROM THEORY TO PRACTICE: Which abstract theoretical characterizations / properties / statements about clustering algorithms exist that can be helpful for practitioners and should be adopted in practice?

2. FROM PRACTICE TO THEORY: What concrete questions would practitioners like to see addressed by theoreticians? Can we identify de-facto practices in clustering in need of theoretical grounding? Which obscure (but seemingly needed or useful) practices are in need of rationalization?

3. FROM ART TO SCIENCE: In contrast to supervised learning, where there exist rigorous methods to assess the quality of an algorithm, such standards do not exist for clustering; clustering is still largely an art. How can we progress towards more principled approaches, including the introduction of falsifiable hypotheses and properly designed experimentation? How could one set up a clustering challenge to compare different clustering algorithms? What could be scientific standards to evaluate a clustering algorithm in a paper?

Call for Contributions:
————–

The workshop will consist of a mix of presentations and discussions. Researchers who want to contribute should submit an extended abstract of their work (at most 4 pages, PDF format, following the NIPS style guide) by email to nips09 at clusteringtheory.org.

*** The deadline is Friday, October 30th ***

The organizers will review all submissions. Notification of acceptance will be sent out by Friday November 6th.

OPT 2009: Optimization for Machine Learning, First Call for Participation

OPT 2009
2nd NIPS Workshop on Optimization for Machine Learning
NIPS*2009 Workshop
December 11th or 12th, 2009, Whistler, Canada

URL: http://opt.kyb.tuebingen.mpg.de

Deadline for submission: 16th October 2009

Abstract
——–

It is fair to say that at the heart of every machine learning algorithm is an optimization problem. It is only recently that this viewpoint has gained a significant following. Classical techniques based on convex optimization have occupied center stage due to their attractive theoretical properties. But new non-smooth and non-convex problems are being posed by machine learning paradigms such as structured learning and semi-supervised learning. Moreover, machine learning is now very important for real-world problems which often have massive datasets, streaming inputs, and complex models that also pose significant algorithmic and engineering challenges. In summary, machine learning not only provides interesting applications but also challenges the underlying assumptions of most existing optimization algorithms.

Therefore, there is a pressing need for optimization “tuned” to the machine learning context. For example, techniques such as non-convex optimization (semi-supervised learning), combinatorial optimization and relaxations (structured learning), non-smooth optimization (sparsity constraints, L1, Lasso, structure learning), stochastic optimization (massive datasets, noisy data), decomposition techniques (parallel and distributed computation), and online learning (streaming inputs) are all relevant in this setting. These techniques naturally draw inspiration from other fields, such as operations research, theoretical computer science, and optimization.
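
As one small, self-contained example of such “tuned” optimization, here is an illustrative sketch of iterative soft-thresholding for the Lasso (the function name and setup are ours):

    import numpy as np

    def ista(A, y, lam, iters=200):
        # Solve min_x 0.5 * ||Ax - y||^2 + lam * ||x||_1 by proximal
        # gradient descent: a gradient step on the smooth part followed
        # by the prox of the L1 term (coordinatewise soft-thresholding).
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz constant
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            z = x - step * A.T @ (A @ x - y)  # gradient step
            x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # prox step
        return x

The non-smoothness is handled exactly by the prox operator rather than smoothed away, which is characteristic of the sparsity-oriented methods listed above.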

Motivated by these concerns, this workshop aims to address these issues.

Background and Objectives
————————-
This workshop continues the successful PASCAL2 Workshop on Optimization for Machine Learning, held at NIPS*2008 in Whistler, Canada, which was very well received, with packed attendance almost throughout the day.

Other workshops, such as ‘Mathematical Programming in Machine Learning / Data Mining’ (held 2005–2007), also share the spirit of our workshop. These workshops were quite extensive and provided a solid platform for encouraging exchange between machine learners and optimization researchers. Another relevant workshop was the BigML NIPS*2007 workshop, which focused on the algorithmic challenges faced in large-scale machine learning tasks, with an emphasis on parallelization and online learning.

Our workshop addresses the following major issues, some of which have not been previously tackled as a combined optimization and machine learning effort. In particular, the main aims of our workshop are to:

+ Bring together experts from machine learning, optimization, operations research, and statistics to further an exchange of ideas and techniques

+ Focus on problems of interest to the NIPS audience (some basic examples are given below)

+ Identify a set of important open problems and issues that lie at the intersection of both machine learning and optimization

Call for Participation
———————-

We invite high quality submissions for presentation as talks or poster presentations during the workshop. We are especially interested in participants who can contribute theory / algorithms, applications, or implementations with a machine learning focus in the following areas:

* Non-Convex Optimization
– Non-negative matrix and tensor approximation
– Non-convex quadratic programming, including binary QPs
– Convex Concave Decompositions, D.C. Programming
– Training of deep architectures and large hidden variable models

* Optimization with Sparsity constraints
– Combinatorial methods for L0 norm minimization
– L1 and group L1 penalized methods
– Sparse PCA
– Rank minimization methods

* Optimization in Graphical Models
– Structure learning
– MAP estimation in continuous and discrete random fields

* Combinatorial Optimization
– Clustering and graph-partitioning
– Semi-supervised and multiple-instance learning
– Feature and subspace selection

* Stochastic, Parallel and Online Optimization
– Large-scale learning, massive data sets
– Distributed learning algorithms

* Algorithms and Techniques, especially with a focus on an underlying application
– Polyhedral combinatorics, polytopes and strong valid inequalities
– Linear and higher-order relaxations
– Decomposition for large-scale, message-passing and online learning
– Global and Lipschitz optimization
– Algorithms for non-smooth optimization
– Approximation Algorithms

Important Dates
—————

* Deadline for submission of papers: 16th October 2009
* Notification of acceptance: 7th November 2009
* Final version of submission: 20th November 2009
* Workshop date: 12th December 2009

Please note that at least one author of each accepted paper must be available to present the paper at the workshop. Further details regarding the submission process are available at the workshop homepage.

Submission
———-

Submissions should ideally be 3-4 pages long (with a hard limit of 6 pages). Submissions should be double blind, use the NIPS format, and be made via CMT at
https://cmt.research.microsoft.com/OPT2009/Default.aspx

For more details, please see the workshop webpage.

Workshop
——–

The workshop will be a one-day event with a morning and afternoon session. In addition to a lunch break, long coffee breaks will be offered both in the morning and afternoon. There will be a possibility to present posters and demonstrations during these breaks.

The workshop will conclude with a panel discussing future directions and potential workshops that expand upon its topics. Special focus will be placed on establishing areas, methods, and problems of interest.

Invited Speakers
—————-

* Arkadi Nemirovski — Georgia Institute of Technology
* Nathan Srebro — Toyota Technological Institute at Chicago
* TBD

Program Committee
—————–

* Andreas Argyriou, University College London
* Alexandre d’Aspremont, Princeton University
* Léon Bottou, NEC Laboratories America
* Tijl De Bie, University of Bristol
* Chuong Do, Stanford University
* John Duchi, University of California, Berkeley
* Vojtech Franc, Czech Technical University
* Dongmin Kim, University of Texas at Austin
* Sathiya Keerthi, Yahoo! Research
* Gert Lanckriet, University of California, San Diego
* Chih-Jen Lin, National Taiwan University
* Cheng Soon Ong, ETH Zurich
* Pradeep Ravikumar, University of Texas at Austin
* Onur Şeref, University of Florida
* Mark Schmidt, University of British Columbia
* Nathan Srebro, Toyota Technological Institute at Chicago and
University of Chicago
* Sandor Szedmák, University of Southampton

Workshop Organizers
——————-

* Sebastian Nowozin, Max Planck Institute for Biological Cybernetics
* Suvrit Sra, Max Planck Institute for Biological Cybernetics
* SVN Vishwanathan, Purdue University, West Lafayette
* Stephen Wright, University of Wisconsin, Madison

The organizers can be contacted through opt@tuebingen.mpg.de.

Acknowledgements
—————-
We gratefully acknowledge MOSEK (http://www.mosek.com) and the EU PASCAL2 network for helping us with the funding of this workshop.

Call for Contributions: NIPS 2009 Workshop on Connectivity Inference in Neuroimaging

Webpage
http://cini2009.kyb.tuebingen.mpg.de

Workshop description

Over the past decade, brain connectivity has become a central theme in the neuroimaging community. At the same time, causal inference has recently emerged as a major research topic in machine learning. Even though the two research questions are closely related, interactions between the neuroimaging and machine-learning communities have been limited.

The aim of this workshop is to initiate productive interactions between neuroimaging and machine learning by introducing the workshop audience to the different concepts of connectivity/causal inference employed in each of the communities. Special emphasis is placed on discussing commonalities as well as distinctions between various approaches in the context of neuroimaging. Due to the increasing relevance of brain connectivity for analyzing mental states, we also highly welcome contributions discussing applications of brain connectivity measures to real-world problems such as brain-computer interfacing or mental state monitoring.

Topics

We solicit contributions on new approaches to connectivity and/or causal inference for neuroimaging data, as well as on applications of connectivity inference to real-world problems. Contributions might address, but are not limited to, the following topics (a toy Granger-causality sketch follows the list):

* Effective connectivity & causal inference
o Dynamic causal modelling
o Granger causality
o Structural equation models
o Causal Bayesian networks
o Non-Gaussian linear causal models
o Causal additive noise models
* Functional connectivity
o Canonical correlation analysis
o Phase-locking
o Imaginary coherence
o Independent component analysis
* Applications of brain connectivity to real-world problems
o Brain-computer interfaces
o Mental state monitoring
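
As promised above, a toy Granger-causality computation on synthetic signals. This is a sketch under simplified assumptions: the variance-ratio statistic and all names are ours, and a real analysis would use proper hypothesis tests (and, for neuroimaging data, considerably more care):

    import numpy as np

    def granger_stat(x, y, p=2):
        # Compare an AR(p) model of x against one augmented with y's past.
        # A clearly positive log variance ratio suggests y "Granger-causes" x.
        def lagmat(s):
            return np.column_stack([s[p - k: len(s) - k] for k in range(1, p + 1)])
        target = x[p:]
        def rss(X):
            Z = np.column_stack([np.ones(len(X)), X])
            beta, *_ = np.linalg.lstsq(Z, target, rcond=None)
            r = target - Z @ beta
            return r @ r
        restricted = rss(lagmat(x))                            # x's own past only
        unrestricted = rss(np.hstack([lagmat(x), lagmat(y)]))  # plus y's past
        return np.log(restricted / unrestricted)

    # Synthetic example where y drives x with a one-step lag:
    rng = np.random.default_rng(0)
    y = rng.standard_normal(500)
    x = np.zeros(500)
    x[1:] = 0.8 * y[:-1] + 0.2 * rng.standard_normal(499)
    print(granger_stat(x, y))  # clearly positive
    print(granger_stat(y, x))  # near zero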

Invited speakers

* Jean Daunizeau, University of Zurich & University College London
* Rainer Goebel, Maastricht University
* Scott Makeig, University of California San Diego

Workshop format

CINI 2009 is a one-day workshop at the Twenty-Third Annual Conference on Neural Information Processing Systems (NIPS 2009). Besides three invited talks, in which the audience will be introduced to current approaches for inferring connectivity in neuroimaging data, there will be several contributed talks and an evening poster session. Special emphasis will be placed on a balanced contribution of talks from the neuroimaging and machine learning communities. To foster interaction between communities, approximately 50% of workshop time is reserved for discussions.

Key dates

* Extended abstract submission deadline: October 9th, 2009, 5 pm (PT)
* Notification of acceptance: October 23rd, 2009
* Workshop: December 11th or 12th, 2009

Submission instructions

Please submit extended abstracts (maximum two pages) in either pdf or doc format through the CINI 2009 submission site at https://cmt.research.microsoft.com/CINI2009/. Upon notification of acceptance, authors will also be notified whether their contribution has been accepted as a contributed talk or poster.

Workshop location

Westin Resort and Spa / Hilton Whistler Resort and Spa
Whistler, B.C., Canada

Organization committee

* Moritz Grosse-Wentrup (primary contact), MPI for Biological Cybernetics, Tuebingen
* Uta Noppeney, MPI for Biological Cybernetics, Tuebingen
* Karl Friston, University College London
* Bernhard Schoelkopf, MPI for Biological Cybernetics, Tuebingen

Program committee

* Olivier David, Institut National de la Sante et de la Recherche Medicale, Grenoble
* Justin Dauwels, Massachusetts Institute of Technology, Cambridge
* Michael Eichler, Maastricht University
* Jeremy Hill, Max Planck Institute for Biological Cybernetics, Tuebingen
* Guido Nolte, Fraunhofer FIRST, Berlin
* Will Penny, University College London
* Alard Roebroeck, Maastricht University
* Klaas Enno Stephan, University of Zurich
* Ryota Tomioka, University of Tokyo
* Pedro Valdes-Sosa, Cuban Neuroscience Center, Havana

NIPS Workshop – Kernels for Multiple Outputs and Multi-task Learning: Frequentist and Bayesian Points of View

Call for contributions – Kernels for Multiple Outputs and Multi-task Learning: Frequentist and Bayesian Points of View

http://intranet.cs.man.ac.uk/mlo/mock09/

Workshop at the Twenty-Third Annual Conference on Neural Information Processing Systems (NIPS 2009)
Whistler, BC, Canada, December 12, 2009.

—————————————————————————————————————————————

WORKSHOP DESCRIPTION

Accounting for dependencies between outputs has important applications in several areas. In sensor networks, for example, missing signals from temporarily failing sensors may be predicted thanks to correlations with signals acquired from other sensors. In geo-statistics, prediction of the concentration of heavy pollutant metals (for example, copper), which are expensive to measure, can be done using inexpensive and oversampled variables (for example, pH data).

Within machine learning, this framework is known as multi-task learning. Multi-task learning is a general learning framework in which it is assumed that learning multiple tasks simultaneously leads to better modeling results and performance than learning the same tasks individually. By exploiting correlations and dependencies among tasks, it becomes possible to handle common practical situations such as missing data, or to increase the amount of usable data when only a small amount of data is available per task.

In the last few years there has been an increasing amount of work on multi-task learning. From the Bayesian perspective, the problem has been tackled using hierarchical Bayesian models together with neural networks. More recently, the Gaussian process framework has been considered, where the correlations among tasks can be captured by appropriate choices of covariance functions. Many of these choices have been inspired by the geo-statistics literature, in which a similar area is known as cokriging. From the frequentist perspective, regularization theory has provided a natural framework to deal with multi-task problems: assumptions on the relations between the different tasks translate into the design of suitable regularizers. Despite the common traits of the proposed approaches, so far the different communities have worked independently. For example, it is natural to ask whether the proposed choices of the covariance function can be interpreted from a regularization perspective, or, in turn, whether each regularizer induces a specific form of the covariance/kernel function. By bringing together the latest advances from both communities, we aim to establish the state of the art and the possible future challenges in the context of multiple-task learning.
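
One concrete bridge between the two perspectives, included here purely as an illustration, is the intrinsic coregionalization model from the cokriging literature: a valid multi-output covariance is built from a single input kernel k and a positive semi-definite task-similarity matrix B,

    K((x, i), (x', j)) = B_{ij} \, k(x, x'), \qquad B \succeq 0,

where i and j index tasks/outputs. Asking which regularizer on the vector-valued function corresponds to a given choice of B (and vice versa) is exactly the kind of question raised above.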

Although there are several approaches to multi-task learning, in this workshop we focus our attention on methods based on constructing covariance functions (kernels) for multiple outputs, to be employed, for example, with Gaussian processes or regularization networks.

INVITED SPEAKERS

David Higdon, Los Alamos National Laboratory, USA
Sayan Mukherjee, Duke University, USA
Andreas Argyriou, Toyota Technological Institute, USA
Hans Wackernagel, Ecole des Mines Paris, France

IMPORTANT DATES

Deadline for abstract submission: October 23, 2009
Notification of acceptance: November 6, 2009
Workshop: December 12, 2009

SUBMISSION FORMAT

Submissions will be accepted as 20-minute talks or spotlights.
Extended abstracts should use the NIPS style, with a maximum of 4 pages.
Abstracts should be sent to: nips.mock09 (at) gmail.com

ORGANISERS

Mauricio A. Álvarez, University of Manchester
Lorenzo Rosasco, MIT
Neil D. Lawrence, University of Manchester

The workshop is sponsored by the EU FP7 PASCAL2 Network of Excellence.

NIPS workshop on Probabilistic Approaches for Robotics and Control – call for contributions

-Workshop dates
Friday, December 11 or Saturday, December 12, 2009

-Workshop location
Whistler, B.C., Canada, at the Westin Resort and Spa and the Hilton Whistler Resort and Spa

-Poster submission
Please send an extended abstract of max. 1 page describing the poster you intend to present to
mpd37 (at) cam.ac.uk
Choose a format of your liking, e.g., the standard NIPS template.

The deadline for abstract submissions is October 17, 2009.

Notifications of acceptance will be sent by October 26, 2009.

-Workshop homepage
http://mlg.eng.cam.ac.uk/marc/nipsWS09

-Conference homepage
http://nips.cc

-Workshop Abstract

During the last decade, many areas of Bayesian machine learning have reached a high level of maturity. This has resulted in a variety of theoretically sound and efficient algorithms for learning and inference in the presence of uncertainty. However, in the context of control, robotics, and reinforcement learning, uncertainty has not yet been treated with comparable rigor despite its central role in risk-sensitive control, sensorimotor control, robust control, and cautious control. A consistent treatment of uncertainty is also essential when dealing with stochastic policies, incomplete state information, and exploration strategies.

A typical situation where uncertainty comes into play is when the exact state transition dynamics are unknown and only limited or no expert knowledge is available and/or affordable. One option is to learn a model from data. However, if the model is too far off, this approach can result in arbitrarily bad solutions. This model bias can be sidestepped by the use of flexible model-free methods. The disadvantage of model-free methods is that they do not generalize and often make less efficient use of data. Therefore, they often need more trials than are feasible on a real-world system. A probabilistic model could make efficient use of data while alleviating model bias by explicitly representing and incorporating uncertainty.

The use of probabilistic approaches requires (approximate) inference algorithms, where Bayesian machine learning can come into play. Although probabilistic modeling and inference conceptually fit into this context, they are not widespread in robotics, control, and reinforcement learning. Hence, this workshop aims to bring researchers together to discuss the need, the theoretical properties, and the practical implications of probabilistic methods in control, robotics, and reinforcement learning.

One particular focus will be on probabilistic reinforcement learning approaches that profit from recent developments in optimal control showing that the problem can be substantially simplified if certain structure is imposed. The simplifications include linearity of the (Hamilton-Jacobi) Bellman equation. The duality with Bayesian estimation allows for analytical computation of the optimal control laws and closed-form expressions of the optimal value functions.
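
To sketch the simplification in the discrete, linearly solvable setting (a Todorov-style formulation; the notation is ours): with state cost q(x), passive dynamics p(x'|x), and control cost given by the KL divergence from p, the exponentiated negative value function z(x) = \exp(-v(x)) (the “desirability”) turns the finite-horizon Bellman equation into a linear one,

    z_t(x) = e^{-q(x)} \sum_{x'} p(x' \mid x) \, z_{t+1}(x'),

with the optimal controlled transition u^*(x' \mid x) \propto p(x' \mid x) \, z_{t+1}(x'). Computing z thus reduces to linear algebra, and the expectation under p is where Bayesian-style inference machinery can be reused.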

Format

The workshop will consist of short invited presentations and a session with contributed posters (plus poster spotlight). Topics (from a theoretical and practical perspective) to be addressed include, but are not limited to:

– How can we efficiently plan and act in the presence of uncertainty in states/rewards/observations/environment?

– Shall we model the lack of knowledge or can we simply ignore it?

– How can prior knowledge (e.g., expert knowledge and domain knowledge) be incorporated?

– How much manual tuning and human insight (e.g., domain knowledge) is a) required and b) available to achieve good performance?

– Is there a principled way to account for imprecise models and model bias?

– What roles should probabilistic models play in control? Are they needed at all?

– What kinds of probabilistic models are useful?

– In traditional control, hand-crafted control laws often prevail since optimal control laws are mostly too aggressive due to model errors while robust control laws can be too conservative since they always assume the worst case. Can “probabilistic control” bridge the gap between robust and optimal control laws?

– How can we exploit the linearity of the (Hamilton-Jacobi) Bellman equation and the duality with Bayesian estimation?

– Can we compute the optimal control law analytically and is there a closed-form expression of the value function?

– How can existing machine learning methods be applied to efficiently solve stochastic control problems?

*Invited speakers*

Dieter Fox (University of Washington), confirmed
Drew Bagnell (CMU), pending
Evangelos Theodorou (USC), confirmed
Jovan Popovic (MIT), confirmed
Konrad Koerding (Northwestern University), confirmed
Marc Toussaint (TU Berlin), confirmed
Miroslav Karny (Academy of Sciences of the Czech Republic), confirmed
Mohammad Ghavamzadeh (INRIA), pending
Roderick Murray-Smith (University of Glasgow), pending
Bert Kappen (University of Nijmegen), confirmed
Emanuel Todorov (University of Washington), confirmed

*Organizers*

Marc Peter Deisenroth
Bert Kappen
Emanuel Todorov
Duy Nguyen-Tuong
Carl Edward Rasmussen
Jan Peters

NIPS 2009 Workshop on Applications of Topic Models: Text and Beyond CFP

————————————————————–
Call for Contributions and Participation

NIPS 2009 Workshop on Applications of Topic Models: Text and Beyond
December 11 or 12, 2009
http://nips2009.topicmodels.net

Submission Deadline: Friday October 23, 2009
—————————————————————–

Description:
The primary goal of this workshop is to bring together a diverse set of researchers from multiple research areas, all of whom work on topic modeling. Statistical topic models are a class of Bayesian latent variable models, originally developed for analyzing the semantic content of large document corpora. With the increasing availability of other large, heterogeneous data collections, topic models have been adapted to model data from fields as diverse as computer vision, finance, bioinformatics, cognitive science, music, and the social sciences. While the underlying models are often extremely similar, these communities use topic models in different ways in order to achieve different goals. This one-day workshop will bring together topic modeling researchers from multiple disciplines, providing an opportunity for attendees to meet, present their work and share ideas, as well as inform the wider NIPS community about current research in topic modeling. This workshop will address the following specific goals:

* Identify and formalize open research areas – e.g., how best to evaluate topic modeling performance both within and across different application domains
* Propose, explore, and discuss new application areas
* Discuss how best to facilitate transfer of research ideas between application domains
* Direct future work and generate new application areas, novel modeling approaches, and unexplored collaborative research directions
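
As shared background for the application papers solicited below, the generative process of the canonical topic model, latent Dirichlet allocation (LDA), with K topics over a fixed vocabulary:

    \phi_k \sim \mathrm{Dirichlet}(\beta),  k = 1, \dots, K        (topic-word distributions)
    \theta_d \sim \mathrm{Dirichlet}(\alpha)  for each document d  (document-topic proportions)
    z_{dn} \sim \mathrm{Mult}(\theta_d),  w_{dn} \sim \mathrm{Mult}(\phi_{z_{dn}})  for each token n

Most cross-domain adaptations keep this structure and simply reinterpret the document/word pair, e.g., images over visual words in computer vision.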

We encourage researchers to emphasize real-world applications – ranging from specific applications to entire application domains – in their submissions, and welcome the following types of papers:

* Research papers that propose new topic models for specific applications
* Research papers that apply existing topic models to novel application domains
* Position papers and speculative papers that discuss desiderata of existing application domains or propose new domains and approaches for future topic modeling research
* Papers that discuss practical issues relating to topic models, such as parallel computation environments and scalability for massive data collections
* Papers that investigate evaluation methodologies for topic models

The workshop will consist of invited talks (5 or 6) by established researchers from multiple research communities, contributed talks (4 or 5), a poster session, and a panel session.

Invited Speakers (confirmed):

David Blei (Princeton University)
Mark Johnson (Brown University)
Eric Xing (Carnegie Mellon University)
Mark Steyvers (U.C. Irvine)
Li Fei-Fei (Stanford University)

Submission Instructions:

Submissions should be sent to: nips2009-submit@topicmodels.net

They should include a title, authors, and abstract in plain text, and a 2-4 page extended abstract as a PDF in the NIPS style. Final versions of extended abstracts will be posted on the workshop website.

Dates:
Submission Deadline: Friday October 23, 2009
Notifications: Monday November 9, 2009
Final Versions: Friday November 20, 2009
Workshop: December 11 or 12, 2009

Location:
Westin Resort and Spa / Hilton Whistler Resort and Spa
Whistler, B.C., Canada

http://nips.cc/Conferences/2009/

Organizers:
David Blei (Princeton University)
Jordan Boyd-Graber (Princeton University)
Jonathan Chang (Princeton University)
Katherine Heller (University of Cambridge)
Hanna Wallach (University of Massachusetts, Amherst)

Program Committee:
Edo Airoldi (Harvard University)
Hal Daumé (University of Utah)
Tom Dietterich (Oregon State University)
Laura Dietz (Max-Planck-Institut für Informatik)
Jacob Eisenstein (Carnegie Mellon University)
Tom Griffiths (University of California, Berkeley)
John Lafferty (Carnegie Mellon University)
Jia Li (Stanford University)
Andrew McCallum (University of Massachusetts, Amherst)
David Mimno (University of Massachusetts, Amherst)
Dave Newman (University of California, Irvine)
Padhraic Smyth (University of California, Irvine)
Erik Sudderth (Brown University)
Yee Whye Teh (Gatsby Unit, UCL)
Chong Wang (Princeton University)
Max Welling (University of California, Irvine)
Sinead Williamson (University of Cambridge)
Jerry Zhu (University of Wisconsin, Madison)

Sponsor:
PASCAL 2 (non-core workshop)

Contact:
nips2009@topicmodels.net

NIPS Workshop on Grammar Induction, Representation of Language and Language Learning – call for participation

NIPS Workshop on Grammar Induction, Representation of Language and Language Learning
December 11th or 12th, 2009, Whistler, Canada

URL: http://www.cs.ucl.ac.uk/staff/rmartin/grll09/

Deadline for submission: 25th October 2009

Workshop Description

—————————-

Now is the time to revisit some of the fundamental grammar/language learning tasks such as grammar acquisition, language acquisition, language change, and the general problem of automatically inferring generic representations of language structure in a data-driven manner.

Though the underlying problems have been known to be computationally intractable for the standard representations of the Chomsky hierarchy, such as regular grammars and context-free grammars, progress has been made by modifying or restricting these classes to make them more observable. Generalisations of distributional learning have shown promise in unsupervised learning of linguistic structure using tree-based representations, or using non-parametric approaches to inference. More radically, significant advances in this domain have been made by switching to different representations, such as the work of Clark, Eyraud & Habrard (2008), which addresses the issue of language acquisition but has the potential to cross-fertilise a wide range of problems that require data-driven representations of language. Such approaches are starting to make inroads into one of the fundamental problems of cognitive science: that of learning complex representations that encode meaning. This adds a further motivation for returning to this topic at this point.

Grammar induction was the subject of intense study in the early days of Computational Learning Theory, with the theory of query learning largely developing out of this research. More recently, the study of new ways of representing language and grammars through complex kernels and probabilistic modelling, together with algorithms such as structured output learning, has enabled machine learning methods to be applied successfully to a range of language-related tasks, from simple topic classification through part-of-speech tagging to statistical machine translation. These methods typically rely on more fluid structures than those derived from formal grammars, and yet are able to compete favourably with classical grammatical approaches that require significant input from domain experts, often in the form of annotated data.

Call for Papers

—————————

Target Audience: The target audience of the workshop is machine learning researchers with an interest in text modelling and processing who want to extend their work to more complex language tasks, cognitive systems and knowledge representation, moving beyond models that are implicitly or explicitly based on variants of finite state automata / Hidden Markov Models.

Format: The workshop will be a single-day event comprising a tutorial introduction, invited talks (30 mins each + 30 mins for discussion), and presentations of contributed work, with time for discussions. Depending on quality and compatibility with the workshop aims, slots for brief talks and posters will be allocated.

Contributions should be communicated to the program committee (the organizers) in the form of an extended abstract (4 to 8 pages in the NIPS conference paper style), sent to: d.glowacka (at) cs.ucl.ac.uk

Important Dates

—————————

25 October 2009: Deadline for abstract submission
2 November 2009: Notification of acceptance
11/12 December 2009: Workshop takes place at Whistler, Canada

Invited Speakers

—————————

Partha Niyogi University of Chicago

Mark Johnson Brown University

Dan Klein University of California at Berkeley

Workshop Organisers

—————————

Alex Clark, Royal Holloway, University of London
Dorota Glowacka, University College London
John Shawe-Taylor, University College London
Yee Whye Teh, University College London
Chris Watkins, Royal Holloway, University of London

Point of Contact

—————————

Dorota Glowacka, Workshop Organiser d.glowacka (at) cs.ucl.ac.uk
John Shawe-Taylor, Workshop Organiser jst (at) cs.ucl.ac.uk

NIPS 2009 workshop on Learning from Multiple Sources with Applications to Robotics, Call for contributions

CALL FOR CONTRIBUTIONS

NIPS 2009 workshop on Learning from Multiple Sources with Applications to Robotics
Whistler, BC, Canada, December 11 or 12, 2009

http://www.dcs.gla.ac.uk/~srogers/lms09/index.htm

Important Dates:
—————-

Submission of extended abstracts: October 27, 2009

Notification of acceptance: November 6, 2009

Workshop Description:
———————

Learning from multiple sources denotes the problem of jointly learning from a set of (partially) related learning problems / views / tasks. This general concept underlies several subfields receiving increasing interest from the machine learning community, which differ in terms of the assumptions made about the dependency structure between learning problems. In particular, the concept includes topics such as data fusion, transfer learning, multitask learning, multiview learning, and learning under covariate shift. Several approaches for inferring and exploiting complex relationships between data sources have been presented, including both generative and discriminative approaches.

The workshop will provide a unified forum for cutting-edge research on learning from multiple sources. It will examine the general concept, theory, and methods, and will also examine *robotics* as a natural application domain for learning from multiple sources. The workshop will address methodological challenges in the different subtopics and foster further interaction between them. The intended audience is researchers working in the fields of multi-modal learning, data fusion, and robotics.

(More detailed background information is available at the workshop website.)

The workshop includes a morning session focused on theory/methods, and an afternoon session focused on the robotics application.

The workshop is a core event of the PASCAL2 Network of Excellence.

PASCAL2 Invited Speakers:
————————-

Morning Session: Chris Williams – University of Edinburgh

Afternoon Session: to be announced

Submission Instructions:
————————

We invite submission of extended abstracts to the workshop. Extended abstracts should be 2-4 pages, formatted in the NIPS style: http://nips.cc/PaperInformation/StyleFiles
Unlike for the main NIPS conference, author identities do not need to be removed from the extended abstracts.

Extended abstracts should be sent in .PDF or .PS file format by email, to either D.Hardoon (at) cs.ucl.ac.uk or gleen (at) cis.hut.fi. Acceptance to the workshop will be determined based on peer review of each extended abstract.

Submissions are expected to represent high-quality, novel contributions either in the theory/methods of learning from multiple sources, or in the application of learning from multiple sources to robotics (see below).

To encourage participants from the machine learning community to test their algorithms in the domain of robotics, we will make available a dataset, with computed features, representative of open research issues in robotics. Robotics-oriented papers submitted to the workshop are strongly encouraged to contain an experimental evaluation on the dataset made available by the organizers. The obtained results will be presented by the organizers during the workshop.

Submitted extended abstracts may be accepted either as an oral presentation or as a poster presentation; there will be only a limited number of oral presentations in the morning and afternoon sessions.

Accepted extended abstracts will be made available online at the workshop website.

Depending on the quality of submissions, we will consider preparing a special issue of a journal or a collected volume on the topic of the workshop. A separate call for papers will then be issued after the workshop for the special issue/collected volume. Last year’s “Learning from Multiple Sources” workshop led to a special issue in Machine Learning (currently in progress).

Organisers
———-

* Barbara Caputo – Idiap Research Institute.
* Nicolò Cesa-Bianchi – Università degli Studi di Milano.
* David Hardoon – Institute for Infocomm Research (I2R).
* Gayle Leen – Helsinki University of Technology.
* Francesco Orabona – Idiap Research Institute.
* Jaakko Peltonen – Helsinki University of Technology.
* Simon Rogers – University of Glasgow.

Update: Pascal Challenge on Large Scale Hierarchical Text Classification

*** Change of important dates ***
*** End of testing: November 16 ***
*** End of scalability tests: December 14 ***

The challenge is well underway now and we are happy to observe very active participation by various research teams. As mentioned above, we have decided to extend the challenge deadlines, which will hopefully give participants more time to improve their methods.

For more information about the challenge, please refer to its Web site.

If you have not already done so, you can register for the challenge and gain access to the DMOZ datasets, by creating a new account at the Web site.

Original Announcement
—————————————————-
Website: http://lshtc.iit.demokritos.gr/
Email: lshtc_info (at) iit.demokritos.gr

We are pleased to announce the launch of the Large Scale Hierarchical Text Classification (LSHTC) Pascal Challenge. The LSHTC Challenge is a hierarchical text classification competition using large datasets based on the ODP Web directory data (www.dmoz.org).

Hierarchies are becoming ever more popular for the organization of text documents, particularly on the Web. Web directories are an example. Along with their widespread use comes the need for automated classification of new documents to the categories in the hierarchy. As the size of the hierarchy grows and the number of documents to be classified increases, a number of interesting machine learning problems arise. In particular, it is one of the rare situations where data sparsity remains an issue despite the vastness of available data. The reasons for this are the simultaneous increase in the number of classes and their hierarchical organization. The latter leads to a very high imbalance between the classes at different levels of the hierarchy. Additionally, the statistical dependence of the classes poses challenges and opportunities for the learning methods.

The challenge will consist of four tasks with partially overlapping data. Information regarding the tasks and the challenge rules can be found at the challenge Web site, under the “Tasks, Rules and Guidelines” link.

We plan a two-stage evaluation of the participating methods: one stage measuring classification performance and one measuring computational performance. It is important to measure both, as they are interdependent. The results will be included in a final report about the challenge, and we also aim to organize a special NIPS’09 workshop.

In order to register for the challenge and gain access to the datasets, please create a new account at the challenge Web site.

Key dates:
Start of testing: July 10, 2009.
End of testing, submission of executables and short papers: September 29, 2009.
End of scalability test and announcement of results: October 25, 2009.
NIPS’09 workshop (subject to approval): December 11-12, 2009

Organisers:
Eric Gaussier, LIG, Grenoble, France
George Paliouras, NCSR “Demokritos”, Athens, Greece
Aris Kosmopoulos, NCSR “Demokritos”, Athens, Greece
Sujeevan Aseervatham, LIG, Grenoble & Yakaz, Paris, France