Extended Deadline: Special issue of JMLR on Grammar Induction, Representation of Language and Language Learning

Grammar Induction, Representation of Language and Language Learning
– A special issue of Journal of Machine Learning Research –
Extended submission deadline: 3 May 2010

Alex Clark, Dorota Glowacka, Colin de la Higuera, Mark Johnson and John Shawe-Taylor,
guest editors

We would like to invite submissions for a special issue of the Journal
of Machine Learning Research on “Grammar Induction, Representation of
Language and Language Learning”.

We believe now is the time to revisit some of the fundamental grammar
and language learning tasks, such as grammar acquisition, language
acquisition, language change, and the general problem of automatically
inferring generic representations of language structure in a data-driven
manner.

Though the underlying problems have long been known to be computationally
intractable for the standard representations of the Chomsky hierarchy,
such as regular grammars and context-free grammars, progress has been
made either by modifying or restricting these classes to make them more
observable, or by revisiting them with added insights from statistical
machine learning and optimisation. Generalisations of distributional
learning have shown promise in the unsupervised learning of linguistic
structure, whether using tree-based representations, access to queries,
or non-parametric approaches to inference. Such approaches are starting
to make inroads into one of the fundamental problems of cognitive
science: learning complex representations that encode meaning.

Grammar induction, also known as grammatical inference, was the subject
of intense study in the early days of Computational Learning Theory,
with the theory of query learning largely developing out of this
research. More recently, the study of new ways of representing language
and grammars through complex kernels and probabilistic modelling,
together with algorithms for structured output learning, has enabled
machine learning methods to be applied successfully to a range of
language-related tasks, from simple topic classification through
part-of-speech tagging to statistical machine translation.

These methods sometimes rely on more fluid structures than those derived
from formal grammars, and yet are able to compete favourably with
classical grammatical models. Furthermore, new approaches have appeared
that require less input from domain experts, which is often supplied in
the form of annotated data.

Guest editors:
Alex Clark, Royal Holloway, University of London
Dorota Glowacka, University College London
Colin de la Higuera, Nantes University
Mark Johnson, Macquarie University
John Shawe-Taylor, University College London

Submissions:
The journal special issue is aimed at machine learning researchers with
an interest in text modelling and processing who wish to extend their
work to more complex language tasks, cognitive systems and knowledge
representation, moving beyond models that are implicitly or explicitly
based on variants of finite-state automata.

Submissions are expected to represent high-quality, significant
contributions in the area of machine learning algorithms and/or
applications of machine learning. Application papers are expected to
describe the application in detail and to present novel solutions that
have some general applicability (beyond the specific application).
Authors should follow the standard formatting guidelines for Journal of
Machine Learning Research manuscripts
(http://jmlr.csail.mit.edu/author-info.html). Submissions and reviewing
will be handled electronically using standard procedures for Journal of
Machine Learning Research (http://jmlr.csail.mit.edu/manudb).