Goals of the organizers

The goal of the “BCI Competition IV” is to validate signal processing and classification methods for Brain-Computer Interfaces (BCIs). Compared to the past BCI Competitions, new challenging problems are addressed that are highly relevant for practical BCI systems, such as

  • classification of continuous EEG without trial structure (data sets 1).
  • classification of EEG signals affected by eye movement artifacts (data sets 2).
  • classification of the direction of wrist movements from MEG (data sets 3).
  • discrimination requiring fine grained spatial resolution in ECoG (data sets 4).


The organizers are aware that such a competition cannot validate BCI systems as a whole. Nevertheless, we expect interesting contributions that ultimately improve the full BCI and provide a challenging database for the research community.

Goals for the participants

For each data set specific goals are given in the respective description. Technically speaking, each data set consists of single-trials of spontaneous brain activity, one part labeled (calibration or training data) and another part unlabeled (evaluation or test data), and a performance measure. The goal is to infer labels (or their probabilities) for the evaluation data sets from calibration data that maximize the performance measure for the true (but to the competitors unknown) labels of the evaluation data. Results will be announced at a workshop of the NIPS 2008 conference and on this web site. For each data set, the competition winner gets a chance to publish the algorithm in an individual article that will appear in a volume devoted to this competition.

Data sets

Data sets 1: ‹motor imagery, uncued classifier application› (description)
provided by the Berlin BCI group: Technische Universität Berlin (Machine Learning Laboratory) and Fraunhofer FIRST (Intelligent Data Analysis Group) (Klaus-Robert Müller, Benjamin Blankertz, Carmen Vidaurre, Guido Nolte), and Campus Benjamin Franklin of the Charité – University Medicine Berlin, Department of Neurology, Neurophysics Group (Gabriel Curio)
EEG, motor imagery (2 classes of left hand, right hand, foot); evaluation data is continuous EEG which also contains periods of idle state
[64 EEG channels (0.05-200Hz), 1000Hz sampling rate, 2 classes (+ idle state), 7 subjects]

Data sets 2a: ‹4-class motor imagery› (description)
provided by the Institute for Knowledge Discovery (Laboratory of Brain-Computer Interfaces), Graz University of Technology, (Clemens Brunner, Robert Leeb, Gernot Müller-Putz, Alois Schlögl, Gert Pfurtscheller)
EEG, cued motor imagery (left hand, right hand, feet, tongue)
[22 EEG channels (0.5-100Hz; notch filtered), 3 EOG channels, 250Hz sampling rate, 4 classes, 9 subjects]

Data sets 2b: ‹motor imagery› (description)
provided by the Institute for Knowledge Discovery (Laboratory of Brain-Computer Interfaces), Graz University of Technology, (Robert Leeb, Clemens Brunner, Gernot Müller-Putz, Alois Schlögl, Gert Pfurtscheller)
EEG, cued motor imagery (left hand, right hand)
[3 bipolar EEG channels (0.5-100Hz; notch filtered), 3 EOG channels, 250Hz sampling rate, 2 classes, 9 subjects]

Data sets 3: ‹hand movement direction in MEG› (description)
provided by the Brain Machine Interfacing Initiative, Albert-Ludwigs-University Freiburg, the Bernstein Center for Computational Neuroscience Freiburg and the Institute of Medical Psychology and Behavioral Neurobiology, University of Tübingen (Stephan Waldert, Carsten Mehring, Hubert Preissl, Christoph Braun)
The data set contains directionally modulated low-frequency MEG activity that was recorded while subjects performed wrist movements in four different directions.
[10 MEG channels (filtered to 0.5-100Hz), 400Hz sampling rate, 4 classes, 2 subjects]

Data sets 4: ‹finger movements in ECoG› (description)
provided by Departments of Physics and Medicine of the University of Washington, Seattle (Kai J. Miller) and Wadsworth Center, NYS Department of Health (Gerwin Schalk)
ECoG data during individual flexions of the five fingers; movements acquired with a data glove.
[48 – 64 ECoG channels (0.15-200Hz), 1000Hz sampling rate, 5 classes, 3 subjects]
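
For orientation, a common baseline in the motor-imagery literature (not a method prescribed or used by the organizers) is to band-pass filter the EEG, extract Common Spatial Pattern (CSP) features and classify them with a linear discriminant. The sketch below assumes trials are already cut into NumPy arrays of shape (trials, channels, samples); the 8-30 Hz band, the number of CSP filters and the use of scikit-learn are illustrative choices, not competition requirements.

# Minimal CSP + LDA baseline sketch (illustrative only; not the official pipeline)
import numpy as np
from scipy.signal import butter, lfilter
from scipy.linalg import eigh
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def bandpass(trials, fs, lo=8.0, hi=30.0, order=4):
    # trials: array of shape (n_trials, n_channels, n_samples)
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return lfilter(b, a, trials, axis=-1)

def csp_filters(trials_a, trials_b, n_pairs=3):
    # average normalized covariance per class, then a generalized eigendecomposition
    cov = lambda X: np.mean([t @ t.T / np.trace(t @ t.T) for t in X], axis=0)
    w, V = eigh(cov(trials_a), cov(trials_a) + cov(trials_b))
    order = np.argsort(w)
    keep = np.r_[order[:n_pairs], order[-n_pairs:]]
    return V[:, keep]                        # (n_channels, 2 * n_pairs)

def log_var_features(trials, W):
    Z = np.einsum("ck,nct->nkt", W, trials)  # spatially filtered trials
    var = Z.var(axis=-1)
    return np.log(var / var.sum(axis=1, keepdims=True))

# usage sketch (X_*: (n_trials, n_channels, n_samples), y_train: labels in {0, 1})
# Xf = bandpass(X_train, fs=250)
# W = csp_filters(Xf[y_train == 0], Xf[y_train == 1])
# clf = LinearDiscriminantAnalysis().fit(log_var_features(Xf, W), y_train)
# y_pred = clf.predict(log_var_features(bandpass(X_test, fs=250), W))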


Schedule

July 3rd 2008: launching of the competition
November 21st 2008, midnight CET (night of Nov 21st to 22nd): deadline for submissions
December 12th 2008: announcement of the results at a workshop of NIPS 2008 and on this web site


Submission

Submissions to a data set are to be sent to the responsible contact person as stated in the data set description. The submission has to comprise the estimated labels, names and affiliations of all involved researchers and a short note on the processing techniques used. We send a confirmation for each submission we receive. If you do not receive a confirmation within 2 days please resend your email with CC to the other organizing committee members, ‹benjamin.blankertz@tu-berlin.de›, ‹alois.schloegl@tugraz.at›, ‹waldert@bccn.uni-freiburg.de›, ‹kjmiller@u.washington.edu›.

One researcher may NOT submit multiple results to one data set; she/he has to choose her/his favorite one. However, multiple submissions to one data set from one research group are possible. The sets of involved researchers do not have to be disjoint, but (1) the ‘first author’ (contributor) should be distinct, and (2) the approaches should be substantially different.

For details on how to submit your results please refer to the description of the respective data set. If questions remain unanswered, send an email to the responsible contact person for the specific data set, as indicated in the description.

Submissions are evaluated for each group of data sets separately. There is no need to submit for all data sets of the competition in order to participate, however each submission must provide results for all data sets of one group (e.g., for all subjects provided in Data sets 2a).

Each participant agrees to deliver an extended description (4-6 pages) of the algorithm used, for publication, by February 1st 2009 in case she/he is the winner for one of the data sets.

Organizers

Berlin: Benjamin Blankertz, Carmen Vidaurre, Michael Tangermann, Klaus-Robert Müller

Graz: Clemens Brunner, Robert Leeb, Gernot Müller-Putz, Alois Schlögl, Gert Pfurtscheller

Freiburg/Tübingen: Stephan Waldert, Carsten Mehring, Ad Aertsen, Niels Birbaumer

Washington/Albany: Kai J. Miller, Gerwin Schalk


References

Short History of past BCI Competitions

The first BCI Competition was announced at NIPS 2001, and the second at NIPS 2002. The first competition was a first try to see how such an event would work and it was only announced to a smaller community. Accordingly, there were not that many submissions, but nevertheless many researchers showed great interest when the results were published (first on the internet and then in IEEE Trans Neural Sys Rehab Eng, 2003, vol 11(2), pp. 184-185 [ draft ]). For the second competition data sets were provided by four of the leading groups in EEG-based BCIs. Here we received 59 submissions. A review of the 2nd competition appeared in IEEE Trans Biomed Eng, 51(6):1044-1051, 2004 [ draft ] and articles of all winning teams of the competition were published in the same issue, which provides a good overview of the state of the art in classification techniques for BCI. The 3rd BCI Competition involved data sets from five BCI labs and we received 99 submissions. It was reviewed in IEEE Trans Neural Sys Rehab Eng, 14(2):153-159, 2006 [ draft ] and individual articles of the competition winners appeared in different journals.

References to papers that analyze competition data sets

can be found here. Please help us to make the list complete and keep it up to date by reporting unlisted papers to ‹benjamin.blankertz@tu-berlin.de›, preferably with a PubMed ID (PMID) or in BibTeX format.

References to papers about past BCI Competitions

 

References to overviews of BCI research.

  • Gerwin Schalk. Brain-computer symbiosis. J Neural Eng, 5:P1-P15, 2008.
  • Guido Dornhege, José del R. Millán, Thilo Hinterberger, Dennis McFarland, and Klaus-Robert Müller, editors. Toward Brain-Computer Interfacing. MIT Press, Cambridge, MA, 2007.
  • Kübler A, Kotchoubey B. Brain-computer interfaces in the continuum of consciousness. Curr Opin Neurol 20(6):643-9, 2007.
  • B.Z. Allison, E.W. Wolpaw, and J.R. Wolpaw. Brain-computer interface systems: progress and prospects. Expert Rev Med Devices, 4(4):463-474, 2007.
  • Eleanor A. Curran and Maria J. Stokes. Learning to control brain activity: A review of the production and control of EEG components for driving brain-computer interface (BCI) systems. Brain Cogn., 51:326-336, 2003.
  • Jonathan R. Wolpaw, Niels Birbaumer, Dennis J. McFarland, Gert Pfurtscheller, and Theresa M. Vaughan. Brain-computer interfaces for communication and control. Clin. Neurophysiol., 113:767-791, 2002.
  • José del R. Millán. Brain-computer interfaces. In M.A. Arbib (ed.), “Handbook of Brain Theory and Neural Networks, 2nd ed.” Cambridge: MIT Press, 2002.
  • Andrea Kübler, Boris Kotchoubey, Jochen Kaiser, Jonathan Wolpaw, and Niels Birbaumer. Brain-computer communication: Unlocking the locked in. Psychol. Bull., 127(3):358-375, 2001.

 

References to BCI Special Issues.

  • IEEE Signal Proc Magazine, 25(1), 2008.
  • IEEE Trans. Biomed. Eng., 51(6), 2004.
  • IEEE Trans. Neural Sys. Rehab. Eng., 11(2), 2003.
  • IEEE Trans. Rehab. Eng., 8(2), 2000.


Links to General Interest BCI Sites.

 

We are pleased to announce the launch of the Large Scale Hierarchical Text Classification (LSHTC) Pascal Challenge. The LSHTC Challenge is a hierarchical text classification competition using large datasets based on the ODP Web directory data (www.dmoz.org).

Hierarchies are becoming ever more popular for the organization of text documents, particularly on the Web; Web directories are an example. Along with their widespread use comes the need for automated classification of new documents to the categories in the hierarchy. As the size of the hierarchy grows and the number of documents to be classified increases, a number of interesting machine learning problems arise. In particular, it is one of the rare situations where data sparsity remains an issue despite the vastness of available data. The reasons for this are the simultaneous increase in the number of classes and their hierarchical organization. The latter leads to a very high imbalance between the classes at different levels of the hierarchy. Additionally, the statistical dependence of the classes poses challenges and opportunities for the learning methods.
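
As a point of reference only, a flat baseline that ignores the hierarchy (treating every category as an independent label) can be put together from off-the-shelf components; a minimal sketch is below. The example documents, labels and classifier choice are placeholders, not the challenge's actual data format or a recommended method.

# Flat tf-idf + linear classifier baseline sketch (hierarchy ignored; illustrative only)
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

docs_train = ["sample page about mountain hiking", "sample page about compiler design"]  # placeholders
labels_train = ["Recreation/Outdoors", "Computers/Programming"]                          # placeholders

model = make_pipeline(
    TfidfVectorizer(sublinear_tf=True),
    OneVsRestClassifier(LinearSVC()),   # one binary classifier per leaf category
)
model.fit(docs_train, labels_train)
print(model.predict(["a new page about alpine trails"]))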

The challenge will consist of four tasks with partially overlapping data. Information regarding the tasks and the challenge rules can be found at the “Tasks, Rules and Guidelines” link.

We plan a two-stage evaluation of the participating methods: one stage measuring classification performance and one measuring computational performance. It is important to measure both, as they are interdependent. The results will be included in a final report about the challenge and we also aim at organizing a special workshop.

In order to register for the challenge and gain access to the datasets, please create a new account at http://lshtc.iit.demokritos.gr/.

Organisers:

Eric Gaussier, LIG, Grenoble, France
George Paliouras, NCSR “Demokritos” , Athens, Greece
Aris Kosmopoulos, NCSR “Demokritos” and AUEB, Athens, Greece
Sujeevan Aseervatham, LIG, Grenoble, France

We organized the KDD Cup 2009 around a marketing problem with the goal of identifying data mining techniques capable of rapidly building predictive models and scoring new entries on a large database. Customer Relationship Management (CRM) is a key element of modern marketing strategies. The KDD Cup 2009 offered the opportunity to work on large marketing databases from the French Telecom company Orange to predict the propensity of customers to switch provider (churn), buy new products or services (appetency), or buy upgrades or add-ons proposed to them to make the sale more profitable (up-selling). The challenge started on March 10, 2009 and ended on May 11, 2009. This challenge attracted over 450 participants from 46 countries. We attribute the popularity of the challenge to several factors: (1) A generic problem relevant to the industry (a classification problem), but presenting a number of scientific and technical challenges of practical interest, including: a large number of training examples (50,000) with a large number of missing values (about 60%) and a large number of features (15,000), unbalanced class proportions (fewer than 10% of the examples of the positive class), noisy data, and the presence of categorical variables with many different values. (2) Prizes (Orange offered 10,000 Euros in prizes). (3) A well designed protocol and web site (we benefitted from past experience). (4) An effective advertising campaign using mailings and a teleconference to answer potential participants' questions. The results of the challenge were discussed at the KDD conference (June 28, 2009). The principal conclusions are that ensemble methods are very effective and that ensembles of decision trees offer off-the-shelf solutions to problems with large numbers of samples and attributes, mixed types of variables, and lots of missing values. The data and the platform of the challenge remain available for research and educational purposes at http://www.kddcup-orange.com/.
Background
Customer Relationship Management (CRM) is a key element of modern marketing strategies. The KDD Cup 2009 offered the opportunity to work on large marketing databases from the French Telecom company Orange to predict the propensity of customers to switch provider (churn), buy new products or services (appetency), or buy upgrades or add-ons proposed to them to make the sale more profitable (up-selling). The most practical way to build knowledge on customers in a CRM system is to produce scores. A score (the output of a model) is an evaluation for all target variables to explain (i.e., churn, appetency or up-selling). Tools producing scores provide quantifiable information on a given population. The score is computed using customer records represented by a number of variables or features. Scores are then used by the information system (IS), for example, to personalize the customer relationship.
The rapid and robust detection of the most predictive variables can be a key factor in a marketing application. An industrial customer analysis platform developed at Orange Labs, capable of building predictive models for datasets having a very large number of input variables (thousands) and instances (hundreds of thousands), is currently in use by Orange marketing. A key requirement is the complete automation of the whole process. The system extracts a large number of features from a relational database, selects a subset of informative variables and instances, and efficiently builds an accurate classifier in a few hours. When the models are deployed, the platform exploits sophisticated indexing structures and parallelization in order to compute the scores of millions of customers, using the best representation.
The challenge was to beat the in-house system developed by Orange Labs. It was an opportunity for participants to prove that they could handle a very large database, including heterogeneous noisy data (numerical and categorical variables), and unbalanced class distributions. Time efficiency is often a crucial point. Therefore part of the competition was time-constrained to test the ability of the participants to deliver solutions quickly. The fast track of the challenge lasted five days only. To encourage participation, the slow track of the challenge allowed participants to continue working on the problem for an additional month. A smaller database was also provided to allow participants with limited computer resources to enter the challenge.
This challenge uses important marketing problems to benchmark classification methods in a setting which is typical of large-scale industrial applications. A large database was made available by the French Telecom company Orange, with tens of thousands of examples and variables. This dataset is unusual in that it has a large number of variables, making the problem particularly challenging to many state-of-the-art machine learning algorithms. The challenge participants were provided with masked customer records, and their goal was to predict whether a customer will switch provider (churn), buy the main service (appetency) and/or buy additional extras (up-selling), hence solving three binary classification problems. Churn is the propensity of customers to switch between service providers, appetency is the propensity of customers to buy a service, and up-selling is the success in selling additional goods or services to make a sale more profitable. Although the technical difficulty of scaling up existing algorithms is the main emphasis of the challenge, the dataset offers a variety of other difficulties: heterogeneous data (numerical and categorical variables), noisy data, unbalanced distributions of predictive variables, sparse target values (only 1 to 7 percent of the examples belong to the positive class) and many missing values.
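
To make the difficulties listed above concrete, here is a minimal sketch of a model that tolerates missing values, mixed numerical/categorical features and class imbalance. The file names, column handling and the use of ROC AUC as the score are assumptions for illustration; they are not the official data format or evaluation protocol.

# Sketch: gradient-boosted trees on Orange-style data (illustrative assumptions only)
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.impute import SimpleImputer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OrdinalEncoder

X = pd.read_csv("orange_small_train.csv")                    # hypothetical file name
y = pd.read_csv("orange_small_train_churn.csv")["churn"]     # one of the three targets

cat_cols = X.select_dtypes(include="object").columns
num_cols = X.columns.difference(cat_cols)

categorical = make_pipeline(
    SimpleImputer(strategy="constant", fill_value="missing"),
    OrdinalEncoder(handle_unknown="use_encoded_value", unknown_value=-1),
)
pre = ColumnTransformer([
    ("cat", categorical, cat_cols),
    ("num", "passthrough", num_cols),   # the tree model handles NaNs in numeric columns
])
model = make_pipeline(pre, HistGradientBoostingClassifier())
print(cross_val_score(model, X, y, cv=3, scoring="roc_auc").mean())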

The GRavitational lEnsing Accuracy Testing 2008 (GREAT08) Challenge focuses on a problem that is of crucial importance for future observations in cosmology. The shapes of distant galaxies can be used to determine the properties of dark energy and the nature of gravity, because light from those galaxies is bent by gravity from the intervening dark matter. The observed galaxy images appear distorted, although only slightly, and their shapes must be precisely disentangled from the effects of pixelisation, convolution and noise. The worldwide gravitational lensing community has made significant progress in techniques to measure these distortions via the Shear TEsting Programme (STEP). Via STEP, we have run challenges within our own community, and come to recognise that this particular image analysis problem is ideally matched to experts in statistical inference, inverse problems and computational learning. Thus, in order to continue the progress seen in recent years, we are seeking an infusion of new ideas from these communities.

 

 

Introduction

The goal of this challenge is to recognize objects from a number of visual object classes in realistic scenes (i.e. not pre-segmented objects). It is fundamentally a supervised learning problem in that a training set of labelled images is provided. The twenty object classes that have been selected are:

  • Person: person
  • Animal: bird, cat, cow, dog, horse, sheep
  • Vehicle: aeroplane, bicycle, boat, bus, car, motorbike, train
  • Indoor: bottle, chair, dining table, potted plant, sofa, tv/monitor

There will be three main competitions: classification, detection, and segmentation; and a single smaller scale “taster” competition: person layout:

Classification/Detection Competitions

  1. Classification: For each of the twenty classes, predicting presence/absence of an example of that class in the test image.
  2. Detection: Predicting the bounding box and label of each object from the twenty target classes in the test image.

Participants may enter either (or both) of these competitions, and can choose to tackle any (or all) of the twenty object classes. The challenge allows for two approaches to each of the competitions:

  1. Participants may use systems built or trained using any methods or data excluding the provided test sets.
  2. Systems are to be built or trained using only the provided training/validation data.

The intention in the first case is to establish just what level of success can currently be achieved on these problems and by what method; in the second case the intention is to establish which method is most successful given a specified training set.
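
For the classification competition, entries are typically ranked by a per-image confidence for each class and summarized by a per-class average precision; the official scoring is done by the MATLAB evaluation software in the development kit, so the short sketch below is only an illustration of a non-interpolated average precision computation.

# Illustrative (non-official) average precision over a ranked list of confidences
import numpy as np

def average_precision(confidences, labels):
    # confidences: one score per test image for a single class; labels: 1 (present) / 0 (absent)
    order = np.argsort(-np.asarray(confidences))
    labels = np.asarray(labels)[order]
    tp = np.cumsum(labels)
    precision = tp / np.arange(1, len(labels) + 1)
    recall_step = labels / max(labels.sum(), 1)
    return float(np.sum(precision * recall_step))

print(average_precision([0.9, 0.8, 0.3, 0.1], [1, 0, 1, 0]))   # ≈ 0.83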

Segmentation Competition

  • Segmentation: Generating pixel-wise segmentations giving the class of the object visible at each pixel, or “background” otherwise.

Person Layout Taster Competition

  • Person Layout: Predicting the bounding box and label of each part of a person (head, hands, feet).

Data

To download the training/validation data, see the development kit.

The training data provided consists of a set of images; each image has an annotation file giving a bounding box and object class label for each object in one of the twenty classes present in the image. Note that multiple objects from multiple classes may be present in the same image. Some example images can be viewed online. A subset of images are also annotated with pixel-wise segmentation of each object present, to support the segmentation competition. Some segmentation examples can be viewed online.
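
The development kit ships MATLAB code for reading the annotation files; for readers working in other languages, the short sketch below parses one VOC-style XML annotation with the Python standard library. The element names follow the usual VOC layout and should be checked against the development kit documentation; the file path is hypothetical.

# Sketch: reading a VOC-style XML annotation (element names assumed; verify against the devkit)
import xml.etree.ElementTree as ET

def read_annotation(path):
    root = ET.parse(path).getroot()
    objects = []
    for obj in root.findall("object"):
        box = obj.find("bndbox")
        objects.append({
            "class": obj.findtext("name"),
            "bbox": tuple(int(float(box.findtext(tag))) for tag in ("xmin", "ymin", "xmax", "ymax")),
        })
    return root.findtext("filename"), objects

# filename, objects = read_annotation("Annotations/2009_000001.xml")   # hypothetical path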

Annotation was performed according to a set of guidelines distributed to all annotators.

The data will be made available in two stages; in the first stage, a development kit will be released consisting of training and validation data, plus evaluation software (written in MATLAB). One purpose of the validation set is to demonstrate how the evaluation software works ahead of the competition submission.

In the second stage, the test set will be made available for the actual competition. As in the VOC2008 challenge, no ground truth for the test data will be released.

The data has been split into 50% for training/validation and 50% for testing. The distributions of images and objects by class are approximately equal across the training/validation and test sets. In total there are 14,743 images. Further statistics are online.

Example images

Example images and the corresponding annotation for the classification/detection/segmentation tasks and the person layout taster can be viewed online.

Development Kit

The development kit consists of the training/validation data, MATLAB code for reading the annotation data, support files, and example implementations for each competition.

Test Data

The test data is now available. Note that the only annotation in the data is for the layout taster challenge – disjoint from the main challenge. As in 2008, there are no current plans to release full annotation – evaluation of results will be provided by the organizers.

The test data can now be downloaded from the evaluation server. You can also use the evaluation server to evaluate your method on the test data.

Useful Software

Below is a list of software you may find useful, contributed by participants to previous challenges.

Timetable

  • 15 May 2009: Development kit (training and validation data plus evaluation software) made available.
  • 15 June 2009: Test set made available.
  • 14 September 2009 (extended). Deadline for submission of results. There will be no further extensions.
  • 3 October 2009: Workshop in association with ICCV 2009, Kyoto, Japan.

Submission of Results

Participants are expected to submit a single set of results per method employed. Participants who have investigated several algorithms may submit one result per method. Changes in algorithm parameters do not constitute a different method – all parameter tuning must be conducted using the training and validation data alone.

Results must be submitted using the automated evaluation server.

It is essential that your results files are in the correct format. Details of the required file formats for submitted results can be found in the development kit documentation. The results files should be collected in a single archive file (tar/tgz/tar.gz). An example of the correct format is available online.

The format of your results must match that specified in the development kit and in the example file, including both file names and directory structure. If you are not entering a particular competition, just omit the corresponding files.

Participants submitting results for several different methods (noting the definition of different methods above) should produce a separate archive for each method.

In addition to the results files, participants will also need to specify:

  • contact details and affiliation
  • list of contributors
  • brief description of the method

If you would like to submit a more detailed description of your method, for example a relevant publication, this can be included in the results archive.

Prizes

The following prizes were announced at the challenge workshop:

Classification
Winner: NEC/UIUC
Yihong Gong, Fengjun Lv, Jinjun Wang, Chen Wu, Wei Xu, Jianchao Yang, Kai Yu, Xi Zhou, Thomas Huang
NEC Laboratories America; University of Illinois at Urbana-Champaign
Honourable mentions: UVA/SURREY
Koen van de Sande, Fei Yan, Atif Tahir, Jasper Uijlings, Mark Barnard, Hongping Cai, Theo Gevers, Arnold Smeulders, Krystian Mikolajczyk, Josef Kittler
University of Amsterdam; University of Surrey
CVC
Fahad Shahbaz Khan, Joost van de Weijer, Andrew Bagdanov, Noha Elfiky, David Rojas, Marco Pedersoli, Xavier Boix, Pep Gonfaus, Hany salahEldeen, Robert Benavente, Jordi Gonzalez, Maria Vanrell
Computer Vision Centre Barcelona
Detection
Joint Winners: UoC/TTI Chicago
Pedro Felzenszwalb, Ross Girshick, David McAllester
University of Chicago; Toyota Technological Institute at Chicago
Oxford/MSR India
Andrea Vedaldi, Varun Gulshan, Manik Varma, Andrew Zisserman
University of Oxford; Microsoft Research India
Segmentation
Winner: Bonn
Joao Carreira, Fuxin Li, Cristian Sminchisescu
University of Bonn
Runner-up: CVC
Xavier Boix, Josep Maria Gonfaus, Fahad Khan, Joost van de Weijer, Andrew Bagdanov, Marco Pedersoli, Jordi Gonzalez, Joan Serrat
Computer Vision Center Barcelona

Publication Policy

The main mechanism for dissemination of the results will be the challenge webpage.

The detailed output of each submitted method will be published online e.g. per-image confidence for the classification task, and bounding boxes for the detection task. The intention is to assist others in the community in carrying out detailed analysis and comparison with their own methods. The published results will not be anonymous – by submitting results, participants are agreeing to have their results shared online.

Citation

If you make use of the VOC2009 data, please cite the following reference (to be prepared after the challenge workshop) in any publications:

@misc{pascal-voc-2009,
	author = "Everingham, M. and Van~Gool, L. and Williams, C. K. I. and Winn, J. and Zisserman, A.",
	title = "The {PASCAL} {V}isual {O}bject {C}lasses {C}hallenge 2009 {(VOC2009)} {R}esults",
	howpublished = "http://www.pascal-network.org/challenges/VOC/voc2009/workshop/index.html"}	

Database Rights

The VOC2009 data includes images obtained from the “flickr” website. Use of these images must respect the corresponding terms of use:

For the purposes of the challenge, the identity of the images in the database, e.g. source and name of owner, has been obscured. Details of the contributor of each image can be found in the annotation to be included in the final release of the data, after completion of the challenge. Any queries about the use or ownership of the data should be addressed to the organizers.

Organizers

  • Mark Everingham (University of Leeds), me@comp.leeds.ac.uk
  • Luc van Gool (ETHZ, Zurich)
  • Chris Williams (University of Edinburgh)
  • John Winn (Microsoft Research Cambridge)
  • Andrew Zisserman (University of Oxford)

Acknowledgements

We gratefully acknowledge the following, who spent many long hours providing annotation for the VOC2009 database: Jan Hendrik Becker, Patrick Buehler, Kian Ming Adam Chai, Miha Drenik, Chris Engels, Hedi Harzallah, Nicolas Heess, Sam Johnson, Markus Mathias, Alastair Moore, Maria-Elena Nilsback, Patrick Ott, Florian Schroff, Alexander Sorokin, Paul Sturgess, David Tingdahl. We also thank Andrea Vedaldi for additional assistance.

The objective of the Challenge is to design a statistical machine learning algorithm that discovers which morphemes (smallest individually meaningful units of language) words consist of. Ideally, these are basic vocabulary units suitable for different tasks, such as text understanding, machine translation, information retrieval, and statistical language modeling.

The scientific goals are:

  • To learn of the phenomena underlying word construction in natural languages
  • To discover approaches suitable for a wide range of languages
  • To advance machine learning methodology

Morpho Challenge 2009 is a follow-up to our previous Morpho Challenges 2005, 2007 and 2008. The task of Morpho Challenge 2009 is similar to that of Morpho Challenge 2008, where the aim was to find the morpheme analysis of the word forms in the data. For this challenge, new Machine Translation tasks are added (from Finnish to English and from German to English) to evaluate the performance of the morpheme analyses.

Participation in the previous challenges is by no means a prerequisite for participation in Morpho Challenge 2009. Everyone is welcome and we hope to attract many participating teams. The results will be presented in a workshop. Please read the rules and see the schedule. The datasets are available for download. Submit your analyses (result files) by sending them by email to the organizers, or by indicating a location where the organizers can download your files. Remember also to describe your algorithm in a paper. Please read the formatting instructions in rules.

If you plan to participate in Morpho Challenge, please contact the organizers using the email address in contact and ask to be added to our mailing list. We will use this mailing list to provide news about the tasks, data and evaluations.

The results from the evaluation runs are now on the Results page.
The Workshop was held on September 30, 2009.

References

Mathias Creutz and Krista Lagus (2005). Unsupervised Morpheme Segmentation and Morphology Induction from Text Corpora Using Morfessor 1.0. Publications in Computer and Information Science, Report A81, Helsinki University of Technology, March.
[ Article (PDF) ]

Teemu Hirsimäki, Mathias Creutz, Vesa Siivola, Mikko Kurimo, Janne Pylkkönen, and Sami Virpioja (2006). Unlimited vocabulary speech recognition with morph language models applied to Finnish. Computer Speech and Language, Volume 20, Issue 4, October, pages 515–541.
[ Article (PDF) ]

Sami Virpioja, Jaakko J. Väyrynen, Mathias Creutz, and Markus Sadeniemi (2007). Morphology-aware statistical machine translation based on morphs induced in an unsupervised manner. In Proceedings of the Machine Translation Summit XI, pages 491–498.
[ Article (PDF) ]

Mikko Kurimo and Matti Varjokallio (2008). Unsupervised morpheme analysis evaluation by a comparison to a linguistic Gold Standard – Morpho Challenge 2008. In Working Notes for the CLEF 2008 Workshop.
[ Article (PDF) ]

Mikko Kurimo and Ville Turunen (2008). Unsupervised morpheme analysis evaluation by IR experiments – Morpho Challenge 2008. In Working Notes for the CLEF 2008 Workshop.
[ Article (PDF) ]

Rules

Submission of large data files

Send an email to the organizers morphochallenge2008<>mail.cis.hut.fi and tell them where they can download the data files. Small data files (but not larger than a few MBs) can be emailed directly. Please follow carefully the format of the result files described in datasets.

Acceptance

The organizers retain all rights to the Challenge data, which is given to the participants for use in this challenge only. The organizers may use the data submitted to the Challenge freely, without restrictions.

Eligibility

Anyone is allowed to participate. A participant may be either a single person or a group. A single person can participate in at most two groups. A participant is allowed to submit at most three different solutions, where each solution corresponds to a particular morpheme analysis method. Each of these methods may naturally be applied to each of the test languages. If a participant submits more than three solutions, the organizers decide which of the three will be accepted.

Test languages

Data sets are provided for five languages: Arabic, English, Finnish, German and Turkish. Participants are encouraged to apply their algorithm to all of these test languages, but are free to leave some languages out, if they wish to do so.

(New languages may be added, if interested co-organizers, suitable data and evaluation analyses become available in time.)

Task

The task is to develop a system that conducts unsupervised morpheme analysis for every word form contained in a word list supplied by the organizers for each test language.

The participants will be pointed to corpora in which the words occur, so that the algorithms may utilize information about word context.

Solutions in which a large number of parameters must be “tweaked” separately for each test language are of little interest. This challenge aims at the unsupervised (or very minimally supervised) morpheme analysis of words. The abstracts submitted by the participants must contain clear descriptions of which steps of supervision or parameter optimization are involved in the algorithms.

Competitions

The segmentations will be evaluated in three complementary ways:

  • Competition 1: The proposed morpheme analyses will be compared to a linguistic “gold standard”.
  • Competition 2: Information retrieval (IR) experiments will be performed, where the words in the documents and queries will be replaced by their proposed morpheme representations. The search will then be based on morphemes instead of words.
  • Competition 3: A Machine Translation (MT) model is trained in which the words in the source language documents are replaced by their proposed morpheme representations. The words in the source language evaluation data will then also be replaced by their proposed morpheme representations, and the translation will be based on morphemes instead of words.

Competition 1 will include all five test languages. Winners will be selected separately for each language. As a performance measure, the F-measure of accuracy of suggested morpheme analyses is utilized. Should two solutions produce the same F-measure, the one with higher precision will win.

Competition 2 will include three of the test languages. The organizers will perform the IR experiments based on the morpheme analyses submitted by the participants.

Competition 3 will include two of the test languages. Translation will be done from the test language to English. The organizers will train the translation models and perform the evaluation of the translations using an automatic metric such as BLEU.

Workshop and publication

All good results will be acknowledged with fame and glory. Presentations for the challenge workshop will be selected by the organizers based on the results and a paper of at most 10 pages describing the algorithm and the data submission. However, all groups who have submitted results and a paper are welcome to participate in the workshop to listen to the talks and join the discussions.

Workshop papers

For your paper submission (due August 15), please use the single-column CLEF 2007 Notebook Proceedings format. Here are a sample PDF file and a template Latex document. Email your paper submission to the organizers.
Formatting instructions:

size: A4
format: pdf (if difficult, ps or MS Word (rtf) are acceptable) Do NOT lock the pdf file.
borders: top, left, right 2.5 cm; bottom 3 cm
text size: 16 x 24 cm
length: 10 pages maximum
title: Times 14 pt bold centered
author(s): Times 10 pt centered
abstract: Times 10 pt justified
ACM Categories and Subject Descriptors: Times 10 pt left aligned
Free Keywords: Times 10 pt left aligned
body text: Times 10 pt justified
Section Headings: Times 12 pt bold left aligned
Emphasis: Times 10 pt italic

Arbitration

In the case of disagreement the organizers will decide the final interpretation of the rules.

Competition 1

NEW: The evaluation measures of competition 1 have been updated for Morpho Challenge 2009. Some bugs related to the handling of alternative analyses have been fixed in the scripts, and points are now measured as one per word, not one per word pair. The new evaluation scripts are now available for download.

 

 

The old scripts can be found on the Challenge 2008 pages.

In Competition 1, for each language, the morpheme analyses proposed by the participants’ algorithm will be compared against a linguistic gold standard. Samples of the gold standards used are available for download on the datasets page.

Since the task at hand involves unsupervised learning, it cannot be expected that the algorithm comes up with morpheme labels that exactly correspond to the ones designed by linguists. That is, no direct comparison will take place between labels as such (the labels in the proposed analyses vs. labels in the gold standard). What can be expected, however, is that two word forms that contain the same morpheme according to the participants’ algorithm also have a morpheme in common according to the gold standard. For instance, in the English gold standard, the words “foot” and “feet” both contain the morpheme “foot_N”. It is thus desirable that also the participants’ algorithm discovers a morpheme that occurs in both these word forms (be it called “FOOT”, “morpheme784”, “foot” or something else).

In practice, the evaluation will take place by sampling a large number of word pairs, such that both words in the pair have at least one morpheme in common. As the evaluation measure, we will use F-measure, which is the harmonic mean of Precision and Recall:

F-measure = 2/(1/Precision + 1/Recall) = 2 · Precision · Recall / (Precision + Recall).

Precision is here calculated as follows: A number of word forms will be randomly sampled from the result file provided by the participants; for each morpheme in these words, another word containing the same morpheme will be chosen from the result file at random (if such a word exists). We thus obtain a number of word pairs such that in each pair at least one morpheme is shared between the words in the pair. These pairs will be compared to the gold standard; a point is given for each word pair that really has a morpheme in common according to the gold standard. The maximum number of points for one sampled word is normalized to one. The total number of points is then divided by the total number of sampled words.

For instance, assume that the proposed analysis of the English word “abyss” is: “abys +s”. Two word pairs are formed: Say that “abyss” happens to share the morpheme “abys” with the word “abysses”; we thus obtain the word pair “abyss – abysses”. Also assume that “abyss” shares the morpheme “+s” with the word “mountains”; this produces the pair “abyss – mountains”. Now, according to the gold standard the correct analyses of these words are: “abyss_N”, “abyss_N +PL”, “mountain_N +PL”, respectively. The pair “abyss – abysses” is correct (common morpheme: “abyss_N”), but the pair “abyss – mountains” is incorrect (no morpheme in common). Precision for the word “abyss” is thus 1/2 = 50%.

Recall is calculated analogously to precision: A number of word forms are randomly sampled from the gold standard file; for each morpheme in these words, another word containing the same morpheme will be chosen from the gold standard at random (if such a word exists). The word pairs are then compared to the analyses provided by the participants; a point is given for each sampled word pair that has a morpheme in common also in the analyses proposed by the participants’ algorithm. Points per word are normalized to one and the total number of points is divided by the total number of words.

For words that have several alternative analyses, as well as for word pairs that have more than one morpheme in common, normalization of the points is carried out. In short, an equal weight is given for each alternative analysis, as well as each word pair in an analysis. E.g., if a word has three alternative analyses, the first analysis has four morphemes, and the first word pair in that analysis has two morphemes in common, each of the two common morphemes will amount to 1/3*1/4*1/2=1/24 of the one point available for that word.
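
To make the pair-based scoring concrete, here is a small Python sketch of the precision estimate and the F-measure, working on in-memory dictionaries that map each word to a set of morpheme labels. Alternative analyses and the per-word normalization described above are omitted for brevity, and the official eval_morphemes_v2.pl script remains authoritative; recall is obtained the same way with the roles of the two files swapped.

# Sketch of pair-based precision and the F-measure (simplified; the Perl script is authoritative)
import random

def precision_like(proposed, reference, n_words=100, seed=0):
    # proposed / reference: dict mapping word -> set of morpheme labels
    rng = random.Random(seed)
    total = 0.0
    words = rng.sample(sorted(proposed), min(n_words, len(proposed)))
    for w in words:
        hits = pairs = 0
        for m in proposed[w]:
            partners = [v for v in proposed if v != w and m in proposed[v]]
            if not partners:
                continue
            pairs += 1
            v = rng.choice(partners)
            if reference.get(w, set()) & reference.get(v, set()):
                hits += 1            # the pair also shares a morpheme in the gold standard
        if pairs:
            total += hits / pairs    # at most one point per sampled word
    return total / len(words)

def f_measure(precision, recall):
    return 2.0 / (1.0 / precision + 1.0 / recall)

print(round(f_measure(0.69, 0.2559), 4))   # ~0.3733, cf. the sample output below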

Evaluation of a sample (development test set)

You can evaluate your morphological analyses against the available gold standards (separately for each test language). The program to use for this is the Perl script: eval_morphemes_v2.pl. The evaluation program is invoked as follows:

eval_morphemes_v2.pl [-trace] wordpairsfile_goldstd wordpairsfile_result goldstdfile resultfile

Four files are given as arguments to eval_morphemes_v2.pl:

  1. wordpairsfile_goldstd: this is the “random word pairs file” available for download on the datasets page. This file is needed in the calculation of an estimate of the recall of the proposed morpheme analyses.
  2. wordpairsfile_result: this file has to be generated using another program (see below). It is needed in the calculation of a rough estimate of the precision of the proposed morpheme analyses.
  3. goldstdfile: this is the sample of the gold standard available for download on the datasets page. This file contains the correct morpheme analyses for circa 500 words.
  4. resultfile: this is the result file that your algorithm produces, i.e., a list of words and their proposed morpheme analyses.

The -trace argument is optional and produces output for every evaluated word separately. Regardless of the status of the trace argument, the evaluation program produces output of the following kind:

PART0. Precision: 69.00% (96/139); non-affixes: 81.55% (51/63); affixes: 58.73% (45/76)
PART0. Recall:    25.59% (142/556); non-affixes: 49.78% (105/211); affixes: 10.78% (37/345)
PART0. F-measure: 37.33%; non-affixes: 61.82%; affixes: 18.22%
#
TOTAL. Precision: 69.00%; non-affixes: 81.55%; affixes: 58.73%
TOTAL. Recall:    25.59%; non-affixes: 49.78%; affixes: 10.78%
TOTAL. F-measure: 37.33%; non-affixes: 61.82%; affixes: 18.22%

Note that results are displayed for partition 0 (PART0) and for the entire data (TOTAL). The total scores are here the same as the scores of PART0, since there is only one partition. It is, however, possible to split the data into several partitions and compute results for each partition separately. The overall scores are then calculated as the mean over the partitions. Splitting into partitions is a feature reserved for the final evaluation, when we will assess the statistical significance of the differences between the participants’ algorithms.

The figures that count in the final evaluation are the first precision, recall, and F-measure values on the TOTAL lines. These values pertain to all morphemes, but there are also separate statistics for morphemes classified as non-affixes vs. affixes. What counts as an affix is a morpheme with a label starting with a plus sign, e.g., “+PL”, “+PAST”. This naming convention is applied in the gold standard, which means that you do not have to do anything in order to get the non-affixes/affixes statistics right as far as recall is concerned. However, if you want the same kind of information also for precision, your algorithm must have a means of discovering which morphemes are likely affixes and tag these morphemes with an initial plus sign. Note that it is fully up to you whether you do this or not; it will not affect your position in the competition in any way.

Sampling word pairs for the calculation of an estimate of the precision

In order to get an estimate of the precision of the algorithm, you need to provide the evaluation script eval_morphemes_v2.pl with a file containing word pairs sampled from your result file. Unfortunately, the estimate is likely to be fairly rough. The reason for this is that you do not have the entire gold standard at your disposal. Thus, if you sample pairs of words that are not included in the 500-word gold standard that you can access, it is impossible to know whether the proposed morphemes are correct or not. What you can do, however, is to make sure that each word that goes into a word pair actually does occur in the 500-word gold standard sample. The problem here is that your algorithm might not propose that many common morphemes for the words within this limited set, and thus the estimate will be based on rather few observations.

Anyway, this is how to do it: First, make a list of relevant words, that is, words that are present in the gold standard sample available:

cut -f1 goldstdfile > relevantwordsfile

Then sample word pairs for 100 words selected by random from your results file:

sample_word_pairs_v2.pl -refwords relevantwordsfile < resultfile > wordpairsfile_result

The necessary Perl program is sample_word_pairs_v2.pl. The output file wordpairsfile_result is used as input to eval_morphemes_v2.pl (see above).

Competition 2

Competition 2 does not necessarily require any extra effort by the participants. The organizers will use the analyses provided by the participants in information retrieval experiments. Data from CLEF will be used.
However, those participants who wish to submit morpheme analyses for words in their actual context (competition 2b) should please contact the organizers for more information on how to register with CLEF to obtain the full texts.

In competition 2 (and 2b) the words in the queries and documents will be replaced by the corresponding morpheme analyses provided by the participants. We will perform the IR evaluation using the state-of-the-art Okapi (BM25) retrieval method (the latest version of the freely available LEMUR toolkit). The most common morphemes in each participant’s submission will be left out of the index. The size of this stoplist will be proportional to the amount of text data in each language, and the stoplist size will be the same for each participant’s submission. The evaluation criterion will be Uninterpolated Average Precision. The segmentation with the highest Average Precision will win. The winner is selected separately for competitions 2 and 2b in each language.
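
For readers unfamiliar with the retrieval model, the following is a minimal sketch of Okapi BM25 scoring over a morpheme-indexed toy collection. The actual experiments use the LEMUR toolkit; the k1 and b values, tokenization and example documents are illustrative.

# Sketch of Okapi BM25 scoring (illustrative; the evaluation itself uses LEMUR)
import math
from collections import Counter

def bm25_score(query_terms, doc_terms, collection, k1=1.2, b=0.75):
    # collection: list of term lists, used for document frequencies and average length
    N = len(collection)
    avgdl = sum(len(d) for d in collection) / N
    df = Counter(t for d in collection for t in set(d))
    tf = Counter(doc_terms)
    score = 0.0
    for t in set(query_terms):
        if t not in tf:
            continue
        idf = math.log((N - df[t] + 0.5) / (df[t] + 0.5) + 1.0)
        denom = tf[t] + k1 * (1 - b + b * len(doc_terms) / avgdl)
        score += idf * tf[t] * (k1 + 1) / denom
    return score

docs = [["walk", "+PAST"], ["walk", "+PL", "mountain"], ["mountain", "+PL"]]
print(bm25_score(["walk", "+PAST"], docs[0], docs))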

Competition 3

In competition 3, the morpheme analyses proposed by the participants’ algorithm will be evaluated in a statistical machine translation (SMT) framework. The translation models will be trained to translate from a morphologically complex source language to English. The words of the source language will be replaced by their morpheme analyses before training. The translations from this morpheme-to-word model will be combined with translations from a standard word-to-word translation model. For all models, we will use a state-of-the-art phrase-based SMT system. Evaluation of the translations will be performed by applying an automatic metric such as BLEU on a held-out test set.

Data is from the Europarl corpus. The participants should apply their algorithms to the list of the word forms in the corpus. It is also possible to use the context information of the words by downloading the full corpus. (See datasets for details.)
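
For reference, the sketch below computes a plain corpus-level BLEU score (clipped n-gram precisions up to 4-grams, brevity penalty, single reference per sentence). It is only an illustration of the metric named above; the actual evaluation will use a standard implementation.

# Sketch of corpus-level BLEU with one reference per candidate (illustrative only)
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidates, references, max_n=4):
    # candidates, references: lists of token lists, aligned one-to-one
    p_logs = []
    for n in range(1, max_n + 1):
        match = total = 0
        for cand, ref in zip(candidates, references):
            c_counts, r_counts = ngrams(cand, n), ngrams(ref, n)
            match += sum(min(c, r_counts[g]) for g, c in c_counts.items())   # clipped counts
            total += sum(c_counts.values())
        if match == 0:
            return 0.0
        p_logs.append(math.log(match / total))
    cand_len = sum(len(c) for c in candidates)
    ref_len = sum(len(r) for r in references)
    bp = 1.0 if cand_len > ref_len else math.exp(1 - ref_len / max(cand_len, 1))
    return bp * math.exp(sum(p_logs) / max_n)

print(bleu([["the", "cat", "sat", "on", "the", "mat"]],
           [["the", "cat", "sat", "on", "a", "mat"]]))       # ≈ 0.54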

This workshop presented and discussed the results of the PASCAL Visual Object Classes Challenge (VOC2008). As in previous years’ challenges there are two main competitions, one testing image classification (“does the image contain an instance of this class?”), and one testing object detection (“provide a bounding box for each instance of the class, if any”). In addition there are two ‘taster’ competitions: the first evaluates the object layout in more detail (“detect the hands, feet etc for a person”), the second evaluates object segmentation at the pixel level.

A new database has been prepared consisting of 20 classes with around 25000 annotated instances in total. The images are obtained from flickr. The classes include people, cats, dogs, cars, motorbikes, bottles and sofas. The annotation includes a rectangular bounding box and flags to indicate pose and level of difficulty.

Organizers

  • Mark Everingham (University of Leeds), me@comp.leeds.ac.uk
  • Luc van Gool (ETHZ, Zurich)
  • Chris Williams (University of Edinburgh)
  • John Winn (Microsoft Research Cambridge)
  • Andrew Zisserman (University of Oxford)

Challenge Rules

  • Conditions of participation: Anybody who complies with the rules of the challenge is welcome to participate. There are two modes of participation:
    • As a data donor by making an entry in the Repository.
    • As a problem solver by submitting results on at least one of the proposed tasks.
  • Dissemination: The challenge results will be presented at a NIPS 2008 conference workshop, December 12, 2008. To present at the workshop, abstracts must be submitted before October 24, 2008 to causality@clopinet.com. Participants are not required to attend the workshop and the workshop is open to non-challenge participants. The proceedings of the competition will be published by the Journal of Machine Learning Research (JMLR).
  • Anonymity: The participants who do not submit a paper to the workshop can elect to remain anonymous. Their results will be published, but their name will remain confidential.
  • Tasks: A number of datasets on which tasks have been defined are available; see the Task page. More tasks will be added from time to time, if new data are donated. Donated data show up immediately in the Repository, but they become part of the challenge only after being reviewed by the organizers, who then add them to the Task page. To be informed of task updates, request to be added to our mailing list by sending email to causality@clopinet.com.
  • Milestone and final results: Results must be submitted between the start and the termination of the challenge. The challenge starts on September 15, 2008 and is scheduled to terminate on November 19, 2008 (extended from November 12). Each participating team is allowed to submit one set of results per task for the final evaluation. If more than one set of results is submitted, the last one will be taken into account. The results of the final evaluation will be publicly released at the workshop. In addition, optionally, each participating team may submit one set of results before October 15, 2008 to be part of the milestone evaluation, whose results will be publicly but anonymously released.
  • Submission method: The results on each task must be sent to the designated contact person, see the Task page. In case of problem, send email to causality@clopinet.com.
  • Evaluation and rewards: To compete towards the prizes, the participants must submit a 6-page paper describing their donated dataset (if they entered as a donor) or their task-solving method(s) and result(s) (if they entered as a solver), before November 21, 2008, to causality@clopinet.com (A sample paper and a Latex style file are provided). The challenge participants must append their fact sheet to their paper, see template provided in Latex (sample paper appendix), MS Word, or Acrobat formats. Each participant is allowed to submit several papers, if they address or propose distinct problems. The contributions of the participants will be evaluated by the organizers on the basis of their challenge performance results, the post-challenge tests (see reproducibility), AND the paper, using the following criteria: Performance in challenge and general Usefulness, Novelty and Originality, Sanity, Insight, Reproducibility, and Clarity of presentation. The data donors may provide solutions to their own problems, however such contribution will not count towards winning the prizes. Close collaborators of data donors having access to information, which may give them an unfair advantage, should disclose this fact to the organizers. The best papers will be selected for presentation at the workshop and several Prizes will be awarded.
  • Reproducibility: Participation is not conditioned on delivering your code or publishing your methods. However, we will ask the top ranking participants to voluntarily cooperate to reproduce their results. This will include filling out a fact sheet about their methods (get the template in Latex, MS Word, or Acrobat formats) and possibly participating in post-challenge tests and sending us their code, including the source code. The outcome of our attempt to reproduce your results will be published and will add credibility to your results.

Challenge Tasks

The Pot-luck challenge datasets are a selection of the Repository datasets. Presently, we propose the following tasks:

  • CYTO: Causal Protein-Signaling Networks in human T cells. The task is to learn a protein signaling network from multicolor flow cytometry data, recording the molecular activity of 11 proteins. There are on average 800 samples per experimental condition, corresponding to various perturbations of the system (manipulations). The authors used a Bayesian network approach and demonstrated that they recover most of the known signaling network structure, while discovering some new hypothetical regulations (causal relationships). The tasks suggested to the challenge participants include reproducing the results of the paper and finding a method to assess the confidence of the causal relationships uncovered. The evaluation is via submitted papers.
  • LOCANET: LOcal CAusal NETwork. We regroup under the name LOCANET a number of tasks consisting of finding the local causal structure around a given target variable (depth 3 network). The following datasets lend themselves to performing such a task: REGED, CINA, SIDO, MARTI. The results are evaluated by the organizers upon submission of the local structure in a designated format. In addition, the toy dataset LUCAS can be used for self evaluation.
  • PROMO: Simple causal effects in time series. The task is to identify which promotions affect sales. This is an artificial dataset of about 1000 promotion variables and 100 product sales. The goal is to predict a 1000×100 boolean influence matrix, indicating for each (i,j) element whether the ith promotion has a causal influence on the sales of the jth product. Data is provided as time series, with a daily value for each variable for three years (i.e., 1095 days). The ground truth for the influence matrix is provided, so the participants can self-evaluate their results, and submit a paper to compete for the prizes (a naive baseline sketch is given after this list).
  • SIGNET: Abscisic Acid Signaling Network. The objective is to determine the set of 43 boolean rules that describe the interactions of the nodes within a plant signaling network. The dataset includes 300 separate boolean pseudodynamic simulations of the true rules, using an asynchronous update scheme. This is an artificial dataset inspired by a real biological system. The results are evaluated by the contact person upon submission of the results in a designated format.
  • TIED: Target Information Equivalent Dataset. This is an artificial simulated dataset constructed to illustrate that there may be many minimal sets of features with optimal predictivity (i.e., Markov boundaries) and likewise many sets of features that are statistically indistinguishable from the set of direct causes and direct effects of the target. The tasks suggested include determining all statistically indistinguishable sets of direct causes and effects, or Markov boundaries of the target variable, and predicting the target variable on test data. The results are evaluated by the contact person upon submission of the results in a designated format.
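
Following up on the PROMO task above, here is the promised naive baseline sketch: score each (promotion, product) pair by its maximum absolute lagged cross-correlation and threshold the score. This is not a proper causal-discovery method, and the file names, lag range and threshold are illustrative assumptions.

# Naive PROMO baseline sketch: lagged cross-correlation, thresholded (illustrative only)
import numpy as np

def influence_matrix(promos, sales, max_lag=7, threshold=0.3):
    # promos: (T, n_promos), sales: (T, n_products); returns a boolean (n_promos, n_products) matrix
    T = promos.shape[0]
    P = (promos - promos.mean(axis=0)) / (promos.std(axis=0) + 1e-12)
    S = (sales - sales.mean(axis=0)) / (sales.std(axis=0) + 1e-12)
    best = np.zeros((promos.shape[1], sales.shape[1]))
    for lag in range(1, max_lag + 1):            # a promotion precedes the sales it influences
        corr = P[:-lag].T @ S[lag:] / (T - lag)
        best = np.maximum(best, np.abs(corr))
    return best > threshold

# promos = np.loadtxt("promotions.txt")   # hypothetical file, shape (1095, 1000)
# sales = np.loadtxt("sales.txt")         # hypothetical file, shape (1095, 100)
# print(influence_matrix(promos, sales).shape)   # (1000, 100)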

Note that the participants are ultimately judged on their paper(s). For each dataset, the tasks proposed are only suggestions. The participants are invited to use the data in a creative way and propose their own task(s).

New: tasks proposed by participants

October 30: The following tasks proposed by participants are now included in the challenge.

  • CauseEffectPairs: Distinguishing between cause and effect. The data set consists of 8 N x 2 matrices, each representing a cause-effect pair, and the task is to identify which variable is the cause and which one the effect. The origin of the data is hidden from the participants but known to the organizers. The data sets are chosen such that we expect common agreement on which one is the cause and which one the effect. Even though part of the statistical dependences may also be due to hidden common causes, common sense tells us that there is a significant cause-effect relation.
  • STEMMATOLOGY: Computer-assisted stemmatology. Stemmatology (a.k.a. stemmatics) studies relations among different variants of a document that have been gradually built from an original text by copying and modifying earlier versions. The aim of such study is to reconstruct the family tree (causal graph) of the variants. We provide a dataset to evaluate methods for computer-assisted stemmatology. The ground truth is provided, as are evaluation criteria to allow the ranking of the results of different methods. We hope this will facilitate the development of novel approaches, including but not restricted to hierarchical clustering, graphical modeling, link analysis, phylogenetics, string-matching, etc.

Data donation

Another way to participate in the challenge is to donate data. You will first need to Register. Then you will need to fill out a form to Deposit your data. Our repository does not actually store data; it points to a web page YOUR_DATA.html, which you maintain, and from which your data is accessible. If you do not have a web server allowing you to maintain a web page for your data, you may use the UCI Machine Learning Repository, which physically archives data, or contact us at causality@clopinet.com.

Your entry can be edited after submission, but we recommend that you prepare your submission in a text file before filling out the form.

Tips to fill out your submission form:

  • Contact name/URL: Select a person who will be available to answer questions and evaluate results.
  • Resource type: Choose “data” (alternatively, you can submit a generative model of data; in that case choose “model”).
  • Resource name: A short easy-to-memorize acronym.
  • Resource url: The web page you maintain (YOUR_DATA.html).
  • Title: A title describing your dataset.
  • Authors: A comma separated list of authors.
  • Key facts: Data dimensions (number of variables, number of entries), variable types, missing data, etc.
  • Keywords: A comma separated list of keywords.
  • Abstract: A brief description of your dataset and the task to be solved, including:
    • Background.
    • Data description.
    • Task description.
    • Result format.
    • Result submission method.
    • Evaluation metrics.
    • Suggestion of other tasks.

    If you provide a web page YOUR_DATA.html, more details can be given there.

  • Supplements 1, 2, and 3: Use these fields to provide a direct pointer to a zip archive to download data, a published paper or a report on the data, some slides, etc.

 

With the exceptional increase in computing power, storage capacity and network bandwidth of the past decades, ever growing datasets are collected in fields such as bioinformatics (Splice Sites, Gene Boundaries, etc), IT-security (Network traffic) or Text-Classification (Spam vs. Non-Spam), to name but a few. While the data size growth leaves computational methods as the only viable way of dealing with data, it poses new challenges to ML methods.

This PASCAL Challenge is concerned with the scalability and efficiency of existing ML approaches with respect to computational, memory or communication resources, e.g. resulting from a high algorithmic complexity, from the size or dimensionality of the data set, and from the trade-off between distributed resolution and communication costs.

Indeed, many comparisons are presented in the literature; however, these usually focus on assessing a few algorithms or consider only a few datasets; furthermore, they usually involve different evaluation criteria, model parameters and stopping conditions. As a result it is difficult to determine how a method behaves and compares with the others in terms of test error, training time and memory requirements, which are the practically relevant criteria.

We are therefore organizing a competition that is designed to be fair and to enable a direct comparison of current large scale classifiers, aimed at answering the question “Which learning method is the most accurate given limited resources?”. To this end we provide a generic evaluation framework tailored to the specifics of the competing methods. Providing a wide range of datasets, each with specific properties, we propose to evaluate the methods based on performance figures, displaying training time vs. test error, dataset size vs. test error and dataset size vs. training time.

General

The overall goal is to develop a 2-class classifier such that it achieves a low error in the shortest possible time using as few datapoints as possible.

  • All participants are required to identify themselves with a login name, full name and a valid Email address. Your email address will not be made public and is only used for registration and announcements (test dataset release, workshop date, awards). The login name and your full name will appear on the evaluation page.
  • In order to be included in the final evaluation process, every participant should compete on at least five datasets in a single track, i.e. submit results on the validation/test set of five problems for the Wild track, or for one of the SVM tracks.
  • Every participant will provide the computational time (training time, and separately the time required to compute predictions for the optimal model-parameter setting, excluding data loading times) associated with every submission, accompanied by an estimation of their computing power. This estimation is obtained by running a calibration program provided by the challenge organizers.
  • In order to assess more precisely how the accuracy depends on the computational time, every participant is kindly asked to provide the predictions obtained for various fractions of the total computational time T on the biggest dataset he or she considers. Preferred fractions: T/10, T/9, …, T.
  • After the end of the competition, every participant will provide an extended abstract (4 pages) describing the algorithm.
  • All participants will be requested to provide executable or source code, allowing the algorithm to be re-run under the same computing conditions, for a later timing re-calibration and re-run of the top-ten methods.

In order to fairly assess the performance of the SVM-QP solvers, participants in the SVM tracks are asked to comply with two requirements:

  • NO DATA PREPARATION ALLOWED (no feature construction, no feature selection, …).
  • NO PARAMETER TUNING ALLOWED (the penalization factor C and the Gaussian kernel radius tau are set to prescribed values).

Challenge Tasks

The overall goal is to tune your method such that it achieves a low test error, measured by the area over the precision-recall curve, in the shortest possible time and using as few datapoints as possible.

Wild Competition Track

In this track you are free to do anything that leads to more efficient, more accurate methods, e.g. perform feature selection, find effective data representations, use efficient program-code, tune the core algorithm etc. It may be beneficial to use the raw data representation (described below). For each dataset your method competes in you are requested to:

  • Train on 10^[2,3,4,5,6,7] and the maximum number of available datapoints (see the sketch after this list). If your method cannot cope with all dataset sizes, you are allowed to skip datasets that are too large. For each training session, the training time and the time required to compute test outputs have to be recorded.
  • Additionally, for the biggest dataset your method can deal with, we ask you to provide ten intermediate time/output recordings. These can be obtained, for example, by training twice and computing time and output after time T/10, T/9, …, T (where T is the overall training time). Do not forget to include test times in each of the recordings.
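As referenced above, a minimal sketch of the bookkeeping this requires; load_subset(), train() and predict() are placeholders for your own data pipeline and method, not functions provided by the challenge, and only the timing pattern follows the rules above.

  import time

  def run_wild_track(load_subset, train, predict, max_size):
      # load_subset(size) -> (X_train, y_train, X_test); train and predict
      # wrap your own method.  Data loading stays outside the timed regions.
      records = []
      sizes = sorted({10**k for k in range(2, 8)} | {max_size})
      for size in sizes:
          if size > max_size:
              continue                          # too-large datasets may be skipped
          X_train, y_train, X_test = load_subset(size)
          t0 = time.time()
          model = train(X_train, y_train)
          train_time = time.time() - t0         # training time, without loading
          t0 = time.time()
          outputs = predict(model, X_test)
          test_time = time.time() - t0          # time to compute test outputs
          records.append((size, train_time, test_time, outputs))
      return records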

SVM Tracks

To allow for a comparison with non-SVM solvers, all SVM methods are required to compete in both tracks; therefore, for SVMs, step 1 is to attack the wild-track task. In addition to what has to be done in the wild track, we ask you to do the following experiments.

  1. To measure convergence speed, we ask you to re-do the wild-track experiment for a fixed setting of C=0.01, epsilon=0.01 (and rbf kernel width tau=10) but measuring primal objective only.
  2. To simulate model selection you are requested to train SVMs for different C and rbf-widths keeping epsilon=0.01 fixed, again only measuring objective value.

Here epsilon denotes the relative duality gap, (obj_primal-obj_dual)/obj_primal < epsilon = 0.01. If your solver has a different stopping condition, choose it reasonably, i.e. such that you expect it to yield a similar duality gap. While early stopping is OK, it may hurt your method in the evaluation: if the objective of your SVM solver deviates from the others by more than 5%, i.e. (your_obj-obj_min)/your_obj >= 0.05, it will get low scores. Note that you have to use the data representation obtained by running the svmlight-conversion script in the second part (model selection) of the experiment. The following values for C/rbf-width shall be used:

  • for the Linear SVM Track it is required to train SVMs for Cs=[0.0001, 0.001, 0.01, 0.1, 1, 10]
  • for the RBF Kernel SVM Track it is required to train SVMs for fixed C=0.01 and tau=[0.01, 0.1, 1, 10, 100, 1000], where K(x,y) = exp(-||x-y||^2 / tau).

Objective values are computed separately for the Linear SVM Track and for the RBF Kernel SVM Track (an illustrative sketch follows).
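The exact objective formulas were specified per track on the challenge page. As an illustration only, the sketch below evaluates the kernel defined above, K(x,y) = exp(-||x-y||^2 / tau), together with a standard soft-margin primal objective 0.5*||w||^2 + C * sum_i max(0, 1 - y_i <w, x_i>); whether this matches the challenge's precise definition (e.g. the treatment of a bias term) is an assumption on our part.

  import numpy as np

  def rbf_kernel(X, Y, tau):
      # K(x, y) = exp(-||x - y||^2 / tau), as defined for the RBF track
      sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
      return np.exp(-sq_dists / tau)

  def linear_primal_objective(w, X, y, C):
      # Assumed standard form (no bias term):
      # 0.5 * ||w||^2 + C * sum_i max(0, 1 - y_i * <w, x_i>)
      hinge = np.maximum(0.0, 1.0 - y * (X @ w))
      return 0.5 * float(w @ w) + C * hinge.sum()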

Finally, if possible, please include all parameters (rbf-tau, epsilon, SVM-C) for all experiments in the result file.

Parallel Track

The recent paradigm shift of learning from single-core to multi-core shared-memory architectures is the focus of the parallel track. For this track, methods have to be trained on 8 CPUs following the rules outlined in the wild track. To assess the parallelization quality, we additionally ask participants to train their method using 1, 2, 4 and 8 CPUs on the biggest dataset they can process. Note that the evaluation criteria are specifically tuned to parallel shared-memory algorithms, i.e. instead of training time on each CPU you should measure wall-clock time (including data loading time). In addition, data loading time must be specified in a separate field.

Datasets

The raw datasets can be downloaded from ftp://largescale.ml.tu-berlin.de/largescale. The provided script convert.py may be used to obtain a simple (not necessarily optimal) feature representation in svm-light format.

Overview of the Raw Datasets

Dataset    Training    Validation  Dimensions  Format
alpha        500000        100000         500  dense ascii
beta         500000        100000         500  dense ascii
gamma        500000        100000         500  dense ascii
delta        500000        100000         500  dense ascii
epsilon      500000        100000        2000  dense ascii
zeta         500000        100000        2000  dense ascii
fd          5469800        532400         900  binary
ocr         3500000        670000        1156  binary
dna        50000000       1000000         200  dna strings
webspam      350000         50000    variable  webpage strings

Format descriptions:

  • alpha, beta, gamma, delta, epsilon, zeta: files are in ascii format, where each line corresponds to one example, i.e.

      val1 val2 ... valdim\n
      val1 val2 ... valdim\n
      ...

  • fd, ocr: files are binary; to obtain one example, read 900 or 1156 bytes respectively (values 0..255).

  • dna: files are ascii; each line contains a string of length 200 (symbols ACGT), i.e.

      CATCATCGGTCAGTCGATCGAGCATC...A\n
      GTGTCATCGTATCGACTGTCAGCATC...T\n
      ...

  • webspam: files contain strings (webpages) separated by the 0 byte, i.e.

      html foo bar .../html\0
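The helpers below show how the raw formats listed above can be read in Python; the use of numpy and the function names are our own, and the provided convert.py remains the reference tool for producing svm-light features.

  import numpy as np

  def read_dense_ascii(path):
      # alpha..zeta: one example per line, 'val1 val2 ... valdim'
      return np.loadtxt(path)

  def read_binary(path, dim):
      # fd / ocr: each example is `dim` (900 or 1156) bytes with values 0..255
      return np.fromfile(path, dtype=np.uint8).reshape(-1, dim)

  def read_dna(path):
      # dna: one length-200 string over the symbols A, C, G, T per line
      with open(path) as f:
          return [line.strip() for line in f]

  def read_webspam(path):
      # webspam: web pages stored as strings separated by the 0 byte
      with open(path, 'rb') as f:
          return [doc.decode('latin-1') for doc in f.read().split(b'\0') if doc]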

Submission Format

We provide an evaluation script that parses outputs and computes performance scores. We use this exact same script to do the live evaluation. It is suggested to run this script locally on a subset of the training data to test whether the submission format is correctly generated and to evaluate the results (note that the script can only be used on data where labels are available, e.g. subsets of the training data). It requires python-numpy and scipy to be installed. Additionally if matplotlib is installed, the performance figures will be drawn.

Additionally, the submission format is described below:

    • Wild Competition (download example submission)
              dataset_size0 -1 traintime0 testtime0 calibration0 objective C epsilon rbf-tau output0 output1 ...
              ...
              dataset_sizeN -1 traintimeN testtimeN calibrationN objective C epsilon rbf-tau output0 output1 ...
              dataset_sizeN index0 traintime0 testtime0 calibration0 objective C epsilon rbf-tau output0 output1 ...
              ...
              dataset_sizeN index9 traintime9 testtime9 calibration9 objective C epsilon rbf-tau output0 output1 ...
      
    • SVM Tracks (download example submission) A submission to the SVM track requires taking part in the wild competition, i.e. the submission file must start with the lines required for the wild competition. Then a single empty line announces the SVM-track-specific data:
      >a single empty line here distinguishes the wild part from the SVM-specific part
              dataset_size0 -1 traintime0 testtime0 calibration0 objective C epsilon rbf-tau
              ...
              dataset_sizeN -1 traintimeN testtimeN calibrationN objective C epsilon rbf-tau
              dataset_sizeN index1 traintime1 testtime1 calibration1 objective C epsilon rbf-tau
              ...
              dataset_sizeN index9 traintime9 testtime9 calibration9 objective C epsilon rbf-tau
              dataset_sizeN -2 traintime0 testtime0 calibration0 objective C1 epsilon rbf-tau
              ...
              dataset_sizeN -2 traintimeK testtimeK calibrationK objective CK epsilon rbf-tau
      
      (or -3 and rbf-tau 1 ... K)
      

 

  • Parallel Track: A submission to the parallel track consists of two parts. The first one has the exact same syntax as the wild track (with time = wall-clock time). The second part contains the 1, 2, 4, 8 CPU experiments run on the biggest dataset (here C denotes the number of CPUs):

>a single empty line here distinguishes the wild part from the parallel-specific part
        dataset_sizeN -4 walltimeN dataloadingtimeN calibrationN objective 1 epsilon rbf-tau
        dataset_sizeN -4 walltimeN dataloadingtimeN calibrationN objective 2 epsilon rbf-tau
        dataset_sizeN -4 walltimeN dataloadingtimeN calibrationN objective 4 epsilon rbf-tau
        dataset_sizeN -4 walltimeN dataloadingtimeN calibrationN objective 8 epsilon rbf-tau

Explanation of values (Column by Column)

  1. Dataset size – Values must match size of dataset or 10^[2,3,4,5,6,7]
  2. Index – values of 0…9 (to distinguish values obtained while optimizing) only for the biggest dataset, -1 otherwise; for the SVM tracks, -2 for the C experiment and -3 for the rbf-tau experiment; for the parallel track, -4
  3. Traintime – Time required for training (without data loading) in seconds / wall-clock time (including data loading) for the parallel track
  4. Testtime – Time required for applying the classifier to the validation/test data (without data loading) in seconds / data loading time for the parallel track
  5. Calibration – Score obtained using the provided calibration tool (values should be the same if run on the same machine)
  6. Method specific value: SVM Objective
  7. SVM-C / number of CPUs for the parallel track
  8. SVM-rbf-tau
  9. SVM epsilon
    • use 0 (for SVM Objective, SVM-C, SVM-rbf-tau, SVM epsilon) if not applicable
    • for SVMs please fill in the different parameters, i.e. SVM objective, SVM-C, rbf-tau and the stopping condition epsilon; as the meaning of epsilon may differ, please explain in the description what epsilon stands for
    • for the linear svm track index -2 should be used when doing the experiment for C
    • for the gaussian svm track index -3 should be used when modifying rbf tau
  • The values in columns 10 and onward are the predicted outputs; their number must match the size of the validation/test set (a sketch of assembling such a line follows this list).
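As mentioned at the end of the list, here is a small sketch of assembling one wild-track submission line. The values are placeholders, the column order follows the example lines above, and the official example submission files remain authoritative.

  def wild_track_line(dataset_size, index, train_time, test_time,
                      calibration, objective, C, epsilon, rbf_tau, outputs):
      # Columns 1-9 are the bookkeeping values described above; the remaining
      # columns are one prediction per validation/test example.
      head = [dataset_size, index, train_time, test_time,
              calibration, objective, C, epsilon, rbf_tau]
      return " ".join(str(v) for v in head + list(outputs))

  # e.g. a non-SVM method (zeros for the SVM-specific columns):
  print(wild_track_line(100000, -1, 124.3, 2.1, 815.0, 0, 0, 0, 0,
                        [0.12, -0.87, 0.45]))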

Assessment of the submissions

Different procedures will be considered depending on the track.

Wild Competition

For the Wild track, the ideal goal would be to determine the best algorithm in terms of learning accuracy, depending on the time budget allowed. Accordingly, the score of a participant is computed as the average rank of its contribution w.r.t. the following six scalar measures:

Time vs. Error This figure measures training time vs. area over the precision recall curve (aoPRC). It is obtained by displaying the different time budgets and their corresponding aoPRC on the biggest dataset. We compute the following scores based on that figure:

  • Minimum aoPRC
  • Area under Time vs. aoPRC Curve
  • The time t for which the aoPRC x falls below a threshold (x-overall_minimum_aoPRC)/x<0.05.
Size vs. Error This figure measures dataset size vs. area over the precision recall curve (aoPRC). It is obtained by displaying the different dataset sizes and their corresponding aoPRC that the methods achieve. We compute the following scores based on that figure:

  • Area under Size vs. aoPRC Curve
  • The size s for which the aoPRC x falls below a threshold (x-overall_minimum_aoPRC)/x<0.05.
Size vs Time This figure measures dataset size vs. training time. It is obtained by displaying the different dataset sizes and the corresponding training time that the methods achieve. We compute the following scores based on that figure:

  • Slope b of the curve, obtained by a least-squares fit to a*x^b (see the sketch below).
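Since time ≈ a*x^b implies log(time) ≈ log(a) + b*log(x), the slope b can be estimated by an ordinary least-squares fit in log-log space. A minimal sketch (our own; the organizers' exact fitting procedure may differ):

  import numpy as np

  def fit_power_law(sizes, times):
      # Fit times ~ a * sizes**b by least squares on log-transformed data.
      log_s, log_t = np.log(sizes), np.log(times)
      b, log_a = np.polyfit(log_s, log_t, 1)     # slope and intercept
      return np.exp(log_a), b

  sizes = np.array([1e2, 1e3, 1e4, 1e5, 1e6])
  times = 3e-4 * sizes ** 1.1                    # synthetic example data
  print(fit_power_law(sizes, times))             # roughly (3e-4, 1.1)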

SVM Tracks

For the SVM track, the point is to determine the best tradeoff between the computational effort and the learning accuracy. Accordingly, the score of a participant is computed as the average rank of its contribution w.r.t. the following five scalar measures:

  • Minimal objective
  • Area under the Time vs. Objective Curve
  • Time to reach the objective within 5% tolerance, i.e. minimal t for (t,obj) with (obj-overall_min_objective)/obj<0.05
  • Average Training Time for all C/Sigma
  • Computational Effort (scaling with dataset size)

 

Validation Results

 

Overall Ranking

Rank Score Submitter Title Date
1 4.40 chap – Olivier Chapelle Newton SVM 02.05.2008 20:57 CET
2 5.20 yahoo – Olivier Keerthi SDM SVM L2 19.06.2008 13:49 CET
3 5.60 yahoo – Olivier Keerthi SDM SVM L1 18.06.2008 19:28 CET
4 5.80 antoine – Antoine Bordes SgdQn 16.06.2008 14:21 CET
5 7.90 kristian – Kristian Woodsend IPM SVM 2 23.06.2008 15:10 CET
5 7.90 kristian – Kristian Woodsend Interior Point SVM 16.06.2008 21:56 CET
7 8.60 beaker – Gavin Cawley LR 19.06.2008 13:21 CET
8 9.90 ker2 – Porter Chang CTJ LSVM01 20.06.2008 03:31 CET
9 10.70 rofu – Rofu yu test 10.06.2008 16:31 CET
10 11.70 MB – Marc Boulle Averaging of Selective Naive Bayes Classifiers final 20.06.2008 12:06 CET
11 11.90 beaker – Gavin Cawley ORRR 12.06.2008 16:13 CET
11 11.90 rofu – Rofu yu liblinear 25.06.2008 11:57 CET
13 12.60 MB – Marc Boulle Averaging of Selective Naive Bayes Classifiers 27.05.2008 14:56 CET
14 14.00 garcke – Jochen Garcke AV SVM 02.06.2008 13:41 CET
15 14.10 aiiaSinica – Han-Shen Huang CTJ LSVM 19.06.2008 04:16 CET
16 14.70 fravoj – Vojtech Franc Random 18.02.2008 15:29 CET
17 15.20 beaker – Gavin Cawley WRRR 24.06.2008 13:12 CET
18 15.50 rofu – Rofu yu L1 13.06.2008 13:17 CET
19 16.30 antoine – Antoine Bordes LaRankConverged 13.06.2008 19:11 CET
20 17.20 beaker – Gavin Cawley ORRR Ensemble 15.06.2008 16:54 CET
21 17.40 ker2 – Porter Chang CTJ LSVM02 25.06.2008 14:38 CET
22 18.80 fravoj – Vojtech Franc ocas 25.06.2008 14:28 CET
22 18.80 antoine – Antoine Bordes Larank 13.06.2008 14:37 CET
24 19.20 beaker – Gavin Cawley rr 06.06.2008 21:14 CET
24 19.20 garcke – Jochen Garcke AV SVM single 25.06.2008 15:58 CET
26 19.30 aiiaSinica – Han-Shen Huang linearSVM01 11.06.2008 03:42 CET
26 19.30 xueqinz – Xueqin Zhang CHSVM 07.01.2009 01:27 CET
28 19.60 garcke – Jochen Garcke AV SVM 500k 10.06.2008 11:49 CET
28 19.60 garcke – Jochen Garcke AV SVM rbf track 23.06.2008 14:23 CET
30 19.70 fravoj – Vojtech Franc Stochastic SVM 18.02.2008 14:29 CET
31 20.30 aiiaSinica – Han-Shen Huang linearSVM03 16.06.2008 05:58 CET
32 20.40 rofu – Rofu yu Coordinate descent dual l1 linear svm 10.06.2008 15:21 CET
33 20.50 aiiaSinica – Han-Shen Huang sLsvm 17.06.2008 08:51 CET
34 20.70 ker2 – Porter Chang CTJsvm 21.01.2009 12:42 CET
35 20.80 aiiaSinica – Han-Shen Huang lsvm04 16.06.2008 06:57 CET
36 21.00 ker2 – Porter Chang anSGD 05.01.2009 08:21 CET
37 21.10 xueqinz – Xueqin Zhang CHSVM4 14.01.2009 06:13 CET
38 21.30 fravoj – Vojtech Franc random2 25.06.2008 14:48 CET
39 21.40 bernhardP – Bernhard Pfahringer Random model trees 11.06.2008 11:29 CET
40 21.70 fravoj – Vojtech Franc ocas new 26.02.2009 16:28 CET
41 21.80 beaker – Gavin Cawley LR3 31.07.2008 11:55 CET
41 21.80 beaker – Gavin Cawley LR2 17.07.2008 15:25 CET
41 21.80 MB – Marc Boulle Averaging of Selective Naive Bayes Classifiers CR rows 12.06.2008 11:31 CET
44 21.90 rofu – Rofu yu linear 25.06.2008 15:17 CET
45 22.00 fravoj – Vojtech Franc ocas old 27.02.2009 09:07 CET
46 22.10 aiiaSinica – Han-Shen Huang linearSVM02 12.06.2008 08:14 CET
47 22.20 xueqinz – Xueqin Zhang CHSVM2 11.01.2009 06:35 CET
48 22.30 dialND – Karsten Steinhaeuser alpha 3 15.04.2008 14:06 CET
49 22.40 xueqinz – Xueqin Zhang CHSVM3 14.01.2009 03:25 CET
49 22.40 dialND – Karsten Steinhaeuser alpha 1 25.03.2008 20:13 CET
51 25.21 fravoj – Vojtech Franc test1 08.01.2009 09:37 CET

 

Test Results

 

Overall Ranking

Rank Score Submitter Title Date
1 3.40 antoine – Antoine Bordes SgdQn 16.06.2008 14:21 CET
2 3.90 chap – Olivier Chapelle Newton SVM 02.05.2008 20:57 CET
3 5.00 yahoo – Olivier Keerthi SDM SVM L2 19.06.2008 13:49 CET
4 5.20 yahoo – Olivier Keerthi SDM SVM L1 18.06.2008 19:28 CET
5 7.40 beaker – Gavin Cawley LR 19.06.2008 13:21 CET
6 7.60 kristian – Kristian Woodsend Interior Point SVM 16.06.2008 21:56 CET
7 8.00 kristian – Kristian Woodsend IPM SVM 2 23.06.2008 15:10 CET
8 8.50 MB – Marc Boulle Averaging of Selective Naive Bayes Classifiers final 20.06.2008 12:06 CET
9 9.50 MB – Marc Boulle Averaging of Selective Naive Bayes Classifiers 27.05.2008 14:56 CET
10 9.90 garcke – Jochen Garcke AV SVM single 25.06.2008 15:58 CET
11 12.20 garcke – Jochen Garcke AV SVM 02.06.2008 13:41 CET
12 12.30 rofu – Rofu yu liblinear 25.06.2008 11:57 CET
13 12.40 ker2 – Porter Chang CTJsvm 21.01.2009 12:42 CET
14 13.10 rofu – Rofu yu Coordinate descent dual l1 linear svm 10.06.2008 15:21 CET
15 13.60 ker2 – Porter Chang CTJ LSVM01 20.06.2008 03:31 CET
16 14.20 antoine – Antoine Bordes LaRankConverged 13.06.2008 19:11 CET
17 14.50 beaker – Gavin Cawley ORRR 12.06.2008 16:13 CET
18 15.30 beaker – Gavin Cawley WRRR 24.06.2008 13:12 CET
19 15.60 fravoj – Vojtech Franc Random 18.02.2008 15:29 CET
20 15.70 ker2 – Porter Chang CTJ LSVM02 25.06.2008 14:38 CET
21 16.20 beaker – Gavin Cawley ORRR Ensemble 15.06.2008 16:54 CET
22 16.70 fravoj – Vojtech Franc ocas 25.06.2008 14:28 CET
23 17.50 beaker – Gavin Cawley rr 06.06.2008 21:14 CET
23 17.50 MB – Marc Boulle Averaging of Selective Naive Bayes Classifiers CR rows 12.06.2008 11:31 CET
25 17.70 aiiaSinica – Han-Shen Huang linearSVM01 11.06.2008 03:42 CET
26 17.80 beaker – Gavin Cawley LR2 17.07.2008 15:25 CET
26 17.80 beaker – Gavin Cawley LR3 31.07.2008 11:55 CET
28 18.00 ker2 – Porter Chang anSGD 05.01.2009 08:21 CET
29 18.10 aiiaSinica – Han-Shen Huang lsvm04 16.06.2008 06:57 CET
30 18.20 fravoj – Vojtech Franc random2 25.06.2008 14:48 CET
31 18.30 aiiaSinica – Han-Shen Huang linearSVM03 16.06.2008 05:58 CET

 

Final Results

 

Final Ranking

Rank Score Submitter Title Date
1 2.60 chap – Olivier Chapelle Newton SVM 02.05.2008 20:57 CET
2 2.70 antoine – Antoine Bordes SgdQn 16.06.2008 14:21 CET
3 3.20 yahoo – Olivier Keerthi SDM SVM L2 19.06.2008 13:49 CET
4 3.50 yahoo – Olivier Keerthi SDM SVM L1 18.06.2008 19:28 CET
5 5.00 MB – Marc Boulle Averaging of Selective Naive Bayes Classifiers final 20.06.2008 12:06 CET
6 5.80 beaker – Gavin Cawley LR 19.06.2008 13:21 CET
7 6.80 kristian – Kristian Woodsend IPM SVM 2 23.06.2008 15:10 CET
8 7.30 ker2 – Porter Chang CTJ LSVM01 20.06.2008 03:31 CET
9 7.70 antoine – Antoine Bordes LaRankConverged 13.06.2008 19:11 CET
10 8.00 rofu – Rofu yu liblinear 25.06.2008 11:57 CET

The objective of the Challenge is to design a statistical machine learning algorithm that discovers which morphemes (smallest individually meaningful units of language) words consist of. Ideally, these are basic vocabulary units suitable for different tasks, such as text understanding, machine translation, information retrieval, and statistical language modeling.

The scientific goals are:

  • To learn of the phenomena underlying word construction in natural languages
  • To discover approaches suitable for a wide range of languages
  • To advance machine learning methodology

Morpho Challenge 2008 is a follow-up to our previous Morpho Challenge 2005 (Unsupervised Segmentation of Words into Morphemes) and Morpho Challenge 2007 (Unsupervised Morpheme Analysis). The task of Morpho Challenge 2008 is similar to the Morpho Challenge 2007, where the aim is to find the morpheme analysis of the word forms in the data. For this challenge, Arabic is added as one of the evaluated languages. For the IR task, there is also a new possibility to provide the morpheme analysis of the words in their context.

Participation in the previous challenges is by no means a prerequisite for participation in Morpho Challenge 2008. Everyone is welcome and we hope to attract many participating teams. The results will be presented in a workshop. Please read the rules and see the schedule. The datasets are available for download. Submit your analyses (result files) by sending them by email to the organizers, or by indicating a location where the organizers can download your files. Remember also to describe your algorithm in a paper. Please read the formatting instructions in rules.

The results from the evaluation runs are now in the Results page.
The Workshop was held on September 17, 2008

Mikko Kurimo, Ville Turunen and Matti Varjokallio
Adaptive Informatics Research Centre, Helsinki University of Technology

Rules

Acceptance

The organizers retain all rights to the Challenge data, which is given to the participants for use in this challenge only. The organizers may use the data submitted to the Challenge freely, without restrictions.

Eligibility

Anyone is allowed to participate. A participant may be either a single person or a group. A single person can participate in at most two groups. A participant is allowed to submit at most three different solutions, where each solution corresponds to a particular morpheme analysis method. Each of these methods may naturally be applied to each of the test languages. If a participant submits more than three solutions, the organizers decide which of the three will be accepted.

Test languages

Data sets are provided for five languages: Arabic, English, Finnish, German and Turkish. Participants are encouraged to apply their algorithm to all of these test languages, but are free to leave some languages out, if they wish to do so.

(New languages may be added, if interested co-organizers, suitable data and evaluation analyses become available in time.)

Task

The task is the unsupervised morpheme analysis of every word form contained in a word list supplied by the organizers for each test language.

The participants will be pointed to corpora in which the words occur, so that the algorithms may utilize information about word context.

Solutions in which a large number of parameters must be “tweaked” separately for each test language are of little interest. This challenge aims at the unsupervised (or very minimally supervised) morpheme analysis of words. The abstracts submitted by the participants must contain clear descriptions of which steps of supervision or parameter optimization are involved in the algorithms.

Competitions

The segmentations will be evaluated in two complementary ways:

  • Competition 1: The proposed morpheme analyses will be compared to a linguistic “gold standard”.
  • Competition 2: Information retrieval (IR) experiments will be performed, where the words in the documents and queries will be replaced by their proposed morpheme representations. The search will then be based on morphemes instead of words.

Competition 1 will include all five test languages. Winners will be selected separately for each language. As a performance measure, the F-measure of accuracy of suggested morpheme analyses is utilized. Should two solutions produce the same F-measure, the one with higher precision will win.

Competition 2 will include three of the test languages. The organizers will perform the IR experiments based on the morpheme analyses submitted by the participants.

Workshop and publication

All good results will be acknowledged with fame and glory. Presentations for the challenge workshop will be selected by the organizers based on the results and a paper of at most 10 pages describing the algorithm and the data submission.

Workshop papers

For your paper submission (due August 1st), please use the single-column CLEF 2007 Notebook Proceedings format. Here are a sample PDF file and a template Latex document. Detailed formatting instructions can be requested from the Morpho Challenge organizers. The maximum length of your paper is 10 pages (including references and figures). Email your paper submission to the organizers.

Arbitration

In the case of disagreement the organizers will decide the final interpretation of the rules.

Datasets

There are a number of data files involved in this challenge. Each type of file is available for each language.

Word list (input)

First and foremost, there is a list of word forms. The words have been extracted from a text corpus, and each word in the list is preceded by its frequency in the corpus used.

For instance, a subset of the supplied English word list looks like this:

...
1 barefoot's
2 barefooted
6699 feet
653 flies
2939 flying
1782 foot
64 footprints
...
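Such a word list can be read with a few lines of Python; the function name is ours, and the one-byte Latin-1 encoding is an assumption based on the character-encoding notes further below.

  def read_word_list(path):
      # Each line is '<frequency> <word>'; return a {word: frequency} dict.
      freqs = {}
      with open(path, encoding='latin-1') as f:
          for line in f:
              if not line.strip():
                  continue
              count, word = line.split(None, 1)
              freqs[word.strip()] = int(count)
      return freqs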

Result file (output, i.e., what to submit)

The participants’ task is to return a list containing exactly the same words as in the input, with morpheme analyses provided for each word. The list returned shall not contain the word frequency information.

A submission for the above English words may look like this:

...
barefoot's      BARE FOOT +GEN
barefooted      BARE FOOT +PAST
feet            FOOT +PL
flies           FLY_N +PL, FLY_V +3SG
flying          FLY_V +PCP1
foot            FOOT
footprints      FOOT PRINT +PL
...

There are a number of things to note about the result file:

  • Each line of the file contains a word (e.g., “feet”) separated from its analysis (e.g., “FOOT +PL”) by one TAB character.
  • The word needs to look exactly as it does in the input; no capitalization or change of character encoding is allowed.
  • The analysis contains morpheme labels separated by spaces. The order in which the labels appear does not matter; e.g., “FOOT +PL” is equivalent to “+PL FOOT”.
  • The labels are arbitrary: e.g., instead of using “FOOT” you might use “morpheme784”, and instead of “+PL” you might use “morpheme2”. However, we strongly recommend you to use intuitive labels, when possible, since they make it easier for anyone to get an idea of the quality of the result by looking at it.

If a word has several interpretations, all interpretations should be supplied: e.g., the word “flies” may be the plural form of the noun “fly” (insect) or the third person singular present tense form of the verb “to fly”. The alternative analyses must be separated using a comma, as in: “FLY_N +PL, FLY_V +3SG”. The existence of alternative analyses makes the task challenging, and we leave it to the participants to decide how much effort they will put into this aspect of the task. In English, for instance, in order to get a perfect score, it would be necessary to distinguish the different functions of the ending “-s” (plural or person ending) as well as the different parts-of-speech of the stem “fly” (noun or verb). As the results will be evaluated against reference analyses (our so-called gold standard), it is worth reading about the guiding principles used when constructing the gold standard.

As far as we understand, you can use any characters in your morpheme labels except whitespace and the comma (,). However, we cannot guarantee that the evaluation scripts will work properly if your labels contain some “strange” characters.
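A minimal sketch of writing a result file in this format, assuming your analyses are held in a dictionary that maps each word to a list of alternative analyses, each of which is a list of morpheme labels (the data structure, function name and Latin-1 encoding are our own choices):

  def write_result_file(path, analyses):
      # analyses: {word: [[label, ...], ...]}, one inner list per alternative
      # interpretation.  Lines are 'word<TAB>analysis1, analysis2, ...'.
      with open(path, 'w', encoding='latin-1') as out:
          for word, alternatives in analyses.items():
              joined = ", ".join(" ".join(labels) for labels in alternatives)
              out.write(word + "\t" + joined + "\n")

  write_result_file("result.eng", {
      "feet": [["FOOT", "+PL"]],
      "flies": [["FLY_N", "+PL"], ["FLY_V", "+3SG"]],
  })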

Text corpus for English, Finnish, German and Turkish

The word list (input data) has been constructed by collecting word forms occurring in a text corpus. The text corpora have been obtained from the Wortschatz collection at the University of Leipzig (Germany). We used the plain text files (sentences.txt for each language); the corpus sizes are 3 million sentences for English, Finnish and German, and 1 million sentences for Turkish. For English, Finnish and Turkish we use preliminary corpora, which have not yet been released publicly at the Wortschatz site. The corpora have been preprocessed for the Morpho Challenge (tokenized, lower-cased, some conversion of character encodings).

If the participants like to do so, they can use the corpora in order to get information about the context in which the different words occur.

We are most grateful to the University of Leipzig for making these resources available to the Challenge, and in particular we thank Stefan Bordag for his kind assistance.

Text corpus for Arabic

The text data (135K sentences with 3.9M words) is the same as that used in this paper:
Habash, Nizar and Fatiha Sadat. Arabic Preprocessing Schemes for Statistical Machine Translation. In Proceedings of the North American Chapter of the Association for Computational Linguistics (NAACL), New York, 2006. [PDF]
Unfortunately this text data is not freely available.

We are most grateful to Nizar Habash from Columbia University for his kind assistance and for making the word frequency list available to the Challenge.

Gold standard morpheme analyses

The desired “correct” analyses for a random sample of circa 500 words are supplied for each language. These samples can be used for visual inspection and as a development test set (in order to get a rough estimate of the performance of the participants’ morpheme-analyzing algorithm).

The format of the gold standard file is exactly the same as that of the result file to be submitted. That is, each line contains a word and its analysis. The word is separated from the analysis by a TAB character. Morpheme labels in the analysis are separated from each other by a space character. For some words there are multiple correct analyses. These alternative analyses are separated by a comma (,). Examples:

Language Examples
English baby-sitters       baby_N sit_V er_s +PL
indoctrinated      in_p doctrine_N ate_s +PAST
Finnish linuxiin           linux_N +ILL
makaronia          makaroni_N +PTV
German choreographische   choreographie_N isch +ADJ-e
zurueckzubehalten  zurueck_B zu be halt_V +INF
Turkish kontrole           kontrol +DAT
popUlerliGini      popUler +DER_lHg +POS2S +ACC, popUler +DER_lHg +POS3 +ACC3
Arabic Algbn              gabon POS:N Al+ +SG +MASC

The English and German gold standards are based on the CELEX data base. The Finnish gold standard is based on the two-level morphology analyzer FINTWOL from Lingsoft, Inc. The Turkish gold-standard analyses have been obtained from a morphological parser developed at Boğaziçi University; it is based on Oflazer’s finite-state machines, with a number of changes. We are indebted to Ebru Arısoy for making the Turkish gold standard available to us.

For Arabic, the gold standard analyses are based on the lexeme-and-features representation used by the Aragen system (a wrapper using the publicly available BAMA-1 databases):
Habash, Nizar. Large Scale Lexeme Based Arabic Morphological Generation. In Proceedings of Traitement Automatique du Langage Naturel (TALN-04). Fez, Morocco, 2004. [PDF]
The first part of an analysis is a lexeme, followed by a list of features.

The morphological analyses are morpheme analyses. This means that only grammatical categories that are realized as morphemes are included. For instance, for none of the languages will you find a singular morpheme for nouns or a present-tense morpheme for verbs, because these grammatical categories do not alter or add anything to the word form, in contrast to, e.g., the plural form of a noun (house vs. house+s), or the past tense of verbs (help vs. help+ed, come vs. came).

The morpheme labels that correspond to inflectional (and sometimes also derivational) affixes have been marked with an initial plus sign (e.g., +PL, +PAST). This is due to a feature of the evaluation script: in addition to the overall performance statistics, evaluation measures are also computed separately for the labels starting with a plus sign and those without an initial plus sign. It is thus possible to make an approximate assessment of how accurately affixes are analyzed vs. non-affixes (mostly stems). If you use the same naming convention when labeling the morphemes proposed by your algorithm, this kind of statistics will be available for your output (see the evaluation page for more information).

The morpheme labels that have not been marked as affixes (no initial plus sign) are typically stems. These labels consist of an intuitive string, usually followed by an underscore character (_) and a part-of-speech tag, e.g., “baby_N”, “sit_V”. In many cases, especially in English, the same morpheme can function as different parts-of-speech; e.g., the English word “force” can be a noun or a verb. In the majority of these cases, however, if there is only a difference in syntax (and not in meaning), the morpheme has been labeled as either a noun or a verb, throughout. For instance, the “original” part-of-speech of “force” is a noun, and consequently both noun and verb inflections of “force” contain the morpheme “force_N”:

force force_N
force's force_N GEN
forced force_N +PAST
forces force_N +3SG, force_N +PL
forcing force_N +PCP1

Thus, there is not really a need for your algorithm to distinguish between different meanings or syntactic roles of the discovered stem morphemes. However, in some rare cases, if the meanings of the different parts-of-speech do differ clearly, there are two variants, e.g., “train_N” (vehicle), “train_V” (to teach), “fly_N” (insect), “fly_V” (to move through the air). But again, if there are ambiguous meanings within the same part-of-speech, these are not marked in any way, e.g., “fan_N” (device for producing a current of air) vs. “fan_N” (admirer). This notation is a consequence of using CELEX and FINTWOL as the sources for our gold standards. We could have removed the part-of-speech tags, but we decided to leave them there, since they carry useful information without significantly making the task more difficult. There are no part-of-speech tags in the Turkish gold standard.

Random word pairs file

If you want to carry out a small-scale evaluation yourself using the gold standard sample, you need to download a randomly generated so-called word pairs file for each language to be tested. Read more about this on the evaluation page.

Character encoding

In the source data used for the different languages, there is variation in how accurately certain distinctions are made when letters are rendered. This makes it hard to apply a unified character encoding scheme for all the languages (such as UTF-8). Thus, the following encodings have been used, in which all letters are encoded as one-byte (8-bit) characters:

English
Standard text. All words are lower-cased, also proper names.
Finnish
ISO Latin 1 (ISO 8859-1). The Scandinavian special letters å, ä, ö (as well as other letters occurring in loan words, e.g., ü, é, à) are rendered as one-byte characters. All words are lower-cased, also proper names.
German
Standard text. All words are lower-cased, also all nouns. The German umlaut letters are rendered as the corresponding non-umlaut letter followed by “e”, e.g., “laender” (Länder), “koennte” (könnte), “fuer” (für). Double-s is rendered as “ss”, e.g., “strasse” (Straße). This coarse encoding is due to the fact that CELEX, the source for the morphological gold standard, utilizes this scheme. Note, however, that in the data you may see special letters encoded using ISO Latin 1 in some loan words, e.g., “société”, “l’unità” (these words are not included in CELEX and their analyses will not be evaluated).
Turkish
Standard text. All words are lower-cased. The letters specific to the Turkish language are replaced by capital letters of the standard Latin alphabet, e.g., “açıkgörüşlülüğünü” is spelled “aCIkgOrUSlUlUGUnU”.
Arabic
All words are presented in Buckwalter transliteration.

Download data for Competition 1

Language Word list Text corpus Sample of gold standard Random word pairs file
English Text Text gzipped Text gzipped Text Text
Finnish Text Text gzipped Text gzipped Text Text
German Text Text gzipped Text gzipped Text Text
Turkish Text Text gzipped Text gzipped Text Text
Arabic Text Text gzipped n.a. Text Text

Download data for Competition 2

Participation in competition 2 does not necessarily require any extra effort by the participants. The organizers will use the analyses provided by the participants for competition 1 in information retrieval experiments. Data from CLEF will be used.

However, because the information retrieval evaluation texts are different from the training texts of competition 1, a slightly better IR performance may be obtained by also submitting the analyses of the words that do not exist in the word lists of competition 1. The combined word lists can be downloaded below.

Language Word list Text corpus
English (Text) (Text gzipped) Fixed version See the paragraph below
Finnish Text Text gzipped See the paragraph below
German Text Text gzipped See the paragraph below

Those participants who wish to use the full text corpora in order to get information about the context in which the different words occur should contact the organizers for more information on how to register with CLEF to obtain the full texts. Participants who wish to submit morpheme analyses for words in their actual context (competition 2b) will need to request the full texts, too. If you need the full texts, please contact the organizers for details on how to fill in and submit the CLEF Registration Form and CLEF End-User Agreement. The deadline for this registration is 1 May 2008.

NOTE: If you do not participate in competition 2b and do not need the full texts to submit the unsupervised morpheme analyses for competition 2, it is enough to just download the data available on this page.

Competition 1

In Competition 1, for each language, the morpheme analyses proposed by the participants’ algorithm will be compared against a linguistic gold standard. Samples of the gold standards used are available for download on the datasets page.

Since the task at hand involves unsupervised learning, it cannot be expected that the algorithm comes up with morpheme labels that exactly correspond to the ones designed by linguists. That is, no direct comparison will take place between labels as such (the labels in the proposed analyses vs. labels in the gold standard). What can be expected, however, is that two word forms that contain the same morpheme according to the participants’ algorithm also have a morpheme in common according to the gold standard. For instance, in the English gold standard, the words “foot” and “feet” both contain the morpheme “foot_N”. It is thus desirable that also the participants’ algorithm discovers a morpheme that occurs in both these word forms (be it called “FOOT”, “morpheme784”, “foot” or something else).

In practice, the evaluation will take place by sampling a large number of word pairs such that both words in a pair have at least one morpheme in common. As the evaluation measure, we will use the F-measure, which is the harmonic mean of Precision and Recall:

F-measure = 2/(1/Precision + 1/Recall) = 2 × Precision × Recall / (Precision + Recall).
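As a quick check, the precision and recall of the sample evaluation output shown further below (69.00% and 25.59%) combine to the reported 37.33%:

  def f_measure(precision, recall):
      # harmonic mean of precision and recall
      return 2.0 / (1.0 / precision + 1.0 / recall)

  print(round(100 * f_measure(0.6900, 0.2559), 2))   # 37.33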

Precision is here calculated as follows: a number of word forms are randomly sampled from the result file provided by the participants; for each morpheme in these words, another word containing the same morpheme is chosen from the result file at random (if such a word exists). We thus obtain a number of word pairs such that in each pair at least one morpheme is shared between the words of the pair. These pairs are compared to the gold standard; a point is given for each word pair that really does have a morpheme in common according to the gold standard. The total number of points is then divided by the total number of word pairs.

For instance, assume that the proposed analysis of the English word “abyss” is: “abys +s”. Two word pairs are formed: say that “abyss” happens to share the morpheme “abys” with the word “abysses”; we thus obtain the word pair “abyss – abysses”. Also assume that “abyss” shares the morpheme “+s” with the word “mountains”; this produces the pair “abyss – mountains”. Now, according to the gold standard the correct analyses of these words are: “abyss_N”, “abyss_N +PL”, “mountain_N +PL”, respectively. The pair “abyss – abysses” is correct (common morpheme: “abyss_N”), but the pair “abyss – mountains” is incorrect (no morpheme in common). Precision here is thus 1/2 = 50%.

Recall is calculated analogously to precision: a number of word forms are randomly sampled from the gold standard file; for each morpheme in these words, another word containing the same morpheme is chosen from the gold standard at random (if such a word exists). The word pairs are then compared to the analyses provided by the participants; a point is given for each sampled word pair that has a morpheme in common also in the analyses proposed by the participants’ algorithm. The total number of points is then divided by the total number of word pairs.

For words that have several alternative analyses, as well as for word pairs that have more than one morpheme in common, some normalization of the points is carried out in order not to give these words considerably more weight in the evaluation than “less complex” words. We will spare the participants from the gory details. (The passionately interested reader may have a look at the source code of the evaluation script.)
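For readers who want the overall idea in code, here is a deliberately simplified sketch of the pair-based precision estimate described above; it ignores the normalization for alternative analyses and for multiple shared morphemes, which the official eval_morphemes.pl script does handle, and the data structures are our own.

  import random

  def labels_of(analysis):
      # analysis: list of alternative interpretations, each a list of labels
      return {label for alternative in analysis for label in alternative}

  def estimate_precision(proposed, gold, n_words=100, seed=0):
      # proposed / gold: {word: [[label, ...], ...]}.  Sample words from the
      # proposed analyses, pair each of their morphemes with another word that
      # shares it, and check the pairs against the gold standard.
      rng = random.Random(seed)
      common = [w for w in proposed if w in gold]
      points = pairs = 0
      for word in rng.sample(common, min(n_words, len(common))):
          for label in labels_of(proposed[word]):
              partners = [w for w in common
                          if w != word and label in labels_of(proposed[w])]
              if not partners:
                  continue
              other = rng.choice(partners)
              pairs += 1
              points += bool(labels_of(gold[word]) & labels_of(gold[other]))
      return points / pairs if pairs else 0.0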

Evaluation of a sample (development test set)

You can evaluate your morphological analyses against the available gold standards (separately for each test language). The program to use for this is the Perl script: eval_morphemes.pl. The evaluation program is invoked as follows:

eval_morphemes.pl [-trace] wordpairsfile_goldstd wordpairsfile_result goldstdfile resultfile

Four files are given as arguments to eval_morphemes.pl:

  1. wordpairsfile_goldstd: this is the “random word pairs file” available for download on the datasets page. This file is needed in the calculation of an estimate of the recall of the proposed morpheme analyses.
  2. wordpairsfile_result: this file has to be generated using another program (see below). It is needed in the calculation of a rough estimate of the precision of the proposed morpheme analyses.
  3. goldstdfile: this is the sample of the gold standard available for download on the datasets page. This file contains the correct morpheme analyses for circa 500 words.
  4. resultfile: this is the result file that your algorithm produces, i.e., a list of words and their proposed morpheme analyses.

The -trace argument is optional and produces output for every evaluated word separately. Regardless of the status of the trace argument, the evaluation program produces output of the following kind:

PART0. Precision: 69.00% (96/139); non-affixes: 81.55% (51/63); affixes: 58.73% (45/76)
PART0. Recall:    25.59% (142/556); non-affixes: 49.78% (105/211); affixes: 10.78% (37/345)
PART0. F-measure: 37.33%; non-affixes: 61.82%; affixes: 18.22%
#
TOTAL. Precision: 69.00%; non-affixes: 81.55%; affixes: 58.73%
TOTAL. Recall:    25.59%; non-affixes: 49.78%; affixes: 10.78%
TOTAL. F-measure: 37.33%; non-affixes: 61.82%; affixes: 18.22%

Note that results are displayed for partition 0 (PART0) and for the entire data (TOTAL). The total scores are here the same as the scores of PART0, since there is only one partition. It is, however, possible to split the data into several partitions and compute results for each partition separately. The overall scores are then calculated as the mean over the partitions. Splitting into partitions is a feature reserved for the final evaluation, when we will assess the statistical significance of the differences between the participants’ algorithms.

The figures that count in the final evaluation are the first precision, recall, and F-measure values on the TOTAL lines. These values pertain to all morphemes, but there are also separate statistics for morphemes classified as non-affixes vs. affixes. What counts as an affix is a morpheme with a label starting with a plus sign, e.g., “+PL”, “+PAST”. This naming convention is applied in the gold standard, which means that you do not have to do anything in order to get the non-affixes/affixes statistics right as far as recall is concerned. However, if you want the same kind of information also for precision, your algorithm must have a means of discovering which morphemes are likely affixes and tag these morphemes with an initial plus sign. Note that it is fully up to you whether you do this or not; it will not affect your position in the competition in any way.

Sampling word pairs for the calculation of an estimate of the precision

In order to get an estimate of the precision of the algorithm, you need to provide the evaluation script eval_morphemes.pl with a file containing word pairs sampled from your result file. Unfortunately, the estimate is likely to be fairly rough. The reason for this is that you do not have the entire gold standard at your disposal. Thus, if you sample pairs of words that are not included in the 500-word gold standard that you can access, it is impossible to know whether the proposed morphemes are correct or not. What you can do, however, is to make sure that each word that goes into a word pair actually does occur in the 500-word gold standard sample. The problem here is that your algorithm might not propose that many common morphemes for the words within this limited set, and thus the estimate will be based on rather few observations.

Anyway, this is how to do it: First, make a list of relevant words, that is, words that are present in the gold standard sample available:

cut -f1 goldstdfile > relevantwordsfile

Then sample word pairs for 100 words selected by random from your results file:

sample_word_pairs.pl -refwords relevantwordsfile < resultfile > wordpairsfile_result

The necessary Perl program is sample_word_pairs.pl. The output file wordpairsfile_result is used as input to eval_morphemes.pl (see above).

Competition 2

Competition 2 does not necessarily require any extra effort by the participants. The organizers will use the analyses provided by the participants in information retrieval experiments. Data from CLEF will be used.
However, those participants who wish to submit morpheme analyses for words in their actual context (competition 2b) should contact the organizers for more information on how to register with CLEF to obtain the full texts.

In competition 2 (and 2b), the words in the queries and documents will be replaced by the corresponding morpheme analyses provided by the participants. We will perform the IR evaluation using the state-of-the-art Okapi (BM25) retrieval method (the latest version of the freely available LEMUR toolkit). The most common morphemes in each participant’s submission will be left out of the index. The size of this stoplist will be proportional to the amount of text data in each language, and the stoplist size will be the same for each participant’s submission. The evaluation criterion will be Uninterpolated Average Precision. The segmentation with the highest Average Precision will win. The winner is selected separately for competitions 2 and 2b in each language.
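Uninterpolated Average Precision averages, over a query's relevant documents, the precision at the rank of each retrieved relevant document (missed relevant documents contribute zero). A small sketch of the metric itself (the LEMUR toolkit computes this as part of its evaluation):

  def average_precision(ranked_docs, relevant):
      # ranked_docs: document ids in retrieval order; relevant: set of ids
      hits, precision_sum = 0, 0.0
      for rank, doc in enumerate(ranked_docs, start=1):
          if doc in relevant:
              hits += 1
              precision_sum += hits / rank
      return precision_sum / len(relevant) if relevant else 0.0

  print(average_precision(["d3", "d7", "d1", "d9"], {"d3", "d1", "d5"}))
  # (1/1 + 2/3) / 3 = 0.5556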

Results

The full evaluation reports and the descriptions of the participating methods have been published at the Workshop.

Competition 1

The segmentation with the highest F-measure is the best. The winner is selected separately for each language.
These results are preliminary. Download the report manuscript on the final results from here.

Arabic
English
Finnish
German
Turkish

Competition 2

In Competition 2, the organizers applied the analyses provided by the participants in information retrieval experiments. The words in the queries and source documents were replaced by the corresponding morpheme analyses provided by the participants, and the search was then based on morphemes instead of words. The evaluation was performed using a state-of-the-art retrieval method (the latest version of the freely available LEMUR toolkit). The evaluation criterion was Uninterpolated Average Precision. The evaluation was performed only for the participants who submitted segmentations for the extended word lists gathered from the IR evaluation corpora. Okapi (BM25) term weighting was used for all index terms, excluding an automatic stoplist consisting of terms with a collection frequency higher than 75000 (Finnish) or 150000 (German and English). The stoplist was used with the Okapi weighting because it did not perform well with indexes that had many very common terms.

Download the report manuscript on the final results from here.

Finnish
German
English

Proceedings

The proceedings are available in CLEF working notes.
The manuscripts and abstracts can be downloaded from here.

Reference methods

To study how the different morpheme analyses performed in the IR tasks, we ran the same tasks with several reference methods. This also revealed whether unsupervised morpheme analysis (or even a supervised one) could really be useful in the IR tasks compared to simple word-based indexing.

  1. Morfessor Categories-Map: The same Morfessor Categories-Map as described in Competition 1 was used for the unsupervised morpheme analysis. The stem vs. suffix tags were kept, but did not receive any special treatment in the indexing as we wanted to keep the IR evaluation as unsupervised as possible.
  2. Morfessor Baseline: All the words were simply split into smaller pieces without any morpheme analysis. This means that the obtained subword units were directly used as index terms as such. This was performed using the Morfessor Baseline algorithm as in Morpho Challenge 2005. We expected that this would not be optimal for IR, but because the unsupervised morpheme analysis is such a difficult task, this simple method would probably do quite well.
  3. dummy: No words were split and no morpheme analysis was provided, except that hyphens were replaced by spaces so that hyphenated words were indexed as separate words (changed from last year). This means words were directly used as index terms as such, without any stemming or tags. We expected that although morpheme analysis should provide helpful information for IR, probably not all submissions would be able to beat this brute-force baseline. However, if some morpheme analysis method consistently beat this baseline in all languages and tasks, it would mean that the method is probably useful in a language- and task-independent way.
  4. grammatical: The words were analysed using the gold standard of each language that was utilised as the “ground truth” in Competition 1. Besides the stems and suffixes, the gold standard analyses typically consist of various grammatical tags, which we decided to simply include as index terms as well. For many words the gold standard analyses included several alternative interpretations, which were all included in the indexing. However, we also tried the method adopted for the morpheme segmentation in Morpho Challenge 2005, in which only the first interpretation of each word is applied. This was here called “grammatical first”, whereas the default was called “grammatical all”. Words that were not in the gold standard segmentation were indexed as such. Because our gold standards are quite small, 60k (English) – 600k (Finnish) words, compared to the number of words that the unsupervised methods can analyse, we did not expect “grammatical” to perform particularly well, even though it would probably capture some useful indexing features and at least beat the “dummy” method.
  5. snowball: No real morpheme analysis was performed, but the words were stemmed by the stemming algorithms provided by the snowball libstemmer library. The Porter stemming algorithm was used for English; the Finnish and German stemmers were used for the other languages. Hyphenated words were first split into parts that were then stemmed separately. Stemming is expected to perform very well for English, but not necessarily for the other languages, where it is harder to find good stems.
  6. TWOL: A two-level morphological analyzer was used to find the normalized forms of the words. These forms were then used as index terms. Some words may have several alternative normalized forms, and two cases were studied similarly to the grammatical case: either all alternatives were used (“all”) or only the first one (“first”). Compound words were split into parts. Words not recognized by the analyzer were indexed as such. A German analyzer was not available.