The problem of attributing causes to effects is pervasive in science, medicine, economics, and almost every aspect of everyday life that involves human reasoning and decision making. One important goal of causal modeling is to unravel enough of the data-generating process to make predictions under manipulations of the system of interest by an external agent (experiments).
Being able to predict the result of actual or potential experiments is very useful because experiments are often costly and sometimes impossible or unethical to perform. For instance, in policy making, one may want to predict the effect on a population's health status of forbidding smoking in public places, before passing a law. This illustrates an experiment that is possible but expensive; forcing people to smoke, by contrast, would be an unethical experiment.
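The smoking example can be made concrete with a toy simulation. The sketch below assumes a hypothetical structural causal model in which an unobserved factor (here labeled "stress") influences both smoking and health; all variable names and coefficients are illustrative assumptions, not part of the original text. The ban is modeled as an intervention that sets smoking to zero regardless of its usual causes:

```python
import random

def mean_health(ban_smoking=False, n=100_000, seed=0):
    # Toy structural causal model (all coefficients are assumptions):
    #   stress  ~ N(0, 1)            (confounder)
    #   smoking = f(stress, noise)   (or forced to 0 under the ban)
    #   health  = -0.5*smoking - 0.3*stress + noise
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        stress = rng.gauss(0, 1)
        if ban_smoking:
            smoking = 0.0  # intervention: do(smoking = 0)
        else:
            smoking = 1.0 if stress + rng.gauss(0, 1) > 0 else 0.0
        health = -0.5 * smoking - 0.3 * stress + rng.gauss(0, 1)
        total += health
    return total / n

baseline = mean_health(ban_smoking=False)
after_ban = mean_health(ban_smoking=True)
print(f"mean health, observational:   {baseline:.3f}")
print(f"mean health, under do(ban):   {after_ban:.3f}")
```

Because the intervention severs the usual dependence of smoking on its causes, the predicted effect of the ban differs from what a naive comparison of smokers and non-smokers in observational data would suggest; this gap is precisely what causal models aim to capture.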
The need to assist policy making and the availability of massive amounts of "observational" data have prompted the proliferation of proposed causal discovery techniques. Each scientific discipline has its favorite approach (e.g. Bayesian networks in biology and structural equation modeling in the social sciences), reflecting historical tradition rather than a better match of techniques to domains. Standard benchmarks are needed to foster scientific progress.
Beyond our original motivation, there are many "classical" machine learning problems that require devising experiments. Hence the "virtual lab" of the Causality Workbench is a great resource for organizing machine learning challenges.
June 2-3, 2008: WCCI 2008, a workshop to discuss the results of the first challenge.