Research on reproducing kernel Hilbert spaces (RKHS) and kernel-based methods has gained major impetus during the last two decades. Recent advances include kernels on structured data, powerful learning guarantees for kernel-based methods, and Hilbert-space embeddings of distributions. Some of the most lively NIPS and ICML workshops in recent years have dealt with applications where kernel approaches are popular, most notably multiple kernel learning, transfer learning, and multi-task learning. While kernel-based methods are well established in machine learning practice, certain results in the underlying theory of RKHS remain relatively inaccessible to the ML community. Moreover, powerful tools for RKHS developed in other branches of mathematics, for instance in numerical analysis and probability, are less well known to machine learning researchers.
The proposed workshop represents an opportunity to bring together probabilists, mathematicians, and machine learning researchers working on RKHS methods. The goals of the workshop are threefold: first, to provide an accessible review and synthesis of classical results in RKHS theory from the points of view of functional analysis, probability theory, and numerical analysis; second, to cover recent advances in RKHS theory relevant to machine learning researchers (for instance, operator-valued RKHS, kernels on time series, and kernel embeddings of conditional probabilities); third, to provide a forum for open problems, to clarify misconceptions that sometimes occur in the literature, and to discuss technical challenges.