Announcing the PASCAL Classifying Heart Sounds Workshop
=======================================================
For more details see: http://www.peterjbentley.com/heartworkshop/

Co-located with AISTATS 2012, the Classifying Heart Sounds Workshop is the first international workshop to focus on the use of statistical machine learning techniques to segment and classify real-world heart audio. This exciting one-day event will feature leading experts on auscultation, signal processing and machine learning, and will include presentations by researchers who have attempted the Classifying Heart Sounds Challenge (see below).

Free registration! Attendance at the workshop is free; however, space is limited, so please register if you plan to attend so that we can guarantee your place. To register, please send an email to Yiqi Deng with your details. Please register now – it is free!

Reminder of the PASCAL Heart Sounds Challenge
=============================================

The PASCAL-sponsored Heart Sounds Challenge is your chance to prove your machine learning technique can outperform everyone else's – and win an iPad for your efforts! (You can also come to the Canary Islands to present your results at a workshop after AISTATS!)

For more details see: http://www.peterjbentley.com/heartchallenge/

According to the World Health Organisation, cardiovascular diseases (CVDs) are the number one cause of death globally: more people die annually from CVDs than from any other cause. An estimated 17.1 million people died from CVDs in 2004, representing 29% of all global deaths. Of these deaths, an estimated 7.2 million were due to coronary heart disease. Any method that can help to detect signs of heart disease could therefore have a significant impact on world health. This challenge is to produce methods that do exactly that. Specifically, we are interested in creating the first level of screening for cardiac pathologies, both in a hospital environment by a doctor (using a digital stethoscope) and at home by the patient (using a mobile device).

For this challenge we have two datasets comprising several hundred real heart sounds: one gathered from the general public via an iPhone app, and one recorded with a digital stethoscope in a noisy hospital environment.

Challenge 1 is segmentation – can your method correctly identify the “lub dub” (S1 and S2) components of the sound?
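As a rough illustration of what the segmentation task involves (this is not part of the challenge materials or a prescribed method), the Python sketch below locates candidate S1/S2 peaks in a single recording using a band-pass filter, a smoothed energy envelope, and simple peak picking. The file name, frequency band, and timing thresholds are illustrative assumptions only.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, filtfilt, find_peaks

# Hypothetical input file, assumed to be a mono WAV recording.
sr, audio = wavfile.read("heart_sound.wav")
audio = audio.astype(float)
audio /= np.max(np.abs(audio)) + 1e-12          # normalise to roughly [-1, 1]

# Band-pass to the low frequencies where S1/S2 energy concentrates
# (the 25-150 Hz band is an assumption, not a challenge specification).
b, a = butter(4, [25, 150], btype="band", fs=sr)
filtered = filtfilt(b, a, audio)

# Smoothed energy envelope: 20 ms moving average of the squared signal.
window = int(0.02 * sr)
envelope = np.convolve(filtered ** 2, np.ones(window) / window, mode="same")

# Peaks at least 200 ms apart are candidate S1/S2 locations.
peaks, _ = find_peaks(envelope, distance=int(0.2 * sr), height=envelope.mean())
print("Candidate S1/S2 times (s):", peaks / sr)
```

Deciding which peaks are S1 and which are S2 (for example, by exploiting the shorter systolic interval between S1 and S2) is left to the participant's own method.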

Challenge 2 is classification – can your method correctly classify the heart sounds into categories such as Normal, Murmur, Extra Heart Sound, and Artifact?

This problem is of particular interest to machine learning researchers as it involves classification of audio sample data, where distinguishing between classes of interest is non-trivial. Data is gathered in real-world situations and frequently contains background noise of every conceivable type. The differences between heart sounds corresponding to different heart symptoms can also be extremely subtle and challenging to separate. Success in classifying this form of data requires extremely robust classifiers. Despite its medical significance, this remains a relatively unexplored application for machine learning.
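To make the classification task more concrete, here is a minimal, hypothetical baseline in Python: it summarises each recording with mean MFCC features and trains an off-the-shelf classifier on the four categories. The file paths and labels are placeholders, and nothing here reflects the challenge datasets' actual layout or any participant's method.

```python
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def extract_features(path):
    # Summarise a recording with the mean of 13 MFCCs -- a generic audio
    # baseline, not a representation prescribed by the challenge.
    y, sr = librosa.load(path, sr=4000)   # heart sounds occupy low frequencies
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)

# Placeholder file paths and labels standing in for the challenge data.
files = ["normal_01.wav", "murmur_01.wav", "extra_01.wav", "artifact_01.wav"]
labels = ["Normal", "Murmur", "Extra Heart Sound", "Artifact"]

X = np.array([extract_features(f) for f in files])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X, labels)
print(clf.predict(X))
```

A serious entry would of course need noise-robust features and proper evaluation on the real datasets; the sketch only illustrates the shape of the problem.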

For enquiries and submissions, email Yiqi Deng.