The Mind Reading Wizards

Telepathy was once nothing more than a parlour trick played by illusionists to entertain us. Names would seemingly be pulled out of our heads, numbers would be correctly guessed, our hiding places revealed. It was all done through trickery – reading our body language, tone of voice and eye movements. Magic doesn’t really exist, and neither, it seemed, did mind reading. At least that used to be true, until the mind-reading wizards arrived. Now something resembling telepathy is becoming a reliable reality. Being poker-faced will not help you any more. Even if you control every movement of your muscles or flicker of your eyes, you will never hide your brain activity. The magic word is not abracadabra, but Hex-o-Spell!

The art of mind reading is much more useful than the party tricks might have you believe. Being able to read the mind of a severely physically disabled person might be the only way to enable them to communicate with us. A machine that reads brain activity would give everyone a chance to express themselves and control their lives.

The computer wizards who specialise in Brain-Computer Interfaces had these aims in mind as they developed the Hex-o-Spell system. Their computer doesn’t just read a name or word; it allows a user to communicate any message by thought alone. It’s a brain-reading, machine-learning, text-input system that lets people type with their minds.

Simple brain-computer interfaces (BCIs) have been demonstrated for several decades. Sensors placed on the head, often with gel to improve contact, measure the electrical impulses produced by large groups of neurons in different regions of the brain. The broad pattern or frequency of these electroencephalogram (EEG) signals is then used as the input to the computer. The problem is that our brains are hugely complex organs, each with its own individual design. It is possible for people to develop normally with just half a brain, although their brains look very different to ours, and as Benjamin Blankertz explains, “also in normally developed humans, functions are differently located in the brain. And due to the foldings of the cortex, small changes in location may cause strong differences in the EEG.” When doing something as predictable as moving a finger or a foot, different people will produce very different patterns of brain activity. Even worse, brains throw out new and different patterns of signals all the time, so the pattern of brain activity changes when you test the same person repeatedly. Even the equipment used to read the signals is unreliable – the gel between sensor and head dries out, or sensors are placed incorrectly or slip – so at different times the sensors may misread the brain and report completely different results for the same brain activity.

To overcome these problems, the standard approaches relied on the adaptability of our brains: users had to learn how to relax and coax their brains into producing more predictable signals. For example, the more a person falls into a state resembling meditation, the more regular and slow the frequency of the EEG waves becomes. This method can be successful in some cases, but it requires lengthy training periods for users, so its usefulness is very limited.
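To make “regular and slow” concrete: EEG rhythms are usually described by the power in a frequency band, such as the 8–12 Hz alpha rhythm that strengthens as a person relaxes. The Python sketch below is an illustration only – the sampling rate, band edges and signal are invented, not taken from the Berlin system – estimating that band power for a single channel.

```python
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, low=8.0, high=12.0):
    """Mean power of one EEG channel in a frequency band
    (8-12 Hz is the alpha rhythm, which grows with relaxation)."""
    freqs, psd = welch(signal, fs=fs, nperseg=2 * fs)  # 2-second windows
    mask = (freqs >= low) & (freqs <= high)
    return psd[mask].mean()

# Invented example: a noisy 10 Hz oscillation stands in for a
# relaxed subject's recording.
fs = 250                                    # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
print(band_power(eeg, fs))                  # high alpha power
```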

In normally developed humans, functions are differently located in the brain … due to the foldings of the cortex, small changes in location may cause strong differences in the EEG.

What was needed was a way to pick out the activity in the precise regions of the brain that were relevant, and somehow overcome all of the variability inherent in EEG signals.

The problem is rather like listening to the muffled sound of a huge orchestra through a thick wall, and trying to interpret the melody from a single violin – when the orchestra change their seats and play a slightly different tune every time you listen. The traditional approach was to train the whole orchestra to all play more or less the same note. The solution as used in Hex-o-Spell was to train the listener to pick out that elusive violin. To make this work, the field of brain-computer interaction had to combine forces with machine learning.

The Berlin BCI group is the first to achieve this new fusion of sciences. They have become the world leaders in the area, organising three machine learning competitions, supported by PASCAL (the European-funded network of scientists who specialise in pattern analysis, statistical modelling and machine learning). But the latest success has come from a PASCAL “pump-priming” project, in collaboration with Dr John Williamson from Glasgow University, to create a system that works without the users needing to be trained. Hex-o-Spell comprises three elements: EEG measurement of brain activity, machine learning to interpret that activity, and an intelligent hexagonal grid of the alphabet that uses a language model to simplify the picking of letters.

Users of Hex-o-Spell are asked, for example, to move a finger of their right hand or a finger of their left hand. The brain signals in the sensorimotor cortex corresponding to the motor command for each movement occur in slightly different places. (Imagined movements can also be used, but since the technology is intended for users who may be amputees or paralysed, reading the actual motor commands in the brain is better: in these patients the ‘phantom command’ may still be present even though the movement is not.)

The 128 different channels of EEG data are filtered to help clarify the rhythms of interest and are then distilled down by a machine learning method known as common spatial patterns (CSP). This algorithm is trained on past data to produce filters that extract the values that vary the most across the regions of the brain likely to be most useful. Finally, linear discriminant analysis (LDA) is used to distinguish the distilled values. This is a surprisingly old method, dating back to 1936, which seeks to group data into different classes by separating them with a line (or plane, or hyperplane). To achieve this, the data is transformed (rotated) until the distance between data points in the same class is minimised and the distance between classes is maximised. In this way, new EEG data can be classified as belonging to one class or the other, and so the patterns corresponding to the intention to move the left or right finger can be identified.
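A rough sketch of this pipeline might look like the following, with the CSP step written as a generalised eigendecomposition – a standard way of computing it, though not necessarily the Berlin group’s exact recipe – and with every array shape, setting and trial invented purely for illustration.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def csp_filters(trials_a, trials_b, n_filters=6):
    """Common spatial patterns: find spatial filters whose output
    variance differs most between two classes of EEG trials.
    trials_*: arrays of shape (n_trials, n_channels, n_samples)."""
    cov_a = np.mean([np.cov(t) for t in trials_a], axis=0)
    cov_b = np.mean([np.cov(t) for t in trials_b], axis=0)
    # Generalised eigenproblem: eigenvectors with the largest and
    # smallest eigenvalues separate the two classes best.
    vals, vecs = eigh(cov_a, cov_a + cov_b)
    order = np.argsort(vals)
    picks = np.concatenate([order[:n_filters // 2], order[-n_filters // 2:]])
    return vecs[:, picks].T              # shape (n_filters, n_channels)

def features(trials, W):
    """Log-variance of each spatially filtered trial - the distilled
    values that LDA then separates with a hyperplane."""
    return np.array([np.log(np.var(W @ t, axis=1)) for t in trials])

# Invented data: 40 trials per class, 128 channels, 2 s at 100 Hz.
rng = np.random.default_rng(0)
left = rng.standard_normal((40, 128, 200))
right = rng.standard_normal((40, 128, 200))

W = csp_filters(left, right)
X = np.vstack([features(left, W), features(right, W)])
y = np.array([0] * 40 + [1] * 40)
clf = LinearDiscriminantAnalysis().fit(X, y)   # Fisher's 1936 method
print(clf.predict(features(right[:3], W)))     # classify new trials
```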

BCI vs general HCI can be compared to Formula 1 cars vs mass-production cars

Once a signal has been identified by the computer, it is fed into the Hex-o-Spell interface. This comprises six hexagons in a circle, each containing five symbols (letters of the alphabet, backspace and simple punctuation). If, say, the intention to move the right finger is detected, then an arrow in the middle rotates, pointing to each hexagon in turn. If the intention to move the left finger is detected, then that arrow grows in length until the current hexagon and its symbols are selected. Those symbols then replace the contents of the six hexagons, and the arrow can once again be used to pick the single letter or punctuation mark required. The clever part of the system is the way the symbols are arranged – their presentation is automatically changed according to a predictive language model, which works out which letters are most likely to be chosen next and places those closest to the arrow. This ensures that the user never has to rotate the arrow far to choose the next letter, and speeds up the whole process dramatically.
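The two-stage selection and the language-model reordering can be sketched as below. The frequency table, the ‘<’ backspace symbol and the layout rule are placeholders for illustration; the real system uses a predictive model conditioned on the text typed so far.

```python
import string

# Placeholder unigram scores, most common English letters first; the
# real system predicts the next letter from everything typed so far.
FREQ = {c: w for c, w in zip("etaoinshrdlcumwfgypbvkjxqz", range(26, 0, -1))}
SYMBOLS = list(string.ascii_lowercase) + ["<", ".", " ", ","]  # 30 symbols

def arrange(symbols, scores):
    """Fill six hexagons of five symbols each, most likely first,
    so the arrow rarely has to rotate far."""
    ranked = sorted(symbols, key=lambda s: -scores.get(s, 0))
    return [ranked[i * 5:(i + 1) * 5] for i in range(6)]

def select(hexagons, rotate_steps, pick_index):
    """Stage 1: 'right finger' commands rotate the arrow between
    hexagons and a 'left finger' command selects the one it points
    at. Stage 2: the same two commands pick a symbol inside it."""
    hexagon = hexagons[rotate_steps % 6]
    return hexagon[pick_index % 5]

hexes = arrange(SYMBOLS, FREQ)
print(hexes[0])                # the five most likely symbols
print(select(hexes, 0, 1))     # second symbol of the first hexagon
```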

Hex-o-Spell was demonstrated at the CeBIT 2006 fair in Hannover with remarkable success. Two users who had little or no practice with any similar device successfully used the system for many hours. One managed between 2.3 and 5 characters a minute and the other between 4.6 and 7.6 characters a minute – world-class performance for this type of BCI.

The Berlin BCI group is continuing its groundbreaking research. Collaborator Roderick Murray-Smith at Glasgow University explains the wider context of the work: “BCI vs general HCI can be compared to Formula 1 cars vs mass-production cars. BCI gives very extreme challenges to interaction designers, which at times force them to reconsider the basics of their field, because many standard techniques have implicit assumptions of fairly reliable input mechanisms. That means that techniques which prove their value in BCI might find application in a different form on, for example, mobile phones with novel sensors such as GPS & accelerometers.”

Work investigating how well patients can use the technology is now underway. If the wizards at the Berlin BCI group are successful, then one day anyone who can think will be able to communicate, no matter what their physical disability might be.