“All in all, it was just a brick in the wall” — the chorus of Pink Floyd’s classic song emerged from speakers in a neuroscience lab at the University of California, Berkeley, the rhythms and words sounding muddy but recognisable.
The track was not a recording by the rock band but had been generated using artificial intelligence techniques from the brainwaves of people listening to it, in the world’s first scientific experiment to reconstruct a recognisable song from neural signals.
The findings will be invaluable both for scientists seeking to understand how the brain responds to music and for neurotechnologists who want to help people with severe neurological damage to communicate through brain-computer interfaces in a way that sounds more natural, whether they are speaking or singing.
“Music has prosody [patterns of rhythm and sound] and emotional content,” said Robert Knight, UC Berkeley professor of psychology and neuroscience, who led the research and whose findings were published in the journal PLOS Biology on Tuesday.
“As the whole field of brain-machine interfaces progresses, this provides a way to add human tone and rhythm to future brain implants for people who need speech or voice outputs . . . that’s what we’ve really begun to crack the code on,” added Knight.
The intracranial electroencephalography (iEEG) recordings used in the research were obtained in about 2012, at a time when people with severe epilepsy often had large arrays of electrodes (typically 92 each) placed over their brain surface to pinpoint the location of intractable seizures.
The patients volunteered to help scientific research at the same time by allowing researchers to record their brainwaves while they listened to speech and music.
Previous studies based on these experiments gave scientists enough data to reconstruct, from recordings of brain activity, individual words that people were hearing. But only now, a decade later, has AI become powerful enough to reconstruct passages of song.
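To make the decoding idea concrete: approaches of this kind typically learn a mapping from neural activity to an auditory spectrogram and then invert that spectrogram back into sound. The sketch below is not the study's pipeline; it uses synthetic "electrode" data, ridge regression and the Griffin-Lim algorithm purely to illustrate the general technique, and every name and parameter in it is an assumption.

```python
# Hedged sketch: decode a sound's spectrogram from simulated "neural" features via
# ridge regression, then invert it back to audio. Synthetic data only; this is an
# illustration of spectrogram decoding in general, not the published method.
import numpy as np
import librosa
import soundfile as sf
from sklearn.linear_model import Ridge

sr = 22050
# Stand-in stimulus: a five-second chirp instead of the copyrighted song.
y = librosa.chirp(fmin=110, fmax=880, sr=sr, duration=5.0)

# Log-magnitude spectrogram of the stimulus (frequency bins x time frames).
S = np.log1p(np.abs(librosa.stft(y, n_fft=1024, hop_length=256)))

# Simulate 64 "electrodes" as noisy linear mixtures of the spectrogram, standing in
# for neural activity that tracks the sound.
rng = np.random.default_rng(0)
n_elec = 64
mixing = rng.normal(size=(S.shape[0], n_elec))
X = S.T @ mixing + 2.0 * rng.normal(size=(S.shape[1], n_elec))
Y = S.T  # decoding targets: spectrogram frames

# Train on the first 80% of frames, test on the rest.
split = int(0.8 * len(X))
model = Ridge(alpha=10.0).fit(X[:split], Y[:split])
Y_pred = model.predict(X[split:])

r = np.corrcoef(Y[split:].ravel(), Y_pred.ravel())[0, 1]
print(f"correlation between true and decoded spectrogram: {r:.2f}")

# Invert the decoded spectrogram back to a waveform with Griffin-Lim and save it.
S_pred = np.expm1(np.clip(Y_pred.T, 0.0, None))
y_hat = librosa.griffinlim(S_pred, n_fft=1024, hop_length=256)
sf.write("decoded_sketch.wav", y_hat, sr)
```

In a real experiment the inputs would be recorded neural features rather than a simulated mixture, and reconstruction quality would depend heavily on electrode coverage and model choice.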
The Berkeley researchers analysed recordings from 29 patients who heard Pink Floyd’s “Another Brick in the Wall (Part 1)”, part of a trilogy of songs from the 1979 album The Wall. They pinpointed areas of the brain involved in detecting rhythm and found that some parts of the auditory cortex, located just behind and above the ear, responded at the onset of a voice or synthesiser while others responded to sustained vocals.
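As a purely illustrative sketch of how onset and sustained responses can be told apart (not the study's analysis code), one can correlate each electrode's activity with an "onset" regressor, built from the positive derivative of the sound envelope, and with the envelope itself, then see which fits better. The electrode signals below are simulated and all names are hypothetical.

```python
# Hedged sketch: label simulated electrodes as onset-tracking or sustained-tracking
# by correlating their activity with onset vs sustained regressors derived from a
# toy stimulus envelope. Synthetic data; illustrative only.
import numpy as np

rng = np.random.default_rng(1)
fs = 100                      # feature sampling rate, Hz
t = np.arange(0, 10, 1 / fs)  # 10 seconds

# Toy stimulus envelope: a sequence of sustained one-second "notes".
envelope = np.zeros_like(t)
for start in np.arange(0.5, 9.0, 1.5):
    envelope[(t >= start) & (t < start + 1.0)] = 1.0

# Regressors: sustained level vs note onsets (positive part of the derivative).
sustained = envelope
onset = np.clip(np.diff(envelope, prepend=0.0), 0.0, None)

# Simulate two electrodes: one tracking onsets, one tracking the sustained envelope.
electrodes = {
    "electrode_A": onset + 0.3 * rng.normal(size=t.size),
    "electrode_B": sustained + 0.3 * rng.normal(size=t.size),
}

for name, activity in electrodes.items():
    r_onset = np.corrcoef(activity, onset)[0, 1]
    r_sustained = np.corrcoef(activity, sustained)[0, 1]
    label = "onset-responsive" if r_onset > r_sustained else "sustained-responsive"
    print(f"{name}: r_onset={r_onset:.2f}, r_sustained={r_sustained:.2f} -> {label}")
```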
The findings supported longstanding ideas about the roles played by the brain’s two hemispheres. Although they work closely together, language is processed predominantly on the left side, while “music is more distributed, with a bias towards [the] right”, said Knight.
His colleague Ludovic Bellier, who led the analysis, said that devices used to help people communicate when they cannot speak tend to vocalise words one at a time. The sentences such machines produce have a robotic quality, reminiscent of the way the late Stephen Hawking sounded on his speech-generating device.
“We want to give more colour and expressive freedom to the vocalisation, even when people are not singing,” said Bellier.
The Berkeley researchers said brain-reading technology could eventually be extended to the point where musical thoughts could be decoded from someone wearing an EEG cap on the scalp, rather than requiring electrodes implanted beneath the skull on the brain's surface. It might then be possible to imagine or compose music, relay the musical information and hear it played on external speakers.
“Non-invasive techniques are just not accurate enough today,” said Bellier. “Let’s hope that in the future we could, just from electrodes placed outside on the skull, read activity from deeper regions of the brain with a good signal quality.”