“It’s a wonderful result,” said Robert Knight, a neurologist and UC Berkeley professor of psychology at the Helen Wills Neuroscience Institute, who conducted the new research.
“One of the things for me about music is it has prosody and emotional content. As this whole field of brain-machine interfaces progresses, this gives you a way to add musicality to future brain implants for people who need it, someone who’s got ALS or some other disabling neurological or developmental disorder compromising speech output.”
He continued: “It gives you an ability to decode not only the linguistic content, but some of the prosodic content of speech, some of the affect. I think that’s what we’ve really begun to crack the code on.”
The research marks a significant step forward for brain-computer interface technology, which aims to connect the human brain to machines in order to treat neurological disorders or even create new abilities.
The recordings used in the study came from electrodes implanted directly on the brains of epilepsy patients, but the scientists behind the research hope that advances in brain recording techniques could eventually allow similarly detailed recordings to be made non-invasively, for instance with ultra-sensitive electrodes attached to the scalp.
Ludovic Bellier, who was part of the research team, said: “Non-invasive techniques are just not accurate enough today.
“Let’s hope, for patients, that in the future we could, from just electrodes placed outside on the skull, read activity from deeper regions of the brain with good signal quality. But we are far from there.”
The findings were published in PLoS Biology in a study titled ‘Music can be reconstructed from human auditory cortex activity using nonlinear decoding models’.